Artificial Intelligence Unit II: Solving Problems by Searching


Solving Problems by Searching:

Unit II Solving Problems by Searching

Outline:

- Problem-solving agents
- Problem types
- Problem formulation
- Example problems
- Basic search algorithms

Introduction:

Goal-based agents can succeed by considering future actions and the desirability of their outcomes. A problem-solving agent is a goal-based agent that decides what to do by finding sequences of actions that lead to desirable states.

Problem solving:

We want to automatically solve a problem. For that we need a representation of the problem, and algorithms that use some strategy to solve the problem defined in that representation.

Problem representation:

General approaches:
- State space: the problem is divided into a set of resolution steps from the initial state to the goal state.
- Reduction to sub-problems: the problem is arranged into a hierarchy of sub-problems.

States:

A problem is defined by its elements and their relations. At each instant of the resolution of a problem, those elements have specific descriptors (how do we select them?) and relations. A state is a representation of those elements at a given moment. Two special states are defined: the initial state (starting point) and the final state (goal state).

State modification: successor function:

A successor function is needed to move between different states. A successor function is a description of the possible actions: a set of operators. It is a transformation function on a state representation, which converts it into another state. The successor function defines a relation of accessibility among states. Representation of the successor function: conditions of applicability, and the transformation function.

State space:

State space The state space is the set of all states reachable from the initial state. It forms a graph (or map) in which the nodes are states and the arcs between nodes are actions. A path in the state space is a sequence of states connected by a sequence of actions. The solution of the problem is part of the map formed by the state space.

Problem solution:

A solution in the state space is a path from the initial state to a goal state. The path (solution) cost is a function that assigns a numeric cost to each path: the cost of applying the operators to the states. Solution quality is measured by the path cost function, and an optimal solution has the lowest path cost among all solutions. We may seek any solution, an optimal one, or all solutions. How much cost matters depends on the problem and on the type of solution sought.

Problem description:

Components:
- State space (explicitly or implicitly defined)
- Initial state
- Goal state (or the conditions it has to fulfill)
- Available actions (operators to change state)
- Restrictions (e.g., cost)
- Elements of the domain which are relevant to the problem (e.g., incomplete knowledge of the starting point)
Type of solution:
- Sequence of operators, or the goal state itself
- Any solution, an optimal one (a cost definition is needed), or all solutions

Problem solving agents:

Intelligent agents are supposed to maximize their performance measure. This can be simplified if the agent can adopt a goal and aim at satisfying it. Goal formulation, based on the current situation and the agent's performance measure, is the first step in problem solving. A goal is a set of states; the agent's task is to find out which sequence of actions will get it to a goal state. Problem formulation is the process of deciding what sorts of actions and states to consider, given a goal.

Contd..:

An agent with several immediate options of unknown value can decide what to do by first examining different possible sequences of actions that lead to states of known value, and then choosing the best sequence. Looking for such a sequence is called search. A search algorithm takes a problem as input and returns a solution in the form of an action sequence. Once a solution is found, the actions it recommends can be carried out; this is the execution phase.

Contd..:

This is the "formulate, search, execute" design for the agent. After formulating a goal and a problem to solve, the agent calls a search procedure to solve it. It then uses the solution to guide its actions, doing whatever the solution recommends as the next thing to do (typically the first action in the sequence) and then removing that step from the sequence. Once the solution has been executed, the agent formulates a new goal.

Problem-solving agents:

Problem-solving agents
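
The body of this slide was a figure (the agent's control loop) that is not reproduced in the transcript. Below is a minimal Python sketch of the "formulate, search, execute" design described above; the class layout and helper names are illustrative placeholders, not a fixed API:

    class SimpleProblemSolvingAgent:
        """Sketch of the "formulate, search, execute" agent design."""

        def __init__(self):
            self.plan = []                 # remaining sequence of actions
            self.state = None

        def __call__(self, percept):
            self.state = self.update_state(self.state, percept)
            if not self.plan:              # no plan left: formulate and search
                goal = self.formulate_goal(self.state)
                problem = self.formulate_problem(self.state, goal)
                self.plan = self.search(problem) or []
            # execution phase: do the first recommended action, then drop it
            return self.plan.pop(0) if self.plan else None

        # Domain-specific stubs, to be filled in for a concrete problem:
        def update_state(self, state, percept):
            raise NotImplementedError

        def formulate_goal(self, state):
            raise NotImplementedError

        def formulate_problem(self, state, goal):
            raise NotImplementedError

        def search(self, problem):
            raise NotImplementedError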

Problem types:

- Deterministic, fully observable → single-state problem: the agent knows exactly which state it will be in.
- Non-observable → sensorless problem (conformant problem): the agent may have no idea where it is.
- Nondeterministic and/or partially observable → contingency problem: percepts provide new information about the current state.
- Unknown state space → exploration problem.

Example: Romania:

On holiday in Romania; currently in Arad. Flight leaves tomorrow from Bucharest.
- Formulate goal: be in Bucharest
- Formulate problem: states: various cities; actions: drive between cities
- Find solution: sequence of cities, e.g., Arad, Sibiu, Fagaras, Bucharest

Example: Romania:

Example: Romania

Well-defined problems and solutions:

A problem can be defined formally by four components:
- Initial state that the agent starts in, e.g., In(Arad).
- A description of the possible actions available to the agent: the successor function returns a set of <action, successor> pairs, e.g., {<Go(Sibiu), In(Sibiu)>, <Go(Timisoara), In(Timisoara)>, <Go(Zerind), In(Zerind)>}. The initial state and the successor function define the state space (a graph in which the nodes are states and the arcs between nodes are actions). A path in the state space is a sequence of states connected by a sequence of actions.
- Goal test, which determines whether a given state is a goal state, e.g., {In(Bucharest)}.
- Path cost function, which assigns a numeric cost to each path. The cost of a path can be described as the sum of the costs of the individual actions along the path (the step costs), e.g., the time to get to Bucharest.

Single-state problem formulation:

A problem is defined by four items:
- initial state, e.g., "at Arad"
- actions or successor function S(x) = set of action-state pairs, e.g., S(Arad) = {<Arad → Zerind, Zerind>, …}
- goal test, which can be explicit, e.g., x = "at Bucharest"
- path cost (additive), e.g., sum of distances, number of actions executed, etc.; each step cost is assumed to be ≥ 0
A solution is a sequence of actions leading from the initial state to a goal state.
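
As a concrete illustration of these four items, here is a minimal Python sketch for the Romania route-finding problem. Only a fragment of the road map is included, and the class layout is illustrative rather than a fixed API:

    # Road distances from the standard Romania map (fragment only).
    ROADS = {
        'Arad':           {'Zerind': 75, 'Sibiu': 140, 'Timisoara': 118},
        'Zerind':         {'Arad': 75, 'Oradea': 71},
        'Sibiu':          {'Arad': 140, 'Oradea': 151, 'Fagaras': 99, 'Rimnicu Vilcea': 80},
        'Timisoara':      {'Arad': 118, 'Lugoj': 111},
        'Fagaras':        {'Sibiu': 99, 'Bucharest': 211},
        'Rimnicu Vilcea': {'Sibiu': 80, 'Pitesti': 97, 'Craiova': 146},
        'Pitesti':        {'Rimnicu Vilcea': 97, 'Craiova': 138, 'Bucharest': 101},
    }

    class RouteProblem:
        """A problem as four items: initial state, successor function,
        goal test, and (additive, non-negative) path cost."""

        def __init__(self, initial, goal, roads=ROADS):
            self.initial, self.goal, self.roads = initial, goal, roads

        def successors(self, state):
            """S(x): set of <action, successor> pairs, e.g. <Go(Sibiu), In(Sibiu)>."""
            return [('Go(%s)' % c, c) for c in self.roads.get(state, {})]

        def goal_test(self, state):
            return state == self.goal

        def step_cost(self, state, successor):
            return self.roads[state][successor]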

Example: The 8-puzzle:

Example: The 8-puzzle states? actions? goal test? path cost?

Example: The 8-puzzle:

- states? locations of the tiles
- actions? move the blank left, right, up, down
- goal test? = goal state (given)
- path cost? 1 per move

Tree search algorithms:

Basic idea: offline, simulated exploration of the state space by generating successors of already-explored states (a.k.a. expanding states).
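
A minimal Python sketch of this idea, assuming the RouteProblem interface above. The generic loop is parameterized by where newly generated nodes are inserted into the fringe, since that choice is exactly what distinguishes the concrete strategies that follow:

    def tree_search(problem, insert):
        """Generic tree search. A node is represented as the path (list of
        states) from the root; `insert` decides where new nodes go."""
        fringe = [[problem.initial]]
        while fringe:
            path = fringe.pop(0)              # take the first node in the fringe
            state = path[-1]
            if problem.goal_test(state):
                return path
            for _action, succ in problem.successors(state):   # expand
                insert(fringe, path + [succ])
        return None                           # failure

    # FIFO insertion (append at the back) gives breadth-first search:
    def bfs(problem):
        return tree_search(problem, lambda fringe, node: fringe.append(node))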

Tree search example:

Tree search example


Search strategies:

A search strategy is defined by picking the order of node expansion. Strategies are evaluated along the following dimensions:
- completeness: does it always find a solution if one exists?
- time complexity: number of nodes generated
- space complexity: maximum number of nodes in memory
- optimality: does it always find a least-cost solution?
Time and space complexity are measured in terms of:
- b: maximum branching factor of the search tree
- d: depth of the least-cost solution
- m: maximum depth of the state space (may be ∞)

Uninformed search strategies:

Uninformed search strategies use only the information available in the problem definition:
- Breadth-first search
- Uniform-cost search
- Depth-first search
- Depth-limited search
- Iterative deepening search

Breadth-first search:

The root node is expanded first, then all the successors of the root node, then their successors, and so on. In general, all the nodes at a given depth in the search tree are expanded before any nodes at the next level. Expand the shallowest unexpanded node.
Implementation:
- the fringe is a FIFO queue: the nodes that are inserted first are expanded first
- all newly generated successors are put at the end of the queue
- shallow nodes are therefore expanded before deeper nodes

Breadth-first Search (trace):

The original slides step through breadth-first search on a tree with nodes A–U; A is the initial state and M is the goal state. The fringe is the data structure used to store all of the nodes that have been generated but not yet expanded.

Step 1: expand A → successors B, C, D; fringe (FIFO): B, C, D; visited: A
Step 2: expand B → successors E, F; fringe: C, D, E, F; visited: A, B
Step 3: expand C → successors G, H; fringe: D, E, F, G, H; visited: A, B, C
Step 4: expand D → successors I, J; fringe: E, F, G, H, I, J; visited: A–D
Step 5: expand E → successors K, L; fringe: F, G, H, I, J, K, L; visited: A–E
Step 6: expand F → successor M; fringe: G, H, I, J, K, L, M; visited: A–F
Step 7: expand G → successor N; fringe: H, I, J, K, L, M, N; visited: A–G
Step 8: expand H → successor O; fringe: I, J, K, L, M, N, O; visited: A–H
Step 9: expand I → successors P, Q; fringe: J, K, L, M, N, O, P, Q; visited: A–I
Step 10: expand J → successor R; fringe: K, L, M, N, O, P, Q, R; visited: A–J
Step 11: expand K → successor S; fringe: L, M, N, O, P, Q, R, S; visited: A–K
Step 12: expand L → successor T; fringe: M, N, O, P, Q, R, S, T; visited: A–L
Step 13: remove M from the front of the fringe: M is the goal state; goal achieved.

Breadth-first Search:

Algorithm BREADTH: breadth-first search in state space.

  Let fringe be a list containing the initial state
  Loop
    if fringe is empty, return failure
    Node ← remove-first(fringe)
    if Node is a goal, then return the path from the initial state to Node
    else generate all successors of Node and add them to the back of fringe
  End Loop
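
A direct Python transcription of Algorithm BREADTH, as a minimal sketch assuming the successors/goal_test interface used in the RouteProblem example; collections.deque serves as the FIFO fringe:

    from collections import deque

    def breadth_first_search(problem):
        fringe = deque([[problem.initial]])       # list of paths, FIFO order
        while fringe:                             # loop while fringe is non-empty
            path = fringe.popleft()               # Node <- remove-first(fringe)
            state = path[-1]
            if problem.goal_test(state):
                return path                       # path from initial state to Node
            for _action, succ in problem.successors(state):
                fringe.append(path + [succ])      # new nodes go to the back
        return None                               # fringe empty: failure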

Properties of breadth-first search:

- Complete? Yes (if b is finite)
- Time? 1 + b + b^2 + b^3 + … + b^d + b(b^d − 1) = O(b^(d+1))
- Space? O(b^(d+1)) (keeps every node in memory)
- Optimal? Yes (if cost = 1 per step)
Space is the bigger problem (more than time).

Uniform-cost search (UCS):

Uniform-cost search is a search algorithm used to traverse weighted trees and graphs and find the cheapest path. UCS begins at a root node and continually expands nodes, always taking the node with the smallest total cost from the root, until it reaches the goal state. Uniform-cost search doesn't care about how many steps a path has, only about the total cost of the path. UCS with all step costs equal to one is identical to breadth-first search.

Uniform Cost Search (trace):

UCS is similar to BFS, except that it keeps the fringe sorted in ascending order of node cost, where cost is the path cost. The example graph (reconstructed from the slides) has edges S→A (1), S→B (5), S→C (15), A→D (10), B→D (5); S is the start state and D is the goal.

Fringe: [S:0]. Next node = head of fringe = S; S is not the goal. Expand S: successors {A, B, C}, sorted by path cost. Updated fringe: [A:1, B:5, C:15].
Fringe: [A:1, B:5, C:15]. Next node = A; A is not the goal. Expand A: successor {D}; sort by path cost. Updated fringe: [B:5, D:11, C:15].
Fringe: [B:5, D:11, C:15]. Next node = B; B is not the goal. Expand B: successor {D}; sort by path cost. Updated fringe: [D:10, D:11, C:15].
Fringe: [D:10, D:11, C:15]. Next node = D; D is the GOAL, with cost 10 = 5 + 5 along the path S → B → D. UCS always finds the cheapest solution.


Uniform-cost search:

Expand the least-cost unexpanded node.
Implementation: fringe = queue ordered by path cost. Equivalent to breadth-first search if all step costs are equal.
- Complete? Yes, if step cost ≥ ε
- Time? number of nodes with g ≤ cost of the optimal solution: O(b^(1+⌊C*/ε⌋)), where C* is the cost of the optimal solution
- Space? number of nodes with g ≤ cost of the optimal solution: O(b^(1+⌊C*/ε⌋))
- Optimal? Yes: nodes are expanded in increasing order of g(n)
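
A minimal Python sketch of UCS, using a binary heap as the fringe ordered by path cost (again assuming the RouteProblem interface from earlier):

    import heapq

    def uniform_cost_search(problem):
        fringe = [(0, [problem.initial])]         # (path cost g, path)
        while fringe:
            cost, path = heapq.heappop(fringe)    # least-cost unexpanded node
            state = path[-1]
            if problem.goal_test(state):          # goal test on expansion
                return cost, path
            for _action, succ in problem.successors(state):
                new_cost = cost + problem.step_cost(state, succ)
                heapq.heappush(fringe, (new_cost, path + [succ]))
        return None

On the road fragment given earlier, uniform_cost_search(RouteProblem('Arad', 'Bucharest')) returns cost 418 along the path Arad → Sibiu → Rimnicu Vilcea → Pitesti → Bucharest.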

Depth First Search - Method:

- Expand the root node first.
- Explore one branch of the tree before exploring another branch.
- If a leaf node does not represent a goal state, the search backtracks up to the next highest node that has an unexplored path.

DFS:

Depth-first search (DFS) is an algorithm for traversing or searching a tree or graph. One starts at the root (selecting some node as the root in the graph case) and explores as far as possible along each branch before backtracking.

DFS:

DFS is an uninformed search that starts from the root node of the search tree and goes deeper and deeper until a goal node is found, or until it hits a node that has no children. Then the search backtracks, returning to the most recent node it hasn't finished exploring.

Depth-First Search (trace):

The original slides step through depth-first search on the same tree with nodes A–U, starting from initial state A. With a LIFO fringe, newly generated successors are placed at the front.

Step 1: expand A → successors B, C, D; fringe (LIFO): B, C, D; visited: A
Step 2: expand B → successors E, F; fringe: E, F, C, D; visited: A, B
Step 3: expand E → successors K, L; fringe: K, L, F, C, D; visited: A, B, E
Step 4: expand K → successor S; fringe: S, L, F, C, D; visited: A, B, E, K
Step 5: expand S → no successors; backtrack; fringe: L, F, C, D; visited: A, B, E, K, S
Step 6: expand L → successor T; fringe: T, F, C, D; visited: A, B, E, K, S, L
Step 7: expand T → no successors; backtrack; fringe: F, C, D; visited: A, B, E, K, S, L, T
Step 8: expand F → successor M; fringe: M, C, D; visited: A, B, E, K, S, L, T, F
Step 9: expand M → no successors; backtrack; fringe: C, D; visited: A, B, E, K, S, L, T, F, M
Step 10: expand C → successors G, H; fringe: G, H, D; visited: A, B, E, K, S, L, T, F, M, C
Step 11: expand G → successor N; fringe: N, H, D; visited: A, B, E, K, S, L, T, F, M, C, G
Step 12: expand N → the goal state (U) is reached; search finished.

Depth-First Search (algorithm):

  Let fringe be a list containing the initial state
  Loop
    if fringe is empty, return failure
    Node ← remove-first(fringe)
    if Node is a goal, then return the path from the initial state to Node
    else generate all successors of Node and add them to the front of fringe
  End Loop
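
The same loop in Python; the only change from the BFS sketch is that newly generated nodes go to the front of the fringe. On graphs with loops this can descend forever, which is exactly the completeness problem discussed below:

    def depth_first_search(problem):
        fringe = [[problem.initial]]              # list of paths, LIFO order
        while fringe:
            path = fringe.pop(0)                  # Node <- remove-first(fringe)
            state = path[-1]
            if problem.goal_test(state):
                return path
            successors = [path + [s] for _a, s in problem.successors(state)]
            fringe = successors + fringe          # generated nodes go to the front
        return None                               # failure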

Properties of depth-first search:

- Complete? No: fails in infinite-depth spaces and in spaces with loops. Modified to avoid repeated states along the path, it is complete in finite spaces.
- Time? O(b^m): terrible if m is much larger than d.
- Space? O(bm), i.e., linear space!
- Optimal? No.

Depth limited search:

Like depth-first search, but the search is limited to a predefined depth. The depth of each state is recorded as it is generated. When picking the next state to expand, only those with depth less than or equal to the current limit are considered. Once all the nodes at a given depth have been explored, the depth limit can be incremented (this is the idea behind iterative deepening, below).

Depth-limited search:

= depth-first search with depth limit l, i.e., nodes at depth l have no successors.
Recursive implementation:
1. Determine the vertex where the search should start and assign the maximum search depth.
2. Check whether the current vertex is the goal state. If yes: return. If not: continue.
3. Check whether the current vertex is within the maximum search depth. If not: do nothing. If yes: expand the vertex, save all of its successors on a stack, call DLS recursively for all vertices on the stack, and go back to step 2.
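
A recursive Python sketch of these steps, using the problem interface from earlier (the explicit stack of the description becomes the recursion):

    def depth_limited_search(problem, state, limit, path=None):
        """Depth-first search that gives nodes at depth `limit` no successors."""
        path = path or [state]
        if problem.goal_test(state):              # step 2: goal check
            return path
        if limit == 0:                            # step 3: depth bound reached
            return None
        for _action, succ in problem.successors(state):
            result = depth_limited_search(problem, succ, limit - 1, path + [succ])
            if result is not None:                # a recursive call found the goal
                return result
        return None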

Iterative deepening search:

Repeatedly run depth-first search with an increasing depth cutoff:

  c ← 0
  until a solution is found do
    DFS with depth cutoff c
    c ← c + 1
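
In Python, reusing depth_limited_search from the previous sketch. Like the pseudocode, this loops forever if no solution exists:

    from itertools import count

    def iterative_deepening_search(problem):
        for c in count():                         # depth cutoff c = 0, 1, 2, ...
            result = depth_limited_search(problem, problem.initial, c)
            if result is not None:
                return result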

Iterative deepening search l = 0:

Iterative deepening search l = 0

Iterative deepening search l = 1:

Iterative deepening search l = 1

Iterative deepening search l = 2:

Iterative deepening search l = 2

Iterative deepening search l = 3:

Iterative deepening search l = 3

Iterative deepening search:

Number of nodes generated in a depth-limited search to depth d with branching factor b:
  N_DLS = b^0 + b^1 + b^2 + … + b^(d−2) + b^(d−1) + b^d
Number of nodes generated in an iterative deepening search to depth d with branching factor b:
  N_IDS = (d+1)·b^0 + d·b^1 + (d−1)·b^2 + … + 3·b^(d−2) + 2·b^(d−1) + 1·b^d
For b = 10, d = 5:
  N_DLS = 1 + 10 + 100 + 1,000 + 10,000 + 100,000 = 111,111
  N_IDS = 6 + 50 + 400 + 3,000 + 20,000 + 100,000 = 123,456
  Overhead = (123,456 − 111,111)/111,111 ≈ 11%

Properties of iterative deepening search:

- Complete? Yes
- Time? (d+1)·b^0 + d·b^1 + (d−1)·b^2 + … + b^d = O(b^d)
- Space? O(bd)
- Optimal? Yes, if step cost = 1

Summary of algorithms:

Summary of algorithms (b: branching factor; d: depth of the shallowest solution; m: maximum depth of the search space; l: depth limit):

Criterion    Breadth-first     Uniform-cost          Depth-first   Depth-limited   Iterative deepening
Complete?    Yes (b finite)    Yes (step cost ≥ ε)   No            No              Yes
Time         O(b^(d+1))        O(b^(1+⌊C*/ε⌋))       O(b^m)        O(b^l)          O(b^d)
Space        O(b^(d+1))        O(b^(1+⌊C*/ε⌋))       O(bm)         O(bl)           O(bd)
Optimal?     Yes (unit costs)  Yes                   No            No              Yes (unit costs)

Informed search algorithms:

Informed search algorithms

Outline:

- Best-first search
- Greedy best-first search
- A* search
- Heuristics
- Local search algorithms
- Hill-climbing search
- Simulated annealing search
- Local beam search
- Genetic algorithms

Best-first search:

Idea: use an evaluation function f(n) for each node; f(n) provides an estimate of the total cost. Expand the node n with the smallest f(n). Implementation: order the nodes in the fringe in increasing order of cost. Special cases: greedy best-first search and A* search.

Romania with straight-line dist.:

Romania with straight-line dist.

Greedy best-first search:

f(n) = estimate of the cost from n to the goal, e.g., f_SLD(n) = straight-line distance from n to Bucharest. Greedy best-first search expands the node that appears to be closest to the goal.

Greedy best-first search example:

Greedy best-first search example


Properties of greedy best-first search:

- Complete? No: can get stuck in loops.
- Time? O(b^m), but a good heuristic can give dramatic improvement.
- Space? O(b^m): keeps all nodes in memory.
- Optimal? No.

A* search:

Idea: avoid expanding paths that are already expensive. Evaluation function: f(n) = g(n) + h(n), where
- g(n) = cost so far to reach n
- h(n) = estimated cost from n to the goal
- f(n) = estimated total cost of the path through n to the goal
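
A minimal Python sketch of A* over the RouteProblem interface from earlier, with the heuristic h passed in as a function. This is the GRAPH-SEARCH variant with an explored set, whose optimality under a consistent heuristic is discussed below:

    import heapq

    def astar_search(problem, h):
        fringe = [(h(problem.initial), 0, [problem.initial])]   # (f, g, path)
        explored = set()
        while fringe:
            f, g, path = heapq.heappop(fringe)    # node with smallest f = g + h
            state = path[-1]
            if problem.goal_test(state):
                return g, path
            if state in explored:
                continue
            explored.add(state)
            for _action, succ in problem.successors(state):
                g2 = g + problem.step_cost(state, succ)
                heapq.heappush(fringe, (g2 + h(succ), g2, path + [succ]))
        return None

Dropping g and ordering the fringe by h alone turns the same loop into greedy best-first search.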

A* search example:

A * search example


Admissible heuristics:

A heuristic h(n) is admissible if for every node n, h(n) ≤ h*(n), where h*(n) is the true cost to reach the goal state from n. An admissible heuristic never overestimates the cost to reach the goal, i.e., it is optimistic. Example: h_SLD(n) (it never overestimates the actual road distance). Theorem: if h(n) is admissible, then A* using TREE-SEARCH is optimal.

Optimality of A* (proof):

Suppose some suboptimal goal G2 has been generated and is in the fringe. Let n be an unexpanded node in the fringe such that n is on a shortest path to an optimal goal G. We want to prove that f(n) < f(G2) (then A* will prefer n over G2).
- f(G2) = g(G2), since h(G2) = 0
- f(G) = g(G), since h(G) = 0
- g(G2) > g(G), since G2 is suboptimal
- f(G2) > f(G), from the above

Optimality of A* (proof):

Continuing: some suboptimal goal G2 has been generated and is in the fringe, and n is an unexpanded node in the fringe on a shortest path to an optimal goal G.
- f(G2) > f(G) (from the previous slide)
- h(n) ≤ h*(n), since h is admissible (an under-estimate)
- g(n) + h(n) ≤ g(n) + h*(n), adding g(n) to both sides
- f(n) ≤ f(G), since g(n) + h(n) = f(n) and g(n) + h*(n) = f(G)
- f(n) < f(G2), combining with the first line
Hence n is preferred over G2.

Consistent heuristics:

A heuristic is consistent if, for every node n and every successor n' of n generated by any action a,
  h(n) ≤ c(n, a, n') + h(n')
(this is the triangle inequality!). If h is consistent, we have
  f(n') = g(n') + h(n') = g(n) + c(n, a, n') + h(n') ≥ g(n) + h(n) = f(n)
so f(n') ≥ f(n): f is non-decreasing along any path. Theorem: if h(n) is consistent, A* using GRAPH-SEARCH is optimal.

Optimality of A*:

A* expands nodes in order of increasing f value, gradually adding "f-contours" of nodes: contour i contains all nodes with f ≤ f_i, where f_i < f_(i+1).

Properties of A*:

- Complete? Yes (unless there are infinitely many nodes with f ≤ f(G), which cannot happen if every step cost exceeds some ε > 0)
- Time/Space? Exponential in general; sub-exponential only when the heuristic error grows no faster than the logarithm of the true cost, |h(n) − h*(n)| ≤ O(log h*(n))
- Optimal? Yes
- Optimally efficient? Yes (no other algorithm with the same heuristic is guaranteed to expand fewer nodes)

Memory Bounded Heuristic Search: Recursive Best-First Search (RBFS):

How can we solve the memory problem of A* search? Idea: try something like depth-first search, but without forgetting everything about the branches we have partially explored: we remember the best f-value found so far in the branch we are deleting.

RBFS:

RBFS changes its mind very often in practice. This is because f = g + h becomes more accurate (less optimistic) as we approach the goal, so higher-level nodes have smaller f-values and are explored first. Problem: we should keep in memory whatever we can. RBFS maintains the best alternative f-value over fringe nodes that are not descendants of the current node, and uses it to decide whether to back up.

Simple Memory Bounded A*:

This is like A*, but when memory is full we delete the worst node (the one with the largest f-value). Like RBFS, we remember the best descendant in the branch we delete. If there is a tie (equal f-values), we delete the oldest nodes first. Simple MBA* finds the optimal reachable solution given the memory constraint, but time can still be exponential.

Admissible heuristics:

E.g., for the 8-puzzle:
- h1(n) = number of misplaced tiles
- h2(n) = total Manhattan distance (i.e., the number of squares each tile is from its desired location, summed over the tiles)
h1(S) = ? h2(S) = ?

Admissible heuristics:

E.g., for the 8-puzzle:
- h1(n) = number of misplaced tiles
- h2(n) = total Manhattan distance (i.e., the number of squares each tile is from its desired location, summed over the tiles)
h1(S) = 8; h2(S) = 3+1+2+2+2+3+3+2 = 18
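
The state S refers to a figure not reproduced here; as a sketch, the two heuristics can be computed as follows, assuming the usual textbook start state and goal layout (which reproduce the values 8 and 18 above):

    # 3x3 board as a flat tuple, read row by row; 0 marks the blank.
    GOAL  = (0, 1, 2, 3, 4, 5, 6, 7, 8)
    START = (7, 2, 4, 5, 0, 6, 8, 3, 1)     # assumed start state S

    def h1(state, goal=GOAL):
        """Number of misplaced tiles (the blank is not counted)."""
        return sum(1 for s, g in zip(state, goal) if s != 0 and s != g)

    def h2(state, goal=GOAL):
        """Total Manhattan distance of the tiles from their goal squares."""
        total = 0
        for i, tile in enumerate(state):
            if tile != 0:
                j = goal.index(tile)
                total += abs(i // 3 - j // 3) + abs(i % 3 - j % 3)
        return total

    print(h1(START), h2(START))              # -> 8 18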

Dominance:

If h2(n) ≥ h1(n) for all n (both admissible), then h2 dominates h1, and h2 is better for search: it is guaranteed to expand no more nodes. Typical search costs (average number of nodes expanded):
- d = 12: IDS = 3,644,035 nodes; A*(h1) = 227 nodes; A*(h2) = 73 nodes
- d = 24: IDS = too many nodes; A*(h1) = 39,135 nodes; A*(h2) = 1,641 nodes

Relaxed problems:

A problem with fewer restrictions on the actions is called a relaxed problem. The cost of an optimal solution to a relaxed problem is an admissible heuristic for the original problem. If the rules of the 8-puzzle are relaxed so that a tile can move anywhere, then h1(n) gives the length of the shortest solution. If the rules are relaxed so that a tile can move to any adjacent square, then h2(n) gives the length of the shortest solution.

Local search algorithms:

State space = set of "complete" configurations. Keep only the current node in memory. Local search is useful for solving optimization problems: often it is easy to find a solution, but hard to find the best solution.

Example: n-queens:

Example: n -queens Put n queens on an n × n board with no two queens on the same row, column, or diagonal

Hill Climbing:

Hill climbing is generate-and-test plus a direction in which to move, using a heuristic function to estimate how close a given state is to a goal state.

Hill Climbing:

Hill climbing is an optimization technique for solving computationally hard problems. It is used in problems with the property that the state description itself contains all the information needed for a solution. The algorithm is memory efficient, since it does not maintain a search tree. Hill climbing attempts to iteratively improve the current state by means of an evaluation function. Searching for a goal state = climbing to the top of a hill.

Simple Hill Climbing:

Algorithm:
1. Determine the successors of the current state.
2. Choose the successor of maximum goodness.
3. If the goodness of the best successor is less than the current state's goodness, stop.
4. Otherwise, make the best successor the current state and go to step 1.
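
A minimal Python sketch of these four steps; the successors and goodness functions are supplied by the problem, and the names are illustrative:

    def simple_hill_climbing(state, successors, goodness):
        while True:
            candidates = successors(state)               # step 1
            if not candidates:
                return state
            best = max(candidates, key=goodness)         # step 2
            if goodness(best) <= goodness(state):        # step 3 (also stops on plateaux)
                return state
            state = best                                 # step 4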

Hill Climbing (Gradient Search):

Considers all the moves from the current state and selects the best one as the next state.

Hill Climbing: Disadvantages:

Local maximum: a state that is better than all of its neighbours, but not better than some other states farther away.

Hill Climbing: Disadvantages:

Plateau: a flat area of the search space in which all neighbouring states have the same value.

Hill-climbing search:

Hill-climbing search Problem: depending on initial state, can get stuck in local maxima

Hill Climbing: Conclusion:

Hill climbing can be very inefficient in a large, rough problem space. A more global heuristic can help, at the cost of computational complexity. It is often useful when combined with other methods that get it started in the right general neighbourhood.

Simulated annealing search:

function SIMULATED-ANNEALING(problem, schedule)
  current ← INITIAL-STATE(problem)
  for t ← 1 to ∞ do
    temperature ← schedule[t]
    if temperature = 0 then return current
    next ← a randomly selected successor of current
    diff ← VALUE(next) − VALUE(current)
    if diff > 0 then current ← next
    else current ← next only with probability e^(diff/temperature)
  end
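
A minimal executable Python version of the same loop; here the cooling schedule is passed as a function of the time step t rather than indexed as an array:

    import math
    import random
    from itertools import count

    def simulated_annealing(initial, successors, value, schedule):
        current = initial
        for t in count(1):
            temperature = schedule(t)
            if temperature <= 0:                  # cooled down: stop
                return current
            nxt = random.choice(successors(current))
            diff = value(nxt) - value(current)    # > 0 means improvement
            # accept improvements always; accept bad moves with
            # probability e^(diff/temperature)
            if diff > 0 or random.random() < math.exp(diff / temperature):
                current = nxt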

Local beam search:

Local beam search Keep track of k states rather than just one. Start with k randomly generated states. At each iteration, all the successors of all k states are generated. If any one is a goal state, stop; else select the k best successors from the complete list and repeat.

Genetic algorithms:

Genetic algorithms A successor state is generated by combining two parent states Start with k randomly generated states ( population ) A state is represented as a string over a finite alphabet (often a string of 0s and 1s) Evaluation function ( fitness function ). Higher values for better states. Produce the next generation of states by selection, crossover, and mutation

Genetic algorithms: example (8-queens):

Fitness function: number of non-attacking pairs of queens (min = 0, max = 8 × 7/2 = 28). The probability of a state being selected for reproduction in the next generation is proportional to its fitness, e.g., 24/(24+23+20+11) = 31%, 23/(24+23+20+11) = 29%, etc.
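
A minimal Python sketch of this scheme for 8-queens, using the fitness function above. The digit-string encoding (digit i is the row of the queen in column i), the population size, and the mutation rate are illustrative assumptions:

    import random

    def fitness(state):
        """Number of non-attacking pairs of queens (max = 8*7/2 = 28)."""
        attacks = 0
        for i in range(8):
            for j in range(i + 1, 8):
                same_row  = state[i] == state[j]
                same_diag = abs(int(state[i]) - int(state[j])) == j - i
                if same_row or same_diag:
                    attacks += 1
        return 28 - attacks

    def genetic_algorithm(pop_size=20, generations=1000):
        population = [''.join(random.choice('01234567') for _ in range(8))
                      for _ in range(pop_size)]
        for _ in range(generations):
            weights = [fitness(s) for s in population]    # selection prop. to fitness
            next_gen = []
            for _ in range(pop_size):
                x, y = random.choices(population, weights=weights, k=2)
                c = random.randrange(1, 8)                # crossover point
                child = x[:c] + y[c:]
                if random.random() < 0.1:                 # mutation
                    i = random.randrange(8)
                    child = child[:i] + random.choice('01234567') + child[i+1:]
                next_gen.append(child)
            population = next_gen
            best = max(population, key=fitness)
            if fitness(best) == 28:                       # solution found
                return best
        return max(population, key=fitness)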
