Artificial Intelligence

Presentation Description

AI unit 2

Presentation Transcript

Slide 1:

Artificial Intelligence, Unit II. Presented by: Neelam Rawat (AI UNIT - II)

Slide 2:

UNIT – II Introduction to Search: Searching for solutions, Uninformed search strategies, Informed search strategies, Local search algorithms and optimization problems, Adversarial search, Search for games, Alpha-Beta pruning

Slide 3:

Terms related to AI problem-solving methodologies:
Problem: the question which is to be solved. To be solved, a problem needs to be precisely defined: the start state, the goal state, the other valid states, and the transitions.
Search space: the complete set of states, including the start and goal states, within which the answer to the problem is to be searched.
Search: the process of finding the solution in the search space. The input to a search algorithm is a problem and the output is a solution in the form of an action sequence.
Well-defined problem: a problem description has three major components: the initial state, the final state, and the space including a transition (or path) function. A path cost function assigns a numeric value to each path that indicates the goodness of that path. Sometimes a problem may have an additional component in the form of heuristic information.
Solution of the problem: a path from the initial state to the goal state. The movement from start states to goal states is guided by the transition rules. Among all the solutions, the one with the least path cost is called the optimal solution.

Slide 4:

STATE SPACE SEARCH
The state space representation forms the basis of most AI methods. Its structure corresponds to the structure of problem solving in two important ways:
-- It allows for a formal definition of a problem, as the need to convert some given situation into some desired situation using a set of permissible operations.
-- It permits us to define the process of solving a particular problem as a combination of known techniques (each represented as a rule defining a single step in the space) and search, the general technique of exploring the space to try to find some path from the current state to a goal state.
Search is a very important process in the solution of hard problems for which no more direct techniques are available.

Slide 5:

STATE SPACE SEARCH: STEPS
1. Define a state space that contains all the possible configurations of the relevant objects.
2. Specify the initial states.
3. Specify the goal states.
4. Specify a set of rules:
-- What are the unstated assumptions?
-- How general should the rules be?
-- How much knowledge for solutions should be in the rules?

Slide 6:

PRODUCTION SYSTEM
A knowledge representation formalism consisting of a collection of condition-action rules (production rules or operators), a database which is modified in accordance with the rules, and a production system interpreter which controls the operation of the rules, i.e., the 'control mechanism' of the production system, determining the order in which production rules are fired. A system that uses this form of knowledge representation is called a production system.
TYPES OF PRODUCTION SYSTEMS
-- Monotonic production system
-- Non-monotonic production system
-- Partially commutative production system
-- Commutative production system

Slide 7:

Monotonic Production Systems
A production system in which the application of a rule never prevents the later application of another rule that could also have been applied at the time the first rule was applied, i.e., the rules are independent.
Non-monotonic Production Systems
The condition that holds for monotonic production systems is not true here, i.e., the rules are dependent.
Partially Commutative Production Systems
A partially commutative production system has the property that if the application of a particular sequence of rules transforms state x into state y, then any allowable permutation of those rules also transforms state x into state y. These production systems are useful for solving ignorable problems (example: theorem proving); they can be implemented without the ability to backtrack to previous states when it is discovered that an incorrect path has been followed.
Commutative Production Systems
A commutative production system is a production system that is both monotonic and partially commutative.

Slide 8:

PRODUCTION SYSTEM – ADVANTAGES
-- Provide an excellent tool for structuring AI programs.
-- Highly modular, because the individual rules can be added, removed or modified independently.
-- Expressed in natural form, so the statements contained in the knowledge base should be a recording of an expert thinking out loud.
PRODUCTION SYSTEM – DISADVANTAGES
-- It is very difficult to analyze the flow of control within a production system because the individual rules do not call each other.
EXAMPLE: 8 PUZZLE as a production system
-- Production set
-- Working memory content (start state, goal state)
-- Control regime

Slide 9:

PRODUCTION SYSTEM – EXAMPLE : 8 PUZZLE as Production system

Slide 10:

PRODUCTION SYSTEM - COMPONENTS
-- The set of production rules
-- Working memory
-- The recognize-act cycle
SET OF PRODUCTION RULES: used to define the problem knowledge. Each rule is a condition-action pair: the condition is a pattern that defines when the action may be taken.
WORKING MEMORY: contains a description of the current state of the world. The pattern of the description is matched against the condition part of the production rules to decide which actions may be taken. Memory contents may be altered as a result of the actions taken.

Slide 11:

RECOGNIZE-ACT CYCLE
-- Initialize the working memory with the initial problem description.
-- Match patterns in working memory to conditions in the production rules. Rules with matched conditions form a conflict set.
-- Conflict resolution is used to select a production rule from the conflict set.
-- The selected rule is fired and the contents of the working memory are changed.
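The cycle above can be sketched as a small interpreter loop. The rules and working-memory contents below are hypothetical, and conflict resolution is reduced to "fire the first matching rule":

```python
# A minimal recognize-act cycle: match rules against working memory,
# resolve conflicts by taking the first match, fire, and repeat.
def run(rules, memory, limit=100):
    for _ in range(limit):
        conflict = [r for r in rules if r['cond'](memory)]   # match phase
        if not conflict:
            return memory                                    # nothing to fire
        memory = conflict[0]['action'](memory)               # fire first rule
    return memory

# Toy rules over a set-valued working memory (hypothetical contents).
rules = [
    {'cond': lambda m: 'dirty' in m,
     'action': lambda m: (m - {'dirty'}) | {'clean'}},
    {'cond': lambda m: 'clean' in m and 'done' not in m,
     'action': lambda m: m | {'done'}},
]
print(run(rules, {'dirty'}))
```

The loop terminates when no rule's condition matches the working memory, which is exactly the "conflict set is empty" case.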

Slide 12:

EXAMPLES OF AI
-- Tic-Tac-Toe
-- Water Jug problem
-- 8-Puzzle problem
-- 8-Queens problem
-- Chess problem
-- Missionaries and Cannibals problem
-- Tower of Hanoi problem
-- Travelling Salesperson problem
-- Magic Square
-- Language understanding problems: Monkey and Banana problem, Cryptarithmetic problem, Block World problem

Slide 13:

TIC-TAC-TOE PROBLEM
Tic-tac-toe is a game involving two players. It is played by putting 'X' or '0' alternately by the two players in any one of the 9 board positions, numbered as below:
1 2 3
4 5 6
7 8 9

Slide 14:

TIC-TAC-TOE PROBLEM
Playing means making the mark of 'X' or '0' in any one square. The player who first makes his marks in a horizontal, vertical or diagonal straight line is declared the winner. From an AI viewpoint, the problem of playing tic-tac-toe is formulated as follows: The start state is all 9 squares blank. Player 1 can play in any one square. As the game proceeds, the remaining blank squares are the choices which can be marked by the players. The data structure used to represent the board is a 9-element vector, with element positions Board position = {1,2,3,4,5,6,7,8,9}. An element contains the value 0 if the corresponding square is blank, 1 if filled with '0', and 2 if filled with 'X'. Hence the starting state is {0,0,0,0,0,0,0,0,0}. The goal states (winning combinations) are board positions having '0' or 'X' throughout one of the element combinations {1,2,3}, {4,5,6}, {7,8,9}, {1,4,7}, {2,5,8}, {3,6,9}, {1,5,9}, {3,5,7}.
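As a quick sketch of this vector representation (assuming the encoding above: 0 = blank, 1 = '0', 2 = 'X'), a winner test over the eight listed combinations might look like:

```python
# Winning triples from the slide, in 1-indexed board positions.
WINS = [(1,2,3), (4,5,6), (7,8,9), (1,4,7), (2,5,8), (3,6,9), (1,5,9), (3,5,7)]

def winner(board):
    """Return '0', 'X', or None for a 9-element board vector."""
    for a, b, c in WINS:
        v = board[a - 1]                 # slide positions are 1-indexed
        if v != 0 and v == board[b - 1] == board[c - 1]:
            return '0' if v == 1 else 'X'
    return None

print(winner([2, 0, 1, 1, 2, 0, 0, 0, 2]))  # X
```

The example board is the first goal state from the next slide: X occupies positions 1, 5 and 9.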

Slide 15:

TIC-TAC-TOE PROBLEM
Hence two goal states can be {2,0,1,1,2,0,0,0,2} (X wins along the 1-5-9 diagonal) and {2,2,2,0,1,0,1,0,0} (X wins along the top row).

Slide 16:

WATER-JUG PROBLEM
“You are given two jugs, a 4-litre one and a 3-litre one. Neither has any measuring markers on it. There is a pump that can be used to fill the jugs with water. How can you get exactly 2 litres of water into the 4-litre jug?”
State: (x, y), where x = 0, 1, 2, 3 or 4 is the amount in the 4-litre jug and y = 0, 1, 2 or 3 is the amount in the 3-litre jug.
Start state: (0, 0). Goal state: (2, n) for any n. We are attempting to end up in a goal state.

Slide 17:

WATER-JUG PROBLEM – a solution trace:
4-gallon jug   3-gallon jug
0              0
4              0
1              3
1              0
0              1
4              1
2              3
(Figure: the 4-gallon jug, the 3-gallon jug and the pump.)

Slide 18:

WATER-JUG PROBLEM – Production rules:
Sl No | Current state | Next state | Description
1 | (x,y) if x < 4 | (4,y) | Fill the 4-gallon jug
2 | (x,y) if y < 3 | (x,3) | Fill the 3-gallon jug
3 | (x,y) if x > 0 | (x-d, y) | Pour some water out of the 4-gallon jug
4 | (x,y) if y > 0 | (x, y-d) | Pour some water out of the 3-gallon jug
5 | (x,y) if x > 0 | (0, y) | Empty the 4-gallon jug
6 | (x,y) if y > 0 | (x, 0) | Empty the 3-gallon jug on the ground
7 | (x,y) if x+y >= 4 and y > 0 | (4, y-(4-x)) | Pour water from the 3-gallon jug into the 4-gallon jug until the 4-gallon jug is full
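One way to search this rule space is a breadth-first sweep over the (x, y) states. The sketch below uses concrete pour amounts rather than the symbolic d of rules 3 and 4, and adds the symmetric pour from the 4-gallon jug into the 3-gallon jug:

```python
from collections import deque

# Breadth-first search over (x, y) water-jug states, x in 0..4, y in 0..3.
def successors(x, y):
    return {
        (4, y), (x, 3),                  # fill either jug
        (0, y), (x, 0),                  # empty either jug
        (min(4, x + y), y - (4 - x) if x + y >= 4 else 0),  # pour 3 -> 4
        (x - (3 - y) if x + y >= 3 else 0, min(3, x + y)),  # pour 4 -> 3
    }

def water_jug(start=(0, 0)):
    parent, frontier = {start: None}, deque([start])
    while frontier:
        s = frontier.popleft()
        if s[0] == 2:                    # goal: 2 litres in the 4-litre jug
            path = []
            while s is not None:
                path.append(s)
                s = parent[s]
            return path[::-1]
        for t in successors(*s):
            if t not in parent:
                parent[t] = s
                frontier.append(t)

print(water_jug())
```

Because BFS explores states in order of depth, the returned path uses the minimum number of rule applications (six pours/fills for this puzzle).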

Slide 19:

8 PUZZLE PROBLEM
“It has a 3x3 board with 9 block spaces, out of which 8 are occupied by tiles bearing the numbers 1 to 8; one space is left blank. A tile adjacent to the blank space can move into it. We have to arrange the tiles in a sequence.” The start state is any arrangement of tiles, and the goal state is the tiles arranged in the specific sequence given below:
START STATE    GOAL STATE
2 8 3          1 2 3
1 6 4          8   4
7   5          7 6 5

Slide 20:

8-QUEENS PROBLEM
We have 8 queens and an 8x8 chess board having alternate black and white squares. The queens are placed on the chess board. A queen attacks any other queen placed on the same row, column or diagonal. We have to find a placement of the queens on the chess board such that no queen attacks any other queen.
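A standard sketch of this problem places one queen per row and backtracks whenever a column or diagonal is attacked:

```python
# Place 8 non-attacking queens by backtracking, one queen per row.
# cols[r] holds the column index of the queen in row r.
def queens(n=8, cols=()):
    if len(cols) == n:
        return cols                      # all rows filled: a solution
    for c in range(n):
        # Safe iff no earlier queen shares the column or a diagonal
        # (diagonal attack: column gap equals row gap).
        if all(c != q and abs(c - q) != len(cols) - r
               for r, q in enumerate(cols)):
            sol = queens(n, cols + (c,))
            if sol:
                return sol
    return None                          # dead end: backtrack

print(queens())
```

The recursion mirrors depth-first search over partial placements: each level of the call stack is one row of the board.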

Slide 21:

CHESS GAME PROBLEM
In a chess game problem, the start state is the initial configuration of the chess board. The final or goal state is any board configuration which is a winning position for either player. Whenever a player moves any piece, it leads to a different state of the game. Each position can be described by an 8-by-8 array. The initial position is the game opening position. A goal position is any position in which the opponent has no legal move and his or her king is under attack. Legal moves can be described by a set of rules:
-- Left sides are matched against the current state.
-- Right sides describe the new resulting state.
The state space is the set of legal positions.

Slide 22:

CHESS GAME PROBLEM
It is estimated that the chess game has more than 10^120 possible states. Game playing means finding (searching for) a sequence of valid moves which brings the board from the start state to any of the possible final states.
(Figure: initial state of the chess game, Black on one side and White on the other.)

Slide 23:

THE MISSIONARIES & CANNIBALS PROBLEM
Three missionaries and three cannibals on the left bank of a river seek to cross the river to the right bank. They have a boat which can carry up to two people at a time. All missionaries and cannibals are able to row the boat. If at any time the cannibals outnumber the missionaries on either bank, the cannibals will eat the missionaries.
Initial state: M M M C C C on the left bank. Goal state: M M M C C C on the right bank.

Slide 24:

THE MISSIONARIES & CANNIBALS PROBLEM
The objective of the solution is to find the sequence of transfers from one bank of the river to the other using the boat. We can form production rules as in the water jug problem. Let a missionary be denoted by 'M' and a cannibal by 'C'.
Production Rules:
Rule 1: (0,M): one missionary sails the boat from bank 1 to bank 2
Rule 2: (M,0): one missionary sails the boat from bank 2 to bank 1
Rule 3: (M,M): two missionaries sail the boat from bank 1 to bank 2
Rule 4: (M,M): two missionaries sail the boat from bank 2 to bank 1
Rule 5: (M,C): one missionary and one cannibal sail the boat from bank 1 to bank 2
Rule 6: (C,M): one missionary and one cannibal sail the boat from bank 2 to bank 1
Rule 7: (C,C): two cannibals sail the boat from bank 1 to bank 2
Rule 8: (C,C): two cannibals sail the boat from bank 2 to bank 1
Rule 9: (0,C): one cannibal sails the boat from bank 1 to bank 2
Rule 10: (C,0): one cannibal sails the boat from bank 2 to bank 1

Slide 25:

THE MISSIONARIES & CANNIBALS PROBLEM – Sequence of rules:
Rule  | Bank1  | Bank2  | Boat
Start | 3M, 3C | 0      | bank1
5     | 2M, 2C | 1M, 1C | bank2
2     | 3M, 2C | 1C     | bank1
7     | 3M     | 3C     | bank2
10    | 3M, 1C | 2C     | bank1
3     | 1M, 1C | 2M, 2C | bank2
6     | 2M, 2C | 1M, 1C | bank1
3     | 2C     | 3M, 1C | bank2
10    | 3C     | 3M     | bank1
7     | 1C     | 3M, 2C | bank2
10    | 2C     | 3M, 1C | bank1
7     | 0      | 3M, 3C | bank2
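The eleven-crossing sequence above can be recovered mechanically with a breadth-first search over states; the sketch below encodes a state as the counts on bank 1 plus the boat's side:

```python
from collections import deque

# BFS over missionaries-and-cannibals states (m, c, b): missionaries and
# cannibals on bank 1, and b = 1 if the boat is at bank 1.
MOVES = [(1, 0), (2, 0), (0, 1), (0, 2), (1, 1)]   # people carried per crossing

def safe(m, c):
    # Missionaries are never outnumbered on either bank.
    return (m == 0 or m >= c) and (3 - m == 0 or 3 - m >= 3 - c)

def missionaries():
    start, goal = (3, 3, 1), (0, 0, 0)
    parent, frontier = {start: None}, deque([start])
    while frontier:
        s = frontier.popleft()
        if s == goal:
            path = []
            while s is not None:
                path.append(s)
                s = parent[s]
            return path[::-1]
        m, c, b = s
        d = 1 if b else -1                          # crossing direction
        for dm, dc in MOVES:
            t = (m - d * dm, c - d * dc, 1 - b)
            if 0 <= t[0] <= 3 and 0 <= t[1] <= 3 and safe(t[0], t[1]) \
                    and t not in parent:
                parent[t] = s
                frontier.append(t)

print(len(missionaries()) - 1, "crossings")  # 11 crossings
```

BFS guarantees the minimum number of crossings, which matches the eleven rule firings in the table.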

Slide 26:

TOWER OF HANOI PROBLEM
There are three pegs and three disks of differing sizes (A, B, C). The disks have holes in them so they can be stacked on the pegs. The disks can be moved from any peg to another. Only the top disk on a peg can be moved, and it can never be placed on a smaller disk. The disks start out on peg 1; the goal is to move them all to peg 3, one disk at a time, by transferring disks among the pegs.
(Figures: initial state with all disks on peg 1; goal state with all disks on peg 3.)
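The standard recursive solution moves the n-1 smaller disks out of the way, moves the largest disk, then restacks the smaller ones:

```python
# Recursive Tower of Hanoi: move n disks from peg src to peg dst via peg via.
def hanoi(n, src, dst, via, moves=None):
    if moves is None:
        moves = []
    if n > 0:
        hanoi(n - 1, src, via, dst, moves)   # clear the n-1 smaller disks
        moves.append((src, dst))             # move the largest free disk
        hanoi(n - 1, via, dst, src, moves)   # restack the smaller disks
    return moves

print(hanoi(3, 1, 3, 2))  # 7 moves: 2^3 - 1
```

For n disks the recursion generates exactly 2^n - 1 moves, which is optimal.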

Slide 27:

TRAVELLING SALESPERSON PROBLEM
“A salesman has a list of cities, each of which he must visit exactly once. There are direct roads between each pair of cities on the list. Find the route the salesman should follow for the shortest possible round trip that both starts and finishes at any one of the cities.”
(Figure: five cities A, B, C, D, E with edge costs 1, 10, 5, 5, 5, 15.)
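A brute-force sketch simply generates every ordering of the remaining cities and tests its cost. The distance matrix below is hypothetical (the figure's exact edge assignments are not recoverable), using four cities to keep the enumeration small:

```python
from itertools import permutations

# Brute-force TSP over a hypothetical symmetric distance matrix.
dist = {('A', 'B'): 1, ('A', 'C'): 10, ('A', 'D'): 5,
        ('B', 'C'): 5, ('B', 'D'): 5, ('C', 'D'): 15}

def d(u, v):
    return dist.get((u, v)) or dist[(v, u)]

def tsp(cities, start='A'):
    rest = [c for c in cities if c != start]
    # Try every ordering of the remaining cities; keep the cheapest round trip.
    best = min(permutations(rest),
               key=lambda p: sum(d(a, b)
                                 for a, b in zip((start,) + p, p + (start,))))
    return (start,) + best + (start,)

print(tsp(['A', 'B', 'C', 'D']))
```

This is exactly why blind enumeration fails to scale: for n cities there are (n-1)! candidate tours, which is why the later slides recommend BFS-style approaches only for a small number of cities.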

Slide 28:

MAGIC SQUARE PROBLEM
We are given a square with the same number of rows and columns, where consecutive numbers are filled into the blank spaces such that the numbers in each row, each column and both diagonals add up to the same value. The numbers must be consecutive and each number can be used only once. Board layout as a magic square, in which each row, column and diagonal adds to 15:
8 1 6
3 5 7
4 9 2
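The magic property is easy to verify mechanically; the check below sums every row, column and diagonal and compares them against the first row's total:

```python
# Verify the magic-square property: all rows, columns and diagonals
# sum to the same target (15 for the classic 3x3 layout).
square = [[8, 1, 6],
          [3, 5, 7],
          [4, 9, 2]]

def is_magic(sq):
    n = len(sq)
    target = sum(sq[0])
    lines = [list(row) for row in sq]                              # rows
    lines += [[sq[r][c] for r in range(n)] for c in range(n)]      # columns
    lines += [[sq[i][i] for i in range(n)],                        # main diagonal
              [sq[i][n - 1 - i] for i in range(n)]]                # anti-diagonal
    return all(sum(line) == target for line in lines)

print(is_magic(square))  # True
```

A test like this is the natural goal predicate if the magic square is built by a generate-and-test search.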

Slide 29:

LANGUAGE UNDERSTANDING PROBLEMS
-- Monkey and Banana problem
-- Cryptarithmetic problem
-- Block World problem

Slide 30:

MONKEY & BANANA PROBLEM
A monkey and a bunch of bananas are present in a room. The bananas are hanging from the ceiling. The monkey cannot reach the bananas directly. However, in the room there are a chair and a stick. The monkey can reach the bananas by standing on the chair. We have to find the sequence of events by which the monkey can reach the bananas.
(Figures: initial state and goal state.)

Slide 31:

CRYPTARITHMETIC PUZZLE PROBLEM
It is a puzzle involving decoding of digits represented by characters. It takes the form of an arithmetic equation where the digits are distinctly represented by characters. One such problem is shown below:
  R I N G
+ D O O R
---------
B E L L
These types of problems require constraint satisfaction. The constraints are that all the laws of arithmetic must hold and any letter must represent the same digit wherever it appears in the puzzle.
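A brute-force generate-and-test sketch of this constraint-satisfaction idea is shown below. It is demonstrated on the classic TWO + TWO = FOUR instance rather than the slide's RING + DOOR puzzle, since that instance's solvability is not stated:

```python
from itertools import permutations

# Brute-force cryptarithmetic: try digit assignments until the arithmetic
# holds, with distinct digits and no leading zeros.
def crypt(a, b, total):
    letters = sorted(set(a + b + total))
    for digits in permutations(range(10), len(letters)):
        env = dict(zip(letters, digits))
        if 0 in (env[a[0]], env[b[0]], env[total[0]]):
            continue                      # no leading zeros
        val = lambda w: int(''.join(str(env[ch]) for ch in w))
        if val(a) + val(b) == val(total):
            return env                    # a satisfying assignment
    return None

env = crypt('TWO', 'TWO', 'FOUR')
print(sorted(env.items()))
```

Real solvers prune with constraint propagation instead of enumerating all assignments, but the goal test (arithmetic holds, one digit per letter) is the same.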

Slide 32:

BLOCK WORLD PROBLEM
There are some cubic blocks, each placed either on the table or on another block, forming a particular configuration. We have to move the blocks to arrange them into some other given configuration using the minimum number of moves. The restriction on moving blocks is that only the top block of a stack can be shifted in a move. It can be placed either on the table or on top of any other block.
(Figure: START and GOAL block configurations, each manipulated by a robot hand.)

Slide 33:

SEARCH TECHNIQUES
The process of solving an AI problem is to search for a path from the start state to the goal state. Search techniques help in finding the most viable path to reach the goal state, and also make the entire process efficient and economical. Broadly, search is of two types:
-- BLIND or UNGUIDED or UNINFORMED search
-- HEURISTIC or GUIDED or INFORMED search

Slide 34:

UNINFORMED SEARCH – DEFINITION
Uninformed search uses no information about the likely “direction” of the goal node(s). It uses only the information available in the problem definition:
-- Initial state
-- Search operators
-- Goal test
-- Path cost

Slide 35:

UNINFORMED SEARCH – METHODS/TECHNIQUES
-- Breadth First Search (BFS)
-- Depth First Search (DFS)
-- Depth Limited Search
-- Iterative Deepening Search (IDS)
-- Bidirectional Search

Slide 36:

BREADTH FIRST SEARCH
In this type of search, the state space is represented in the form of a tree. The solution is obtained by traversing the tree. The nodes of the tree represent the start state, various intermediate states and the goal state. Breadth-first search goes through the tree level by level, visiting all of the nodes on the top level first, then all the nodes on the second level, and so on. The fringe is a FIFO queue, i.e., new successors go at the end.
(Figure: tree with root node and levels 1–3.)

Slide 37:

BREADTH FIRST SEARCH
(Figure: tree with start node A and its descendants B through L.) In the given tree, the nodes will be expanded in the order A, B, C, D, E, F, G, H, I, J, K, L.

Slide 38:

BREADTH FIRST SEARCH - ALGORITHM
Step 1: Put the initial node on a list START.
Step 2: If (START is empty) or (START = GOAL), terminate the search.
Step 3: Remove the first node from START. Call this node ‘a’.
Step 4: If (a = GOAL), terminate the search with success.
Step 5: Else if node ‘a’ has successors, generate all of them and add them at the tail of START.
Step 6: Go to Step 2.
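The steps above translate directly into code with the START list as a FIFO queue; the adjacency dictionary below is a hypothetical example graph:

```python
from collections import deque

# Breadth-first search following the slide's algorithm: the frontier is a
# FIFO queue of paths, and new successors go at the tail.
def bfs(graph, start, goal):
    frontier = deque([[start]])          # Step 1: initial node on START
    visited = {start}
    while frontier:                      # Step 2: stop when START is empty
        path = frontier.popleft()        # Step 3: remove the first node
        if path[-1] == goal:             # Step 4: success test
            return path
        for nxt in graph.get(path[-1], []):
            if nxt not in visited:       # Step 5: add successors at the tail
                visited.add(nxt)
                frontier.append(nxt and path + [nxt])
    return None                          # search terminated without success

graph = {'A': ['B', 'C', 'D'], 'B': ['E', 'F'], 'C': ['G'], 'D': ['H']}
print(bfs(graph, 'A', 'G'))  # ['A', 'C', 'G']
```

Storing whole paths in the queue makes path reconstruction trivial; a parent map uses less memory for large graphs.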

Slide 39:

BREADTH FIRST SEARCH
Time complexity of BFS = O(b^d)
Space complexity of BFS = O(b^d)
FIGURE - 1

Slide 40:

BREADTH FIRST SEARCH
BFS is complete: if the shallowest goal node is at some finite depth d, breadth-first search will eventually find it after expanding all shallower nodes (provided the branching factor b is finite). The shallowest goal node is not necessarily the optimal one; technically, breadth-first search is optimal if the path cost is a non-decreasing function of the depth of the node (for example, when all actions have the same cost). Figure 1 lists the time and memory required for a breadth-first search with branching factor b = 10, for various values of the solution depth d. The table assumes that 10,000 nodes can be generated per second and that a node requires 1,000 bytes of storage. Many search problems fit roughly within these assumptions (give or take a factor of 100) when run on a modern personal computer.

Slide 41:

BREADTH-FIRST SEARCH - DISADVANTAGES
The memory requirements are a bigger problem for breadth-first search than the execution time. 31 hours would not be too long to wait for the solution to an important problem at depth 8, but few computers have the terabyte of main memory it would take. Fortunately, there are other search strategies that require less memory. The second lesson is that the time requirements are still a major factor. If your problem has a solution at depth 12, then (given our assumptions) it will take 35 years for breadth-first search (or indeed any uninformed search) to find it. In general, exponential-complexity search problems cannot be solved by uninformed methods for any but the smallest instances.

Slide 42:

BREADTH-FIRST SEARCH – ADVANTAGES
In situations where a solution exists, breadth-first search is guaranteed to find it. Besides this, where there are multiple solutions, BFS finds the minimal solution, i.e., the one requiring the minimum number of steps. If the goal state is found while searching the shorter paths, the longer paths need not be searched, saving time and effort.
Example: the TRAVELLING SALESPERSON problem can be solved using BFS. It simply explores all the possible paths in the tree and ultimately comes out with the shortest path desired. However, this strategy works well only if the number of cities is small.

Slide 43:

BREADTH-FIRST SEARCH – 8 PUZZLE

Slide 44:

DEPTH-FIRST SEARCH
In this type of search, we explore one branch of a tree until the solution is found, or we decide to terminate the search because a dead end is met, some previous state is encountered, or the process becomes longer than the set time limit. If any of these situations is encountered and the process is terminated, backtracking occurs: a fresh search is done on some other branch of the tree, and the process is repeated until the goal state is reached. This technique is known as depth-first search. It generates the left-most successor of the root node and expands it until a dead end is reached. The search proceeds immediately to the deepest level of the search tree, or until the goal is encountered.

Slide 45:

DEPTH-FIRST SEARCH
(Figure: tree with root A and its descendants B through L.) The sequence of explored nodes in the given tree will be A, B, E, F, C, G, H, K, L, D, I, J. DFS stores only the current path it is pursuing, so the space complexity is a linear function of the depth:
SPACE COMPLEXITY = O(d)
The time complexity of DFS is the same as that of BFS:
TIME COMPLEXITY = O(b^d)

Slide 46:

DEPTH-FIRST SEARCH – ALGORITHM
Step 1: Put the initial node on a list START.
Step 2: If (START is empty) or (START = GOAL), terminate the search.
Step 3: Remove the first node from START. Call this node ‘a’.
Step 4: If (a = GOAL), terminate the search with success.
Step 5: Else if node ‘a’ has successors, generate all of them and add them at the beginning of START.
Step 6: Go to Step 2.
NOTE: This strategy can be implemented with a LAST IN FIRST OUT (LIFO) queue, also known as a STACK.
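Because placing successors at the head of START is equivalent to a call stack, the same algorithm is often written recursively; the graph below is a hypothetical example:

```python
# Depth-first search: the recursion plays the role of the LIFO STACK in
# the slide's algorithm, with the path argument guarding against cycles.
def dfs(graph, start, goal, path=None):
    path = (path or []) + [start]
    if start == goal:
        return path
    for nxt in graph.get(start, []):
        if nxt not in path:              # avoid revisiting along this path
            found = dfs(graph, nxt, goal, path)
            if found:
                return found
    return None                          # dead end: backtrack

graph = {'A': ['B', 'C'], 'B': ['E', 'F'], 'C': ['G']}
print(dfs(graph, 'A', 'G'))  # ['A', 'C', 'G']
```

Note the order of exploration: the whole B subtree (E, then F) is exhausted before C is tried, exactly as the slide describes.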

Slide 47:

DEPTH-FIRST SEARCH - ADVANTAGES
The advantage of depth-first search is that its memory requirement is only linear in the depth of the search graph, in contrast with breadth-first search, which requires more space. The reason is that the algorithm only needs to store a stack of nodes on the path from the root to the current node. The time complexity of a depth-first search to depth d is O(b^d), since it generates the same set of nodes as breadth-first search, but simply in a different order. Thus, practically, depth-first search is time-limited rather than space-limited. If depth-first search finds a solution without exploring much of the space, the time and space it takes will be very small.

Slide 48:

DEPTH-FIRST SEARCH – DISADVANTAGES
The disadvantage of depth-first search is the possibility that it may go down the left-most path forever; even a finite graph can generate an infinite tree. One solution to this problem is to impose a cutoff depth on the search. The ideal cutoff is the solution depth d, but this value is rarely known in advance of actually solving the problem. If the chosen cutoff depth is less than d, the algorithm will fail to find a solution, whereas if the cutoff depth is greater than d, a large price is paid in execution time, and the first solution found may not be an optimal one. Depth-first search is not guaranteed to find a solution, and there is no guarantee of finding a minimal solution if more than one solution exists.

Slide 49:

DEPTH-FIRST SEARCH – 8 PUZZLE

Slide 50:

DEPTH-LIMITED SEARCH
The problem of unbounded trees can be alleviated by supplying DFS with a predetermined depth limit l; that is, nodes at depth l are treated as if they have no successors. This approach is called depth-limited search. Depth-first search (DFS) is the special case of depth-limited search with l = ∞. The depth limit solves the infinite-path problem. Unfortunately, it also introduces an additional source of incompleteness if we choose l < d, i.e., if the shallowest goal is beyond the depth limit. Depth-limited search will also be non-optimal if we choose l > d. Its time complexity is O(b^l) and its space complexity is O(bl).
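Adding the depth limit to recursive DFS is a one-line change: a node at the limit simply reports failure instead of expanding. The graph below is a hypothetical example with the goal at depth 3:

```python
# Depth-limited search: nodes at depth limit l are treated as having
# no successors, so search below the cutoff never happens.
def dls(graph, node, goal, limit):
    if node == goal:
        return [node]
    if limit == 0:
        return None                      # cutoff: do not expand further
    for nxt in graph.get(node, []):
        found = dls(graph, nxt, goal, limit - 1)
        if found:
            return [node] + found
    return None

graph = {'A': ['B', 'C'], 'B': ['D'], 'D': ['G']}
print(dls(graph, 'A', 'G', 2))  # None: the goal at depth 3 is beyond l = 2
print(dls(graph, 'A', 'G', 3))  # ['A', 'B', 'D', 'G']
```

The first call shows the incompleteness the slide warns about when l < d; raising the limit to d recovers the solution.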

Slide 51:

DEPTH-FIRST ITERATIVE DEEPENING SEARCH
Depth-first iterative deepening search first performs a depth-first search to depth one, then starts over, executing a complete depth-first search to depth two, and continues to run depth-first searches to successively greater depths until a solution is found. Depth-first iterative deepening (DFID) search combines the best features of breadth-first search and depth-first search. Like DFS, its memory requirements are very modest, O(bd) to be precise; and like BFS, it is complete when the branching factor is finite and optimal when the path cost is a non-decreasing function of the depth of the node.
TIME COMPLEXITY = O(b^d)
SPACE COMPLEXITY = O(bd)
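Iterative deepening is just a loop over depth-limited searches with an increasing limit; the example graph is hypothetical:

```python
# Depth-limited DFS, reused as the inner routine of iterative deepening.
def dls(graph, node, goal, limit):
    if node == goal:
        return [node]
    if limit == 0:
        return None
    for nxt in graph.get(node, []):
        found = dls(graph, nxt, goal, limit - 1)
        if found:
            return [node] + found
    return None

def ids(graph, start, goal, max_depth=20):
    for limit in range(max_depth + 1):   # depth 0, 1, 2, ... until found
        found = dls(graph, start, goal, limit)
        if found:
            return found                 # the first hit is a shallowest solution
    return None

graph = {'A': ['B', 'C'], 'B': ['D'], 'C': ['G']}
print(ids(graph, 'A', 'G'))  # ['A', 'C', 'G']
```

Because every limit is tried in increasing order, the first solution found is at the shallowest possible depth, which is where the BFS-like optimality comes from.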

DEPTH-FIRST ITERATIVE DEEPENING SEARCH - Example (figures: the same tree searched to successively greater depth limits)

Slide 56:

DEPTH-FIRST ITERATIVE DEEPENING SEARCH
Number of nodes generated in a depth-limited search to depth d with branching factor b:
N(DLS) = b^0 + b^1 + b^2 + … + b^(d-2) + b^(d-1) + b^d
Number of nodes generated in an iterative deepening search to depth d with branching factor b:
N(IDS) = (d+1)b^0 + d·b^1 + (d-1)b^2 + … + 3b^(d-2) + 2b^(d-1) + 1·b^d
For b = 10, d = 5:
N(DLS) = 1 + 10 + 100 + 1,000 + 10,000 + 100,000 = 111,111
N(IDS) = 6 + 50 + 400 + 3,000 + 20,000 + 100,000 = 123,456
Overhead = (123,456 − 111,111) / 111,111 = 11%
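The two sums above are easy to check numerically:

```python
# Reproduce the slide's node counts for branching factor b = 10, depth d = 5.
b, d = 10, 5
n_dls = sum(b**i for i in range(d + 1))                 # b^0 + ... + b^d
n_ids = sum((d + 1 - i) * b**i for i in range(d + 1))   # (d+1)b^0 + ... + b^d
print(n_dls, n_ids)                                     # 111111 123456
print(round(100 * (n_ids - n_dls) / n_dls))             # 11 (per cent overhead)
```

The regenerated shallow levels are cheap because almost all of the work in an exponential tree is at the deepest level, which both searches visit exactly once.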

Slide 57:

BI-DIRECTIONAL SEARCH
The idea behind bidirectional search is to run two simultaneous searches, one forward from the initial state and the other backward from the goal, stopping when the two searches meet in the middle.

Slide 58:

(Figure: a search frontier growing from the initial state meets a frontier growing from the goal state.)

Slide 59:

BI-DIRECTIONAL SEARCH
Bidirectional search is advantageous over unidirectional search because, for a tree of depth d and branching factor b, unidirectional search requires on the order of b^d comparisons, while bidirectional search requires b^(d/2) comparisons in each direction. As b^(d/2) + b^(d/2) < b^d, bidirectional search has lower time complexity. However, it is applicable only in those situations where the search tree is completely defined, i.e., all actions in the state space are reversible.
Space complexity = O(b^(d/2))
Time complexity = O(b^(d/2))
NOTE: If both searches are BFS, the solution is complete and optimal; with other strategies, completeness and optimality may be sacrificed.
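A sketch of the two-frontier idea is below; the graph is a hypothetical chain, and the backward search runs over a reversed copy of the graph so that its actions are the required reversals:

```python
from collections import deque

# Two breadth-first frontiers, one from each end, alternating one
# expansion per side until a node appears in both searches.
def bidirectional(graph, start, goal):
    rev = {}
    for u, vs in graph.items():          # reversed edges for the backward search
        for v in vs:
            rev.setdefault(v, []).append(u)
    fwd, bwd = {start: None}, {goal: None}
    qf, qb = deque([start]), deque([goal])
    while qf and qb:
        for q, seen, other, g in ((qf, fwd, bwd, graph), (qb, bwd, fwd, rev)):
            u = q.popleft()
            if u in other:
                return u                 # meeting node of the two searches
            for v in g.get(u, []):
                if v not in seen:
                    seen[v] = u
                    q.append(v)
    return None

graph = {'A': ['B'], 'B': ['C'], 'C': ['D'], 'D': ['G']}
print(bidirectional(graph, 'A', 'G'))  # C
```

The full path would be spliced from the two parent maps, one half from each direction; that bookkeeping is the "additional logic" the next slide calls out as a disadvantage.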

Slide 60:

BI-DIRECTIONAL SEARCH – ADVANTAGES
The merit of bidirectional search is its speed: the sum of the time taken by the two searches (forward and backward) is much less than the O(b^d) complexity. It also requires less memory.
BI-DIRECTIONAL SEARCH – DISADVANTAGES
Implementation of the bidirectional search algorithm is difficult, because additional logic must be included to decide which search tree to extend at each step. The goal state must be known in advance. The algorithm must be efficient enough to find the intersection of the two search trees. It is not always possible to search backward through possible states.

Slide 61:

SEARCH STRATEGIES – CRITERIA
-- Completeness
-- Optimality
-- Complexity
-- Possibility to backtrack
-- Informedness

Slide 62:

BLIND SEARCH
Depth-first search and breadth-first search are examples of blind (or uninformed) search strategies. Breadth-first search produces an optimal solution (eventually, and if one exists), but it still searches blindly through the state space. Neither uses any knowledge about the specific domain in question to search through the state space in a more directed manner. If the search space is big, blind search can simply take too long to be practical, or can significantly limit how deep we are able to look into the space.

Slide 63:

Be Not Afraid Of Falling Be Afraid Of Not Trying Go Ahead With Question

Slide 64:

HEURISTIC SEARCH
The assumption behind blind search is that we have no way of telling whether a particular search direction is likely to lead us to the goal or not. The key idea behind informed or heuristic search algorithms is to exploit a task-specific measure of goodness, to try to either reach the goal more quickly or find a more desirable goal state.
Heuristic: from the Greek for “find”, “discover”. “Heuristics are criteria, methods, or principles for deciding which among several alternative courses of action promises to be the most effective in order to achieve some goal.” (Judea Pearl, Heuristics, 1984)
Problem knowledge comes in the form of a heuristic function h'(n):
-- h'(n) assigns to each node n an estimate of the optimal path length from n to a goal node
-- we assume that h' is non-negative and that h'(n) = 0 iff n is a goal node

Slide 65:

HEURISTIC SEARCH: example for the 8-puzzle problem:
h'1(n) = number of misplaced tiles
h'2(n) = sum of the Manhattan distances* of each tile from its goal position
Example (for the board shown in the slide's figure): h'1(n) = 7, h'2(n) = 4+0+2+3+1+3+1+3 = 17
*Given two planar points with coordinates P1 = (x1, y1) and P2 = (x2, y2), the Manhattan distance is defined as:
D(P1, P2) = |x2 − x1| + |y2 − y1|
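Both heuristics are a few lines of code. The board used in the slide's figure is not recoverable, so the sketch below evaluates them on the start/goal boards from the earlier 8-puzzle slide instead:

```python
# Two 8-puzzle heuristics; 0 marks the blank and is not counted.
GOAL = (1, 2, 3,
        8, 0, 4,
        7, 6, 5)                         # goal layout from the 8-puzzle slide

def h1(state):
    """h'1: number of misplaced tiles."""
    return sum(1 for s, g in zip(state, GOAL) if s != 0 and s != g)

def h2(state):
    """h'2: sum of Manhattan distances of each tile from its goal square."""
    goal_pos = {v: (i // 3, i % 3) for i, v in enumerate(GOAL)}
    total = 0
    for i, v in enumerate(state):
        if v != 0:
            r, c = i // 3, i % 3
            gr, gc = goal_pos[v]
            total += abs(r - gr) + abs(c - gc)
    return total

start = (2, 8, 3,
         1, 6, 4,
         7, 0, 5)                        # start layout from the 8-puzzle slide
print(h1(start), h2(start))  # 4 5
```

Note that h'2 dominates h'1 tile by tile (a misplaced tile is at Manhattan distance at least 1), which is why the Manhattan heuristic is the stronger guide.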

Slide 66:

INFORMED SEARCH / HEURISTIC SEARCH - ADVANTAGES
A search strategy which searches the most promising branches of the state space first can:
-- find a solution more quickly,
-- find solutions even when there is limited time available,
-- often find a better solution, since more profitable parts of the state space can be examined while ignoring the unprofitable parts.
A search strategy which is better than another at identifying the most promising branches of a search space is said to be more informed.

Slide 67:

HEURISTIC or GUIDED or INFORMED search techniques:
-- Generate and Test
-- Best First Search: Greedy best first search; Optimal search and the A* algorithm; Recursive best first search
-- Problem reduction: AND-OR graphs; the AO* algorithm
-- Local search algorithms: Beam search; Branch and bound search
-- Hill Climbing
-- Constraint Satisfaction
-- Means-Ends Analysis
-- Mini-max search
-- Alpha-Beta pruning

Slide 68:

GENERATE AND TEST
This approach is to generate a result and test whether it is the desired solution of the given problem.
ALGORITHM: Do the following until a satisfactory solution is obtained or no more solutions can be generated:
Step 1: Generate a possible solution.
Step 2: Test to see if this is a solution.
Step 3: If a solution has been found, quit. Otherwise return to Step 1.

Slide 69:

GENERATE AND TEST – METHODS
Method 1: Generate a random solution and test it. If it is found wrong, create another solution and test it again.
Method 2: Apply a systematic approach to generating solutions, which is sure to find the right solution if it exists. The difficulty is that it may take a lot of time if the problem space is large.
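The systematic variant (Method 2) reduces to a loop over an ordered generator with a test predicate; the integer-square-root example below is a toy illustration:

```python
# A minimal systematic generate-and-test loop: generate candidates in
# order and stop at the first one that passes the test.
def generate_and_test(candidates, is_solution):
    for candidate in candidates:         # Step 1: generate a possible solution
        if is_solution(candidate):       # Step 2: test it
            return candidate             # Step 3: quit on success
    return None                          # no more solutions can be generated

# Toy use: find the square root of 1369 by pure search.
print(generate_and_test(range(100), lambda x: x * x == 1369))  # 37
```

Swapping `range(100)` for a random generator gives Method 1; the structure of the loop is unchanged.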

Slide 70:

Neelam Rawat, AI UNIT - II BEST FIRST SEARCH It is the general heuristic-based search technique. In best-first search, an evaluation function (which corresponds to a heuristic function) is attached to every node in the graph representation of the problem. The value of the evaluation function depends on the cost or distance of the current node from the goal node. The decision of which node to expand next depends upon the value of this evaluation function.

Slide 71:

Neelam Rawat, AI UNIT - II BEST FIRST SEARCH Tree structure for best-first search [figure in the original slide: start node S has children A, B and C; A has children D and E; B has children F and G; C has child H; H has children I and J; and I has children K, L and M, where L is the goal node. The number beside each node (e.g. A:3, B:6, C:5, H:7, I:5, L:0) is its evaluation function value.]

Slide 72:

Neelam Rawat, AI UNIT - II BEST FIRST SEARCH: Steps of the search process are given as:
Step 1: node expanded S; children (A:3), (B:6), (C:5); available nodes (A:3), (B:6), (C:5); node chosen (A:3)
Step 2: node expanded A; children (D:9), (E:8); available nodes (B:6), (C:5), (D:9), (E:8); node chosen (C:5)
Step 3: node expanded C; children (H:7); available nodes (B:6), (D:9), (E:8), (H:7); node chosen (B:6)
Step 4: node expanded B; children (F:12), (G:14); available nodes (D:9), (E:8), (H:7), (F:12), (G:14); node chosen (H:7)
Step 5: node expanded H; children (I:5), (J:6); available nodes (D:9), (E:8), (F:12), (G:14), (I:5), (J:6); node chosen (I:5)
Step 6: node expanded I; children (K:1), (L:0), (M:2); available nodes (D:9), (E:8), (F:12), (G:14), (J:6), (K:1), (L:0), (M:2); node chosen (L:0)
NOTE: at any step, the most promising node, having the least value of the evaluation function, is chosen for expansion

Slide 73:

Neelam Rawat, AI UNIT - II BEST FIRST SEARCH – ALGORITHM Step 1: Put the initial node on a list START Step 2: If (START is empty) or (START = GOAL), then terminate search Step 3: Remove the first node from START. Call this node ‘a’ Step 4: If (a = GOAL), terminate search with success Step 5: Else if node ‘a’ has successors, generate all of them. Find out how far they are from the goal node. Sort all the children generated so far by remaining distance from the goal Step 6: Name this list START 1 Step 7: Replace START with START 1 Step 8: GOTO step 2
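The algorithm above can be sketched with a priority queue standing in for the sorted START list. The small graph and its heuristic values below are invented for illustration (lower values are more promising); they are not the tree from the earlier slide.

```python
# Best-first search sketch: always expand the node with the smallest
# evaluation (heuristic) value.

import heapq

def best_first_search(start, goal, successors, h):
    queue = [(h(start), start)]          # the START list, kept sorted
    visited = set()
    while queue:                          # Step 2: stop when START is empty
        _, node = heapq.heappop(queue)    # Step 3: remove the first node
        if node == goal:                  # Step 4: success
            return node
        if node in visited:
            continue
        visited.add(node)
        for child in successors(node):    # Step 5: generate successors
            heapq.heappush(queue, (h(child), child))
    return None

# Illustrative graph and heuristic values (assumptions)
graph = {'S': ['A', 'B', 'C'], 'A': ['D'], 'B': [], 'C': ['G'], 'D': [], 'G': []}
h_values = {'S': 10, 'A': 3, 'B': 6, 'C': 5, 'D': 9, 'G': 0}
found = best_first_search('S', 'G', lambda n: graph[n], lambda n: h_values[n])
```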

Slide 74:

Neelam Rawat, AI UNIT - II GREEDY BEST FIRST SEARCH: The greedy best-first search technique expands the node which appears closest to the goal. Node expansion is based purely on the estimated distance from the goal. It is used in route-finding problems. This technique has the limitation that it is neither optimal nor complete, as it can start down an infinite path and never return to try other possibilities.

Slide 75:

Neelam Rawat, AI UNIT - II A* ALGORITHM The A* algorithm is a special case of best-first search. It provides general guidelines about how to estimate goal distances for general search graphs. At each node along a path to the goal node, the A* algorithm generates all the successor nodes and computes an estimate of the distance (cost) from the start node through each of the successors. It then chooses the successor with the shortest estimated distance for expansion. Its evaluation function combines the cost of the current node from the start state with the estimated cost of the current node to the goal node. The form of the evaluation function for A* is f(n) = g(n) + h(n) where f(n) = evaluation function value, g(n) = cost (or distance) of the current node from the start node, h(n) = estimated cost of the current node from the goal node

Slide 76:

Neelam Rawat, AI UNIT - II A* ALGORITHM – SEARCH TREE [Figure in the original slide: the same search tree as for best-first search, but each node now carries a cost function value g (edge costs from the start) and an estimation function value f = g + h; e.g. A:6, B:8, C:11, D:14, E:13, F:18, H:18, with S the start node and L the goal node.]

Slide 77:

Neelam Rawat, AI UNIT - II A* ALGORITHM: Steps of the search process are given as:
Step 1: node expanded S; children (A:6), (B:8), (C:11); available nodes (A:6), (B:8), (C:11); node chosen (A:6)
Step 2: node expanded (A:6); children (D:14), (E:13); available nodes (B:8), (C:11), (D:14), (E:13); node chosen (B:8)
Step 3: node expanded (B:8); children (F:18), (G:NA); available nodes (C:11), (D:14), (E:13), (F:18); node chosen (C:11)
Step 4: node expanded (C:11); children (H:18); available nodes (D:14), (E:13), (F:18), (H:18); node chosen (E:13)
Step 5: node expanded (E:13); no children; available nodes (D:14), (F:18), (H:18); node chosen (D:14)

Slide 78:

Neelam Rawat, AI UNIT - II A* ALGORITHM Step 1: Put the initial node on a list START Step 2: If (START is empty) or (START = GOAL), then terminate search Step 3: Remove the first node from START. Call this node ‘a’ Step 4: If (a = GOAL), terminate search with success Step 5: Else if node ‘a’ has successors, generate all of them. Compute the estimation function value f of each successor by totalling its cost function value g and its heuristic value h. Sort the list by f value Step 6: Name this list START 1 Step 7: Replace START with START 1 Step 8: GOTO step 2 NOTE: This algorithm is admissible, i.e., it is sure to find an optimal path if one exists, provided the heuristic h never overestimates the true cost to the goal
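The steps above can be sketched as follows. The edge costs and heuristic values in the small graph are assumptions chosen so that h never overestimates (the admissibility condition from the NOTE).

```python
# A* sketch: order the frontier by f(n) = g(n) + h(n).

import heapq

def a_star(start, goal, neighbors, h):
    open_list = [(h(start), 0, start, [start])]   # entries: (f, g, node, path)
    best_g = {start: 0}
    while open_list:
        f, g, node, path = heapq.heappop(open_list)
        if node == goal:
            return path, g
        for nxt, cost in neighbors(node):
            g2 = g + cost                          # cost so far: g(n)
            if g2 < best_g.get(nxt, float('inf')):
                best_g[nxt] = g2
                heapq.heappush(open_list, (g2 + h(nxt), g2, nxt, path + [nxt]))
    return None, float('inf')

# Illustrative graph: edge costs and admissible heuristic (assumptions)
edges = {'S': [('A', 1), ('B', 4)], 'A': [('B', 2), ('G', 6)],
         'B': [('G', 1)], 'G': []}
h_est = {'S': 4, 'A': 3, 'B': 1, 'G': 0}
path, cost = a_star('S', 'G', lambda n: edges[n], lambda n: h_est[n])
```

Here the direct move A→G costs 6, but A* still finds the cheaper route S→A→B→G of total cost 4, illustrating why combining g and h beats greedy search.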

Slide 79:

Neelam Rawat, AI UNIT - II RECURSIVE BEST FIRST SEARCH It is a simple recursive algorithm which combines standard best-first search with recursive depth-first search. It differs from the standard algorithm in that it uses only linear space. Like recursive depth-first search, it expands nodes in depth-first order, but for further expansion of any node it follows the pattern of best-first search

Slide 80:

Neelam Rawat, AI UNIT - II PROBLEM REDUCTION TECHNIQUE: AND-OR GRAPH There are certain AI problems which can be decomposed into several other problems. The AND-OR graph is useful for finding the solution of such problems. The problems are solved by breaking the problem into a set of smaller sub-problems, all of which must be solved in order to solve the complete problem. In the tree representation of the state space, the sub-problems generated from a main problem create an ‘AND’ arc. One arc may point to any number of successor nodes, all of which must be solved in order to get the complete solution.

Slide 81:

Neelam Rawat, AI UNIT - II PROBLEM REDUCTION TECHNIQUE: AND-OR GRAPH [Figure in the original slide: an AND-OR graph for parse tree generation in English, decomposing a sentence S into np and vp, an np into n or art + n, with words such as ‘a’, ‘the’, ‘man’, ‘dog’, ‘likes’ and ‘bites’ at the leaves.]

Slide 82:

Neelam Rawat, AI UNIT - II AND-OR GRAPH

Slide 83:

Neelam Rawat, AI UNIT - II AO* ALGORITHM – In the AO* algorithm, expanded nodes are re-examined so that the current best path can be selected. The working of the AO* algorithm is illustrated in the figure as follows:

Slide 84:

Neelam Rawat, AI UNIT - II AO* ALGORITHM Referring to the figure: the initial node is expanded and D is marked initially as the promising node. D is expanded, producing an AND arc E-F. The f' value of D is updated to 10. Going backwards, we can see that the AND arc B-C is better; it is now marked as the current best path. B and C have to be expanded next. This process continues until a solution is found or all paths have led to dead ends, indicating that there is no solution. In the A* algorithm, the path from one node to another is always the one of lowest cost, and it is independent of the paths through other nodes. The algorithm for performing a heuristic search of an AND-OR graph is given below. Unlike the A* algorithm, which uses two lists OPEN and CLOSED, the AO* algorithm uses a single structure G. G represents the part of the search graph generated so far. Each node in G points down to its immediate successors and up to its immediate predecessors, and also carries the value h', the estimated cost of a path from itself to a set of solution nodes. The cost of getting from the start node to the current node, g, is not stored as in the A* algorithm, because it is not possible to compute a single such value when there may be many paths to the same state. In the AO* algorithm, h' serves as the estimate of the goodness of a node. A threshold value called FUTILITY is also used: if the estimated cost of a solution becomes greater than FUTILITY, the search is abandoned as too expensive to be practical.

Slide 85:

Neelam Rawat, AI UNIT - II AO* ALGORITHM Step 1: Place the start node ‘s’ on an open list Step 2: Using the search tree constructed so far, compute the most promising solution Step 3: Select a node n, remove n from open and place it on closed Step 4: If n is a terminal goal node, label n as solved. If the solution of n results in any of n’s ancestors being solved, label all those ancestors as solved. If the start node s is solved, exit with success. Remove from open all nodes with a solved ancestor NOTE: The main difference between the A* (A star) and AO* (AO star) algorithms is that A* is an OR-graph algorithm while AO* is an AND-OR graph algorithm. An OR-graph algorithm finds only one solution path (i.e. either this OR this OR this), but an AND-OR graph algorithm can build a solution by ANDing two or more branches together.

Slide 86:

Neelam Rawat, AI UNIT - II LOCAL SEARCH ALGORITHM Algorithms which operate by using a single current state and generally move only to neighbours of that state. The paths followed by the search are not retained. Advantages: Memory requirement is low They normally find a reasonably good solution quickly, even in a big search space ALGORITHM: Step 1: Start with the initial state. If it is not the goal state, go to step 2 Step 2: Repeat the following till either the goal state is reached or the applicable operators are exhausted: Select a yet-unapplied operator and apply it to produce a new state which moves in the direction of increasing value of the evaluation function Evaluate the new state. If it is the goal state, or if no operator generates a state having a higher value of the objective function than the previous one, then terminate

Slide 87:

Neelam Rawat, AI UNIT - II BRANCH AND BOUND SEARCH This is a type of local search algorithm. It applies to problems having a graph search space where more than one alternative path exists between two nodes. An example of such a problem is the travelling salesperson problem. It is called branch and bound because each move in this search process generates one path at a time, keeping track of the best circuit or closed path generated so far. It stores this value as the “bound” for future use. If the algorithm determines that the best possible extension of a branch will have a greater cost than the bound, it eliminates that partial path and all its extensions. On average it reduces the search space from N! to about 1.26^N
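The idea can be sketched on a tiny travelling-salesperson instance. The distance matrix below is an assumption; the bound kept is the cost of the best complete tour found so far, and any partial path already costing at least that much is pruned together with all its extensions.

```python
# Branch-and-bound sketch for a small symmetric TSP.

def tsp_branch_and_bound(dist):
    n = len(dist)
    best = {'cost': float('inf'), 'tour': None}

    def extend(path, cost):
        if cost >= best['cost']:                 # prune: cannot beat the bound
            return
        if len(path) == n:                       # close the circuit
            total = cost + dist[path[-1]][path[0]]
            if total < best['cost']:
                best['cost'], best['tour'] = total, path + [path[0]]
            return
        for city in range(n):
            if city not in path:                 # branch on each unvisited city
                extend(path + [city], cost + dist[path[-1]][city])

    extend([0], 0)
    return best['tour'], best['cost']

# Illustrative 4-city distance matrix (assumption)
matrix = [[0, 2, 9, 10],
          [2, 0, 6, 4],
          [9, 6, 0, 3],
          [10, 4, 3, 0]]
tour, total = tsp_branch_and_bound(matrix)
```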

Slide 88:

Neelam Rawat, AI UNIT - II BRANCH AND BOUND SEARCH – ALGORITHM Step 1: Place the start node, with zero path length, on the queue Step 2: Until the queue is empty or a goal state has been found: Determine if the first path in the queue contains a goal node If the first path contains a goal node, exit with success If the first path does not contain a goal node, remove the path and form new paths by extending the removed path by one step Compute the cost of the new paths and add them to the queue Sort the paths in the queue Step 3: Otherwise exit with failure

Slide 89:

Neelam Rawat, AI UNIT - II BEAM SEARCH Beam search permits searching to be done on a multiprocessor machine, thereby reducing the computations. The reduction in computation is achieved by purging some paths and retaining only the most promising ones. It is restricted in the sense that the amount of memory available for storing the set of alternative search nodes is limited, and in the sense that non-promising nodes can be pruned at any step in the search. The pruning of non-promising nodes is determined by problem-specific heuristics. The set of most promising, or best alternative, search nodes is called the “beam”. A beam search takes three components as its input: a problem to be solved, a set of heuristic rules for pruning, and a memory with a limited available capacity. The problem is usually represented as a graph and contains a set of nodes in which one or more of the nodes represents a goal. The heuristic rules are specific to the problem domain and prune unfavourable nodes from memory. The memory is where the “beam” is stored; when the memory is full and a node is to be added to the beam, the most costly node is deleted, so that the memory limit is not exceeded.

Slide 90:

Neelam Rawat, AI UNIT - II BEAM SEARCH – TREE STRUCTURE [Figure in the original slide: a root node with five children A:8, B:3, C:5, D:9 and E:10, searched with a beam of width w = 3.] At each level, heuristic functions are applied to reduce the number of paths to be explored. The width of the beam is fixed, and whatever the depth of the tree, the number of alternatives to be scanned is the product of the width and the depth. In the tree above, nodes B and C are discarded to bring the retained nodes within the width of the beam, w = 3

Slide 91:

Neelam Rawat, AI UNIT - II BEAM SEARCH – EXAMPLE :

Slide 92:

Neelam Rawat, AI UNIT - II ALGORITHM FOR BEAM SEARCH: Step 1: Let the width of the beam = w Step 2: Put the initial node on a list START Step 3: If (START = EMPTY) or (START = GOAL), terminate search Step 4: Remove the first node from START, call this node ‘a’ Step 5: If (a = GOAL), terminate search with success Step 6: Else if node ‘a’ has successors, generate all of them and add them at the tail of START Step 7: Use the heuristic function to rank and sort all elements of START Step 8: Determine the nodes to be expanded. The number of nodes should not be greater than w. Name this list START 1 Step 9: Replace START with START 1 Step 10: GOTO step 3
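The algorithm above can be sketched level by level: expand everything in the beam, pool the children, and keep only the w best by heuristic value. The tree and heuristic values below are illustrative assumptions (smaller = more promising).

```python
# Beam search sketch: keep at most w nodes per level.

def beam_search(start, goal, successors, h, w):
    beam = [start]                       # the "beam": at most w nodes
    while beam:
        if goal in beam:
            return goal
        # expand every node in the beam and pool all children
        children = [c for node in beam for c in successors(node)]
        if not children:
            return None                  # nothing left to explore
        # rank by heuristic and keep only the w most promising
        beam = sorted(children, key=h)[:w]
    return None

# Illustrative tree and heuristic values (assumptions)
tree = {'R': ['A', 'B', 'C', 'D', 'E'], 'A': ['G'], 'D': ['X'], 'E': ['Y'],
        'B': [], 'C': [], 'G': [], 'X': [], 'Y': []}
h_values = {'R': 0, 'A': 3, 'B': 8, 'C': 9, 'D': 5, 'E': 10,
            'G': 0, 'X': 4, 'Y': 6}
found = beam_search('R', 'G', lambda n: tree.get(n, []), lambda n: h_values[n], w=3)
```

With w = 3, nodes C and E are pruned at the first level, yet the goal G is still reached through A; with a poorly chosen w or heuristic, the goal branch itself could be pruned, which is exactly the incompleteness discussed on the next slide.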

Slide 93:

Neelam Rawat, AI UNIT - II BEAM SEARCH – ADVANTAGES & DISADVANTAGES Beam search has the advantage of potentially reducing the computation, and hence the time, of a search. As well, the memory consumption of the search is far less than that of its underlying search methods. The main disadvantages of beam search are that the search may not result in an optimal goal and may not even reach a goal at all. In fact, the beam search algorithm terminates in two cases: a required goal node is reached, or a goal node is not reached and there are no nodes left to be explored. Beam search has found success in the practical areas of speech recognition, vision, planning, and machine learning

Slide 94:

Neelam Rawat, AI UNIT - II HILL CLIMBING This algorithm, also called a discrete optimization algorithm, uses a simple heuristic function: the distance of the node from the goal. The ordering of choices is a heuristic measure of the remaining distance one has to traverse to reach the goal state. In practice, there is no difference between the hill climbing method and the DFS technique except that the children of the node that has been expanded are sorted by the remaining distance.

Slide 95:

Neelam Rawat, AI UNIT - II HILL CLIMBING – ALGORITHM Step 1: Put the initial node on a list START Step 2: If (START is empty) or (START = GOAL), then terminate search Step 3: Remove the first node from START. Call this node ‘a’ Step 4: If (a = GOAL), terminate search with success Step 5: Else if node ‘a’ has successors, generate all of them. Find out how far they are from the goal node. Sort them by the remaining distance from the goal and add them to the beginning of START Step 6: GOTO step 2
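A common simplification of the algorithm above keeps only the current state and always moves to the best neighbour, stopping when no neighbour improves the score. The objective used here, maximising f(x) = -(x - 3)^2 on the integers, is a toy assumption for illustration.

```python
# Hill climbing sketch: single current state, greedy moves to the best
# neighbour, terminating at a local maximum.

def hill_climb(start, neighbors, score):
    current = start
    while True:
        best = max(neighbors(current), key=score, default=None)
        if best is None or score(best) <= score(current):
            return current                # no neighbour improves: local maximum
        current = best

def f(x):
    return -(x - 3) ** 2                  # single peak at x = 3

peak = hill_climb(0, lambda x: [x - 1, x + 1], f)
```

On this single-peaked objective the climb reaches the true maximum; on the local-maximum, plateau and ridge landscapes discussed later, the same loop would stop short.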

Slide 96:

Neelam Rawat, AI UNIT - II HILL CLIMBING – SEARCH TREE [Figure in the original slide: a search tree from a START node to the GOAL node, with heuristic values such as 8, 7, 3, 2.9, 2.7 and 2 attached to the nodes along the path.]

HILL CLIMBING - EXAMPLE:

HILL CLIMBING - EXAMPLE [Figure in the original slide: hill climbing on the 8-puzzle from a start board to the goal board, scoring each state with f(n) = -(number of tiles out of place); the scores along the solution path improve through -5, -4, -3, -2 and -1 to 0 at the goal.] Neelam Rawat, AI UNIT - II

Slide 98:

Neelam Rawat, AI UNIT - II HILL CLIMBING – USE The hill climbing technique is used in some of our day-to-day activities. For example, while listening to somebody playing a flute, the tone and volume controls are adjusted step by step in whichever direction makes the music more melodious PROBLEMS IN HILL CLIMBING Local maxima: peaks that aren’t the highest point in the space Plateaus: the space has a broad flat region that gives the search algorithm no direction (random walk) Ridges: flat like a plateau, but with drop-offs to the sides; steps to the North, East, South and West may go down, but a step to the NW may go up. Image from: http://classes.yale.edu/fractals/CA/GA/Fitness/Fitness.html

Slide 99:

Neelam Rawat, AI UNIT - II PROBLEMS IN HILL CLIMBING Problems: local maxima, plateaus, ridges Remedies: Random restart: keep restarting the search from random locations until a goal is found. Problem reformulation : reformulate the search space to eliminate these problematic features Note: Some problem spaces are great for hill climbing and others are terrible

Slide 100:

Neelam Rawat, AI UNIT - II CONSTRAINT SATISFACTION Constraint satisfaction problems are those in which a solution must be found without violating a given set of constraints. For solving problems in this area, human beings use extensive domain-specific and heuristic knowledge. Suggestive examples of constraint satisfaction are design problems in manufacturing, the movement of semi-finished goods on assembly lines, etc. Cryptarithmetic problems are typical constraint satisfaction problems. To explain, let us consider the example: S E N D + M O R E = M O N E Y Here, the constraints are: -- all alphabets used are to have different numeric values -- since an addition operation is involved, the rules of addition are to be adhered to

Slide 101:

Neelam Rawat, AI UNIT - II The solution is to find the values of the alphabets M, O, N, E, Y, S, R and D. It is possible to find the values of the alphabets either by the brute force method or by minimizing the search space through applying the constraints of the problem. A program to solve it by the brute force search method is given below. The problem can be re-written as: SEND + MORE = MONEY, or SEND + MORE – MONEY = 0
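The original slide's program is not reproduced in this transcript, so the following is a sketch of one possible brute-force solver: it tries every assignment of distinct digits to the eight letters, rejecting leading zeros for S and M as the rules of addition require.

```python
# Brute-force solver for SEND + MORE = MONEY using itertools.permutations.

from itertools import permutations

def solve_send_more_money():
    for s, e, n, d, m, o, r, y in permutations(range(10), 8):
        if s == 0 or m == 0:                      # no leading zeros
            continue
        send = 1000 * s + 100 * e + 10 * n + d
        more = 1000 * m + 100 * o + 10 * r + e
        money = 10000 * m + 1000 * o + 100 * n + 10 * e + y
        if send + more == money:
            return {'S': s, 'E': e, 'N': n, 'D': d,
                    'M': m, 'O': o, 'R': r, 'Y': y}
    return None

solution = solve_send_more_money()
```

This examines up to 10P8 = 1,814,400 permutations, which is exactly why the slide suggests applying the constraints (for instance, the carry into the leftmost column forces M = 1) to shrink the search space instead.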

Slide 102:

Neelam Rawat, AI UNIT - II MEANS-ENDS-ANALYSIS Means-Ends Analysis considers the present position as the current state and the objective as the goal state. It considers the differences between these two states and finds ways of reducing the difference between them. The current state embraces all the facts known about the present state, including a specification of the current problem. Consider the following examples. In a travel problem the current state and the goal state are defined by physical locations. In a robot assembly problem the current state and the goal state are defined by the raw materials and the finished product respectively. In a geometry problem the current state is all that is known, both general and specific. The goal state is all that is known together with the problem set.

Slide 103:

Neelam Rawat, AI UNIT - II MEANS-ENDS-ANALYSIS AI problem solving is goal-based: the solution of a problem can be described by finding a sequence of actions that lead to a desirable goal. The Means-Ends Analysis (MEA) technique is a strategy to control search in problem-solving. Given a current state and a goal state, an action is chosen which will reduce the difference between the two. The action is performed on the current state to produce a new state, and the process is recursively applied to this new state and the goal state. Note that, in order for MEA to be effective, the goal-seeking system must have a means of associating with any kind of detectable difference those actions that are relevant to reducing that difference. It must also have means of detecting the progress it is making (the changes in the differences between the actual and the desired state), as some attempted sequences of actions may fail and, hence, some alternative sequences may be tried.

Slide 104:

Neelam Rawat, AI UNIT - II MEANS-ENDS-ANALYSIS - ALGORITHM To perform means-ends analysis, Step 1: Until the goal is reached or no more procedures are available, - Describe the current state, the goal state, and the difference between the two. - Use the difference between the current state and goal state, possibly with the description of the current state or goal state, to select a promising procedure. - Use the promising procedure and update the current state. Step 2: If the goal is reached, announce success; otherwise, announce failure.
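The loop above can be sketched on a toy travel problem, where the "difference" is the distance left to the goal and each operator is relevant to a different range of differences. The operator names, step sizes and applicability ranges below are all illustrative assumptions.

```python
# Means-ends analysis sketch on a one-dimensional travel problem.

# Each operator: (name, how far it moves, which differences it reduces)
OPERATORS = [
    ('drive', 10, lambda d: d >= 10),   # big difference: drive 10 units
    ('walk',   1, lambda d: d >= 1),    # small difference: walk 1 unit
]

def means_ends(current, goal):
    plan = []
    while current != goal:
        difference = goal - current
        # select a promising procedure relevant to this difference
        op = next(((name, step) for name, step, ok in OPERATORS
                   if ok(difference)), None)
        if op is None:
            return None                  # announce failure
        name, step = op
        plan.append(name)
        current += step                  # apply it and update the current state
    return plan                          # announce success

plan = means_ends(0, 23)
```

Starting at 0 with the goal at 23, the large difference selects 'drive' twice, after which the remaining difference of 3 is closed by three 'walk' steps.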

Slide 105:

Neelam Rawat, AI UNIT - II MINI-MAX SEARCH The mini-max strategy is a simple look-ahead strategy for two-person game playing. Here, one player is called the “maximizer” and the other the “minimizer”. The plausible move generator generates the necessary states for further evaluation, and the static evaluation function “ranks” each of the positions. Let A be the initial state of the game. The plausible move generator generates three children for that move, and the static evaluation function assigns the values given along with each of the states. It is assumed that the static evaluation function returns a value from -20 to +20, where a value of +20 indicates a win for the maximizer and -20 a win for the minimizer. A value of 0 indicates a tie or draw.

Slide 106:

Neelam Rawat, AI UNIT - II It is also assumed that the maximizer makes the first move (not essential, as the minimizer could also move first). The maximizer always tries to move to the position where the static evaluation function value is the maximum positive value. [Figure in the original slide: root A with children B: -6, C: 2 and D: 7.] The maximizer, being the player to make the first move, will move to node D because the static evaluation function value there is maximum. The same figure shows that if the minimizer had to make the first move, it would go to node B because the static evaluation function value at that node is advantageous to it. Also, a game playing strategy never stops at one level but looks ahead, i.e., moves a couple of levels downwards to choose the optimal path. Sometimes, by expanding these nodes and scanning them, one might be forced to retract or rethink.

Slide 107:

Neelam Rawat, AI UNIT - II Let us take an example: [Figure in the original slide: a two-level minimax tree rooted at M with internal nodes and leaves labelled N through Z; the leaves carry static evaluation values between -6 and +7, and it is the maximizer’s move at the root.]

Slide 108:

Neelam Rawat, AI UNIT - II This search has stopped at two levels only. However, it is possible to consider more levels for more accurate results. That depends on the following factors: Time left for moving The stage of the game The number of pieces one has, etc. ALGORITHM Step 1: Set FINAL_VALUE to be as small as possible Step 2: If the limit of search has been reached, then FINAL_VALUE = GOOD_VALUE of the current position Step 3: else do Step 3.1: Generate the successors of the position Step 3.2: Recursively call MINIMAX on each successor with the depth incremented by unity Step 3.3: Evaluate GOOD_VALUE Step 3.4: If GOOD_VALUE > FINAL_VALUE then FINAL_VALUE = GOOD_VALUE
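The recursion above can be sketched over an explicit game tree: leaves carry static evaluation values, the maximizer backs up the largest child value and the minimizer the smallest. The two-level tree below is an illustrative assumption, not the exact tree from the slide's figure.

```python
# Minimax sketch over an explicit game tree.

def minimax(node, maximizing, tree, values):
    children = tree.get(node, [])
    if not children:                      # leaf: return the static evaluation
        return values[node]
    results = [minimax(c, not maximizing, tree, values) for c in children]
    return max(results) if maximizing else min(results)

# Illustrative tree: root M is the maximizer, X/Y/Z are minimizer nodes
tree = {'M': ['X', 'Y', 'Z'],
        'X': ['W', 'V'], 'Y': ['U', 'T'], 'Z': ['S', 'R']}
values = {'W': -6, 'V': 2, 'U': 0, 'T': -4, 'S': 3, 'R': 5}
best = minimax('M', True, tree, values)
```

The minimizer nodes back up -6, -4 and 3, so the maximizer's best guaranteed value at the root is 3 via Z.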

Slide 109:

Neelam Rawat, AI UNIT - II α - β PRUNING – MINIMAX MODIFIED WITH ALPHA-BETA CUT-OFFS This is a modified version of the minimax strategy wherein two threshold values are maintained for further expansion. One threshold value is alpha, which is a lower bound on the value that the maximizer can be assigned, and the other is beta, which represents an upper bound on the value the minimizer can be assigned. To explain how the α and β values help, consider the tree structure: [Figure in the original slide: root P with children Q and R; Q has leaves S: 6 and T: 8 and backs up the value 6 to P, while R’s first child U evaluates to -2 and its other child V is left unexamined.]

Slide 110:

Neelam Rawat, AI UNIT - II α - β PRUNING – MINIMAX MODIFIED WITH ALPHA-BETA CUT-OFFS The maximizer has to play first, followed by the minimizer, as is done in minimax. The maximizer assigns a value of 6 at Q (the minimum of the values at S and T). This value is passed back to P, so the maximizer is assured of a value of 6 by moving to Q. Now let us see what happens at R. The value at U is -2 and the value at V is unknown. Since the move is a minimizing one, by moving to R, P can get only a value of -2 or less. This is totally unacceptable for P, because by moving to Q it is assured of a value of 6. Hence P will never try to examine V or the other children of R. By simply pruning the subtree whose root is V, one saves a lot of time in searching

Slide 111:

Neelam Rawat, AI UNIT - II To see the effect of alpha and beta, consider the figure given below: [Figure in the original slide: a four-level tree with the maximizer’s moves at levels 0 and 2 and the minimizer’s move at level 1; root P has children Q and R, Q has children S and T, R has children U and V, and the leaf nodes carry static evaluation values such as 6, 8, 3, 9, 4, 7, 1 and 5.] Here, P is the maximizing player. Before P can branch to Q or R, a look-ahead search is done up to level 3 as shown above. The static evaluation function generator has assigned the values given at the leaf nodes. Since S, T, U and V are also maximizers, the maximum of their leaf nodes is assigned to them. Thus, S, T, U and V have the values 6, 8, 3 and 9 respectively. The predecessor level, i.e., nodes Q and R, are minimizers. Thus Q takes the minimum of 6 and 8; R takes the minimum of 3 and 9. Since P is a maximizer, P will obviously opt for Q.

Slide 112:

Neelam Rawat, AI UNIT - II ROLE OF ALPHA: For P, the maximizer, a value of 6 is assured by moving to node Q. This value is compared with the value at R. P, being a maximizer, would follow any path whose value is greater than 6. Hence this value of 6, being the least that the maximizing node is guaranteed, is set as the value of alpha. This value of alpha is now used as a reference point: any node whose value is greater than alpha is acceptable, and all nodes whose values are less than alpha are rejected. Referring to the figure, one finds that the value at R is 3, which is lower than the alpha cutoff. Hence the entire tree under R is rejected.
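The cut-off logic can be sketched as minimax with the two thresholds threaded through the recursion: alpha is the best value the maximizer is already assured of, beta the best the minimizer is assured of, and a subtree is skipped once alpha >= beta. The tree below mirrors the earlier P/Q/R example; the value 100 at V is an arbitrary assumption, chosen large precisely to show it is never even looked at.

```python
# Alpha-beta pruning sketch; `visited` records which nodes are examined.

def alphabeta(node, maximizing, tree, values, alpha=float('-inf'),
              beta=float('inf'), visited=None):
    if visited is not None:
        visited.append(node)
    children = tree.get(node, [])
    if not children:
        return values[node]               # leaf: static evaluation
    if maximizing:
        best = float('-inf')
        for c in children:
            best = max(best, alphabeta(c, False, tree, values, alpha, beta, visited))
            alpha = max(alpha, best)
            if alpha >= beta:             # cut off: minimizer will avoid this line
                break
        return best
    best = float('inf')
    for c in children:
        best = min(best, alphabeta(c, True, tree, values, alpha, beta, visited))
        beta = min(beta, best)
        if alpha >= beta:                 # cut off: maximizer has better elsewhere
            break
    return best

tree = {'P': ['Q', 'R'], 'Q': ['S', 'T'], 'R': ['U', 'V']}
values = {'S': 6, 'T': 8, 'U': -2, 'V': 100}
visited = []
result = alphabeta('P', True, tree, values, visited=visited)
```

Q backs up 6, setting alpha = 6 at P; at R, the first leaf U returns -2, so beta drops to -2, alpha >= beta holds, and V is pruned exactly as the slides describe.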

Slide 113:

Neelam Rawat, AI UNIT - II ROLE OF BETA:

Summary: Informed search:

Summary: Informed search Best-first search is general search where the minimum-cost nodes (according to some measure) are expanded first. Greedy search uses the minimal estimated cost h(n) to the goal state as the measure. This reduces the search time, but the algorithm is neither complete nor optimal. A* search combines uniform-cost search and greedy search: f(n) = g(n) + h(n). A* is complete and optimal provided it handles state repetitions and h(n) never overestimates, but its space complexity is high. The time complexity depends on the quality of the heuristic function. IDA* and SMA* reduce the memory requirements of A*. Hill-climbing algorithms keep only a single state in memory, but can get stuck on local optima. Simulated annealing escapes local optima, and is complete and optimal given a “long enough” cooling schedule. Genetic algorithms can search a large space by modeling biological evolution. Online search algorithms are useful in state spaces with partial/no information. Neelam Rawat, AI UNIT - II
