Intelligent Systems Unit I

Presentation Description

Intelligent Agents: Introduction. Agents and Environments, Good Behavior: The Concept of Rationality, The Nature of Environments, The Structure of Agents. Problem Formulation: Problem-Solving Agents, Example Problems, Searching for Solutions, Uninformed Search Strategies, Avoiding Repeated States, Searching with Partial Information.

Presentation Transcript

Introduction: Intelligent System:

Reference: Artificial Intelligence: A Modern Approach by Stuart Russell and Peter Norvig.

What is an Intelligent system?:

What is intelligence? It is hard to define, except by listing characteristics, e.g. reasoning and learning/adaptivity. A truly intelligent system adapts itself to deal with changes in its problems. Machine intelligence has a computer follow problem-solving processes similar to those of humans. Intelligent systems display machine-level intelligence and reasoning, often learning, but not necessarily self-adaptation.

Intelligent systems in business:

Intelligent systems in business utilise one or more intelligence tools, usually to aid decision making, and provide business intelligence to increase productivity. Examples of successful intelligent-system applications in business: customer service (customer relations modelling); scheduling (e.g. mine operations); data mining; financial market prediction; quality control.

Characteristics of intelligent systems:

Intelligent systems possess one or more of these: the capability to extract and store knowledge; a human-like reasoning process; learning from experience (or training); finding solutions through processes similar to natural evolution. A recent trend is more sophisticated interaction with the user, through natural language understanding, speech recognition and synthesis, and image analysis. Most current intelligent systems are based on rule-based expert systems or on methodologies belonging to soft computing.

The Soft Computing (SC) paradigm:

Soft computing is also known as computational intelligence. Unlike conventional computing, SC techniques: are tolerant of imprecise, incomplete or corrupt input data; solve problems without explicit solution steps; learn the solution through repeated observation and adaptation; can handle information expressed in vague linguistic terms; and arrive at an acceptable solution through evolution.

Introduction:

What is AI? The foundations of AI. A brief history of AI. The state of the art. Introductory problems.

Artificial Intelligence (AI):

Primary goal: the development of software that enables machines to solve problems through human-like reasoning (cf. the Turing Test).

What is Artificial Intelligence (AI)?:

Intelligence: the "ability to learn, understand and think". AI is the study of how to make computers do things which, at the moment, people do better. Examples: speech recognition; recognizing smells, faces and objects; inferencing; learning new skills; decision making.

What is Artificial Intelligence (AI)?:

Thinking humanly: machines with minds; cognitive science and GPS. Thinking rationally: the "laws of thought" approach; the study of the computations that make it possible to reason and act. Acting humanly: the study of how to make computers do things that, at the moment, people do better. Acting rationally: the rational-agent approach, concerned with intelligent behaviour in artifacts.

Acting Humanly: The Turing Test:

Alan Turing (1912-1954), "Computing Machinery and Intelligence" (1950). [Figure: the Imitation Game - a human interrogator questions both a human and an AI system.]

Acting Humanly: The Turing Test:

The Turing Test, proposed by Alan Turing in 1950, was designed to provide a satisfactory operational definition of intelligence. To pass it, the computer would need to possess the following capabilities: natural language processing, to enable it to communicate successfully; knowledge representation, to store what it knows; automated reasoning, to use the stored information to answer questions and to draw new conclusions; machine learning, to adapt to new circumstances and detect patterns; computer vision, to perceive objects; and robotics, to manipulate objects and move about. AI is composed of the above disciplines.

Thinking Humanly: Cognitive Modelling:

When we say that a given program thinks like a human, we must have some way of determining how humans think.

Thinking Humanly: Cognitive Modelling:

The General Problem Solver (GPS) was designed in this spirit. Its authors were not content to have a program merely solve a problem correctly; they were more concerned with comparing its reasoning steps to traces of humans solving the same problem. This approach requires testable theories of the workings of the human mind: cognitive science.

Thinking Rationally: Laws of Thought:

Logic and the laws of thought deal with the study of ideal or rational thought processes and inference. Formal logic provides a precise notation and rules for representing and reasoning about all kinds of things in the world. Obstacles: representing informal knowledge; computational complexity and resources.

Acting Rationally:

Acting so as to achieve one's goals. Does not necessarily involve thinking. Advantages: more general than the "laws of thought" approach; more amenable to scientific development than human-based approaches.

The Foundations of AI:

Philosophy (423 BC - present): logic, methods of reasoning; mind as a physical system; foundations of learning, language, and rationality. Mathematics (c. 800 - present): formal representation and proof; algorithms, computation, decidability, tractability; probability.

The Foundations of AI:

Psychology (1879 - present): adaptation; phenomena of perception; experimental techniques. Linguistics (1957 - present): knowledge representation; grammar.

Brief History of AI:

The gestation of AI (1943-1956): 1943: McCulloch & Pitts' Boolean circuit model of the brain; 1950: Turing's "Computing Machinery and Intelligence"; 1956: McCarthy's name "Artificial Intelligence" adopted. Early enthusiasm, great expectations (1952-1969): early successful AI programs (Samuel's checkers, Newell & Simon's Logic Theorist, Gelernter's Geometry Theorem Prover); Robinson's complete algorithm for logical reasoning.

Brief History of AI:

A dose of reality (1966-1974): AI discovered computational complexity; neural-network research almost disappeared after Minsky & Papert's book in 1969. Knowledge-based systems (1969-1979): 1969: DENDRAL by Buchanan et al.; 1976: MYCIN by Shortliffe; 1979: PROSPECTOR by Duda et al.

Brief History of AI:

AI becomes an industry (1980-1988): the expert systems industry booms; 1981: Japan's 10-year Fifth Generation project. The return of NNs and novel AI (1986-present): mid-80s: the back-propagation learning algorithm is reinvented; the expert systems industry busts; 1988: resurgence of probability; 1988: novel AI (ALife, GAs, soft computing, ...); 1995: agents everywhere; 2003: human-level AI back on the agenda.

Task Domain of AI:

Mundane tasks: perception (vision, speech); natural language (understanding, generation, translation); common-sense reasoning; robot control. Formal tasks: games (chess, checkers, etc.); mathematics (geometry, logic, proving properties of programs). Expert tasks: engineering (design, fault finding, manufacturing planning); scientific analysis; medical diagnosis; financial analysis.

AI Technique:

Intelligence requires knowledge. Knowledge, however, possesses some less desirable properties: it is voluminous; it is hard to characterize accurately; it is constantly changing; it differs from data in the way it is used. An AI technique is a method that exploits knowledge, represented in such a way that: the knowledge captures generalizations; it can be understood by the people who must provide it; it can easily be modified to correct errors; and it can be used in a variety of situations.

State of the Art:

Computer beats human in a chess game. Computer-human conversation using speech recognition. Expert system controls a spacecraft. Language translation for webpages. Home appliances use fuzzy logic.

Intelligent Agents:

Intelligent Agents

Intelligent Agent:

Agent: an entity in a program or environment capable of generating action. An agent uses perception of the environment to make decisions about the actions to take. The perception capability is usually called a sensor. The actions can depend on the most recent perception or on the entire history (the percept sequence).

Agent Function:

The agent function is a mathematical function that maps a sequence of perceptions into action. The function is implemented as the agent program. The part of the agent taking an action is called an actuator. The cycle: environment -> sensors -> agent function -> actuators -> environment.
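As a small illustration (a sketch added here, not from the slides; all names are hypothetical), the percept-sequence-to-action mapping can be written directly in Python:

    # Minimal agent skeleton: the agent function maps the percept
    # sequence seen so far to an action (names are illustrative).
    class Agent:
        def __init__(self):
            self.percepts = []              # the percept sequence (history)

        def agent_function(self, percept):
            self.percepts.append(percept)
            # A trivial policy: act only on the most recent percept.
            return "noop" if percept is None else "act"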

Agent Function:

[Figure: the agent receives percepts (observations) from the environment through its sensors; the agent function maps them to an action, which the actuators carry out in the environment.]

Rational Agent:

A rational agent is one that can take the right decision in every situation. Performance measure: a set of criteria, a test bed, for the success of the agent's behavior. Performance measures should be based on the desired effect of the agent on the environment.

Rationality:

The agent's rational behavior depends on: the performance measure that defines success; the agent's knowledge of the environment; the actions it is capable of performing; the current sequence of perceptions. Definition: for every possible percept sequence, the agent is expected to take the action that will maximize its performance measure.

Agent Autonomy:

An agent is omniscient if it knows the actual outcome of its actions; this is not possible in practice. An environment can sometimes be completely known in advance. Exploration: sometimes an agent must perform an action to gather information (to increase perception). Autonomy: the capacity to compensate for partial or incorrect prior knowledge (usually by learning).


Environment:

Fully or partially observable: a fully observable environment needs less internal representation. Episodic or sequential: sequential means future actions depend on the previous ones; episodic means individual, unrelated tasks for the agent to solve. Static or dynamic. Discrete or continuous. Single agent or multi-agent: multiple agents can be competitive or cooperative.


More Definitions of Agents:

"An agent is a persistent software entity dedicated to a specific purpose." (Smith, Cypher, and Spohrer 94) "Intelligent agents are software entities that carry out some set of operations on behalf of a user or another program with some degree of independence or autonomy, and in so doing, employ some knowledge or representation of the user's goals or desires." (IBM) "Intelligent agents continuously perform three functions: perception of dynamic conditions in the environment; action to affect conditions in the environment; and reasoning to interpret perceptions, solve problems, draw inferences, and determine actions." (Hayes-Roth 94)

Agent vs. Program:

Size: an agent is usually smaller than a program. Purpose: an agent has a specific purpose, while programs are multi-functional. Persistence: an agent's life span is not entirely dependent on a user launching and quitting it. Autonomy: an agent doesn't need the user's input to function.

Simple Agents:

Table-driven agents: the agent function consists of a lookup table of actions to be taken for every possible state of the environment. If the environment has n variables, each with t possible states, then the table size is t^n; this only works for a small number of possible environment states. Simple reflex agents: decide on the action to take based only on the current perception, not on the history of perceptions; based on the condition-action rule (if (condition) action); works if the environment is fully observable.

Table-driven Agents:

[Figure: a table-driven agent performs a table lookup keyed on the entire percept history.]

Table-driven agent:

function TABLE-DRIVEN-AGENT(percept) returns action
  static: percepts, a sequence, initially empty
          table, a table indexed by percept sequences, initially fully specified
  append percept to the end of percepts
  action <- LOOKUP(percepts, table)
  return action

An agent based on a prespecified lookup table: it keeps track of the percept sequence and just looks up the best action. Problems: the huge number of possible percepts (consider an automated taxi with a camera as the sensor) means the lookup table would be huge; it takes a long time to build the table; and the agent is not adaptive to changes in the environment, since the entire table must be updated if changes occur.
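A possible Python rendering of this pseudocode (an illustrative sketch; the toy table below is hypothetical):

    # Table-driven agent: the table maps whole percept sequences to actions.
    def make_table_driven_agent(table):
        percepts = []                            # percept sequence, initially empty
        def agent(percept):
            percepts.append(percept)
            return table.get(tuple(percepts))    # LOOKUP(percepts, table)
        return agent

    # Toy two-percept table; real tables are infeasibly large.
    table = {("clean",): "right", ("clean", "dirty"): "suck"}
    agent = make_table_driven_agent(table)
    print(agent("clean"))    # -> right
    print(agent("dirty"))    # -> suck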

Simple Reflex Agent:

[Figure: a simple reflex agent - sensors report what the world is like now, condition-action rules determine what action to do now, and effectors act on the environment.]

function SIMPLE-REFLEX-AGENT(percept) returns action
  static: rules, a set of condition-action rules
  state <- INTERPRET-INPUT(percept)
  rule <- RULE-MATCH(state, rules)
  action <- RULE-ACTION[rule]
  return action

A simple reflex agent works by finding a rule whose condition matches the current situation (as defined by the percept) and then doing the action associated with that rule.

Reflex Vacuum Agent:

The agent program for a simple reflex agent in the two-location vacuum environment:

function REFLEX-VACUUM-AGENT(location, status) returns action
  if status = Dirty then return Suck
  else if location = A then return Right
  else if location = B then return Left
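The same agent translated into Python (a direct transcription of the pseudocode above):

    def reflex_vacuum_agent(location, status):
        # Condition-action rules for the two-location vacuum world.
        if status == "Dirty":
            return "Suck"
        elif location == "A":
            return "Right"
        elif location == "B":
            return "Left"

    print(reflex_vacuum_agent("A", "Dirty"))   # -> Suck
    print(reflex_vacuum_agent("A", "Clean"))   # -> Right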

Model-Based Reflex Agents:

If the world is not fully observable, the agent must remember observations about the parts of the environment it cannot currently observe. This usually requires an internal representation of the world (an internal state). Since this representation is a model of the world, we call this a model-based agent.
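A sketch of this idea in Python (illustrative only; the rule and state-update callables are hypothetical placeholders for a real world model):

    # Model-based reflex agent: keeps an internal state updated from the
    # last action and the new percept, then applies condition-action rules.
    class ModelBasedReflexAgent:
        def __init__(self, rules, update_state):
            self.state = None                 # internal model of the world
            self.last_action = None
            self.rules = rules                # list of (condition, action)
            self.update_state = update_state  # (state, action, percept) -> state

        def __call__(self, percept):
            self.state = self.update_state(self.state, self.last_action, percept)
            for condition, action in self.rules:
                if condition(self.state):     # first matching rule wins
                    self.last_action = action
                    return action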


Goal-Driven Agents:

The agent has a purpose, and the action to be taken depends on the current state and on what it is trying to accomplish (the goal). In some cases the goal is easy to achieve; in others it involves planning, sifting through a search space of possible solutions, and developing a strategy.


Utility-based agent:

Utility-based agents provide a more general agent framework. When the agent has multiple goals, this framework can accommodate different preferences among the goals. Such systems are characterized by a utility function that maps a state or a sequence of states to a real-valued utility. The agent acts so as to maximize expected utility.
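In code, expected-utility maximization reduces to an argmax (a sketch; result() and utility() stand in for a real transition model and utility function):

    # Pick the action whose predicted outcome has the highest utility.
    def choose_action(state, actions, result, utility):
        return max(actions, key=lambda a: utility(result(state, a)))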

Utility-based agent:

[Figure: a utility-based agent - from its sensors and internal state (how the world evolves, what my actions do) the agent predicts what the world will be like if it does action A, evaluates how happy it will be in such a state (utility), and selects what action to do now for its effectors.]

Learning Agents:

Agents capable of acquiring new competence through observations and actions. Components: the learning element (modifies the performance element); the performance element (selects actions); the feedback element (critic); the exploration element (problem generator).

Other Types of Agents:

Communicative agent: exchanges information with other agents to complete its task. Mobile agent: capable of moving from one machine to another (or from one environment to another). Flexible agent: one whose actions are not scripted. Character: an agent with conversational skills, personality, and even emotional state.

Agent Classification:

[Figure: classification of agent types.]

Agent Example:

A file manager agent. Sensors: commands like ls, du, pwd. Actuators: commands like tar, gzip, cd, rm, cp, etc. Purpose: compress and archive files that have not been used in a while. Environment: fully observable (but partially observed), deterministic (strategic), episodic, dynamic, discrete.

Tic Tac Toe:

Three programs are presented. Across the series they increase in: complexity; use of generalization; clarity of their knowledge; extensibility of their approach.

Introductory Problem: Tic-Tac-Toe:

[Figure: a tic-tac-toe board with two X's and one O.]

Introductory Problem: Tic-Tac-Toe:

Program 1. Data structures: Board, a 9-element vector representing the board, with elements 1-9 for the squares; an element contains 0 if the square is blank, 1 if it is filled with X, or 2 if it is filled with O. Movetable, a large vector of 19,683 (3^9) elements, each of which is a 9-element vector. Algorithm: 1. View the board vector as a ternary number and convert it to decimal. 2. Use the computed number as an index into the Movetable and access the vector stored there. 3. Set the new board to that vector.
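Step 1 of the algorithm, as a runnable Python sketch (the example board is arbitrary):

    # Convert a 9-element board (0 blank, 1 X, 2 O) into the decimal value
    # of its ternary encoding; this indexes the 3**9 = 19683-entry Movetable.
    def board_to_index(board):
        index = 0
        for square in board:          # most significant ternary digit first
            index = index * 3 + square
        return index

    board = [0, 1, 0, 0, 2, 0, 0, 0, 0]
    print(board_to_index(board))      # -> 2349
    # new_board = movetable[board_to_index(board)]   # step 2 of the algorithm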

Introductory Problem: Tic-Tac-Toe:

Comments: this program is very efficient in time, but: 1. it takes a lot of space to store the Movetable; 2. it takes a lot of work to specify all the entries in the Movetable; 3. it is difficult to extend.

Introductory Problem: Tic-Tac-Toe:

[Figure: the board squares numbered 1 to 9.]

Introductory Problem: Tic-Tac-Toe:

Program 2. Data structure: a nine-element vector representing the board, but instead of 0, 1 and 2, each element stores 2 for blank, 3 for X and 5 for O. Functions: Make2, which returns 5 if the center square is blank, else the number of any other blank square. Posswin(p), which returns 0 if player p cannot win on the next move; otherwise it returns the number of the square that constitutes a winning move - if the product of a line is 18 (3x3x2) then X can win, and if the product is 50 (5x5x2) then O can win. Go(n), which makes a move in square n. Strategy: Turn 1: Go(1). Turn 2: if Board[5] is blank, Go(5), else Go(1). Turn 3: if Board[9] is blank, Go(9), else Go(3). Turn 4: if Posswin(X) ≠ 0, then Go(Posswin(X)) ...
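The product test in Posswin can be sketched in Python as follows (line indices here are 0-based; the slide numbers squares 1-9):

    # With 2 = blank, 3 = X, 5 = O, a line whose product is 3*3*2 = 18
    # (or 5*5*2 = 50) holds two X's (O's) and one blank square.
    LINES = [(0,1,2), (3,4,5), (6,7,8),          # rows
             (0,3,6), (1,4,7), (2,5,8),          # columns
             (0,4,8), (2,4,6)]                   # diagonals

    def posswin(board, player):                  # player is 3 (X) or 5 (O)
        target = player * player * 2
        for a, b, c in LINES:
            if board[a] * board[b] * board[c] == target:
                for sq in (a, b, c):
                    if board[sq] == 2:           # the blank completes the win
                        return sq + 1            # 1-based square number
        return 0                                 # no winning move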

Introductory Problem: Tic-Tac-Toe:

Comments: 1. Not efficient in time, as it has to check several conditions before making each move. 2. Easier to understand the program's strategy. 3. Hard to generalize.

Introductory Problem: Tic-Tac-Toe:

[Figure: the magic square 8 3 4 / 1 5 9 / 6 7 2, in which every row, column and diagonal sums to 15; for squares 8 and 5, 15 - (8 + 5) = 2 gives the number of the square that would complete the line.]
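A small Python sketch of the magic-square check (added for illustration):

    # Three squares win exactly when their magic numbers sum to 15, so
    # 15 - (a + b) names the square that would complete a line for a and b.
    MAGIC = [8, 3, 4,
             1, 5, 9,
             6, 7, 2]

    def completing_number(a, b):
        need = 15 - (a + b)
        return need if 1 <= need <= 9 and need not in (a, b) else None

    print(completing_number(8, 5))   # -> 2, as in the figure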

Introductory Problem: Tic-Tac-Toe:

Comments: 1. Checking for a possible win is quicker. 2. Humans find the row-scan approach easier, while for the computer the number-counting approach is more efficient.

Introductory Problem: Tic-Tac-Toe:

Program 3: 1. If the position is a win, give it the highest rating. 2. Otherwise, consider all the moves the opponent could make next; assume the opponent will make the move that is worst for us, and assign the rating of that move to the current node. 3. The best node is then the one with the highest rating.
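Program 3 is essentially minimax; here is a self-contained Python sketch for tic-tac-toe (the board representation is an assumption, not from the slides):

    # Board: 9-element list of "X", "O" or None; `player` is about to move.
    LINES = [(0,1,2), (3,4,5), (6,7,8), (0,3,6),
             (1,4,7), (2,5,8), (0,4,8), (2,4,6)]

    def winner(board):
        for a, b, c in LINES:
            if board[a] is not None and board[a] == board[b] == board[c]:
                return board[a]
        return None

    def rate(board, player):
        """Minimax value for `player`: +1 win, 0 draw, -1 loss."""
        opponent = "O" if player == "X" else "X"
        if winner(board) == opponent:     # the opponent has already won
            return -1
        moves = [i for i, sq in enumerate(board) if sq is None]
        if not moves:
            return 0                      # draw
        values = []
        for m in moves:                   # try each move; the opponent then
            board[m] = player             # faces the negated value
            values.append(-rate(board, opponent))
            board[m] = None
        return max(values)                # best rating among our moves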

Introductory Problem: Tic-Tac-Toe:

Comments: 1. Requires much more time, as it considers all possible moves. 2. Could be extended to handle more complicated games.

Introductory Problem: Question Answering:

"Mary went shopping for a new coat. She found a red one she really liked. When she got it home, she discovered that it went perfectly with her favourite dress." Q1: What did Mary go shopping for? Q2: What did Mary find that she liked? Q3: Did Mary buy anything?

Introductory Problem: Question Answering:

Program 1: 1. Match predefined templates to questions to generate text patterns. 2. Match text patterns to input texts to get answers. Example: the template "What did X Y?" matches "What did Mary go shopping for?", producing the text pattern "Mary go shopping for Z"; matched against the story, this gives Z = a new coat.

Introductory Problem: Question Answering:

Program 2: structured representation of sentences:

Event2:
  instance: Finding
  tense: Past
  agent: Mary
  object: Thing1

Thing1:
  instance: Coat
  colour: Red

Introductory Problem: Question Answering:

Program 3: background world knowledge, e.g. a script: C finds M; C leaves L; C buys M; C leaves L; C takes M.

Solving Problems by Searching:

Solving Problems by Searching

Outline:

Problem-solving agents. Problem types. Problem formulation. Example problems. Basic search algorithms.

Introduction:

Goal-based agents can succeed by considering future actions and the desirability of their outcomes. A problem-solving agent is a goal-based agent that decides what to do by finding sequences of actions that lead to desirable states.

Problem solving:

We want to solve a problem automatically. We need: a representation of the problem; algorithms that use some strategy to solve the problem defined in that representation.

Problem representation:

General approaches: state space, in which the problem is divided into a set of resolution steps from the initial state to the goal state; reduction to sub-problems, in which the problem is arranged into a hierarchy of sub-problems.

States:

A problem is defined by its elements and their relations. At each instant of the resolution of a problem, those elements have specific descriptors (how do we select them?) and relations. A state is a representation of those elements at a given moment. Two special states are defined: the initial state (starting point) and the final state (goal state).

State modification: successor function:

A successor function is needed to move between states. It is a description of the possible actions, a set of operators: a transformation function on a state representation that converts it into another state. The successor function defines a relation of accessibility among states. Representation of the successor function: conditions of applicability, and the transformation function.

State space:

The state space is the set of all states reachable from the initial state. It forms a graph (or map) in which the nodes are states and the arcs between nodes are actions. A path in the state space is a sequence of states connected by a sequence of actions. The solution of the problem is part of the map formed by the state space.

Problem solution:

A solution in the state space is a path from the initial state to a goal state. Path/solution cost: a function that assigns a numeric cost to each path, the cost of applying the operators to the states. Solution quality is measured by the path cost function, and an optimal solution has the lowest path cost among all solutions. We may seek any solution, an optimal one, or all solutions; cost matters depending on the problem and the type of solution sought.

Problem description:

Components: the state space (explicitly or implicitly defined); the initial state; the goal state (or the conditions it has to fulfill); the available actions (operators to change state); restrictions (e.g., cost); the elements of the domain relevant to the problem (e.g., incomplete knowledge of the starting point). Type of solution: a sequence of operators or just the goal state; any solution, an optimal one (a cost definition is needed), or all solutions.

Problem solving agents:

Intelligent agents are supposed to maximize their performance measure. This can be simplified if the agent can adopt a goal and aim at satisfying it. Goal formulation, based on the current situation and the agent's performance measure, is the first step in problem solving. A goal is a set of states; the agent's task is to find out which sequence of actions will get it to a goal state. Problem formulation is the process of deciding what sorts of actions and states to consider, given a goal.

Contd.:

An agent with several immediate options of unknown value can decide what to do by first examining different possible sequences of actions that lead to states of known value, and then choosing the best sequence. Looking for such a sequence is called search. A search algorithm takes a problem as input and returns a solution in the form of an action sequence; once a solution is found, the actions it recommends can be carried out - the execution phase.

Contd.:

This is the "formulate, search, execute" design for the agent. After formulating a goal and a problem to solve, the agent calls a search procedure to solve it. It then uses the solution to guide its actions, doing whatever the solution recommends as the next thing to do (typically the first action of the sequence) and then removing that step from the sequence. Once the solution has been executed, the agent formulates a new goal.

Problem-solving agents:

[Figure: pseudocode outline of a simple problem-solving agent following the formulate-search-execute design.]

Problem types:

Deterministic, fully observable -> single-state problem: the agent knows exactly which state it will be in. Non-observable -> sensorless problem (conformant problem): the agent may have no idea where it is. Nondeterministic and/or partially observable -> contingency problem: percepts provide new information about the current state. Unknown state space -> exploration problem.

Example: Romania:

On holiday in Romania; currently in Arad; the flight leaves tomorrow from Bucharest. Formulate goal: be in Bucharest. Formulate problem: states are the various cities; actions are driving between cities. Find solution: a sequence of cities, e.g., Arad, Sibiu, Fagaras, Bucharest.

Example: Romania:

[Figure: a simplified road map of part of Romania, with driving distances between cities.]

Well-defined problems and solutions:

A problem can be defined formally by four components. 1. The initial state that the agent starts in, e.g. In(Arad). 2. A description of the possible actions available to the agent: the successor function returns a set of <action, successor> pairs, e.g. {<Go(Sibiu), In(Sibiu)>, <Go(Timisoara), In(Timisoara)>, <Go(Zerind), In(Zerind)>}. The initial state and the successor function define the state space (a graph in which the nodes are states and the arcs between nodes are actions); a path in the state space is a sequence of states connected by a sequence of actions. 3. The goal test determines whether a given state is a goal state, e.g. {In(Bucharest)}. 4. The path cost function assigns a numeric cost to each path; the cost of a path can be described as the sum of the costs of the individual actions along the path (the step costs), e.g. the time to reach Bucharest.
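The four components translate naturally into code; here is a minimal Python sketch with a toy fragment of the Romania problem (the class and the data are illustrative, not a standard API):

    class Problem:
        def __init__(self, initial, successors, goal_test, step_cost):
            self.initial = initial        # e.g. "Arad"
            self.successors = successors  # state -> [(action, next_state)]
            self.goal_test = goal_test    # state -> bool
            self.step_cost = step_cost    # (state, action, next_state) -> number

    romania = Problem(
        initial="Arad",
        successors=lambda s: {"Arad": [("Go(Sibiu)", "Sibiu"),
                                       ("Go(Timisoara)", "Timisoara"),
                                       ("Go(Zerind)", "Zerind")]}.get(s, []),
        goal_test=lambda s: s == "Bucharest",
        step_cost=lambda s, a, n: 1,      # placeholder; real costs are distances
    )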

Single-state problem formulation:

A problem is defined by four items: the initial state, e.g., "at Arad"; the actions or successor function S(x) = set of action-state pairs, e.g., S(Arad) = {<Arad -> Zerind, Zerind>, ...}; the goal test, which can be explicit, e.g., x = "at Bucharest"; and the path cost (additive), e.g., the sum of distances or the number of actions executed, with each step cost assumed to be >= 0. A solution is a sequence of actions leading from the initial state to a goal state.

Example: The 8-puzzle:

States? Actions? Goal test? Path cost?

Example: The 8-puzzle:

States? The locations of the tiles. Actions? Move the blank left, right, up or down. Goal test? Matches the given goal state. Path cost? 1 per move.

Example: A vacuum-cleaner world with just two locations:

States? Initial state? Successor function? Goal test? Path cost?

Example: A vacuum-cleaner world with just two locations:

[Figure: the complete state space of the two-location vacuum world - eight states, with arcs for the actions Left, Right and Suck.]

Example: A vacuum-cleaner world with just two locations:

States: the agent is in one of two locations, each of which might or might not contain dirt; thus there are 2 x 2^2 = 8 possible world states. Initial state: any state can be designated as the initial state. Successor function: generates the legal states that result from trying the three actions (Left, Right, and Suck); the complete state space is shown in the previous figure. Goal test: checks whether all the squares are clean. Path cost: each step costs 1, so the path cost is the number of steps in the path.

Example: 8-queens problem:

States? Initial state? Successor function? Goal test?

Example: 8-queens problem:

States: any arrangement of 0 to 8 queens on the board is a state. Initial state: no queens on the board. Successor function: add a queen to any empty square. Goal test: 8 queens are on the board, none attacked.

Tree search algorithms:

Basic idea: offline, simulated exploration of the state space by generating successors of already-explored states (a.k.a. expanding states).

Tree search example:

[Figures: successive stages of a tree search - the root node is expanded first, then each of its successors in turn, growing the search tree.]

Search strategies:

A search strategy is defined by picking the order of node expansion. Strategies are evaluated along the following dimensions: completeness, does it always find a solution if one exists?; time complexity, how long does it take to find a solution?; space complexity, how much memory is needed to perform the search?; optimality, does it always find a least-cost solution? Time and space complexity are measured in terms of: b, the maximum branching factor of the search tree; d, the depth of the least-cost solution; m, the maximum depth of the state space (may be ∞).

Uninformed search strategies:

Uninformed search strategies use only the information available in the problem definition: breadth-first search; uniform-cost search; depth-first search; depth-limited search; iterative deepening search.

Breadth-first search:

The root node is expanded first, then all the successors of the root node, then their successors, and so on. In general, all nodes at a given depth in the search tree are expanded before any nodes at the next level. Expand the shallowest unexpanded node. Implementation: the fringe is a FIFO queue; the nodes that are visited first are expanded first, and all newly generated successors are put at the end of the queue, so shallow nodes are expanded before deeper nodes.

Breadth-first Search: worked example:

[Figure: a search tree with initial state A (root) and goal state M. The fringe is the data structure we use to store all of the nodes that have been generated.] With a FIFO fringe, the search proceeds:
Expand A: successors B, C, D; fringe B, C, D.
Expand B: successors E, F; fringe C, D, E, F.
Expand C: successors G, H; fringe D, E, F, G, H.
Expand D: successors I, J; fringe E, F, G, H, I, J.
Expand E: successors K, L; fringe F, G, H, I, J, K, L.
Expand F: successor M; fringe G, H, I, J, K, L, M.
Expand G: successor N; fringe H, I, J, K, L, M, N.
Expand H: successor O; fringe I, J, K, L, M, N, O.
Expand I: successors P, Q; fringe J, K, L, M, N, O, P, Q.
Expand J: successor R; fringe K, L, M, N, O, P, Q, R.
Expand K: successor S; fringe L, M, N, O, P, Q, R, S.
Expand L: successor T; fringe M, N, O, P, Q, R, S, T.
Expand M: goal state achieved.

Breadth-first Search:

Algorithm BREADTH (breadth-first search in state space):

Let fringe be a list containing the initial state
Loop
  if fringe is empty then return failure
  Node <- remove-first(fringe)
  if Node is a goal
    then return the path from the initial state to Node
    else generate all successors of Node and add them to the back of fringe
End Loop
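A Python sketch of Algorithm BREADTH (storing whole paths in the FIFO fringe; function names are illustrative):

    from collections import deque

    def breadth_first_search(initial, successors, is_goal):
        fringe = deque([[initial]])          # FIFO queue of paths
        while fringe:                        # empty fringe means failure
            path = fringe.popleft()          # remove-first(fringe)
            node = path[-1]
            if is_goal(node):
                return path                  # path from initial state to goal
            for child in successors(node):   # successors go to the back
                fringe.append(path + [child])
        return None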

Properties of breadth-first search:

Complete? Yes (if b is finite). Time? 1 + b + b^2 + b^3 + ... + b^d + b(b^d - 1) = O(b^(d+1)). Space? O(b^(d+1)) (keeps every node in memory). Optimal? Yes (if cost = 1 per step). Space is the bigger problem (more so than time).

Uniform-cost search (UCS):

Uniform-cost search is a search algorithm used to traverse weighted trees and graphs and find the shortest path. UCS begins at a root node and continually expands nodes, taking the node with the smallest total cost from the root, until it reaches the goal state. Uniform-cost search doesn't care how many steps a path has, only the total cost of the path. UCS with all path costs equal to one is identical to breadth-first search.
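A Python sketch of UCS using a priority queue ordered by path cost, run on the example graph from the next slide (names are illustrative):

    import heapq

    def uniform_cost_search(initial, successors, is_goal):
        fringe = [(0, [initial])]                 # (path cost g, path)
        while fringe:
            cost, path = heapq.heappop(fringe)    # cheapest node first
            node = path[-1]
            if is_goal(node):
                return cost, path
            for child, step in successors(node):
                heapq.heappush(fringe, (cost + step, path + [child]))
        return None

    graph = {"S": [("A", 1), ("B", 5), ("C", 15)],
             "A": [("D", 10)], "B": [("D", 5)], "C": [], "D": []}
    print(uniform_cost_search("S", lambda n: graph[n], lambda n: n == "D"))
    # -> (10, ['S', 'B', 'D'])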

Uniform Cost Search: worked example:

Similar to BFS, except that the fringe is kept sorted in ascending order of path cost. [Figure: a graph with start S, goal D and edge costs S-A = 1, S-B = 5, S-C = 15, A-D = 10, B-D = 5.]
Fringe = [S(0)]. S is not the goal; expand S into {A, B, C}, sorted by path cost: fringe = [A(1), B(5), C(15)].
A is not the goal; expand A into {D}, with path cost 1 + 10 = 11: fringe = [B(5), D(11), C(15)].
B is not the goal; expand B into {D}, with path cost 5 + 5 = 10: fringe = [D(10), D(11), C(15)].
The next node, D, is a goal, reached along S-B-D with cost 10 (= 5 + 5). UCS always finds the cheapest solution.


Uniform-cost search:

Expand the least-cost unexpanded node. Implementation: fringe = a queue ordered by path cost. Equivalent to breadth-first search if all step costs are equal. Complete? Yes, if step cost >= ε. Time? The number of nodes with g <= cost of the optimal solution, O(b^(1 + C*/ε)), where C* is the cost of the optimal solution. Space? The number of nodes with g <= cost of the optimal solution, O(b^(1 + C*/ε)). Optimal? Yes: nodes are expanded in increasing order of g(n).

Depth First Search - Method:

Expand the root node first. Explore one branch of the tree before exploring another branch. If a leaf node does not represent a goal state, the search backtracks up to the next highest node that has an unexplored path.

DFS:

DFS is an uninformed search that starts from the root node of the search tree and goes deeper and deeper until a goal node is found, or until it hits a node that has no children. The search then backtracks, returning to the most recent node it hasn't finished exploring.

Depth-First Search: worked example:

[Figure: the same search tree, with initial state A; the fringe is now a LIFO stack, so newly generated successors go to the front.]
Expand A: successors B, C, D; fringe B, C, D.
Expand B: successors E, F; fringe E, F, C, D.
Expand E: successors K, L; fringe K, L, F, C, D.
Expand K: successor S; fringe S, L, F, C, D.
Expand S: no successors; backtrack; fringe L, F, C, D.
Expand L: successor T; fringe T, F, C, D.
Expand T: no successors; backtrack; fringe F, C, D.
Expand F: successor M; fringe M, C, D.
Expand M: no successors; backtrack; fringe C, D.
Expand C: successors G, H; fringe G, H, D.
Expand G: successor N; fringe N, H, D.
Expand N: goal state achieved; search finished.

Depth-First Search:

Algorithm (depth-first search in state space):

Let fringe be a list containing the initial state
Loop
  if fringe is empty then return failure
  Node <- remove-first(fringe)
  if Node is a goal
    then return the path from the initial state to Node
    else generate all successors of Node and add them to the front of fringe
End Loop
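The same loop in Python, with the fringe as a stack (a sketch; the path-membership test avoids loops, as discussed under the properties below):

    def depth_first_search(initial, successors, is_goal):
        fringe = [[initial]]                 # LIFO stack of paths
        while fringe:
            path = fringe.pop()              # most recently added path
            node = path[-1]
            if is_goal(node):
                return path
            for child in reversed(successors(node)):   # keep left-to-right order
                if child not in path:        # avoid repeated states along path
                    fringe.append(path + [child])
        return None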

Properties of depth-first search:

Complete? No: it fails in infinite-depth spaces and in spaces with loops; modified to avoid repeated states along the path, it is complete in finite spaces. Time? O(b^m): terrible if m is much larger than d. Space? O(bm), i.e., linear space. Optimal? No.

Depth limited search:

Like depth-first search, but the search is limited to a predefined depth. The depth of each state is recorded as it is generated. When picking the next state to expand, only those with depth less than or equal to the current depth limit are expanded. Once all the nodes at a given depth have been explored, the depth limit is incremented.

Depth-limited search:

Depth-limited search = depth-first search with depth limit l, i.e., nodes at depth l have no successors. Recursive implementation: 1. Determine the vertex where the search should start and assign the maximum search depth. 2. Check whether the current vertex is the goal state; if yes, return; if not, do nothing. 3. Check whether the current vertex is within the maximum search depth; if not, do nothing; if yes, expand the vertex, save all of its successors in a stack, and call DLS recursively for each vertex in the stack, going back to step 2.
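A recursive Python sketch following these steps (illustrative names):

    def depth_limited_search(node, successors, is_goal, limit):
        if is_goal(node):
            return [node]                    # found: return the path so far
        if limit == 0:
            return None                      # cutoff: depth limit reached
        for child in successors(node):
            result = depth_limited_search(child, successors, is_goal, limit - 1)
            if result is not None:
                return [node] + result
        return None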

Iterative deepening search:

Iterative deepening search: until a solution is found, do DFS with depth cutoff c, then increment c (c = c + 1).
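In Python, reusing depth_limited_search from the sketch above (note this loops forever if no solution exists, matching "until solution found"):

    from itertools import count

    def iterative_deepening_search(initial, successors, is_goal):
        for cutoff in count(0):              # c = 0, 1, 2, ...
            result = depth_limited_search(initial, successors, is_goal, cutoff)
            if result is not None:
                return result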

Iterative deepening search:

[Figures: iterative deepening search on a tree, repeating the search with depth limits l = 0, 1, 2 and 3.]

Properties of iterative deepening search:

Complete? Yes. Time? (d+1)b^0 + d b^1 + (d-1)b^2 + ... + b^d = O(b^d). Space? O(bd). Optimal? Yes, if step cost = 1.

Summary of algorithms:

[Figure: summary table comparing the uninformed search strategies (breadth-first, uniform-cost, depth-first, depth-limited, iterative deepening) on completeness, time, space and optimality.]
