Artificial Intelligence

Presentation Description

AI unit 1

Presentation Transcript

Slide1:

By Neelam Rawat, AI UNIT - I

Slide2:

OUTLINE:
What is AI?
The foundations of AI
A brief history of AI
AI applications
Intelligent agents
Structure of intelligent agents
Natural Language Processing (NLP)

Slide3:

REFERENCES:
Artificial Intelligence by Peter Norvig, pp. 1-5
http://www.myreaders.info/html/artificial_intelligence.html

Slide4:

What is ARTIFICIAL INTELLIGENCE?
Making computers that think?
The automation of activities we associate with human thinking, like decision making, learning ...?
The art of creating machines that perform functions that require intelligence when performed by people?
The study of mental faculties through the use of computational models?

Slide5:

What is ARTIFICIAL INTELLIGENCE?
According to Elaine Rich, "Artificial Intelligence is the study of how to make computers do things at which, at the moment, people are better."

Slide6:

In what ways are computers better, and in what ways are human beings better?

Slide7:

According to Avron Barr and Edward A. Feigenbaum, "Artificial Intelligence is the part of computer science concerned with designing intelligent computer systems, that is, systems that exhibit the characteristics we associate with intelligence in human behavior."
Symbolic Processing: According to Bruce Buchanan and Edward Shortliffe (Rule-Based Expert Systems), "Artificial Intelligence is that branch of computer science dealing with symbolic, non-algorithmic methods of problem solving."
Heuristics: According to Bruce Buchanan and the Encyclopaedia Britannica, which treat heuristics as a key element of Artificial Intelligence, "Artificial Intelligence is the branch of computer science that deals with ways of representing knowledge using symbols rather than numbers and with rules of thumb, or heuristics, as methods for processing information."
Pattern Matching: According to the Brattle Research Corporation, "In simplified terms, Artificial Intelligence works with pattern-matching methods which attempt to describe objects, events, or processes in terms of their qualitative features and logical and computational relationships."

Slide8:

What is ARTIFICIAL INTELLIGENCE?

              HUMAN                             RATIONAL
THOUGHT       Systems that think like humans    Systems that think rationally
BEHAVIOUR     Systems that act like humans      Systems that act rationally

Slide9:

What is ARTIFICIAL INTELLIGENCE?

              HUMAN                             RATIONAL
THOUGHT       Systems that think like humans    Systems that think rationally
BEHAVIOUR     Systems that act like humans      Systems that act rationally

Slide10:

Systems that act like humans: TURING TEST
Alan Turing in the 1950s suggested a test based on indistinguishability from an intelligent entity, a human being. You enter a room which has a computer terminal. You have a fixed period of time to type what you want into the terminal and study the replies. At the other end of the line is either a human being or a computer system. If it is a computer system, and at the end of the period you cannot reliably determine whether it is a system or a human, then the system is deemed to be intelligent.

Slide11:

Systems that act like humans: TURING TEST
"The art of creating machines that perform functions that require intelligence when performed by people." (Kurzweil)
"The study of how to make computers do things at which, at the moment, people are better." (Rich and Knight)

Slide12:

Systems that act like humans: TURING TEST

Slide13:

Systems that act like humans: TURING TEST
The Turing Test approach: a human questioner cannot tell whether it is a computer or a human answering his questions via teletype (remote communication).
Intelligent behavior: achieving human-level performance in all cognitive tasks.

Slide14:

Systems that act like humans: TURING TEST
These cognitive tasks include:
Natural language processing -- for communication with humans
Knowledge representation -- to store information effectively & efficiently
Automated reasoning -- to retrieve & answer questions using the stored information
Machine learning -- to adapt to new circumstances

Slide15:

TOTAL TURING TEST
Includes two more issues:
Computer vision to perceive objects (seeing)
Robotics to move objects (acting)

Slide16:

What is ARTIFICIAL INTELLIGENCE?

              HUMAN                             RATIONAL
THOUGHT       Systems that think like humans    Systems that think rationally
BEHAVIOUR     Systems that act like humans      Systems that act rationally

Slide17:

Systems that think like humans: Cognitive modelling
Humans as observed from 'inside'
How do we know how humans think? Introspection vs. psychological experiments
Cognitive Science
"The exciting new effort to make computers think ... machines with minds in the full and literal sense" (Haugeland)
"[The automation of] activities that we associate with human thinking, activities such as decision-making, problem solving, learning ..." (Bellman)

Slide18:

What is ARTIFICIAL INTELLIGENCE?

              HUMAN                             RATIONAL
THOUGHT       Systems that think like humans    Systems that think rationally
BEHAVIOUR     Systems that act like humans      Systems that act rationally

Slide19:

Systems that think rationally: "laws of thought"
Humans are not always 'rational'
Rational: defined in terms of logic?
Logic can't express everything (e.g. uncertainty)
The logical approach is often not feasible in terms of computation time (needs 'guidance')
"The study of mental faculties through the use of computational models" (Charniak and McDermott)
"The study of the computations that make it possible to perceive, reason, and act" (Winston)

Slide20:

What is ARTIFICIAL INTELLIGENCE?

              HUMAN                             RATIONAL
THOUGHT       Systems that think like humans    Systems that think rationally
BEHAVIOUR     Systems that act like humans      Systems that act rationally

Slide21:

Systems that act rationally: "Rational agent"
Rational behaviour: doing the right thing
The right thing: that which is expected to maximize goal achievement, given the available information
Giving answers to questions is 'acting'.

Slide22:

Systems that act rationally: "Rational agent"
Logic is only part of a rational agent, not all of rationality
Sometimes logic cannot reason to a correct conclusion
In that case, some specific (domain) human knowledge or information is used
Thus a rational agent covers a more general range of problem situations
and can compensate for incorrectly reasoned conclusions

Slide23:

FOUNDATION OF AI
Philosophy: logic, methods of reasoning, mind as physical system, foundations of learning, language, rationality
Mathematics: formal representation and proof, algorithms, computation, (un)decidability, (in)tractability
Probability/Statistics: modeling uncertainty, learning from data
Economics: utility, decision theory, rational economic agents

Slide24:

FOUNDATION OF AI
Neuroscience: neurons as information processing units
Psychology/Cognitive Science: how do people behave, perceive, process information, represent knowledge
Computer engineering: building fast computers
Control theory: design systems that maximize an objective function over time
Linguistics: knowledge representation, grammars

Slide25:

FOUNDATION OF AI
Philosophy made AI conceivable by considering the ideas that the mind is in some ways like a machine, that it operates on knowledge encoded in some internal language, and that thought can be used to choose what actions to take.
Mathematicians provided the tools to manipulate statements of logical certainty as well as uncertain, probabilistic statements. They also set the groundwork for understanding computation and reasoning about algorithms.
Economics formalized the problem of making decisions that maximize the expected outcome to the decision maker.

Slide26:

FOUNDATION OF AI
Psychologists adopted the idea that humans and animals can be considered information-processing machines. Linguists showed that language use fits into this model.
Computer engineers provided the artifacts that make AI applications possible. AI programs tend to be large, and they could not work without the great advances in speed and memory that the computer industry has provided.
Control theory deals with designing devices that act optimally on the basis of feedback from the environment. Initially, the mathematical tools of control theory were quite different from those of AI, but the fields are coming closer together.

Slide27:

HISTORY OF AI
Referred URL: http://peace.saumag.edu/faculty/kardas/courses/CS/AIWashingtonMcKeeNelson.ppt_files/

Slide28:

THE BEGINNING OF AI
Although the computer provided the technology necessary for AI, it was not until the early 1950s that the link between human intelligence and machines was really observed. Norbert Wiener was one of the first Americans to make observations on the principle of feedback theory. The most familiar example of feedback theory is the thermostat: it controls the temperature of an environment by gathering the actual temperature of the house, comparing it to the desired temperature, and responding by turning the heat up or down. What was so important about his research into feedback loops was that Wiener theorized that all intelligent behavior was the result of feedback mechanisms, mechanisms that could possibly be simulated by machines. This discovery influenced much of the early development of AI.

Slide29:

Alan Turing
In 1950 Alan Turing published a landmark paper in which he speculated about the possibility of creating machines with true intelligence. He noted that "intelligence" is difficult to define and devised his famous Turing Test. If a machine could carry on a conversation (over a teletype) that was indistinguishable from a conversation with a human being, then the machine could be called "intelligent." This simplified version of the problem allowed Turing to argue convincingly that a "thinking machine" was at least plausible, and the paper answered all the most common objections to the proposition. The Turing Test was the first serious proposal in the philosophy of artificial intelligence.

Slide30:

Allen Newell & Herbert Simon
In late 1955, Newell and Simon developed the Logic Theorist, considered by many to be the first AI program. The program, representing each problem as a tree model, would attempt to solve it by selecting the branch that would most likely result in the correct conclusion. The impact that the Logic Theorist made on both the public and the field of AI has made it a crucial stepping stone in the development of the field.

Slide31:

John McCarthy
In 1956 John McCarthy, regarded as the father of AI, organized a conference to draw on the talent and expertise of others interested in machine intelligence for a month of brainstorming. He invited them to Dartmouth College in New Hampshire for "The Dartmouth Summer Research Project on Artificial Intelligence." From that point on, because of McCarthy, the field would be known as artificial intelligence. Although not a huge success, the Dartmouth conference did bring together the founders of AI and served to lay the groundwork for the future of AI research.

Slide32:

Knowledge Expansion
In the seven years after the conference, AI began to pick up momentum. Although the field was still undefined, ideas formed at the conference were re-examined and built upon. Centers for AI research began forming at Carnegie Mellon and MIT, and new challenges were faced: first, creating systems that could efficiently solve problems by limiting the search, as the Logic Theorist had done; and second, making systems that could learn by themselves. In 1957, the first version of a new program, the General Problem Solver (GPS), was tested. The program was developed by the same pair who had developed the Logic Theorist. The GPS was an extension of Wiener's feedback principle and was capable of solving a greater range of common-sense problems.

Slide33:

Knowledge Expansion (Cont.)
A couple of years after the GPS, IBM contracted a team to research artificial intelligence. While more programs were being produced, McCarthy was busy developing a major breakthrough in AI history. In 1958 McCarthy announced his new development: the LISP language, which is still used today and is the language of choice among most AI developers.

Slide34:

From Lab to Life
No longer was computer technology just the province of a select few researchers in laboratories. The personal computer made its debut along with many technological magazines. Foundations such as the American Association for Artificial Intelligence also started. There was also, with the demand for AI development, a push for researchers to join private companies. Other fields of AI also made their way into the marketplace during the 1980s. One in particular was machine vision. The work by Minsky and Marr was now the foundation for the cameras and computers on assembly lines performing quality control. Although crude, these systems could distinguish different shapes in objects using black-and-white differences. By 1985 over a hundred companies offered machine vision systems in the US, and sales totaled $80 million.

Slide35:

From Lab to Life (Cont.)
The 1980s were not totally good for the AI industry. In 1986-87 the demand for AI systems decreased, and the industry lost almost half a billion dollars. Companies such as Teknowledge and Intellicorp together lost more than $6 million, about a third of their total earnings. Another disappointment was the so-called "smart truck" financed by the Defense Advanced Research Projects Agency. The project's goal was to develop a robot that could perform many battlefield tasks. In 1989, due to project setbacks and unlikely success, the Pentagon cut funding for the project. Despite these discouraging events, AI slowly recovered. New technology was being developed in Japan. Fuzzy logic, first pioneered in the US, has the unique ability to make decisions under uncertain conditions. Neural networks were also being reconsidered as possible ways of achieving artificial intelligence. The 1980s introduced AI to its place in the corporate marketplace and showed that the technology had real-life uses, ensuring it would be a key in the 21st century.

Slide36:

A.I. Timeline

Slide37:

HISTORY OF AI
1943: early beginnings
  McCulloch & Pitts: Boolean circuit model of the brain
1950: Turing
  Turing's "Computing Machinery and Intelligence"
1956: birth of AI
  Dartmouth meeting: "Artificial Intelligence" name adopted
1950s: initial promise
  Early AI programs, including Samuel's checkers program and Newell & Simon's Logic Theorist
1955-65: "great enthusiasm"
  Newell and Simon: GPS, the General Problem Solver
  Gelernter: Geometry Theorem Prover
  McCarthy: invention of LISP

Slide38:

HISTORY OF AI
1966-73: reality dawns
  Realization that many AI problems are intractable
  Limitations of existing neural network methods identified
  Neural network research almost disappears
1969-85: adding domain knowledge
  Development of knowledge-based systems
  Success of rule-based expert systems, e.g. DENDRAL, MYCIN
  But these were brittle and did not scale well in practice
1986--: rise of machine learning
  Neural networks return to popularity
  Major advances in machine learning algorithms and applications
1990--: role of uncertainty
  Bayesian networks as a knowledge representation framework
1995-present: AI as science
  Integration of learning, reasoning, knowledge representation
  AI methods used in vision, language, data mining, etc.

Slide39:

APPLICATIONS OF AI
Game playing
Speech recognition
Understanding natural language
Computer vision
Expert systems

Slide40:

APPLICATIONS OF AI

Slide41:

APPLICATIONS OF AI
Game Playing: games are interactive computer programs, an emerging area in which the goals of human-level AI are pursued
Speech Recognition: the process of converting speech to a sequence of words. In the 1990s, computer speech recognition reached a practical level for limited purposes
Understanding Natural Language: Natural Language Processing (NLP) performs automated generation and understanding of natural human languages
Computer Vision: the combination of concepts, techniques and ideas from digital image processing, pattern recognition, AI and computer graphics
Expert Systems: they enable the system to diagnose situations without a human expert being present

Slide42:

Next Discussions:
Intelligent Agents
Structure of Intelligent Agents
Natural Language Processing (NLP)
STOP !!!

Slide43:

INTELLIGENT AGENTS

Slide44:

What is Intelligence?
The ability to communicate
The ability to retain knowledge
The ability to solve problems

Slide45:

INTELLIGENT BEHAVIOR
Learn from experience
Apply knowledge acquired from experience
Handle complex situations
Solve problems when important information is missing
Determine what is important
React quickly and correctly to a new situation
Understand visual images
Process and manipulate symbols
Be creative and imaginative
Use heuristics

Slide46:

AGENTS
An agent is anything that can be viewed as perceiving its environment through sensors and acting upon that environment through actuators.

Slide47:

EXAMPLE/TYPES OF AGENT
Human Agent: a human agent has eyes, ears and other organs for sensors, and hands, legs, mouth and other body parts for actuators.
Robotic Agent: a robotic agent might have cameras and infrared range finders for sensors and various motors for actuators.
Software Agent: a software agent has encoded bit strings as its percepts and actions.

Slide48:

AGENTS & ENVIRONMENT
An agent perceives its environment through sensors
-- the complete set of inputs at a given time is called a percept
-- the current percept, or a sequence of percepts, may influence the actions of an agent
It can change the environment through actuators
-- an operation involving an actuator is called an action
-- actions can be grouped into action sequences

Slide49:

AGENT & ITS ACTION/FUNCTION
A rational agent does "the right thing"
-- the action that leads to the best outcome under the given circumstances
An agent function maps percept sequences to actions
-- an abstract mathematical description
An agent program is a concrete implementation of the respective function
-- it runs on a specific agent architecture ("platform")
Problems:
-- what is "the right thing"?
-- how do you measure the "best outcome"?
Aim: find a way to implement the rational agent function concisely

Slide50:

AGENT FUNCTION
a = F(p)
where p is the current percept, a is the action carried out, and F is the agent function.
F maps percepts to actions:
F: P → A
where P is the set of all percepts, and A is the set of all actions.
In general, an action may depend on all percepts observed so far, not just the current percept, so
a_k = F(p_0 p_1 p_2 ... p_k)
where p_0 p_1 p_2 ... p_k is the sequence of percepts observed to date and a_k is the resulting action carried out.
F now maps percept sequences to actions:
F: P* → A
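
The mapping F: P* → A can be illustrated with a small sketch. The names here (`Agent`, `last_percept_policy`, the percept and action strings) are our illustrative assumptions, not part of the slides:

```python
# Sketch of an agent function F: P* -> A. The Agent object keeps
# the percept sequence p0 p1 ... pk and maps it to an action a_k.

class Agent:
    def __init__(self, agent_function):
        self.percepts = []               # percept sequence so far
        self.agent_function = agent_function

    def step(self, percept):
        self.percepts.append(percept)    # record the new percept
        return self.agent_function(self.percepts)

# An example F that only looks at the latest percept p_k,
# i.e. the special case a = F(p):
def last_percept_policy(percepts):
    return "suck" if percepts[-1] == "dirty" else "move"

agent = Agent(last_percept_policy)
print(agent.step("dirty"))   # suck
print(agent.step("clean"))   # move
```

Because the whole percept sequence is passed to F, a more elaborate policy could use the history, which is exactly the generalization the slide describes.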

Slide51:

PERFORMANCE MEASURE OF AGENTS
Criteria for measuring the outcome and the expenses of the agent
-- often subjective, but should be objective
-- task dependent
-- time may be important
The performance measure embodies the criterion for success of an agent's behavior. When an agent is plunked down in an environment, it generates a sequence of actions according to the percepts it receives. This sequence causes the environment to go through a sequence of states. If that sequence of states is desirable, then the agent has performed well.
There is no one fixed measure suitable for all agents. Therefore, we insist on an objective performance measure, typically one imposed by the designer who is constructing the agent.

Slide52:

EXAMPLE – PERFORMANCE MEASURE
Vacuum Agent: number of tiles cleaned during a certain period
-- based on the agent's report, or validated by an objective authority
-- doesn't consider the expenses of the agent or side effects (energy, noise, loss of useful objects, damaged furniture, scratched floor)
-- might lead to unwanted activities (the agent re-cleans clean tiles, covers only part of the room, drops dirt on tiles to have more tiles to clean, etc.)

Slide53:

NATURE OF ENVIRONMENT
A task environment specification includes the performance measure, the external environment, the actuators, and the sensors. In designing an agent, the first step must always be to specify the task environment as fully as possible. Acronymically, this is called the PEAS (Performance, Environment, Actuators, Sensors) description.

Slide54:

PEAS – EXAMPLE
Agent: automated taxi driver
Performance measure: safe, fast, legal, comfortable trip, maximize profits
Environment: roads, other traffic, pedestrians, customers
Actuators: steering wheel, accelerator, brake, signal, horn
Sensors: cameras, sonar, speedometer, GPS, odometer, engine sensors, keyboard

Agent: medical diagnosis system
Performance measure: healthy patient, minimize costs, ...
Environment: patient, hospital, staff
Actuators: screen display (questions, tests, diagnoses, treatments, referrals)
Sensors: keyboard (entry of symptoms, findings, patient's answers)

Slide55:

PEAS – EXAMPLE
Agent: part-picking robot
Performance measure: percentage of parts in correct bins
Environment: conveyor belt with parts, bins
Actuators: jointed arm and hand
Sensors: camera, joint angle sensors

Agent: interactive English tutor
Performance measure: maximize student's score on test
Environment: set of students
Actuators: screen display (exercises, suggestions, corrections)
Sensors: keyboard

Slide56:

PROPERTIES OF TASK ENVIRONMENT
Task environments vary along several significant dimensions. These dimensions determine, to a large extent, the appropriate agent design and the applicability of each of the principal families of techniques for agent implementation. The dimensions are as follows:
Fully or partially observable
Deterministic or stochastic
Episodic or sequential
Static or dynamic
Discrete or continuous
Single-agent or multi-agent

Slide57:

FULLY vs. PARTIALLY OBSERVABLE
If an agent's sensors give it access to the complete state of the environment at each point in time, then we say that the task environment is fully observable. A task environment is effectively fully observable if the sensors detect all aspects that are relevant to the choice of actions; relevance, in turn, depends on the performance measure. Fully observable environments are convenient because the agent need not maintain any internal state to keep track of the world. An environment might be partially observable because of noisy and inaccurate sensors, or because parts of the state are simply missing from the sensor data. For example, a vacuum agent with only a local dirt sensor cannot tell whether there is dirt in other squares, and an automated taxi cannot see what other drivers are thinking.

Slide58:

DETERMINISTIC vs. STOCHASTIC
If the next state of the environment is completely determined by the current state and the action executed by the agent, then we say the environment is deterministic; otherwise it is stochastic. In principle, an agent need not worry about uncertainty in a fully observable deterministic environment. If the environment is partially observable, however, then it could appear to be stochastic. It is often better to think of an environment as deterministic or stochastic from the point of view of the agent. Taxi driving is clearly stochastic in this sense, because one can never predict the behavior of traffic exactly; moreover, one's tires can blow out and one's engine can seize up without warning. The vacuum world as described is deterministic, but variations can include stochastic elements such as randomly appearing dirt and an unreliable suction mechanism. If the environment is deterministic except for the actions of other agents, we say that the environment is strategic.

Slide59:

EPISODIC vs. SEQUENTIAL
In an episodic task environment, the agent's experience is divided into atomic episodes. Each episode consists of the agent perceiving and then performing a single action. Crucially, the next episode does not depend on the actions taken in previous episodes: the choice of action in each episode depends only on the episode itself. For example, an agent that has to spot defective parts on an assembly line bases each decision on the current part; its decision does not affect whether the next part is defective. In a sequential environment, the current decision could affect all future decisions. Chess and taxi driving are sequential: in both cases, short-term actions can have long-term consequences. Episodic environments are much simpler than sequential environments because the agent does not need to think ahead.

Slide60:

SINGLE AGENT vs. MULTI-AGENT
An agent operating by itself in an environment is a single agent
• Examples: a crossword is single-agent, while chess is two-agent
• Question: does an agent A have to treat an object B as an agent, or can B be treated as a stochastically behaving object?
• It depends on whether B's behavior is best described as maximizing a performance measure whose value depends on agent A's behavior
• Examples: chess is a competitive multi-agent environment, while taxi driving is a partially cooperative multi-agent environment

Slide61:

TASK ENVIRONMENT TYPES

Slide62:

STRUCTURE OF AGENTS
The job of AI is to design the agent program that implements the agent function mapping percepts to actions. We assume this program will run on some sort of computing device with physical sensors and actuators:
AGENT = ARCHITECTURE + PROGRAM
The architecture makes the percepts from the sensors available to the program, runs the program, and feeds the program's action choices to the actuators as they are generated.

Slide63:

AGENT PROGRAM
Agent programs share a common skeleton: they take the current percept as input from the sensors and return an action to the actuators. Notice that the agent program takes the current percept as input, while the agent function takes the entire percept history. The agent program takes just the current percept because nothing more is available from the environment; if the agent's actions depend on the entire percept sequence, the agent will have to remember the percepts.

Slide64:

For example, the following TABLE-DRIVEN-AGENT program is invoked for each new percept and returns an action each time. It keeps track of the percept sequence using its own private data structure.

function TABLE-DRIVEN-AGENT (percept) returns an action
  static: percepts, a sequence, initially empty
          table, a table of actions, indexed by percept sequences, initially fully specified
  append percept to the end of percepts
  action ← LOOKUP (percepts, table)
  return action

This is a rather trivial agent program: it keeps track of the percept sequence and then uses it to index into a table of actions to decide what to do. The table represents explicitly the agent function that the agent program embodies.
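
Rendered as runnable Python, the same program might look like this; the table contents are a made-up illustration for a tiny world (real tables are usually far too large to enumerate):

```python
# Table-driven agent: the table maps entire percept sequences
# (as tuples) to actions, making the agent function explicit.

percepts = []  # percept sequence, initially empty

# Illustrative table for a two-step world; in practice such
# tables grow exponentially with the length of the history.
table = {
    ("dirty",): "suck",
    ("clean",): "move",
    ("dirty", "clean"): "move",
    ("clean", "dirty"): "suck",
}

def table_driven_agent(percept):
    percepts.append(percept)           # append percept to percepts
    return table.get(tuple(percepts))  # action <- LOOKUP(percepts, table)

print(table_driven_agent("dirty"))   # suck
print(table_driven_agent("clean"))   # move
```

The lookup uses the whole sequence, not just the last percept, which is why the table blows up so quickly as histories lengthen.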

Slide65:

KINDS OF AGENT PROGRAMS
Simple reflex agents
Model-based reflex agents
Goal-based agents
Utility-based agents
Note: all of these can be turned into learning agents

Slide66:

SIMPLE REFLEX AGENT
These agents select actions on the basis of the current percept, ignoring the rest of the percept history. For example, the vacuum agent is a simple reflex agent. An agent program for the vacuum agent is as follows:
Note: the given program is specific to one particular vacuum environment. A more general and flexible approach is first to build a general-purpose interpreter for condition-action rules and then create rule sets for specific task environments.

Slide67:

Two locations: A and B
Percepts: location & contents, e.g. [A, dirty]
Actions: left, right, suck, no_op
One simple function: if the current square is dirty then suck, otherwise move to the other square
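
The "one simple function" above can be written directly as a reflex agent. This is a sketch using the percept and action encodings from the slide:

```python
# Simple reflex vacuum agent: the action depends only on the
# current percept [location, status], never on the history.

def reflex_vacuum_agent(percept):
    location, status = percept
    if status == "dirty":
        return "suck"
    # otherwise move to the other square
    return "right" if location == "A" else "left"

print(reflex_vacuum_agent(["A", "dirty"]))   # suck
print(reflex_vacuum_agent(["A", "clean"]))   # right
print(reflex_vacuum_agent(["B", "clean"]))   # left
```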

Slide68:

STRUCTURE OF GENERAL PROGRAM (SIMPLE-REFLEX AGENT)

Slide69:

STRUCTURE OF GENERAL PROGRAM
The INTERPRET_INPUT function generates an abstracted description of the current state from the percept.
The RULE_MATCH function returns the first rule in the set of rules that matches the given state description.

Slide70:

SIMPLE REFLEX AGENT
Simple reflex agents are simple, but they turn out to be of very limited intelligence. The agent will work only if the correct decision can be made on the basis of the current percept, that is, only if the environment is fully observable. Infinite loops are often unavoidable; escape may be possible by randomizing.

Slide71:

MODEL-BASED REFLEX AGENTS
• The agent should keep track of the part of the world it can't see now
• The agent should maintain some sort of internal state that depends on the percept history and reflects at least some of the unobserved aspects of the current state
• Updating the internal state information as time goes by requires two kinds of knowledge to be encoded in the agent program
– Information about how the world evolves independently of the agent
– Information about how the agent's own actions affect the world
• This model of the world is what gives model-based agents their name
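
The idea can be sketched for the two-square vacuum world. The class name and the simple world model here (sucking cleans a square, clean squares stay clean) are our illustrative assumptions, not from the slides:

```python
# Model-based reflex vacuum agent sketch (illustrative, two squares).
# Internal state tracks the believed status of the square the agent
# cannot currently see.

class ModelBasedVacuumAgent:
    def __init__(self):
        self.state = {"A": "unknown", "B": "unknown"}  # internal state

    def step(self, percept):
        location, status = percept
        self.state[location] = status                  # incorporate percept
        other = "B" if location == "A" else "A"
        if status == "dirty":
            self.state[location] = "clean"             # model: suck cleans
            return "suck"
        if self.state[other] == "clean":
            return "no_op"                             # model: all clean
        return "right" if location == "A" else "left"

agent = ModelBasedVacuumAgent()
print(agent.step(["A", "dirty"]))  # suck
print(agent.step(["A", "clean"]))  # right
print(agent.step(["B", "clean"]))  # no_op
```

Unlike the simple reflex agent, this one can stop working once its model says both squares are clean, even though it can only ever see one square at a time.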

Slide72:

MODEL-BASED REFLEX AGENTS

Slide73:

MODEL-BASED REFLEX AGENTS

Slide74:

GOAL-BASED AGENT
• Knowing about the current state of the environment is not always enough to decide what to do (e.g. a decision at a road junction)
• The agent needs some sort of goal information that describes situations that are desirable
• The agent program can combine this with information about the results of possible actions in order to choose actions that achieve the goal
• Usually requires search and planning
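
Combining goal information with a model of action results can be sketched as follows; the one-dimensional world, the `result` model, and the function names are illustrative assumptions:

```python
# Goal-based selection sketch: pick an action whose predicted
# result (from a model) satisfies the goal.

def result(state, action):
    # model: how each action changes the agent's position
    moves = {"left": -1, "stay": 0, "right": +1}
    return state + moves[action]

def goal_based_choice(state, goal, actions=("left", "stay", "right")):
    for action in actions:
        if result(state, action) == goal:  # does this action reach the goal?
            return action
    return "stay"  # no single action reaches the goal

print(goal_based_choice(2, 3))   # right
```

Here only one-step lookahead is shown; when the goal is several actions away, this one-step check becomes a search or planning problem, as the slide notes.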

Slide75:

GOAL-BASED AGENT

Slide76:

GOAL-BASED vs. REFLEX-BASED AGENT
• Although the goal-based agent appears less efficient, it is more flexible because the knowledge that supports its decisions is represented explicitly and can be modified
• For the reflex agent, on the other hand, we would have to rewrite many condition-action rules
• The goal-based agent's behavior can easily be changed
• The reflex agent's rules must be changed for a new situation

Slide77:

UTILITY-BASED AGENT
• Goals alone are not really enough to generate high-quality behavior in most environments; they just provide a binary distinction between happy and unhappy states
• A more general performance measure should allow a comparison of different world states according to exactly how happy they would make the agent if they could be achieved
• Happy: utility (the quality of being useful)
• A utility function maps a state onto a real number which describes the associated degree of happiness
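
A utility function and the resulting choice rule can be sketched like this; the particular utility function, the `result` model, and the names are illustrative assumptions:

```python
# Utility-based selection sketch: a utility function maps each state
# to a real number, and the agent picks the action whose resulting
# state has the highest utility.

def utility(state):
    # example utility: prefer states closer to position 0
    return -abs(state)

def result(state, action):
    moves = {"left": -1, "stay": 0, "right": +1}
    return state + moves[action]

def utility_based_choice(state, actions=("left", "stay", "right")):
    return max(actions, key=lambda a: utility(result(state, a)))

print(utility_based_choice(2))    # left
print(utility_based_choice(-1))   # right
```

Where a goal only splits states into goal/non-goal, the real-valued utility lets the agent rank every reachable state and trade off between them.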

Slide78:

UTILITY-BASED AGENT

Slide79:

LEARNING AGENTS
• Turing – instead of actually programming intelligent machines by hand, which is too much work, build learning machines and then teach them
• Learning also allows the agent to operate in initially unknown environments and to become more competent than its initial knowledge alone might allow

Slide80:

LEARNING AGENTS

Slide81:

LEARNING AGENTS
• Learning element – responsible for making improvements
• Performance element – responsible for selecting external actions (it is what we had defined as the entire agent before)
• The learning element uses feedback from the critic on how the agent is doing and determines how the performance element should be modified to do better in the future
• The problem generator is responsible for suggesting actions that will lead to new and informative experiences

Slide82:

NATURAL LANGUAGE PROCESSING
Natural language gives computer users the ability to communicate with the computer in their native language.
TYPES OF NLP
Natural Language Understanding
-- taking some spoken/typed sentence and working out what it means
Natural Language Generation
-- taking some formal representation of what you want to say and working out a way to express it in a natural (human) language (e.g., English)

Slide83:

WHY NLP?
Huge amounts of data
-- Internet = at least 20 billion pages
-- Intranets
Applications for processing large amounts of text require NLP expertise:
1. Classify text into categories
2. Index and search large texts
3. Automatic translation
4. Speech understanding -- understand phone conversations
5. Information extraction -- extract useful information from resumes
6. Automatic summarization -- condense 1 book into 1 page
7. Question answering
8. Knowledge acquisition
9. Text generation / dialogues

Slide84:

NATURAL LANGUAGE PROCESSING

Slide85:

WHERE NLP EXISTS
(Diagram of NLP's place within computing: Computers, Artificial Intelligence, Algorithms, Databases, Networking, Robotics, Search, Natural Language Processing, Information Retrieval, Machine Translation, Language Analysis, Semantics, Parsing)

Slide86:

END OF UNIT I
