Unit IV AIR Natural Language Processing and ANN

Presentation Description

Natural Language Processing: Introduction, Stages in Natural Language Processing, Applications of NLP in Machine Translation, Information Retrieval and Big Data Information Retrieval. Learning: Supervised, Unsupervised and Reinforcement Learning. Artificial Neural Networks (ANNs): Concept, Feed-forward and Feedback ANNs, Error Back Propagation, Boltzmann Machine.

Presentation Transcript

slide 1:

BE Computer Engineering Unit IV Natural Language Processing and ANN

slide 2:

Outline
• Natural Language Processing: Introduction, Stages in Natural Language Processing
• Learning: Supervised, Unsupervised and Reinforcement Learning
• Artificial Neural Networks (ANNs): Concept

slide 3:

Stages of language processing
• Phonetics and phonology
• Morphology
• Lexical Analysis
• Syntactic Analysis
• Semantic Analysis
• Pragmatics
• Discourse

slide 4:

Natural Language Processing (NLP)
• Natural Language Processing (NLP) refers to the AI method of communicating with an intelligent system using a natural language such as English.
• Processing of natural language is required when you want an intelligent system like a robot to perform as per your instructions, or when you want to hear a decision from a dialogue-based clinical expert system, etc.
• The field of NLP involves making computers perform useful tasks with the natural languages humans use. The input and output of an NLP system can be speech or written text.

slide 5:

Components of NLP
• There are two components of NLP:
• Natural Language Understanding (NLU): Understanding involves the following tasks:
• Mapping the given input in natural language into useful representations.
• Analyzing different aspects of the language.
• Natural Language Generation (NLG): It is the process of producing meaningful phrases and sentences in the form of natural language from some internal representation. It involves:
• Text planning: retrieving the relevant content from the knowledge base.
• Sentence planning: choosing the required words, forming meaningful phrases, and setting the tone of the sentence.
• Text realization: mapping the sentence plan into sentence structure.
• NLU is harder than NLG.

slide 6:

Difficulties in NLU
• NL has an extremely rich form and structure.
• It is very ambiguous. There can be different levels of ambiguity:
• Lexical ambiguity: at a very primitive level, such as word level. For example, treating the word "board" as a noun or a verb.
• Syntax-level ambiguity: a sentence can be parsed in different ways. For example, "He lifted the beetle with red cap." Did he use the cap to lift the beetle, or did he lift a beetle that had a red cap?
• Referential ambiguity: referring to something using pronouns. For example: Rima went to Gauri. She said, "I am tired." Exactly who is tired?
• One input can have different meanings.
• Many inputs can mean the same thing.
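
Lexical ambiguity of this kind is easy to observe with an off-the-shelf part-of-speech tagger. A minimal sketch, assuming NLTK is installed (resource names for the downloads can differ across NLTK versions): the tagger resolves "board" differently depending on the surrounding words.

```python
import nltk

# One-time model downloads (assumes network access on first run).
nltk.download("punkt", quiet=True)
nltk.download("averaged_perceptron_tagger", quiet=True)

for sentence in ["They board the train at noon.",
                 "The board meets on Monday."]:
    tokens = nltk.word_tokenize(sentence)
    # pos_tag returns (word, tag) pairs; in context, 'board' is typically
    # tagged as a verb in the first sentence and a noun in the second.
    print(nltk.pos_tag(tokens))
```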

slide 7:

NLP Terminology
• Phonology: This is needed only if the computer is required to understand spoken language. Phonology is the study of the sounds that make up words and is used to identify words from sounds.
• Morphology: This is the first stage of analysis that is applied to words once they have been identified from speech or input into the system. Morphology looks at the ways in which words break down into components and how that affects their grammatical status. For example, the letter "s" on the end of a word can often indicate either that it is a plural noun or a third-person present-tense verb.
• Syntax: This stage involves applying the rules of the grammar of the language being used. Syntax determines the role of each word in a sentence and thus enables a computer system to convert sentences into a structure that can be more easily manipulated.

slide 8:

NLP Terminology
• Semantics: This involves the examination of the meaning of words and sentences. As we will see, it is possible for a sentence to be syntactically correct but semantically meaningless. Conversely, it is desirable that a computer system be able to understand sentences with incorrect syntax that still convey useful information semantically.
• Pragmatics: This is the application of human-like understanding to sentences and discourse to determine meanings that are not immediately clear from the semantics. For example, if someone says "Can you tell me the time?", most people know that "yes" is not a suitable answer. Pragmatics enables a computer system to give a sensible answer to questions like this.

slide 9:

NLP Terminology
• Morphological Analysis: In studying the English language, morphology is relatively simple.
• We have endings such as -ing, -s and -ed, which are applied to verbs, and endings such as -s and -es, which are applied to nouns.
• We also have the ending -ly, which usually indicates that a word is an adverb.
• We also have prefixes such as anti-, non-, un- and in-, which tend to indicate negation or opposition.
• We also have a number of other prefixes and suffixes that provide a variety of semantic and syntactic information (see the sketch below).
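
Suffix stripping of this kind is what classical stemmers automate. As an illustrative sketch (assuming NLTK is available), the Porter stemmer removes common verb and noun endings such as -ing, -s and -ed:

```python
from nltk.stem import PorterStemmer

stemmer = PorterStemmer()
# The Porter algorithm strips common suffixes; note that it returns stems,
# not necessarily dictionary words (e.g. 'happily' -> 'happili').
for word in ["crossing", "crossed", "crosses", "cats", "happily"]:
    print(word, "->", stemmer.stem(word))
```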

slide 10:

Steps in NLP
• Lexical Analysis: It involves identifying and analyzing the structure of words. The lexicon of a language means the collection of words and phrases in that language. Lexical analysis divides the whole chunk of text into paragraphs, sentences and words (a tokenization sketch follows this list).
• Syntactic Analysis (Parsing): It involves analysis of the words in the sentence for grammar, and arranging the words in a manner that shows the relationships among them.
• Semantic Analysis: It draws the exact meaning, or the dictionary meaning, from the text. The text is checked for meaningfulness. It is done by mapping syntactic structures onto objects in the task domain. It rejects phrases such as "hot ice-cream".
• Discourse Integration: The meaning of any sentence depends upon the meaning of the sentence just before it.
• Pragmatic Analysis: During this step, what was said is re-interpreted as what was actually meant. It involves deriving those aspects of language which require real-world knowledge.
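
The first step, lexical analysis, is routinely done with a tokenizer. A minimal sketch, assuming NLTK and its punkt models are installed (model names can vary by NLTK version):

```python
import nltk

nltk.download("punkt", quiet=True)  # tokenizer models (one-time download)

text = "The black cat crossed the road. Rima went to Gauri."
sentences = nltk.sent_tokenize(text)                 # split text into sentences
words = [nltk.word_tokenize(s) for s in sentences]   # split sentences into words
print(sentences)
print(words)
```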

slide 11:

Parsing: Syntactic Analysis
• The process in which we convert a sentence into a tree that represents the sentence's syntactic structure is known as parsing.
• Parsing a sentence tells us whether it is a valid sentence as defined by our grammar.
• If a sentence is not a valid sentence, then it cannot be parsed.
• Parsing a sentence involves producing a tree. For example: The black cat crossed the road.
• This tree shows how the sentence is made up of a noun phrase and a verb phrase. The noun phrase consists of an article, an adjective and a noun. The verb phrase consists of a verb and a further noun phrase, which in turn consists of an article and a noun.

slide 12:

Parsing: Syntactic Analysis
• Building a parse tree from the top down involves starting from a sentence and determining which of the possible rewrites for Sentence can be applied to the sentence that is being parsed. Hence, in this case, Sentence would be rewritten using the following rule:
• Sentence → NounPhrase VerbPhrase
• Then the verb phrase and noun phrase would be broken down recursively in the same way, until only terminal symbols were left.
• When a parse tree is built from the top down, it is known as a derivation tree.
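
Top-down parsing of the example sentence can be sketched with NLTK's recursive-descent parser. The toy grammar below is our own assumption, written to match the tree described above (article, adjective, noun; verb plus a further noun phrase):

```python
import nltk

# A toy grammar (an illustrative assumption) covering the example sentence.
grammar = nltk.CFG.fromstring("""
S -> NP VP
NP -> Art Adj N | Art N
VP -> V NP
Art -> 'the'
Adj -> 'black'
N -> 'cat' | 'road'
V -> 'crossed'
""")

# RecursiveDescentParser builds the tree from the top down,
# i.e. it produces a derivation tree as described above.
parser = nltk.RecursiveDescentParser(grammar)
for tree in parser.parse("the black cat crossed the road".split()):
    print(tree)
```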

slide 13:

Syntactic Analysis

slide 14:

Semantic Analysis
• Having determined the syntactic structure of a sentence, the next task of natural language processing is to determine the meaning of the sentence.
• Semantic analysis involves building up a representation of the objects and actions that a sentence is describing, including details provided by adjectives, adverbs and prepositions.

slide 15:

Learning: Supervised, Unsupervised and Reinforcement Learning

slide 16:

Machine Learning
• Machine learning is a branch of science that deals with programming systems in such a way that they automatically learn and improve with experience.
• Here, learning means recognizing and understanding the input data and making wise decisions based on the supplied data.
• It is very difficult to cater for all the decisions based on all possible inputs.
• To tackle this problem, algorithms are developed. These algorithms build knowledge from specific data and past experience using the principles of statistics, probability theory, logic, combinatorial optimization, search, reinforcement learning and control theory.

slide 17:

Supervised Learning
• Supervised learning deals with learning a function from available training data. A supervised learning algorithm analyzes the training data and produces an inferred function, which can be used for mapping new examples. Common examples of supervised learning include:
• classifying e-mails as spam,
• labeling webpages based on their content, and
• voice recognition.
• Each of these components can be learned from appropriate feedback. Consider, for example, an agent training to become a taxi driver.
• 1. Every time the instructor shouts "Brake!", the agent can learn a condition-action rule for when to brake.
• 2. By seeing many camera images that it is told contain buses, it can learn to recognize them.
• 3. By trying actions and observing the results, for example braking hard on a wet road, it can learn the effects of its actions.
• 4. Then, when it receives no tip from passengers who have been thoroughly shaken up during the trip, it can learn a useful component of its overall utility function.

slide 18:

Supervised Learning
• The problem of supervised learning involves learning a function from examples of its inputs and outputs. Cases 1, 2 and 3 above are all instances of supervised learning problems.
• In 1, the agent learns a condition-action rule for braking: this is a function from states to a Boolean output (to brake or not to brake).
• In 2, the agent learns a function from images to a Boolean output (whether the image contains a bus).
• In 3, the theory of braking is a function from states and braking actions to, say, stopping distance in feet.
• Notice that in cases 1 and 2 a teacher provided the correct output value of the examples; in the third, the output value was available directly from the agent's percepts. For fully observable environments, it will always be the case that an agent can observe the effects of its actions and hence can use supervised learning methods to learn to predict them.
• For partially observable environments the problem is more difficult, because the immediate effects might be invisible.
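
As a concrete sketch of the spam example (assuming scikit-learn is installed; the tiny training set is made up for illustration): a Naive Bayes classifier infers a function from e-mail text to a spam/ham label, then maps a new example.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

# Tiny hypothetical training data: inputs (e-mails) and outputs (labels).
emails = ["win a free prize now", "cheap meds free offer",
          "meeting agenda for monday", "lunch with the project team"]
labels = ["spam", "spam", "ham", "ham"]

vectorizer = CountVectorizer()               # turn text into word-count features
X = vectorizer.fit_transform(emails)
classifier = MultinomialNB().fit(X, labels)  # infer the mapping function

# Apply the inferred function to a new, unseen example.
new_email = ["free prize offer"]
print(classifier.predict(vectorizer.transform(new_email)))  # -> ['spam']
```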

slide 19:

Unsupervised Learning
• Unsupervised learning makes sense of unlabeled data without having any predefined dataset for its training. Unsupervised learning is an extremely powerful tool for analyzing available data and looking for patterns and trends. It is most commonly used for clustering similar input into logical groups. Common approaches to unsupervised learning include:
• k-means,
• self-organizing maps, and
• hierarchical clustering.
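
A minimal k-means sketch (assuming scikit-learn; the two-dimensional points are invented for illustration). No labels are ever supplied; the algorithm groups the points itself:

```python
from sklearn.cluster import KMeans

# Hypothetical unlabeled data: six 2-D points forming two loose groups.
points = [[1.0, 1.1], [0.9, 1.0], [1.2, 0.8],
          [8.0, 8.2], [7.9, 8.0], [8.3, 7.7]]

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(points)
print(kmeans.labels_)           # cluster index assigned to each point
print(kmeans.cluster_centers_)  # the two learned cluster centers
```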

slide 20:

Unsupervised Learning
• The problem of unsupervised learning involves learning patterns in the input when no specific output values are supplied.
• For example, a taxi agent might gradually develop a concept of "good traffic days" and "bad traffic days" without ever being given labelled examples of each.
• A purely unsupervised learning agent cannot learn what to do, because it has no information as to what constitutes a correct action or a desirable state.
• We will study unsupervised learning primarily in the context of probabilistic reasoning systems.

slide 21:

Reinforcement Learning
• The problem of reinforcement learning is the most general of the three categories.
• Rather than being told what to do by a teacher, a reinforcement learning agent must learn from reinforcement.
• For example, the lack of a tip at the end of the journey gives the agent some indication that its behavior is undesirable.
• Reinforcement learning typically includes the subproblem of learning how the environment works.
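
A hedged sketch of the idea: tabular Q-learning (one standard reinforcement learning method, not the only one) adjusts the value of a state-action pair from the reward signal alone, with no teacher providing correct outputs. The tiny chain world below is an invented example.

```python
import random

# Invented toy world: states 0..3 in a chain; action 0 moves left,
# action 1 moves right; the only reward is +1 for reaching state 3.
N_STATES, N_ACTIONS = 4, 2
Q = [[0.0] * N_ACTIONS for _ in range(N_STATES)]
alpha, gamma, epsilon = 0.5, 0.9, 0.1  # learning rate, discount, exploration

for _ in range(500):
    s = 0
    while s != 3:
        # epsilon-greedy action choice: mostly exploit, sometimes explore.
        a = random.randrange(N_ACTIONS) if random.random() < epsilon \
            else max(range(N_ACTIONS), key=lambda i: Q[s][i])
        s2 = max(0, s - 1) if a == 0 else min(3, s + 1)
        r = 1.0 if s2 == 3 else 0.0
        # Q-learning update: move Q(s,a) toward reward + discounted best future value.
        Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
        s = s2

print(Q)  # right-moving actions should end up with the higher values
```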

slide 22:

Artificial Neural Networks (ANNs)

slide 23:

Artificial Neural Networks (ANNs)
• A neuron is a cell in the brain whose principal function is the collection, processing and dissemination of electrical signals.
• The brain's information-processing capacity is thought to emerge primarily from networks of such neurons.
• Some facts:
• The human brain contains ≈ 10¹¹ neurons.
• Each neuron is connected to ≈ 10⁴ others.
• Some scientists have compared the brain with a "complex, nonlinear, parallel computer".
• Evidence suggests that neurons receive, analyze and transmit information.
• The information is transmitted in the form of electro-chemical signals (pulses).
• When a neuron sends the information, we say that the neuron "fires".

slide 24:

Artificial Neural Networks (ANNs): Excitation and Inhibition
• The receptors of a neuron are called synapses, and they are located on many branches called dendrites.
• There are many types of synapses, but roughly they can be divided into two classes:
• Excitatory: a signal received at this synapse "encourages" the neuron to fire.
• Inhibitory: a signal received at this synapse inhibits the neuron, as if asking it to "shut up".
• The neuron analyses all the signals received at its synapses. If most of them are "encouraging", then the neuron gets "excited" and fires its own message along a single wire called the axon.
• The axon may have branches to reach many other neurons.

slide 25:

Neuron Model: Natural neurons

slide 26:

Neuron Model
• A neuron collects signals from dendrites.
• It sends out spikes of electrical activity through an axon, which splits into thousands of branches.
• At the end of each branch, a synapse converts activity into either exciting or inhibiting activity of a dendrite at another neuron.
• A neuron fires when exciting activity surpasses inhibitory activity.
• Learning changes the effectiveness of the synapses.

slide 27:

Neuron Model: Abstract neuron model

slide 28:

ANN Forward Propagation

slide 29:

Neuron Model
(Figure: http://www-cse.uta.edu/cook/ai1/lectures/figures/neuron.jpg)
• Receives n inputs.
• Multiplies each input by its weight.
• Applies an activation function to the sum of the results.
• Outputs the result.
• The activation function controls when the unit is "active" or "inactive".
• A threshold function outputs 1 when the input is positive and 0 otherwise.
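
A minimal sketch of this abstract neuron in plain Python (the weights, inputs and bias below are illustrative assumptions):

```python
def threshold(x):
    """Threshold activation: 1 when the input is positive, 0 otherwise."""
    return 1 if x > 0 else 0

def neuron(inputs, weights, bias=0.0, activation=threshold):
    # Multiply each input by its weight, sum, then apply the activation.
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return activation(total)

# Example: a 3-input unit with hand-picked weights.
print(neuron([1.0, 0.0, 1.0], [0.5, -0.4, 0.3], bias=-0.6))  # 0.8 - 0.6 > 0 -> 1
```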

slide 30:

Network structures
• There are two main categories of neural network structures: acyclic or feed-forward networks, and cyclic or recurrent networks.
• A feed-forward network represents a function of its current input; thus it has no internal state other than the weights themselves.
• A recurrent network, on the other hand, feeds its outputs back into its own inputs.
• A network with all the inputs connected directly to the outputs is called a single-layer neural network, or a perceptron network.
• Feed-forward networks are usually arranged in layers, such that each unit receives input only from units in the immediately preceding layer (see the sketch below).
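
A hedged sketch of a layered feed-forward pass (the layer sizes, random weights and sigmoid activation are illustrative assumptions), in which each layer's units receive input only from the preceding layer:

```python
import math, random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def layer(inputs, weights):
    # One layer: each unit takes a weighted sum of all preceding-layer outputs.
    return [sigmoid(sum(x * w for x, w in zip(inputs, unit_w)))
            for unit_w in weights]

random.seed(0)
# Illustrative 2-3-1 network: 2 inputs, 3 hidden units, 1 output unit.
w_hidden = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(3)]
w_output = [[random.uniform(-1, 1) for _ in range(3)]]

x = [0.5, -0.2]
hidden = layer(x, w_hidden)        # hidden layer sees only the inputs
output = layer(hidden, w_output)   # output layer sees only the hidden layer
print(output)
```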
