I256: Applied Natural Language Processing
Marti Hearst
Sept 18, 2006

Why do puns make us groan?
He drove his expensive car into a tree and found out how the Mercedes bends.
Isn't the Grand Canyon just gorges?

Why do puns make us groan?
Time flies like an arrow. Fruit flies like a banana.

Predicting Next Words
One reason puns make us groan is that they play on our assumptions about what the next word will be. They also exploit:
homonymy – same sound, different spelling and meaning (bends/Benz; gorges/gorgeous)
polysemy – same spelling, different meaning

Review: ConditionalFreqDist() Data Structure
A CFD is a collection of FreqDist() objects, indexed by the "condition" being tested or compared.
Initialize a new one:
    cfd = ConditionalFreqDist()
Add a count:
    cfd['austen-emma'].inc('she')
    cfd['austen-pride'].inc('she')
    cfd['austen-pride'].inc('he')
Access each FreqDist object by indexing on its condition:
    cfd['austen-emma'].samples()   # ['she']
    cfd['austen-pride'].N()        # 2
Get a list of the conditions from the cfd object:
    cfd.conditions()   # ['austen-emma', 'austen-pride']

Computing Next Words

Computing Bigrams via Storing Adjacent Word Counts
    cfd = ConditionalFreqDist()
    prev = None
    for word in sentence.split(" "):
        cfd[prev].inc(word.lower())
        prev = word.lower()
Trace for sentence = "The dog ate the crab":
    prev = None,  word = "the"
    prev = "the", word = "dog"
    prev = "dog", word = "ate"
    prev = "ate", word = "the"
    prev = "the", word = "crab"
    cfd['the'].samples()   # ['dog', 'crab']

Auto-generate a Story
How to fix this? Use a random number generator.

Auto-generate a Story
The choice() method chooses one item randomly from a list (from random import *).
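A minimal runnable sketch of the adjacent-word-count loop above, using the current NLTK API rather than the 2006 nltk_lite one on the slides (the .inc() and .samples() calls have since been replaced by dict-style access); the helper name bigram_cfd is my own:

    from nltk.probability import ConditionalFreqDist

    def bigram_cfd(text):
        """Count word -> next-word transitions in a ConditionalFreqDist."""
        cfd = ConditionalFreqDist()
        prev = None
        for word in text.split(" "):
            word = word.lower()
            cfd[prev][word] += 1   # modern NLTK: increment by dict assignment
            prev = word
        return cfd

    cfd = bigram_cfd("The dog ate the crab")
    print(list(cfd["the"]))   # ['dog', 'crab']
    print(cfd["the"].N())     # 2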
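And a hedged sketch of the story generator: starting from a seed word, repeatedly pick one of the observed next words with random.choice(). The training sentence and seed are made up for illustration, and the sketch reuses the bigram_cfd() helper above:

    import random

    def generate_story(cfd, seed, length=10):
        """Emit up to `length` more words by randomly walking the bigram table."""
        word = seed
        story = [word]
        for _ in range(length):
            successors = list(cfd[word])
            if not successors:      # dead end: no observed next word
                break
            word = random.choice(successors)
            story.append(word)
        return " ".join(story)

    cfd = bigram_cfd("the dog ate the crab and the dog slept")
    print(generate_story(cfd, "the", length=8))
    # one possible output: "the dog ate the crab and the dog ate"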
Applications (adapted from a slide by Bonnie Dorr)
Why do we want to predict a word, given some preceding words?
To rank the likelihood of sequences containing various alternative hypotheses, e.g., for speech recognition:
    Theatre owners say popcorn/unicorn sales have doubled...
To assess the likelihood/goodness of a sentence, for text generation or machine translation:
    The doctor recommended a cat scan.
    El doctor recomendó una exploración del gato.

Python Tip: Lists can build Lists
(see the list-comprehension sketch below)

Bigram Counts
How do we get the counts from a CFD in a compact way, for all the bigrams starting with a given word? How do we include the words themselves along with their counts?

Comparing Modal Verb Counts
How to implement this? Which modals best characterize each genre?

Comparing Modals
Hint to get you started: see the sketches below.
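First, back to the Python tip and the bigram-counts questions above: a list comprehension builds a new list from an existing sequence, which answers both at once. A sketch, reusing the bigram_cfd() helper from earlier:

    cfd = bigram_cfd("the dog ate the crab and the dog slept")

    # One (word, count) pair for every observed successor of "the":
    pairs = [(w, cfd["the"][w]) for w in cfd["the"]]
    print(pairs)                      # [('dog', 2), ('crab', 1)]

    # Equivalently, FreqDist already offers this, sorted by count:
    print(cfd["the"].most_common())   # [('dog', 2), ('crab', 1)]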
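For the modal-verb comparison, one plausible implementation (a sketch with my own choice of modals and genres, not the slide's hidden code) conditions on genre, counts each modal occurrence with a ConditionalFreqDist, and tabulates the result:

    from nltk.corpus import brown
    from nltk.probability import ConditionalFreqDist

    modals = ["can", "could", "may", "might", "must", "will"]
    genres = ["news", "religion", "hobbies", "science_fiction", "romance"]

    # Condition on genre, count each modal occurrence
    # (assumes the Brown corpus is installed: nltk.download('brown')).
    cfd = ConditionalFreqDist(
        (genre, word.lower())
        for genre in genres
        for word in brown.words(categories=genre)
        if word.lower() in modals
    )
    cfd.tabulate(conditions=genres, samples=modals)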
Part-of-Speech Tagging

Terminology (modified from Diane Litman's version of Steve Bird's notes)
Tagging: the process of associating labels with each token in a text.
Tags: the labels.
Tag set: the collection of tags used for a particular task.

Example from the GENIA corpus
Typically a tagged text is a sequence of white-space-separated base/tag tokens:
    These/DT findings/NNS should/MD be/VB useful/JJ for/IN therapeutic/JJ strategies/NNS and/CC the/DT development/NN of/IN immunosuppressants/NNS targeting/VBG the/DT CD28/NN costimulatory/NN pathway/NN ./.

What does Tagging do? (modified from Diane Litman's version of Steve Bird's notes)
Collapses distinctions: lexical identity may be discarded, e.g., all personal pronouns tagged with PRP.
Introduces distinctions: ambiguities may be resolved, e.g., "deal" tagged with NN or VB.
Helps in classification and prediction.

Significance of Parts of Speech (modified from Diane Litman's version of Steve Bird's notes)
A word's POS tells us a lot about the word and its neighbors:
Limits the range of meanings ("deal"), pronunciation ("OBject" vs. "obJECT"), or both ("wind").
Helps in stemming.
Limits the range of following words.
Can help select nouns from a document for summarization.
Is the basis for partial (chunked) parsing.
Parsers can build trees directly on the POS tags instead of maintaining a lexicon.

Choosing a Tagset (slide modified from Massimo Poesio's)
The choice of tagset greatly affects the difficulty of the problem. We need to strike a balance between:
Getting better information about context
Making it possible for classifiers to do their job

Some of the best-known Tagsets (slide modified from Massimo Poesio's)
Brown corpus: 87 tags (more when tags are combined)
Penn Treebank: 45 tags
Lancaster UCREL C5 (used to tag the BNC): 61 tags
Lancaster C7: 145 tags

The Brown Corpus (modified from Diane Litman's version of Steve Bird's notes)
An early digital corpus (1961); Francis and Kucera, Brown University.
Contents: 500 texts, each 2,000 words long, drawn from American books, newspapers, and magazines, and representing genres such as science fiction, romance fiction, press reportage, scientific writing, and popular lore.

Penn Treebank (modified from Diane Litman's version of Steve Bird's notes)
The first large syntactically annotated corpus: 1 million words from the Wall Street Journal, annotated with part-of-speech tags and syntax trees.

What are the most frequent Brown tags?
(see the sketch at the end of these notes)

How hard is POS tagging? (slide modified from Massimo Poesio's)
In the Brown corpus:
12% of word types are ambiguous
40% of word tokens are ambiguous

Tagging Methods
Hand-coded
Statistical taggers
Brill (transformation-based) tagger

nltk_lite tag package
Types of taggers: tag.Default(), tag.Regexp(), tag.Lookup(), tag.Affix(), tag.Unigram(), tag.Bigram(), tag.Trigram()
Actions: tag.tag(), tag.tagsents(), tag.untag(), tag.train(), tag.accuracy(), tag.tag2tuple(), tag.string2words(), tag.string2tags()

Hand-coded Tagger
Make up some regexp rules that make use of morphology (see the sketch at the end of these notes).

Compare to Brown tags

Tagging with Lexical Frequencies (modified from Massimo Poesio's lecture)
    Secretariat/NNP is/VBZ expected/VBN to/TO race/VB tomorrow/NN
    People/NNS continue/VBP to/TO inquire/VB the/DT reason/NN for/IN the/DT race/NN for/IN outer/JJ space/NN
Problem: assign a tag to "race" given its lexical frequency.
Solution: choose the tag with the greater probability, P(race|VB) or P(race|NN).
Actual estimates from the Switchboard corpus:
    P(race|NN) = .00041
    P(race|VB) = .00003

Next Time
N-gram taggers
Training, testing, and determining accuracy
The Brill tagger
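To answer the earlier question about the most frequent Brown tags, a minimal sketch with the current NLTK corpus reader (the slides predate this API; assumes the Brown corpus is installed):

    from nltk import FreqDist
    from nltk.corpus import brown

    # Tag frequencies over the whole tagged Brown corpus.
    tag_fd = FreqDist(tag for (word, tag) in brown.tagged_words())
    print(tag_fd.most_common(10))   # e.g. NN, IN, AT, ... lead the list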
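For the hand-coded tagger, a sketch of morphology-based regexp rules. nltk_lite's tag.Regexp() corresponds to RegexpTagger in current NLTK; these particular patterns are my own illustrations, not the slide's:

    from nltk import RegexpTagger

    patterns = [
        (r'.*ing$', 'VBG'),                # gerunds
        (r'.*ed$', 'VBD'),                 # simple past
        (r'.*es$', 'VBZ'),                 # 3rd-person singular present
        (r'.*ould$', 'MD'),                # modals: could, would, should
        (r'^-?[0-9]+(\.[0-9]+)?$', 'CD'),  # cardinal numbers
        (r'.*s$', 'NNS'),                  # plural nouns
        (r'.*', 'NN'),                     # default: tag everything else NN
    ]
    tagger = RegexpTagger(patterns)
    print(tagger.tag("the dogs were racing across the field".split()))
    # Crude rules misfire (e.g. "across" -> NNS), which is why the
    # slides suggest comparing the output against the Brown tags.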
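Finally, as a preview of next time's n-gram taggers: a hedged sketch of training and evaluating a unigram tagger on Brown news, using current NLTK names (the nltk_lite equivalents were tag.Unigram(), tag.train(), and tag.accuracy()):

    from nltk import UnigramTagger
    from nltk.corpus import brown

    tagged_sents = brown.tagged_sents(categories='news')
    split = int(len(tagged_sents) * 0.9)   # 90/10 train/test split
    train, test = tagged_sents[:split], tagged_sents[split:]

    tagger = UnigramTagger(train)          # most frequent tag per word
    print(tagger.accuracy(test))           # commonly around 0.8 on held-out news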