scal ofer

Presentation Transcript

Algorithms for Generating Referring Expressions: Do They Do What People Do?

Referring Expressions – what are they?
- An expression that describes an entity in such a way as to distinguish it from other entities in the context.

People may refer to the same thing with different expressions…

Why do we need this?
- There is strong demand for Natural Language Generation (NLG) in dialogue systems (telephone-based systems, robot systems, in-car systems, …).
- Generating referring expressions (REs) is an important task in NLG.

The algorithms:
- Take as input one target object, a set of distractors (D), and a set of properties (P).
- Output: a referring expression that distinguishes the given object from the set of distractors.
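This shared setup can be sketched in code. The entities, property names, and the `distinguishes` helper below are illustrative, not taken from the paper:

```python
# Each entity is modelled as a set of (attribute, value) properties
# (an illustrative encoding, not the paper's own notation).
target = {("type", "drawer"), ("colour", "blue"), ("position", "top")}
distractors = [
    {("type", "drawer"), ("colour", "red"), ("position", "top")},
    {("type", "drawer"), ("colour", "blue"), ("position", "bottom")},
]

def distinguishes(description, distractors):
    """A description succeeds when every distractor lacks at least
    one of its properties, so only the target matches it."""
    return all(not description <= d for d in distractors)

# Colour alone fails (a distractor is also blue); colour + position succeeds.
print(distinguishes({("colour", "blue")}, distractors))                       # False
print(distinguishes({("colour", "blue"), ("position", "top")}, distractors))  # True
```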

Purpose of the research:
- Determine whether algorithms for generating REs return the expressions that humans would produce.
- Study by R. Dale & J. Viethen.
- Three different algorithms examined.

The database:
- Human-produced referring expressions.
- Domain: filing cabinets of 16 drawers.

The database:
- 20 participants, 140 referring expressions in total.
- 22 descriptions that were ambiguous or that referred to a set of drawers were removed from the data.

The Full Brevity algorithm:
- A generation system should generate the shortest possible referring expression (Dale, 1989; 1992).
- First checks whether any one-component referring expression is successful, then whether any two-component referring expression is successful, and so on.

The Full Brevity algorithm:
- Check Success: check whether the description constructed so far is successful.
- Choose Property: choose the most useful property that will contribute to the description.
- Extend Description: add the chosen property to the description; update the set of distractors and the set of properties.
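A minimal sketch of the Full Brevity search, assuming the same illustrative set-of-pairs encoding (the helper names are mine, not the paper's):

```python
from itertools import combinations

def full_brevity(target_props, distractors):
    """Try every one-property description, then every two-property
    description, and so on; return the first (hence shortest) one
    that rules out all distractors."""
    props = sorted(target_props)  # fixed order keeps the sketch deterministic
    for size in range(1, len(props) + 1):
        for combo in combinations(props, size):
            desc = set(combo)
            # Check Success: no distractor matches the whole description.
            if all(not desc <= d for d in distractors):
                return desc
    return None  # no distinguishing description exists

target = {("type", "drawer"), ("colour", "blue"), ("position", "top")}
distractors = [
    {("type", "drawer"), ("colour", "red"), ("position", "top")},
    {("type", "drawer"), ("colour", "blue"), ("position", "bottom")},
]
print(sorted(full_brevity(target, distractors)))
# [('colour', 'blue'), ('position', 'top')]
```

The exhaustive subset search is also what makes Full Brevity expensive: the number of candidate descriptions grows exponentially with the number of properties.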

Reiter’s referring expression generation rules:
- Rule #1: No Unnecessary Components. All components of a referring expression must be necessary. Example: prefer “the black dog” over “the small black dog” when “small” adds nothing.
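Rule #1 can be checked mechanically. A small sketch, using an illustrative set-of-pairs encoding and a made-up dog domain:

```python
def has_unnecessary_component(description, distractors):
    """Rule #1 is violated if dropping some single component still
    leaves a (non-empty) description that rules out every distractor."""
    def succeeds(desc):
        return bool(desc) and all(not desc <= d for d in distractors)
    return any(succeeds(description - {p}) for p in description)

# One distractor: a big black dog.
distractors = [{("type", "dog"), ("colour", "black"), ("size", "big")}]

# "the small black dog": "black" is unnecessary, since "small" already suffices.
print(has_unnecessary_component({("size", "small"), ("colour", "black")}, distractors))  # True
# "the small dog": every component is needed.
print(has_unnecessary_component({("size", "small")}, distractors))  # False
```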

Reiter’s referring expression generation rules:
- Rule #2: Local Brevity. It should not be possible to produce a shorter referring expression by replacing a set of existing components with a single new component. Example: prefer “the small dog” over “the sleeping female dog”.

Reiter’s referring expression generation rules:
- Rule #3: Lexical Preference. Basic-level and other lexically preferred words should be used whenever possible.

What Do People Do?
- According to the psychological literature on the human generation of referring expressions:
  - Human speakers often include unnecessary modifiers in the referring expressions they construct.
  - Human speakers can begin to utter a referring expression before they have finished scanning the set of distractors.

Incremental algorithm:
- Simpler and faster than the Full Brevity algorithm.
- Produces some unnecessary redundancy (which is acceptable, since people may do the same).
- Iterates over the properties; each time a property rules out some distractors, that property is added to the result.
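The loop described above can be sketched like this. The dict encoding and names are mine, and the real algorithm additionally handles the type attribute and value taxonomies:

```python
def incremental(target, distractors, preferred_attributes):
    """Core of the incremental strategy: walk attributes in a fixed
    preference order and keep any property that rules out at least one
    remaining distractor. Kept properties are never reconsidered, so
    the result can be redundant, just as human descriptions often are."""
    description = set()
    remaining = list(distractors)
    for attr in preferred_attributes:
        if attr not in target:
            continue
        value = target[attr]
        if any(d.get(attr) != value for d in remaining):
            description.add((attr, value))
            remaining = [d for d in remaining if d.get(attr) == value]
        if not remaining:
            break  # success: all distractors ruled out
    return description

target = {"type": "drawer", "colour": "blue", "position": "top"}
distractors = [
    {"type": "drawer", "colour": "red", "position": "top"},
    {"type": "drawer", "colour": "blue", "position": "bottom"},
]
print(sorted(incremental(target, distractors, ["colour", "position", "top" if False else "type"])))
# [('colour', 'blue'), ('position', 'top')]
```

Each property check is a single pass over the remaining distractors, with no backtracking, which is why the algorithm is so much cheaper than Full Brevity’s exhaustive search.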

Incremental algorithm – assumptions about the knowledge base:
- There is an attribute-value pair for each property, e.g. <colour, red>.
- Every entity has some type as one of its attributes, e.g. <type, dog>.
- The knowledge base may organize some attribute values in a subsumption taxonomy, e.g. animal subsumes dog, and red subsumes scarlet.
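The subsumption taxonomy can be represented as a simple child-to-parent map. A sketch, where the hierarchy encodes the slide's own examples (chihuahua and poodle are added for the later slides):

```python
# Child -> parent map: animal subsumes dog, red subsumes scarlet.
PARENT = {"chihuahua": "dog", "poodle": "dog", "dog": "animal",
          "scarlet": "red"}

def subsumes(general, specific):
    """True if `general` is `specific` itself or an ancestor of it
    in the taxonomy."""
    while specific is not None:
        if specific == general:
            return True
        specific = PARENT.get(specific)
    return False

print(subsumes("animal", "chihuahua"))  # True
print(subsumes("red", "scarlet"))       # True
print(subsumes("dog", "cat"))           # False
```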

Incremental algorithm – the following functions must be provided:
- MoreSpecificValue(object, attribute, value): returns a new value for attribute that is more specific than value; if no more specific value is available, the function returns nil. Example: for value = dog, it might return chihuahua.

Incremental algorithm:
- BasicLevelValue(object, attribute): returns the basic-level value of an attribute of an object, from the point of view of the current user.

Incremental algorithm:
- UserKnows(object, attribute-value pair): returns true if the user can determine that the attribute-value pair applies to the object; false if the user can determine that it does not apply; otherwise returns “unknown”. Example: the user can distinguish between cats and dogs, but cannot distinguish between different breeds of dogs. If the object is a chihuahua, then UserKnows(x, <type, dog>) = true and UserKnows(x, <type, cat>) = false, while UserKnows(x, <type, chihuahua>) and UserKnows(x, <type, poodle>) both return unknown.
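The dog/cat example can be reproduced with a toy UserKnows. The coarsening step and all names here are assumptions, not the paper's definition:

```python
PARENT = {"chihuahua": "dog", "poodle": "dog"}  # toy taxonomy (illustrative)

def user_knows(obj, attribute, value, known_values):
    """Sketch of UserKnows: the user only discriminates the values in
    known_values; asking about a finer-grained value yields 'unknown'."""
    if value not in known_values:
        return "unknown"
    actual = obj[attribute]
    # Coarsen the object's true value up to the user's level of distinction.
    while actual not in known_values and actual in PARENT:
        actual = PARENT[actual]
    return actual == value

x = {"type": "chihuahua"}
known = {"dog", "cat"}
print(user_knows(x, "type", "dog", known))        # True
print(user_knows(x, "type", "cat", known))        # False
print(user_knows(x, "type", "chihuahua", known))  # unknown
print(user_knows(x, "type", "poodle", known))     # unknown
```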

Incremental algorithm:
- PreferredAttributes: a global variable containing the attributes that human speakers and hearers prefer, listed in order of preference with the most preferred attribute first; it is determined by empirical investigation.

Incremental algorithm – input:
- A symbol corresponding to the referent (r).
- A list of symbols corresponding to the members of the contrast set (C).
- A list of properties from which the output will be constructed (P).

Incremental algorithm:
- MakeReferringExpression: the top-level function. Returns a list of attribute-value pairs that specify a referring expression for the intended referent. A value for “type” is always included.

Incremental algorithm:
- FindBestValue: takes an attribute and an initial value; returns a value for that attribute that is subsumed by the initial value.
- RulesOut: lives up to its name, returning the distractors that a given property rules out.

Relational algorithm:
- An extension of the Full Brevity algorithm.
- Generates REs containing relations (next to, to the right of, etc.).
- Generates expressions that are longer than human-produced expressions.
- Graph-based algorithms can fix that.

Relational algorithm:
- Check Success: check whether the description constructed so far is successful.
- Choose Property: choose the most useful fact that will contribute to the description.
- Extend Description: extend the description with a constraint representing this fact; add Describe goals for any constants related to the constraint.
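A toy version of the relational idea, on an illustrative domain of three drawers (all identifiers and the relation are assumptions):

```python
# Two blue drawers and a pink one; only d1 sits next to the pink drawer.
objects = {
    "d1": {"colour": "blue"},
    "d2": {"colour": "blue"},
    "d3": {"colour": "pink"},
}
relations = [("d1", "next_to", "d3")]

def describe(target, distractors):
    """Try intrinsic properties first; if none distinguishes the target,
    extend the description with a relation to a landmark and recursively
    describe the landmark (a Describe goal for the related constant)."""
    for attr, value in objects[target].items():
        if all(objects[d].get(attr) != value for d in distractors):
            return [(attr, value)]
    for subj, rel, landmark in relations:
        if subj == target:
            others = [o for o in objects if o not in (target, landmark)]
            return [(rel, describe(landmark, others))]
    return None

print(describe("d3", ["d1", "d2"]))  # [('colour', 'pink')]
print(describe("d1", ["d2", "d3"]))  # [('next_to', [('colour', 'pink')])]
```

Even on this toy domain the relational fallback yields something like “the drawer next to the pink drawer”, longer than what people typically say, in line with the coverage result reported below.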

To conclude – coverage of the human data:
- The Full Brevity algorithm covered 82 of 103 (79.6%) non-relational REs.
- The Incremental algorithm covered 98 of 103 (95.1%) non-relational REs. Together they cover all the non-relational REs.
- The Relational algorithm did not cover a single RE!

Coverage of redundancy:
- 29 redundant descriptions in total.
- Full Brevity covered 9 of them; the Incremental algorithm covered 24.

Conclusions and Future Work:
- Only three specific algorithms were used in this research.
- Other algorithms in the literature take these as a base, and so are unlikely to deliver significantly different results.
- Graph-based algorithms and algorithms for sets could improve the coverage.

Conclusions and Future Work:
- Future algorithms should be less deterministic, so that they can generate different REs for the same object (as people tend to do).
- How we choose to represent the domain has an impact on what the algorithms do; we would like our representations to mirror those used by humans.

Th-th-th…
