Reasoning About Pragmatics with Neural Listeners and Speakers
We present a model for pragmatically describing scenes, in which contrastive
behavior results from a combination of inference-driven pragmatics and learned
semantics. Like previous learned approaches to language generation, our model
uses a simple feature-driven architecture (here a pair of neural "listener" and
"speaker" models) to ground language in the world. Like inference-driven
approaches to pragmatics, our model actively reasons about listener behavior
when selecting utterances. For training, our approach requires only ordinary
captions, annotated _without_ demonstration of the pragmatic behavior the model
ultimately exhibits. In human evaluations on a referring expression game, our
approach succeeds 81% of the time, compared to a 69% success rate using
existing techniques.
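The core mechanism described above, a speaker that selects among candidate captions by simulating a listener, can be sketched in miniature. This is a toy illustration with made-up distributions, not the paper's learned neural models; the utterances, scene names, and probabilities are all hypothetical.

```python
# Toy pragmatic speaker: pick the utterance that a base listener is most
# likely to interpret as the intended target referent. The base listener
# here is a hand-written table P(referent | utterance); in the paper it is
# a learned neural model.
base_listener = {
    "the dog":          {"scene_a": 0.5, "scene_b": 0.5},
    "the dog on grass": {"scene_a": 0.9, "scene_b": 0.1},
}

def pragmatic_speaker(target, candidates, listener):
    """Return the candidate utterance that maximizes the base listener's
    probability of recovering the intended target."""
    return max(candidates, key=lambda u: listener[u][target])

best = pragmatic_speaker("scene_a", list(base_listener), base_listener)
print(best)  # -> "the dog on grass"
```

Note how contrastive behavior emerges without being annotated: the more specific caption wins only because it better disambiguates the target for the simulated listener.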
Unified Pragmatic Models for Generating and Following Instructions
We show that explicit pragmatic inference aids in correctly generating and
following natural language instructions for complex, sequential tasks. Our
pragmatics-enabled models reason about why speakers produce certain
instructions, and about how listeners will react upon hearing them. Like
previous pragmatic models, we use learned base listener and speaker models to
build a pragmatic speaker that uses the base listener to simulate the
interpretation of candidate descriptions, and a pragmatic listener that reasons
counterfactually about alternative descriptions. We extend these models to
tasks with sequential structure. Evaluation of language generation and
interpretation shows that pragmatic inference improves state-of-the-art
listener models (at correctly interpreting human instructions) and speaker
models (at producing instructions correctly interpreted by humans) in diverse
settings.
Comment: NAACL 2018, camera-ready version
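The pragmatic listener described above, which reasons counterfactually about the alternative instructions a speaker could have given, can be sketched as a Bayesian inversion of a base speaker. Again this is a toy with hypothetical instructions, rooms, and probabilities, standing in for the paper's learned sequence models.

```python
# Toy pragmatic listener: score each candidate interpretation by how likely
# a base speaker would have been to produce the observed utterance (rather
# than an alternative) if that interpretation were intended, then normalize.
base_speaker = {
    # P(utterance | intended interpretation), hand-written for illustration
    "go left":  {"room_1": 0.7, "room_2": 0.2},
    "go right": {"room_1": 0.3, "room_2": 0.8},
}

def pragmatic_listener(utterance, interpretations, speaker):
    """Bayesian inversion with a uniform prior over interpretations."""
    scores = {i: speaker[utterance][i] for i in interpretations}
    total = sum(scores.values())
    return {i: s / total for i, s in scores.items()}

dist = pragmatic_listener("go left", ["room_1", "room_2"], base_speaker)
print(dist)  # room_1 is favored, 0.7 / (0.7 + 0.2) ~= 0.78
```

The counterfactual reasoning lives in the normalization: "go left" supports room_1 not just because the base speaker would say it there, but because the speaker would likely have said something else ("go right") for room_2.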