Graph-to-Graph Meaning Representation Transformations for Human-Robot Dialogue
In support of two-way human-robot communication, we leverage Abstract Meaning Representation (AMR) to capture the core semantic content of natural language search and navigation instructions. In order to effectively map AMR to a constrained robot action specification, we develop a set of in-domain, task-specific AMR graphs augmented with speech act, tense, and aspect information not found in the original AMR. This paper presents our efforts and results in transforming AMR graphs into our in-domain graphs by employing both rule-based and classifier-based methods, thereby bridging the gap from entirely unconstrained natural language input to a fixed set of robot actions.
Abstract Meaning Representation for Human-Robot Dialogue
In this research, we begin to tackle the challenge of natural language understanding (NLU) in the context of the development of a robot dialogue system. We explore the adequacy of Abstract Meaning Representation (AMR) as a conduit for NLU. First, we consider the feasibility of using existing AMR parsers for automatically creating meaning representations for robot-directed transcribed speech data. We evaluate the quality of the output of two parsers on this data against a manually annotated gold-standard data set. Second, we evaluate the semantic coverage of and distinctions made in AMR overall: how well does it capture the meaning and distinctions needed in our collaborative human-robot dialogue domain? We find that AMR has gaps that align with linguistic information critical for effective human-robot collaboration in search and navigation tasks, and we present task-specific modifications to AMR to address these deficiencies.
Navigating to Success in Multi-Modal Human-Robot Collaboration: Analysis and Corpus Release
Human-guided robotic exploration is a useful approach to gathering information at remote locations, especially those that might be too risky, inhospitable, or inaccessible for humans. Maintaining common ground between the remotely located partners is a challenge, one that can be facilitated by multi-modal communication. In this paper, we explore how participants utilized multiple modalities to investigate a remote location with the help of a robotic partner. Participants issued spoken natural language instructions and received from the robot: text-based feedback, continuous 2D LIDAR mapping, and, upon request, static photographs. We noticed that different strategies were adopted in the use of these modalities, and hypothesize that these differences may be correlated with success at several exploration sub-tasks. We found that requesting photos may have improved the identification and counting of some key entities (doorways in particular) and that this strategy did not hinder the amount of overall area explored. Future work with larger samples may reveal the effects of more nuanced photo and dialogue strategies, which can inform the training of robotic agents. Additionally, we announce the release of our unique multi-modal corpus of human-robot communication in an exploration context: SCOUT, the Situated Corpus on Understanding Transactions.
What Else Do I Need to Know? The Effect of Background Information on Users' Reliance on QA Systems
NLP systems have shown impressive performance at answering questions by retrieving relevant context. However, with increasingly large models, it is impossible and often undesirable to constrain a model's knowledge or reasoning to only the retrieved context. This leads to a mismatch between the information that the model accesses to derive the answer and the information that is available to the user to assess the model's predicted answer. In this work, we study how users interact with QA systems in the absence of sufficient information to assess their predictions. Further, we ask whether adding the requisite background helps mitigate users' over-reliance on predictions. Our study reveals that users rely on model predictions even in the absence of the information needed to assess the model's correctness. Providing the relevant background, however, helps users better catch model errors, reducing over-reliance on incorrect predictions. On the flip side, background information also increases users' confidence in both their accurate and their inaccurate judgments. Our work highlights that supporting users' verification of QA predictions is an important, yet challenging, problem.
Abstract Meaning Representation for Sembanking
We describe Abstract Meaning Representation (AMR), a semantic representation language in which we are writing down the meanings of thousands of English sentences. We hope that a sembank of simple, whole-sentence semantic structures will spur new work in statistical natural language understanding and generation, like the Penn Treebank encouraged work on statistical parsing. This paper gives an overview of AMR and tools associated with it.
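As a concrete illustration (not drawn from these abstracts), AMR graphs are conventionally written in PENMAN bracket notation. The Python sketch below encodes the stock AMR-literature example "The boy wants to go" as concept and relation triples and renders it back into that notation; the variable names and the `to_penman` helper are illustrative only, not part of any tool described above.

```python
# Minimal sketch: the AMR for "The boy wants to go" (a standard example
# from the AMR literature), stored as concept and relation triples.
# The graph is rooted at "w" (want-01); "b" (boy) is reentrant: it is
# both the wanter (:ARG0 of want-01) and the goer (:ARG0 of go-02).
instances = {"w": "want-01", "b": "boy", "g": "go-02"}
edges = [("w", ":ARG0", "b"), ("w", ":ARG1", "g"), ("g", ":ARG0", "b")]

def to_penman(root, instances, edges, seen=None):
    """Render the triple form as PENMAN-style bracket notation."""
    if seen is None:
        seen = set()
    if root in seen:            # reentrant variable: emit the variable only
        return root
    seen.add(root)
    parts = [f"({root} / {instances[root]}"]
    for src, rel, tgt in edges:
        if src == root:
            parts.append(f" {rel} {to_penman(tgt, instances, edges, seen)}")
    return "".join(parts) + ")"

print(to_penman("w", instances, edges))
# (w / want-01 :ARG0 (b / boy) :ARG1 (g / go-02 :ARG0 b))
```

Note how the reentrancy is what makes AMR a graph rather than a tree: the second mention of `b` is emitted as a bare variable instead of re-expanding the concept.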