Abstract Meaning Representation for Human-Robot Dialogue
In this research, we begin to tackle the
challenge of natural language understanding
(NLU) in the context of the development of
a robot dialogue system. We explore the adequacy
of Abstract Meaning Representation
(AMR) as a conduit for NLU. First, we consider
the feasibility of using existing AMR
parsers for automatically creating meaning
representations for robot-directed transcribed
speech data. We evaluate the quality of output
of two parsers on this data against a manually
annotated gold-standard data set. Second,
we evaluate the semantic coverage and distinctions
made in AMR overall: how well does it
capture the meaning and distinctions needed
in our collaborative human-robot dialogue domain?
We find that AMR has gaps that align
with linguistic information critical for effective
human-robot collaboration in search and
navigation tasks, and we present task-specific
modifications to AMR that address these deficiencies.
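For illustration, a robot-directed utterance such as "The robot went to the door" would be represented in standard AMR roughly as follows (an informal sketch using PropBank frame go-02, not an annotation from the paper's gold-standard data set):

```
(g / go-02
   :ARG0 (r / robot)   ; ARG0: the entity in motion
   :ARG4 (d / door))   ; ARG4: the end point of the motion
```

Note that a representation like this captures the predicate-argument structure but not, for example, tense or fine-grained spatial relations, the kind of distinction the abstract suggests matters for search and navigation tasks.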
Language Model Pre-Training with Sparse Latent Typing
Modern large-scale Pre-trained Language Models (PLMs) have achieved
tremendous success on a wide range of downstream tasks. However, most of the LM
pre-training objectives only focus on text reconstruction, but have not sought
to learn latent-level interpretable representations of sentences. In this
paper, we push language models toward a deeper understanding
of sentences by proposing a new pre-training objective, Sparse Latent Typing,
which enables the model to sparsely extract sentence-level keywords with
diverse latent types. Experimental results show that our model is able to learn
interpretable latent type categories in a self-supervised manner without using
any external knowledge. Moreover, a language model pre-trained with this
objective also significantly improves Information Extraction-related downstream
tasks in both supervised and few-shot settings. Our code is publicly available
at https://github.com/renll/SparseLT. (EMNLP 2022, Oral)
What Else Do I Need to Know? The Effect of Background Information on Users' Reliance on QA Systems
NLP systems have shown impressive performance at answering questions by
retrieving relevant context. However, as models grow increasingly large, it is
infeasible and often undesirable to constrain their knowledge or reasoning to
only the retrieved context. This leads to a mismatch between the information
that the models access to derive the answer and the information that is
available to the user to assess the model's predicted answer. In this work, we
study how users interact with QA systems in the absence of sufficient
information to assess their predictions. Further, we ask whether adding the
requisite background helps mitigate users' over-reliance on predictions. Our
study reveals that users rely on model predictions even in the absence of
sufficient information needed to assess the model's correctness. Providing the
relevant background, however, helps users better catch model errors, reducing
over-reliance on incorrect predictions. On the flip side, background
information also increases users' confidence in their accurate as well as
inaccurate judgments. Our work highlights that supporting users' verification
of QA predictions is an important, yet challenging, problem.