Neural Semantic Parsing over Multiple Knowledge-bases
A fundamental challenge in developing semantic parsers is the paucity of
strong supervision in the form of language utterances annotated with logical
form. In this paper, we propose to exploit structural regularities in language across different domains and train semantic parsers over multiple knowledge-bases (KBs) while sharing information across datasets. We find that we can substantially improve parsing accuracy by training a single sequence-to-sequence model over multiple KBs when an encoding of the domain is provided at decoding time. Our model achieves state-of-the-art performance on the Overnight dataset (containing eight domains), improving accuracy over a single-KB baseline from 75.6% to 79.6% while using 7x fewer model parameters.
Comment: Accepted to ACL 2017
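The core idea, a single sequence-to-sequence parser trained on examples pooled from several KBs and conditioned on a domain identifier, can be illustrated by a small data-preparation sketch in which every training pair is tagged with a domain token. The domain names, token format, and toy logical forms below are invented for illustration; the paper's actual mechanism supplies the domain encoding at decoding time rather than necessarily as an input token.

# Minimal sketch (hypothetical data): pool utterance/logical-form pairs from
# several KBs and mark each example with its domain so a single shared
# sequence-to-sequence model can be trained on all of them.
def pool_multi_kb_examples(datasets):
    """datasets maps a domain name to a list of (utterance, logical_form) pairs."""
    pooled = []
    for domain, examples in datasets.items():
        domain_token = "<" + domain + ">"
        for utterance, logical_form in examples:
            # The shared model can condition on this token to select the right KB.
            pooled.append((domain_token + " " + utterance, logical_form))
    return pooled

datasets = {
    "calendar":    [("meetings longer than an hour", "(filter meeting (> length 60))")],
    "restaurants": [("cheap places downtown", "(filter restaurant (and cheap downtown))")],
}
for source, target in pool_multi_kb_examples(datasets):
    print(source, "->", target)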
Learning a Neural Semantic Parser from User Feedback
We present an approach to rapidly and easily build natural language interfaces to databases for new domains; the resulting interfaces improve over time based on user feedback and require minimal intervention. To achieve this, we
adapt neural sequence models to map utterances directly to SQL with its full
expressivity, bypassing any intermediate meaning representations. These models
are immediately deployed online to solicit feedback from real users, who flag incorrect queries. Finally, the popularity of SQL makes it practical to gather annotations for incorrect predictions from the crowd, and these annotations are used directly to improve our models. This complete feedback loop, without intermediate representations or database-specific engineering, opens up new ways of building high-quality semantic parsers. Experiments suggest that this approach can be
deployed quickly for any new target domain, as we show by learning a semantic
parser for an online academic database from scratch.
Comment: Accepted at ACL 2017
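The deploy/flag/annotate/retrain cycle the abstract describes can be sketched as a single loop iteration. Every function below is a hypothetical placeholder standing in for the paper's components (the neural parser, the user interface, and the crowdsourcing step), not its actual API.

# Skeleton of the utterance-to-SQL feedback loop, with trivial stubs so it runs.
def parse_to_sql(utterance):
    # Stand-in for the neural sequence model that decodes an utterance straight to SQL.
    return "SELECT title FROM papers LIMIT 1"

def user_flags_incorrect(utterance, sql):
    # Stand-in for a real user marking the returned result as wrong.
    return "papers by" in utterance

def crowd_annotate(utterance):
    # Stand-in for crowd workers writing the correct SQL for a flagged utterance.
    return "SELECT title FROM papers WHERE author = 'Smith'"

def feedback_iteration(train_set, live_utterances):
    """One iteration: parse live traffic, collect corrections, grow the training set."""
    corrections = []
    for utterance in live_utterances:
        sql = parse_to_sql(utterance)
        if user_flags_incorrect(utterance, sql):
            corrections.append((utterance, crowd_annotate(utterance)))
    # Retraining the parser on the enlarged set happens offline, after each iteration.
    return train_set + corrections

train_set = feedback_iteration([], ["papers by Smith", "venues in 2016"])
print(len(train_set), "new annotation(s) gathered from user feedback")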
QuaRel: A Dataset and Models for Answering Questions about Qualitative Relationships
Many natural language questions require recognizing and reasoning with
qualitative relationships (e.g., in science, economics, and medicine), but are
challenging to answer with corpus-based methods. Qualitative modeling provides
tools that support such reasoning, but the semantic parsing task of mapping
questions into those models poses formidable challenges. We present QuaRel, a
dataset of diverse story questions involving qualitative relationships that
characterize these challenges, and techniques that begin to address them. The
dataset has 2771 questions relating 19 different types of quantities. For
example, "Jenny observes that the robot vacuum cleaner moves slower on the
living room carpet than on the bedroom carpet. Which carpet has more friction?"
We contribute (1) a simple and flexible conceptual framework for representing
these kinds of questions; (2) the QuaRel dataset, including logical forms,
exemplifying the parsing challenges; and (3) two novel models for this task,
built as extensions of type-constrained semantic parsing. The first of these
models (called QuaSP+) significantly outperforms off-the-shelf tools on QuaRel.
The second (QuaSP+Zero) demonstrates zero-shot capability, i.e., the ability to
handle new qualitative relationships without requiring additional training
data, something not possible with previous models. This work thus makes inroads
into answering complex, qualitative questions that require reasoning, and
scaling to new relationships at low cost. The dataset and models are available
at http://data.allenai.org/quarel.
Comment: 9 pages, AAAI 2019
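The kind of reasoning QuaRel targets can be made concrete with a toy sketch: a background qualitative model states that friction and speed vary in opposite directions, the story fixes one quantity in one "world", and each answer option is checked for consistency. The relation encoding and sign conventions below are invented for illustration and do not follow QuaRel's actual logical-form syntax or the QuaSP+ models.

# Toy qualitative reasoning over the friction example from the abstract.
# +1 means two quantities increase together, -1 means they vary oppositely.
QUALITATIVE_MODEL = {("friction", "speed"): -1}

def consistent(observation, option):
    """observation/option are (quantity, direction, world) with direction +1 or -1."""
    (q_obs, d_obs, w_obs), (q_opt, d_opt, w_opt) = observation, option
    sign = QUALITATIVE_MODEL.get((q_opt, q_obs)) or QUALITATIVE_MODEL.get((q_obs, q_opt))
    if sign is None:
        return False
    # Same world: directions are related by the model's sign.
    # Different worlds: the relative comparison flips.
    expected = d_obs * sign if w_obs == w_opt else -d_obs * sign
    return d_opt == expected

# "The vacuum moves slower on the living room carpet" -> speed is lower in world1.
observation = ("speed", -1, "world1")
options = {
    "(A) living room carpet": ("friction", +1, "world1"),
    "(B) bedroom carpet":     ("friction", +1, "world2"),
}
for label, option in options.items():
    print(label, "consistent" if consistent(observation, option) else "inconsistent")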