Exploiting Rich Syntactic Information for Semantic Parsing with Graph-to-Sequence Model
Existing neural semantic parsers mainly utilize a sequence encoder, i.e., a
sequential LSTM, to extract word order features while neglecting other valuable
syntactic information such as dependency graphs or constituency trees. In this
paper, we first propose to use the syntactic graph to represent three
types of syntactic information, i.e., word order, dependency, and constituency
features. We further employ a graph-to-sequence model to encode the syntactic
graph and decode a logical form. Experimental results on benchmark datasets
show that our model is comparable to the state-of-the-art on Jobs640, ATIS and
Geo880. Experimental results on adversarial examples demonstrate that encoding
more syntactic information also improves the model's robustness. Comment: EMNLP'18
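To make the syntactic-graph idea concrete, here is a minimal sketch of how the three edge types could be assembled into a single graph before encoding; the function and edge-type names are hypothetical illustrations, not the authors' implementation:

def build_syntactic_graph(tokens, dep_arcs, constituents):
    """tokens: list of words; dep_arcs: (head, dependent, label) triples;
    constituents: (phrase_label, [token indices]) pairs."""
    nodes = list(range(len(tokens)))   # one node per word
    edges = []                         # (source, target, edge_type)

    # Word-order features: link each token to its successor.
    for i in range(len(tokens) - 1):
        edges.append((i, i + 1, "next_word"))

    # Dependency features: one typed edge per dependency arc.
    for head, dep, label in dep_arcs:
        edges.append((head, dep, "dep:" + label))

    # Constituency features: a phrase node connected to the tokens it spans.
    for label, span in constituents:
        phrase = len(nodes)
        nodes.append(phrase)
        for idx in span:
            edges.append((phrase, idx, "const:" + label))

    return nodes, edges

# Example: "book flights" with a dobj arc inside a VP constituent.
nodes, edges = build_syntactic_graph(
    ["book", "flights"], [(0, 1, "dobj")], [("VP", [0, 1])])

A graph-to-sequence encoder would then propagate information along these typed edges instead of only left-to-right.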
Transfer Learning for Neural Semantic Parsing
The goal of semantic parsing is to map natural language to a machine
interpretable meaning representation language (MRL). One of the constraints
that limits full exploration of deep learning technologies for semantic parsing
is the lack of sufficient annotated training data. In this paper, we propose
using sequence-to-sequence models in a multi-task setup for semantic parsing,
with a focus on transfer learning. We explore three multi-task architectures for
sequence-to-sequence modeling and compare their performance with an
independently trained model. Our experiments show that the multi-task setup
aids transfer learning from an auxiliary task with large labeled data to a
target task with smaller labeled data. We see absolute accuracy gains ranging
from 1.0% to 4.4% on our in-house dataset, and gains ranging from 2.5% to 7.0%
on the ATIS semantic parsing tasks with syntactic and semantic auxiliary
tasks. Comment: Accepted for ACL Repl4NLP 2017
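As a rough illustration of one such multi-task architecture, the PyTorch-style skeleton below shares a single encoder across tasks while giving each task its own decoder and output layer; this is an assumed sketch of a shared-encoder variant, not the authors' code:

import torch.nn as nn

class SharedEncoderSeq2Seq(nn.Module):
    """Shared encoder, per-task decoders: the large auxiliary task shapes
    the encoder that the low-resource target task reuses."""
    def __init__(self, vocab_size, hidden, target_vocab_sizes):
        super().__init__()
        # One shared vocabulary/embedding table, assumed for simplicity.
        self.embed = nn.Embedding(vocab_size, hidden)
        self.encoder = nn.LSTM(hidden, hidden, batch_first=True)  # shared
        self.decoders = nn.ModuleList(
            [nn.LSTM(hidden, hidden, batch_first=True)
             for _ in target_vocab_sizes])
        self.heads = nn.ModuleList(
            [nn.Linear(hidden, v) for v in target_vocab_sizes])

    def forward(self, src, tgt_in, task):
        _, state = self.encoder(self.embed(src))          # shared encoding
        out, _ = self.decoders[task](self.embed(tgt_in), state)
        return self.heads[task](out)                      # per-task logits

Training would alternate mini-batches between the auxiliary and target tasks, so gradients from both flow through the shared encoder.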
Abstract Syntax Networks for Code Generation and Semantic Parsing
Tasks like code generation and semantic parsing require mapping unstructured
(or partially structured) inputs to well-formed, executable outputs. We
introduce abstract syntax networks, a modeling framework for these problems.
The outputs are represented as abstract syntax trees (ASTs) and constructed by
a decoder with a dynamically-determined modular structure paralleling the
structure of the output tree. On the benchmark Hearthstone dataset for code
generation, our model obtains 79.2 BLEU and 22.7% exact match accuracy,
compared to previous state-of-the-art values of 67.1 and 6.1%. Furthermore, we
perform competitively on the ATIS, Jobs, and Geo semantic parsing datasets with
no task-specific engineering. Comment: ACL 2017. MR and MS contributed equally
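The "dynamically-determined modular structure" can be pictured with a toy grammar: decoding recurses over typed AST fields, so the decoder's call pattern mirrors the tree it emits. The grammar and the choice function below are illustrative stand-ins for the learned modules, not the paper's specification:

import random

# Toy grammar: each node type lists constructors with their typed fields.
GRAMMAR = {
    "expr": [("BinOp", ["expr", "op", "expr"]), ("Num", ["int"])],
    "op":   [("Add", []), ("Mul", [])],
    "int":  [("Lit", [])],
}

def decode(node_type, choose, depth=0, max_depth=4):
    """Expand node_type into an AST; `choose` stands in for the per-type
    neural module that scores constructors given the decoding context."""
    if depth >= max_depth:
        # Near the depth limit, take the constructor with the fewest
        # fields so decoding terminates.
        name, fields = min(GRAMMAR[node_type], key=lambda c: len(c[1]))
    else:
        name, fields = choose(node_type, GRAMMAR[node_type])
    # One recursive call per field: the module structure parallels the AST.
    return (name, [decode(f, choose, depth + 1, max_depth) for f in fields])

# Stand-in for the learned scorer: uniform random choice.
tree = decode("expr", lambda t, options: random.choice(options))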
Semantic Parsing with Dual Learning
Semantic parsing converts natural language queries into structured logical
forms. The paucity of annotated training samples is a fundamental challenge in
this field. In this work, we develop a semantic parsing framework with the dual
learning algorithm, which enables a semantic parser to make full use of data
(labeled and even unlabeled) through a dual-learning game. This game between a
primal model (semantic parsing) and a dual model (logical form to query) forces
them to regularize each other and allows them to obtain feedback signals from
prior knowledge. By utilizing prior knowledge of logical form structures, we
propose a novel reward signal at the surface and semantic levels that
encourages the generation of complete and reasonable logical forms.
Experimental results show that our approach achieves new state-of-the-art
performance on the ATIS dataset and competitive performance on the Overnight
dataset. Comment: Accepted by ACL 2019 as a Long Paper
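As a hedged sketch of the dual-learning game on an unlabeled query, the step below combines a prior-knowledge validity reward on the sampled logical form with the dual model's reconstruction score; every name here is a hypothetical placeholder rather than the authors' interface:

def surface_reward(logical_form):
    """Surface-level prior knowledge: brackets must be balanced."""
    depth = 0
    for ch in logical_form:
        depth += (ch == "(") - (ch == ")")
        if depth < 0:
            return 0.0
    return 1.0 if depth == 0 else 0.0

def dual_learning_step(query, sample_lf, dual_score, update, alpha=0.5):
    lf = sample_lf(query)              # primal: query -> logical form
    r_valid = surface_reward(lf)       # reward from prior knowledge
    r_rec = dual_score(lf, query)      # dual: does lf regenerate the query?
    reward = alpha * r_valid + (1 - alpha) * r_rec
    update(query, lf, reward)          # policy-gradient step on the primal
    return reward

# Toy usage with trivial stand-ins for the two models.
dual_learning_step(
    "flights to boston",
    sample_lf=lambda q: "( lambda x ( flight x ) )",
    dual_score=lambda lf, q: 0.8,
    update=lambda q, lf, r: None)

A semantic-level check, e.g., validating the logical form's predicates against a schema, would slot in alongside surface_reward to complete the two-level reward the abstract describes.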