AMR Dependency Parsing with a Typed Semantic Algebra
We present a semantic parser for Abstract Meaning Representations which
learns to parse strings into tree representations of the compositional
structure of an AMR graph. This allows us to use standard neural techniques for
supertagging and dependency tree parsing, constrained by a linguistically
principled type system. We present two approximative decoding algorithms, which
achieve state-of-the-art accuracy and outperform strong baselines.
Comment: This paper will be presented at ACL 2018 (see https://acl2018.org/programme/papers/)
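As a rough, hypothetical illustration of the pipeline this abstract describes (per-token supertags plus dependency attachments, filtered by a type system), here is a toy greedy decoder. The categorial-style types, tag inventory, and scores are all invented for illustration; they are not the paper's semantic algebra or model.

```python
# Toy categorial-style "types": which arguments a supertag still needs.
# These types and scores are invented; the paper uses a typed semantic
# algebra over AMR graph fragments, not this simplified inventory.
SUPERTAG_TYPES = {
    "vp_tag": "s\\np/np",  # verb needing a subject and an object
    "np_tag": "np",        # bare noun phrase
}

def licenses(head_type: str, dep_type: str) -> bool:
    """True if the head's type has an open slot matching the dependent."""
    return ("/" + dep_type) in head_type or ("\\" + dep_type) in head_type

def decode(tokens, tag_scores, edge_scores):
    """Greedy stand-in for the paper's approximative decoding: pick the
    best-scoring supertag per token, then attach each dependent to its
    best-scoring type-compatible head."""
    tags = {t: max(tag_scores[t], key=tag_scores[t].get) for t in tokens}
    edges = []
    for dep in tokens:
        heads = [h for h in tokens
                 if h != dep and licenses(SUPERTAG_TYPES[tags[h]],
                                          SUPERTAG_TYPES[tags[dep]])]
        if heads:
            edges.append((max(heads, key=lambda h: edge_scores[h, dep]), dep))
    return tags, edges

# Tiny worked example with made-up scores.
tokens = ["boy", "wants", "cookie"]
tag_scores = {
    "boy":    {"np_tag": 0.9, "vp_tag": 0.1},
    "wants":  {"vp_tag": 0.8, "np_tag": 0.2},
    "cookie": {"np_tag": 0.95, "vp_tag": 0.05},
}
edge_scores = {(h, d): 0.1 for h in tokens for d in tokens if h != d}
edge_scores["wants", "boy"] = 0.9
edge_scores["wants", "cookie"] = 0.85

print(decode(tokens, tag_scores, edge_scores))
# Both nouns attach to "wants", the only head whose type licenses them.
```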
A Controllable Model of Grounded Response Generation
Current end-to-end neural conversation models inherently lack the flexibility
to impose semantic control in the response generation process, often resulting
in uninteresting responses. Attempts to boost informativeness alone come at the
expense of factual accuracy, as attested by pretrained language models'
propensity to "hallucinate" facts. While this may be mitigated by access to
background knowledge, there is scant guarantee of relevance and informativeness
in generated responses. We propose a framework that we call controllable
grounded response generation (CGRG), in which lexical control phrases are
either provided by a user or automatically extracted by a control phrase
predictor from dialogue context and grounding knowledge. Quantitative and
qualitative results show that, using this framework, a transformer-based model
with a novel inductive attention mechanism, trained on a conversation-like
Reddit dataset, outperforms strong generation baselines.
Comment: AAAI 2021
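To make the control-phrase idea concrete, below is a minimal sketch of one way a grounding-aware attention constraint could be realized as a mask: generated tokens see the full dialogue context, but only those grounding positions linked to a control phrase. The function name, shapes, and link structure are assumptions for illustration, not the paper's inductive attention implementation.

```python
import numpy as np

def build_attention_mask(context_len, grounding_len, links):
    """Boolean visibility mask over [context + grounding] source positions.
    Response tokens may attend to every dialogue-context position, but
    only to grounding positions whose index appears in `links` (i.e. is
    tied to a control phrase); all other grounding tokens are masked out."""
    mask = np.zeros(context_len + grounding_len, dtype=bool)
    mask[:context_len] = True              # context is always visible
    for g in links:                        # linked grounding only
        mask[context_len + g] = True
    return mask

# Made-up example: 5 context tokens, 4 grounding tokens, and a control
# phrase linked to grounding tokens 1 and 2.
print(build_attention_mask(5, 4, links={1, 2}))
# [ True  True  True  True  True False  True  True False]
```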
Semantic Graph Parsing with Recurrent Neural Network DAG Grammars
Semantic parses are directed acyclic graphs (DAGs), so semantic parsing
should be modeled as graph prediction. But predicting graphs presents difficult
technical challenges, so it is simpler and more common to predict the
linearized graphs found in semantic parsing datasets using well-understood
sequence models. The cost of this simplicity is that the predicted strings may
not be well-formed graphs. We present recurrent neural network DAG grammars, a
graph-aware sequence model that ensures only well-formed graphs while
sidestepping many difficulties in graph prediction. We test our model on the
Parallel Meaning Bank, a multilingual semantic graphbank. Our approach yields
competitive results in English and establishes the first results for German,
Italian and Dutch.
Comment: 9 pages, to appear in EMNLP 2019
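As a simplified, invented illustration of grammar-constrained decoding, the mechanism that lets a sequence model emit only well-formed outputs, the sketch below masks the action space so that every finished action sequence has balanced brackets. A real DAG grammar must additionally license re-entrancies; the actions, budget handling, and dummy scorer here are assumptions, not the paper's grammar.

```python
import random

def constrained_decode(score_fn, max_steps=20):
    """Choose OPEN/LABEL/CLOSE actions greedily; an action is offered to
    the scorer only if the derivation can still be completed within the
    step budget, so the result is always balanced (a crude stand-in for
    the well-formedness guarantee of a DAG grammar)."""
    out, open_n = [], 0
    for step in range(max_steps):
        if open_n == 0 and out:
            break                                # derivation complete
        remaining = max_steps - step
        legal = []
        if remaining >= open_n + 2:
            legal.append("OPEN")                 # room to open and later close
        if open_n > 0 and remaining >= open_n + 1:
            legal.append("LABEL")                # emit a node/edge label
        if open_n > 0:
            legal.append("CLOSE")
        action = max(legal, key=lambda a: score_fn(out, a))
        out.append(action)
        open_n += {"OPEN": 1, "CLOSE": -1}.get(action, 0)
    return out

# Dummy scorer standing in for the recurrent network's next-action scores.
rng = random.Random(0)
print(constrained_decode(lambda history, action: rng.random()))
```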
Language classification from bilingual word embedding graphs
We study the role of the second language in bilingual word embeddings in
monolingual semantic evaluation tasks. We find strongly and weakly positive
correlations between downstream task performance and the second language's
similarity to the target language. Additionally, we show how bilingual word
embeddings can be employed for the task of semantic language classification and
that joint semantic spaces vary in meaningful ways across second languages. Our
results support the hypothesis that semantic language similarity is influenced
by both structural similarity and geography/contact.
Comment: To be published at Coling 2016
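A minimal sketch of the kind of rank-correlation analysis the abstract reports, with entirely invented numbers: per-second-language downstream scores are correlated against each second language's similarity to the target language.

```python
from scipy.stats import spearmanr

# Hypothetical values purely for illustration; the paper's languages,
# tasks, and similarity measure are not reproduced here.
second_langs = ["de", "nl", "fr", "fi", "hu"]
task_scores  = [0.71, 0.70, 0.66, 0.58, 0.55]  # downstream task performance
lang_sim     = [0.83, 0.80, 0.69, 0.38, 0.41]  # similarity to target language

rho, p = spearmanr(task_scores, lang_sim)
print(f"Spearman rho = {rho:.2f} (p = {p:.3f})")  # rho = 0.90 on this toy data
```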