Second-Order Semantic Dependency Parsing with End-to-End Neural Networks
Semantic dependency parsing aims to identify semantic relationships between
words in a sentence that form a graph. In this paper, we propose a second-order
semantic dependency parser, which takes into consideration not only individual
dependency edges but also interactions between pairs of edges. We show that
second-order parsing can be approximated using mean field (MF) variational
inference or loopy belief propagation (LBP). We can unfold both algorithms as
recurrent layers of a neural network and therefore can train the parser in an
end-to-end manner. Our experiments show that our approach achieves
state-of-the-art performance.
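The idea of unfolding inference as recurrent layers can be sketched in a few lines. Below is a minimal NumPy mock-up of mean-field updates over edge beliefs, assuming hypothetical score tensors `unary` (first-order edge scores) and `sibling` (second-order scores for pairs of edges sharing a head); the paper's actual parameterization, factor types, and neural scoring may differ.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def mean_field_edges(unary, sibling, n_iters=3):
    """Approximate posteriors over dependency edges via mean-field updates.

    unary[i, j]      : first-order score for edge i -> j
    sibling[i, j, k] : second-order score for edges i -> j and i -> k
    Each iteration refines q[i, j], the belief that edge i -> j exists,
    using current beliefs about all sibling edges (the k = j self term
    is not excluded, for simplicity of the sketch).
    """
    q = sigmoid(unary)                        # initial beliefs from unary scores
    for _ in range(n_iters):                  # unfolded as recurrent layers
        # message from siblings: expected second-order score under current q
        msg = np.einsum('ijk,ik->ij', sibling, q)
        q = sigmoid(unary + msg)
    return q

# toy example: 3 words, random scores
rng = np.random.default_rng(0)
n = 3
unary = rng.normal(size=(n, n))
sibling = rng.normal(scale=0.1, size=(n, n, n))
q = mean_field_edges(unary, sibling)
```

Because each update uses only the previous iteration's beliefs, the whole loop is differentiable and can be trained end-to-end, as the abstract describes.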
Robust Incremental Neural Semantic Graph Parsing
Parsing sentences to linguistically-expressive semantic representations is a
key goal of Natural Language Processing. Yet statistical parsing has focused
almost exclusively on bilexical dependencies or domain-specific logical forms.
We propose a neural encoder-decoder transition-based parser which is the first
full-coverage semantic graph parser for Minimal Recursion Semantics (MRS). The
model architecture uses stack-based embedding features, predicting graphs
jointly with unlexicalized predicates and their token alignments. Our parser is
more accurate than attention-based baselines on MRS, and on an additional
Abstract Meaning Representation (AMR) benchmark, and GPU batch processing makes
it an order of magnitude faster than a high-precision grammar-based parser.
Further, the 86.69% Smatch score of our MRS parser is higher than the
upper-bound on AMR parsing, making MRS an attractive choice as a semantic
representation.
Comment: 12 pages; ACL 201
Scene Parsing via Dense Recurrent Neural Networks with Attentional Selection
Recurrent neural networks (RNNs) have shown the ability to improve scene
parsing through capturing long-range dependencies among image units. In this
paper, we propose dense RNNs for scene labeling by exploring various long-range
semantic dependencies among image units. Different from existing RNN based
approaches, our dense RNNs are able to capture richer contextual dependencies
for each image unit by enabling immediate connections between each pair of
image units, which significantly enhances their discriminative power.
Moreover, to select relevant dependencies and suppress irrelevant ones for
each unit among the dense connections, we introduce an attention model into dense
RNNs. The attention model automatically assigns more weight to helpful
dependencies and less to irrelevant ones. Integrating
with convolutional neural networks (CNNs), we develop an end-to-end scene
labeling system. Extensive experiments on three large-scale benchmarks
demonstrate that the proposed approach can improve the baselines by large
margins and outperform other state-of-the-art algorithms.
Comment: 10 pages. arXiv admin note: substantial text overlap with
arXiv:1801.0683
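The core mechanism — dense pairwise connections whose contributions are weighted by attention — can be illustrated with a small NumPy sketch. The names `W_q` and `W_k` are illustrative stand-ins for learned parameters; the paper's attention parameterization may differ.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def dense_attention_context(feats, W_q, W_k):
    """For every image unit, aggregate features from all other units,
    weighted by attention so that helpful dependencies dominate and
    unconcerned ones are suppressed.

    feats : (n_units, d) feature map flattened into image units
    """
    q = feats @ W_q                     # queries, one per unit
    k = feats @ W_k                     # keys, one per unit
    attn = softmax(q @ k.T, axis=-1)    # (n_units, n_units) dense pairwise weights
    return attn @ feats                 # attention-weighted context per unit

rng = np.random.default_rng(1)
n, d = 6, 4
feats = rng.normal(size=(n, d))
ctx = dense_attention_context(feats, rng.normal(size=(d, d)),
                              rng.normal(size=(d, d)))
```

Each row of `attn` sums to one, so every unit receives a convex combination of all units' features — the "immediate connections between each pair of image units" the abstract describes.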
Frame-Semantic Parsing with Softmax-Margin Segmental RNNs and a Syntactic Scaffold
We present a new, efficient frame-semantic parser that labels semantic
arguments to FrameNet predicates. Built using an extension to the segmental RNN
that emphasizes recall, our basic system achieves competitive performance
without any calls to a syntactic parser. We then introduce a method that uses
phrase-syntactic annotations from the Penn Treebank during training only,
through a multitask objective; no parsing is required at training or test time.
This "syntactic scaffold" offers a cheaper alternative to traditional syntactic
pipelining, and achieves state-of-the-art performance.
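The scaffold idea — syntactic supervision added as an auxiliary term during training only — reduces to a simple multitask objective. A minimal sketch, with the mixing weight `lam` as an assumed hyperparameter name:

```python
def scaffold_loss(semantic_loss, syntactic_loss, lam=0.5, training=True):
    """Multitask objective: the syntactic 'scaffold' term is added only
    during training; at test time no syntactic parsing is run at all."""
    if training:
        return semantic_loss + lam * syntactic_loss
    return semantic_loss

train_obj = scaffold_loss(2.0, 1.0, lam=0.5)          # 2.5
test_obj = scaffold_loss(2.0, 1.0, training=False)    # 2.0
```

This is what makes the scaffold cheaper than pipelining: the syntactic annotations shape the shared representation at training time but add zero cost at inference.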
Dense Recurrent Neural Networks for Scene Labeling
Recently, recurrent neural networks (RNNs) have demonstrated the ability to
improve scene labeling through capturing long-range dependencies among image
units. In this paper, we propose dense RNNs for scene labeling by exploring
various long-range semantic dependencies among image units. In comparison with
existing RNN based approaches, our dense RNNs are able to capture richer
contextual dependencies for each image unit via dense connections between each
pair of image units, which significantly enhances their discriminative power.
Moreover, to select relevant dependencies and suppress irrelevant ones for
each unit among the dense connections, we introduce an attention model into
dense RNNs. The attention model automatically assigns more weight to helpful
dependencies and less to irrelevant ones. Integrating
with convolutional neural networks (CNNs), our method achieves state-of-the-art
performances on the PASCAL Context, MIT ADE20K and SiftFlow benchmarks.
Comment: Tech. Report
Deep Learning applied to NLP
Convolutional Neural Networks (CNNs) are typically associated with Computer
Vision. CNNs are responsible for major breakthroughs in Image Classification
and are the core of most Computer Vision systems today. More recently, CNNs
have been applied to problems in Natural Language Processing with some
interesting results. In this paper, we will try to explain the basics of CNNs,
their different variations, and how they have been applied to NLP.
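The basic building block of CNNs for NLP is a 1-D convolution over word embeddings, usually followed by max-over-time pooling. A minimal sketch with an illustrative bigram filter (all names and sizes here are toy assumptions, not from the paper):

```python
import numpy as np

def conv1d_text(embeddings, filt):
    """1-D convolution over a sentence's word embeddings: the filter slides
    over windows of consecutive words, producing one activation per position,
    then max-over-time pooling keeps the strongest response."""
    n, d = embeddings.shape
    w, d2 = filt.shape
    assert d == d2, "filter width must match embedding dimension"
    feats = [np.sum(embeddings[i:i + w] * filt) for i in range(n - w + 1)]
    return max(feats)                   # max-over-time pooling

rng = np.random.default_rng(2)
sent = rng.normal(size=(5, 4))          # 5 words, 4-dim embeddings
f = rng.normal(size=(2, 4))             # one bigram filter
val = conv1d_text(sent, f)
```

A real model learns many such filters of several widths, so each filter acts as a detector for a particular n-gram pattern regardless of where it occurs in the sentence.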
Graph-to-Tree Neural Networks for Learning Structured Input-Output Translation with Applications to Semantic Parsing and Math Word Problem
The celebrated Seq2Seq technique and its numerous variants achieve excellent
performance on many tasks such as neural machine translation, semantic parsing,
and math word problem solving. However, these models either only consider input
objects as sequences while ignoring the important structural information for
encoding, or they simply treat output objects as sequence outputs instead of
structural objects for decoding. In this paper, we present a novel
Graph-to-Tree Neural Network, Graph2Tree, consisting of a graph encoder
and a hierarchical tree decoder, which encodes an augmented graph-structured
input and decodes a tree-structured output. In particular, we investigate our
model on two problems: neural semantic parsing and math word problem solving.
Our extensive experiments demonstrate that our Graph2Tree model outperforms or
matches the performance of other state-of-the-art models on these tasks.
Comment: Long Paper in EMNLP 2020. 12 pages including references
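The encoder half of this design — node states repeatedly mixed with neighbor messages over the input graph — can be caricatured in a few lines. This is a toy stand-in with an assumed shared weight matrix `W` and a plain adjacency matrix; the paper's graph encoder and its tree decoder are considerably richer.

```python
import numpy as np

def graph_encode(node_feats, adj, W, steps=2):
    """Toy graph encoder: at each step every node updates its state from
    its own features plus messages aggregated from its neighbors, so that
    structural information in the input graph reaches the representation."""
    h = node_feats
    for _ in range(steps):
        h = np.tanh(h @ W + adj @ h @ W)   # self transform + neighbor messages
    return h

rng = np.random.default_rng(4)
n, d = 4, 3
feats = rng.normal(size=(n, d))
adj = np.array([[0, 1, 0, 0],              # toy 4-node graph
                [1, 0, 1, 1],
                [0, 1, 0, 0],
                [0, 1, 0, 0]], dtype=float)
H = graph_encode(feats, adj, rng.normal(size=(d, d)))
```

The resulting node states `H` would then condition a hierarchical decoder that emits the output tree top-down rather than as a flat token sequence.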
Greedy, Joint Syntactic-Semantic Parsing with Stack LSTMs
We present a transition-based parser that jointly produces syntactic and
semantic dependencies. It learns a representation of the entire algorithm
state, using stack long short-term memories. Our greedy inference algorithm
runs in linear time, including feature extraction. On the CoNLL 2008-9 English shared
tasks, we obtain the best published parsing performance among models that
jointly learn syntax and semantics.
Comment: Proceedings of CoNLL 2016; 13 pages, 5 figures
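The transition-based, greedy skeleton shared by this parser (and the easy-first parser below) is easy to sketch. Here the action classifier is a stub oracle; in the paper it is a stack LSTM reading the entire algorithm state, and the transition system also builds semantic arcs.

```python
def parse(words, oracle):
    """Greedy transition-based dependency parsing: at each step the
    classifier (here a stub oracle) picks SHIFT or an arc action based
    on the current stack/buffer configuration. Linear time: each word
    is shifted once and reduced once."""
    stack, buffer, arcs = [], list(range(len(words))), []
    while buffer or len(stack) > 1:
        action = oracle(stack, buffer)
        if action == 'SHIFT':
            stack.append(buffer.pop(0))
        elif action == 'LEFT-ARC':       # second-from-top becomes dependent of top
            dep = stack.pop(-2)
            arcs.append((stack[-1], dep))
        elif action == 'RIGHT-ARC':      # top becomes dependent of the item below
            dep = stack.pop()
            arcs.append((stack[-1], dep))
    return arcs

# scripted oracle for "I saw her": head 'saw' (index 1) governs both arguments
actions = iter(['SHIFT', 'SHIFT', 'LEFT-ARC', 'SHIFT', 'RIGHT-ARC'])
arcs = parse(['I', 'saw', 'her'], lambda s, b: next(actions))
# → [(1, 0), (1, 2)]
```

The greedy loop makes one classifier call per transition, which is what yields the linear-time bound the abstract claims.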
Easy-First Dependency Parsing with Hierarchical Tree LSTMs
We suggest a compositional vector representation of parse trees that relies
on a recursive combination of recurrent-neural network encoders. To demonstrate
its effectiveness, we use the representation as the backbone of a greedy,
bottom-up dependency parser, achieving state-of-the-art accuracies for English
and Chinese, without relying on external word embeddings. The parser's
implementation is available for download at the first author's webpage.
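The compositional representation described above — each node's vector built by recurrently folding in its children's recursively computed vectors — can be sketched as follows. The embedding table `emb` and matrix `W` are illustrative assumptions; the paper uses LSTM encoders over a head's left and right modifier sequences rather than this single-matrix recurrence.

```python
import numpy as np

def encode(tree, emb, W):
    """Bottom-up composition: a node's vector starts from its word
    embedding and is recurrently updated with each child's recursively
    encoded vector, yielding one vector per (sub)tree."""
    word, children = tree
    h = emb[word]
    for child in children:
        h = np.tanh(W @ np.concatenate([h, encode(child, emb, W)]))
    return h

rng = np.random.default_rng(3)
d = 3
emb = {w: rng.normal(size=d) for w in ('I', 'saw', 'her')}
W = rng.normal(size=(d, 2 * d))
# tree: 'saw' heads both 'I' and 'her'
vec = encode(('saw', [('I', []), ('her', [])]), emb, W)
```

Because every subtree is summarized by a fixed-size vector, a greedy bottom-up parser can score candidate attachments directly from the vectors of the fragments being combined.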
Syntactic Dependency Representations in Neural Relation Classification
We investigate the use of different syntactic dependency representations in a
neural relation classification task and compare the CoNLL, Stanford Basic and
Universal Dependencies schemes. We further compare with a syntax-agnostic
approach and perform an error analysis in order to gain a better understanding
of the results.
Comment: arXiv admin note: text overlap with arXiv:1804.0888