Arc-Standard Spinal Parsing with Stack-LSTMs
We present a neural transition-based parser for spinal trees, a dependency
representation of constituent trees. The parser uses Stack-LSTMs that compose
constituent nodes with dependency-based derivations. In experiments, we show
that this model adapts to different styles of dependency relations, but that
this choice has little effect on predicting constituent structure, suggesting
that LSTMs induce useful states by themselves.
Comment: IWPT 2017
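The parser's core data structure is the Stack-LSTM (Dyer et al., 2015): an LSTM whose state can be pushed and popped in lockstep with the transition system's stack. Below is a minimal PyTorch sketch of that structure, not the authors' implementation; the class name, dimensions, and batch-size-1 usage are illustrative assumptions.

```python
# Minimal Stack-LSTM sketch (hypothetical, for illustration only).
# One LSTM state is kept per stack element, so pop() restores the state
# that was current before the last push.
import torch
import torch.nn as nn

class StackLSTM(nn.Module):
    def __init__(self, input_dim, hidden_dim):
        super().__init__()
        self.cell = nn.LSTMCell(input_dim, hidden_dim)
        h0 = torch.zeros(1, hidden_dim)
        c0 = torch.zeros(1, hidden_dim)
        self.states = [(h0, c0)]  # bottom-of-stack state

    def push(self, x):
        # x: (1, input_dim) embedding of the element being shifted/composed
        h, c = self.cell(x, self.states[-1])
        self.states.append((h, c))

    def pop(self):
        # Drop the top state; the previous state becomes current again.
        return self.states.pop()

    def summary(self):
        # Hidden state of the current top: the stack's fixed-size summary,
        # which the transition classifier can condition on.
        return self.states[-1][0]

# Usage: push embeddings on SHIFT, pop on REDUCE/compose steps.
stack = StackLSTM(input_dim=50, hidden_dim=64)
stack.push(torch.randn(1, 50))
features = stack.summary()  # (1, 64)
```

In the spinal setting, a compose step would pop the children's states, build an embedding for the new constituent node, and push it back, so the stack summary always reflects the partial tree.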
Parsing Speech: A Neural Approach to Integrating Lexical and Acoustic-Prosodic Information
In conversational speech, the acoustic signal provides cues that help
listeners disambiguate difficult parses. For automatically parsing spoken
utterances, we introduce a model that integrates transcribed text and
acoustic-prosodic features using a convolutional neural network over energy and
pitch trajectories coupled with an attention-based recurrent neural network
that accepts text and prosodic features. We find that different types of
acoustic-prosodic features are individually helpful, and together give
statistically significant improvements in parse and disfluency detection F1
scores over a strong text-only baseline. For this study with known sentence
boundaries, error analyses show that the main benefit of acoustic-prosodic
features is in sentences with disfluencies, that attachment decisions improve
the most, and that transcription errors obscure gains from prosody.
Comment: Accepted at NAACL HLT 2018
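The fusion idea described in the abstract can be sketched as follows: a 1-D CNN summarizes the frame-level energy and pitch trajectories for each word, and the resulting prosody vector is concatenated with the word embedding before a recurrent encoder that an attention-based decoder can attend over. This is a hedged sketch under assumed names and sizes (ProsodyCNN, two prosodic channels, filter counts), not the paper's exact configuration.

```python
# Illustrative text + prosody fusion sketch in PyTorch (assumptions labeled).
import torch
import torch.nn as nn

class ProsodyCNN(nn.Module):
    """Convolve over per-word energy/pitch tracks, max-pool over time."""
    def __init__(self, n_feats=2, n_filters=32, kernel=5):
        super().__init__()
        self.conv = nn.Conv1d(n_feats, n_filters, kernel_size=kernel,
                              padding=kernel // 2)

    def forward(self, frames):
        # frames: (batch, n_feats, n_frames) for one word
        h = torch.relu(self.conv(frames))
        return h.max(dim=2).values  # (batch, n_filters)

class TextProsodyEncoder(nn.Module):
    """Concatenate word embeddings with prosody vectors, then run a BiLSTM."""
    def __init__(self, vocab, emb_dim=100, pros_dim=32, hidden=128):
        super().__init__()
        self.emb = nn.Embedding(vocab, emb_dim)
        self.prosody = ProsodyCNN(n_filters=pros_dim)
        self.rnn = nn.LSTM(emb_dim + pros_dim, hidden,
                           batch_first=True, bidirectional=True)

    def forward(self, word_ids, frame_batches):
        # word_ids: (batch, seq); frame_batches: one (batch, 2, n_frames)
        # tensor per word position
        pros = torch.stack([self.prosody(f) for f in frame_batches], dim=1)
        x = torch.cat([self.emb(word_ids), pros], dim=-1)
        out, _ = self.rnn(x)
        return out  # encoder states for an attention-based parser/decoder
```

The key design point the abstract makes is that the two feature types are complementary: either channel helps alone, but concatenating both before the encoder yields the significant F1 gains.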
Improving Neural Parsing by Disentangling Model Combination and Reranking Effects
Recent work has proposed several generative neural models for constituency
parsing that achieve state-of-the-art results. Since direct search in these
generative models is difficult, they have primarily been used to rescore
candidate outputs from base parsers in which decoding is more straightforward.
We first present an algorithm for direct search in these generative models. We
then demonstrate that the rescoring results are at least partly due to implicit
model combination rather than reranking effects. Finally, we show that explicit
model combination can improve performance even further, resulting in new
state-of-the-art numbers on the PTB of 94.25 F1 when training only on gold data
and 94.66 F1 when using external data.
Comment: ACL 2017. The first two authors contributed equally.
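The explicit model combination the abstract argues for can be made concrete with a small score-interpolation sketch: candidates from the base parser are scored under both models and a weighted sum decides. The function and scoring hooks (combine_and_select, base_logprob, gen_logprob, alpha) are hypothetical names for illustration, not the paper's interface.

```python
# Hypothetical sketch of explicit model combination for candidate selection.
def combine_and_select(candidates, base_logprob, gen_logprob, alpha=0.5):
    """Return the candidate parse maximizing an interpolated log-probability.

    alpha=1.0 recovers pure rescoring by the generative model; intermediate
    values make the implicit model combination explicit, which is the
    abstract's point about where rescoring gains actually come from.
    """
    def score(tree):
        return alpha * gen_logprob(tree) + (1.0 - alpha) * base_logprob(tree)

    return max(candidates, key=score)
```

Tuning alpha on development data separates the reranking effect (the generative model reorders candidates) from the combination effect (both models' scores contribute to the final decision).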