Constituent Parsing as Sequence Labeling
We introduce a method to reduce constituent parsing to sequence labeling. For
each word w_t, it generates a label that encodes: (1) the number of ancestors
in the tree that the words w_t and w_{t+1} have in common, and (2) the
nonterminal symbol at the lowest common ancestor. We first prove that the
proposed encoding function is injective for any tree without unary branches. In
practice, the approach is made extensible to all constituency trees by
collapsing unary branches. We then use the PTB and CTB treebanks as testbeds
and propose a set of fast baselines. We achieve 90.7% F-score on the PTB test
set, outperforming the Vinyals et al. (2015) sequence-to-sequence parser. In
addition, sacrificing some accuracy, our approach achieves the fastest
constituent parsing speeds reported to date on PTB by a wide margin.

Comment: EMNLP 2018 (Long Papers). Revised version with improved results after fixing an evaluation bug.
Better, Faster, Stronger Sequence Tagging Constituent Parsers
Sequence tagging models for constituent parsing are faster, but less accurate
than other types of parsers. In this work, we address the following weaknesses
of such constituent parsers: (a) high error rates around closing brackets of
long constituents, (b) large label sets, leading to sparsity, and (c) error
propagation arising from greedy decoding. To effectively close brackets, we
train a model that learns to switch between tagging schemes. To reduce
sparsity, we decompose the label set and use multi-task learning to jointly
learn to predict sublabels. Finally, we mitigate issues from greedy decoding
through auxiliary losses and sentence-level fine-tuning with policy gradient.
Combining these techniques, we clearly surpass the performance of sequence
tagging constituent parsers on the English and Chinese Penn Treebanks, and
reduce their parsing time even further. On the SPMRL datasets, we observe even
greater improvements across the board, including a new state of the art on
Basque, Hebrew, Polish and Swedish.

Comment: NAACL 2019 (long papers). Contains corrigendum.
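The label-decomposition idea in point (b) can be sketched briefly. This is an illustrative assumption, not the paper's code: an atomic label combining several components (here written with a hypothetical "~" separator) is split into sublabel sequences, so that separate classifier heads can predict each component under a multi-task objective instead of one head facing the full cross-product label space:

```python
# Hypothetical sketch of decomposing sparse atomic tagging labels into
# sublabels for multi-task learning. An atomic label such as "2~NP"
# (ancestor count + nonterminal) is split so each component gets its
# own, much smaller, output space.

def decompose(labels, sep="~"):
    """Split atomic labels into parallel sublabel sequences."""
    counts, nonterms = zip(*(lab.split(sep) for lab in labels))
    return list(counts), list(nonterms)

atomic = ["2~NP", "1~S", "2~NP", "3~VP"]
counts, nonterms = decompose(atomic)
print(counts)    # ['2', '1', '2', '3']
print(nonterms)  # ['NP', 'S', 'NP', 'VP']
```

The atomic label space grows with the product of the component vocabularies, while each decomposed space grows only with its own vocabulary, which is how the decomposition reduces sparsity.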