3 research outputs found
Dynamic Oracles for Top-Down and In-Order Shift-Reduce Constituent Parsing
We introduce novel dynamic oracles for training two of the most accurate
known shift-reduce algorithms for constituent parsing: the top-down and
in-order transition-based parsers. In both cases, the dynamic oracles
notably increase accuracy compared to classic static training. In
addition, by improving the performance of the
state-of-the-art in-order shift-reduce parser, we achieve the best accuracy to
date (92.0 F1) obtained by a fully-supervised single-model greedy shift-reduce
constituent parser on the WSJ benchmark. Comment: Proceedings of EMNLP 2018, 11 pages.
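A static oracle for these transition systems simply replays the gold action sequence, while the paper's dynamic oracles generalize this to return optimal actions from any parser state, including erroneous ones. The toy sketch below (my own illustration, not the paper's implementation; the tuple-based tree encoding is assumed) shows how the top-down and in-order systems linearize the same gold tree into different static-oracle action sequences.

```python
# Toy sketch contrasting the gold (static-oracle) action sequences of
# top-down and in-order shift-reduce constituent parsers.
# Trees are nested tuples (label, child, ...); strings are tokens.

def top_down_actions(tree):
    # Top-down: open the nonterminal before visiting any child.
    if isinstance(tree, str):
        return ["SHIFT"]
    label, *children = tree
    acts = [f"NT({label})"]
    for child in children:
        acts += top_down_actions(child)
    return acts + ["REDUCE"]

def in_order_actions(tree):
    # In-order: finish the first child, then open the nonterminal.
    if isinstance(tree, str):
        return ["SHIFT"]
    label, *children = tree
    acts = in_order_actions(children[0]) + [f"NT({label})"]
    for child in children[1:]:
        acts += in_order_actions(child)
    return acts + ["REDUCE"]

gold = ("S", ("NP", "she"), ("VP", "runs"))
print(top_down_actions(gold))
# ['NT(S)', 'NT(NP)', 'SHIFT', 'REDUCE', 'NT(VP)', 'SHIFT', 'REDUCE', 'REDUCE']
print(in_order_actions(gold))
# ['SHIFT', 'NT(NP)', 'REDUCE', 'NT(S)', 'SHIFT', 'NT(VP)', 'REDUCE', 'REDUCE']
```

Both sequences derive the same tree; the difference is only when the nonterminal symbol is committed to, which is what the two transition systems (and their oracles) exploit differently.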
Head-Driven Phrase Structure Grammar Parsing on Penn Treebank
Head-driven phrase structure grammar (HPSG) enjoys a uniform formalism
representing rich contextual syntactic and even semantic meanings. This paper
makes the first attempt to formulate a simplified HPSG by integrating
constituent and dependency formal representations into head-driven phrase
structure. Two parsing algorithms are then proposed, one for each of two
converted tree representations: division span and joint span. As HPSG encodes
both constituent and dependency structure information, the proposed HPSG
parsers may be regarded as a sort of joint decoder for both types of structures
and thus are evaluated in terms of extracted or converted constituent and
dependency parsing trees. Our parser achieves new state-of-the-art performance
for both parsing tasks on Penn Treebank (PTB) and Chinese Penn Treebank,
verifying the effectiveness of joint learning constituent and dependency
structures. In detail, we report 96.33 F1 for constituent parsing and 97.20%
UAS for dependency parsing on PTB. Comment: Accepted by ACL 201
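For readers unfamiliar with the dependency metric reported above, unlabeled attachment score (UAS) is simply the fraction of tokens whose predicted head matches the gold head. A minimal sketch (the head encoding and example heads are assumptions for illustration):

```python
# Minimal sketch of unlabeled attachment score (UAS): the fraction of
# tokens whose predicted head index matches the gold head index.
# Heads are given per token; 0 denotes the artificial root.

def uas(gold_heads, pred_heads):
    assert len(gold_heads) == len(pred_heads)
    correct = sum(g == p for g, p in zip(gold_heads, pred_heads))
    return correct / len(gold_heads)

gold = [2, 0, 2, 3]  # hypothetical 4-token sentence
pred = [2, 0, 2, 2]  # one head attached incorrectly
print(uas(gold, pred))  # 3 of 4 heads correct -> 0.75
```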
Two Local Models for Neural Constituent Parsing
Non-local features have been exploited by syntactic parsers for capturing
dependencies between output sub-structures. Such features have been a key to
the success of state-of-the-art statistical parsers. With the rise of deep
learning, however, it has been shown that local output decisions can give
highly competitive accuracies, thanks to the power of dense neural input
representations that embody global syntactic information. We investigate two
conceptually simple local neural models for constituent parsing, which make
local decisions over constituent spans and CFG rules, respectively. Consistent
with previous findings along these lines, our best model gives highly competitive
results, achieving labeled bracketing F1 scores of 92.4% on PTB and 87.3%
on CTB 5.1. Comment: COLING 201
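Labeled bracketing F1, the metric all three abstracts report, compares a predicted parse to the gold parse as multisets of labeled spans. A minimal sketch (the span encoding and example spans are assumptions for illustration):

```python
# Minimal sketch of labeled bracketing F1: parses are compared as
# multisets of (label, start, end) constituent spans.
from collections import Counter

def bracket_f1(gold_spans, pred_spans):
    gold, pred = Counter(gold_spans), Counter(pred_spans)
    matched = sum((gold & pred).values())  # multiset intersection
    precision = matched / sum(pred.values())
    recall = matched / sum(gold.values())
    return 2 * precision * recall / (precision + recall)

gold = [("S", 0, 2), ("NP", 0, 1), ("VP", 1, 2)]
pred = [("S", 0, 2), ("NP", 0, 1), ("NP", 1, 2)]  # one mislabeled span
print(round(bracket_f1(gold, pred), 3))  # 2 of 3 spans match -> 0.667
```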