Parsing as Reduction
We reduce phrase-representation parsing to dependency parsing. Our reduction
is grounded on a new intermediate representation, "head-ordered dependency
trees", shown to be isomorphic to constituent trees. By encoding order
information in the dependency labels, we show that any off-the-shelf, trainable
dependency parser can be used to produce constituents. When this parser is
non-projective, we can perform discontinuous parsing in a very natural manner.
Despite the simplicity of our approach, experiments show that the resulting
parsers are on par with strong baselines, such as the Berkeley parser for
English and the best single system in the SPMRL-2014 shared task. Results are
particularly striking for discontinuous parsing of German, where we surpass the
current state of the art by a wide margin.
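To make the label-encoding idea concrete, here is a minimal sketch (not the authors' implementation) of decoding a head-ordered dependency tree back into a bracketed constituent tree. It assumes arc labels of the hypothetical form "PHRASE#k", where PHRASE is the constituent label and k is the order in which the dependent attaches to constituents headed by its head word:

```python
from collections import defaultdict

def decode_constituents(tokens, heads, labels):
    """Rebuild a bracketed constituent string from a head-ordered dependency tree.
    heads[i] is the head index of token i (-1 for the root); labels[i] has the
    assumed form "PHRASE#k" for non-root tokens."""
    deps = defaultdict(list)
    for i, h in enumerate(heads):
        if h >= 0:
            phrase, k = labels[i].rsplit("#", 1)
            deps[h].append((int(k), i, phrase))

    def build(h):
        node = tokens[h]                      # start from the bare head word
        levels = defaultdict(list)
        for k, i, phrase in deps[h]:
            levels[k].append((i, phrase))
        for k in sorted(levels):              # wrap the head level by level
            items = [(i, build(i)) for i, _ in levels[k]] + [(h, node)]
            items.sort()                      # restore surface word order
            phrase = levels[k][0][1]          # constituent label at this level
            node = "(%s %s)" % (phrase, " ".join(s for _, s in items))
        return node

    return build(heads.index(-1))

# Toy projective example: "the dog barked"
tokens = ["the", "dog", "barked"]
heads  = [1, 2, -1]
labels = ["NP#1", "S#1", "root"]
print(decode_constituents(tokens, heads, labels))   # (S (NP the dog) barked)
```

For non-projective trees the same decoding applies, which is what allows discontinuous constituents to fall out naturally when the underlying dependency parser is non-projective.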
Cross-Lingual Semantic Role Labeling with High-Quality Translated Training Corpus
Considerable research effort has been devoted to semantic role labeling (SRL), which is
crucial for natural language understanding. Supervised approaches achieve
impressive performance when large-scale annotated corpora are available for
resource-rich languages such as English. For low-resource languages with no
annotated SRL dataset, however, obtaining competitive performance remains
challenging. Cross-lingual SRL is one promising way to address the problem,
and it has made great advances with the help of model transfer and
annotation projection. In this paper, we propose a novel alternative based on
corpus translation, constructing high-quality training datasets for the target
languages from the source gold-standard SRL annotations. Experimental results
on Universal Proposition Bank show that the translation-based method is highly
effective, and the automatic pseudo datasets can improve the target-language
SRL performance significantly.
Comment: Accepted at ACL 2020.
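As a rough illustration of how source annotations can be carried over to a translated target sentence, here is a hedged sketch of projecting BIO-style SRL tags across word alignments; the helper name, tag scheme, and alignment pairs are illustrative assumptions, not the paper's actual pipeline:

```python
def project_srl(src_labels, alignments, tgt_len):
    """src_labels: BIO-style SRL tag per source token (e.g. "B-ARG0");
    alignments: list of (src_idx, tgt_idx) word-alignment pairs;
    returns one tag per target token, defaulting to "O"."""
    tgt_labels = ["O"] * tgt_len
    for s, t in alignments:
        if src_labels[s] != "O" and tgt_labels[t] == "O":
            tgt_labels[t] = src_labels[s]   # copy the role label across the alignment
    return tgt_labels

# Toy example: "she reads the book" aligned to a 5-token translation
src_labels = ["B-ARG0", "B-V", "B-ARG1", "I-ARG1"]
alignments = [(0, 0), (1, 1), (2, 3), (3, 4)]
print(project_srl(src_labels, alignments, 5))
# ['B-ARG0', 'B-V', 'O', 'B-ARG1', 'I-ARG1']
```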
Evaluating Scoped Meaning Representations
Semantic parsing offers many opportunities to improve natural language
understanding. We present a semantically annotated parallel corpus for English,
German, Italian, and Dutch where sentences are aligned with scoped meaning
representations in order to capture the semantics of negation, modals,
quantification, and presupposition triggers. The semantic formalism is based on
Discourse Representation Theory, but concepts are represented by WordNet
synsets and thematic roles by VerbNet relations. Translating scoped meaning
representations to sets of clauses enables us to compare them for the purpose
of semantic parser evaluation and checking translations. This is done by
computing precision and recall on matching clauses, similarly to what is done
for Abstract Meaning Representations. We show that our matching tool for
evaluating scoped meaning representations is both accurate and efficient.
Applying this matching tool to three baseline semantic parsers yields F-scores
between 43% and 54%. A pilot study is performed to automatically find changes
in meaning by comparing meaning representations of translations. This
comparison turns out to be an additional way of (i) finding annotation mistakes
and (ii) finding instances where our semantic analysis needs to be improved.
Comment: Camera-ready for LREC 2018.
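To illustrate the clause-matching evaluation, below is a minimal sketch (not the authors' matching tool) of clause-level precision, recall, and F-score between a gold and a system set of clauses. It assumes variable names have already been mapped to a shared namespace, whereas the real evaluation also searches over variable mappings:

```python
def clause_fscore(gold, system):
    """Precision, recall, and F-score over exactly matching clauses."""
    gold, system = set(gold), set(system)
    matched = len(gold & system)
    precision = matched / len(system) if system else 0.0
    recall = matched / len(gold) if gold else 0.0
    f = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f

# Hypothetical clause sets with WordNet-style concepts
gold   = {("b1", "REF", "x1"), ("b1", "dog", "n.01", "x1"), ("b1", "bark", "v.01", "e1")}
system = {("b1", "REF", "x1"), ("b1", "dog", "n.02", "x1"), ("b1", "bark", "v.01", "e1")}
print(clause_fscore(gold, system))   # ≈ (0.67, 0.67, 0.67)
```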