A Simple and Accurate Syntax-Agnostic Neural Model for Dependency-based Semantic Role Labeling
We introduce a simple and accurate neural model for dependency-based semantic
role labeling. Our model predicts predicate-argument dependencies relying on
states of a bidirectional LSTM encoder. The semantic role labeler achieves
competitive performance on English, even without any kind of syntactic
information and only using local inference. However, when automatically
predicted part-of-speech tags are provided as input, it substantially
outperforms all previous local models and approaches the best reported results
on the English CoNLL-2009 dataset. We also consider Chinese, Czech and Spanish,
where our approach likewise achieves competitive results. Syntactic parsers are
unreliable on out-of-domain data, so standard (i.e., syntactically-informed)
SRL models are hindered when tested in this setting. Our syntax-agnostic model
appears more robust, resulting in the best reported results on standard
out-of-domain test sets.
Comment: To appear in CoNLL 201
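"Only local inference" means each candidate (predicate, argument) pair is classified independently, with no joint constraints across arguments. A minimal pure-Python sketch of that step; the 2-dimensional "encoder states", role inventory, and weights below are toy stand-ins for illustration, not the paper's model:

```python
import math

def softmax(scores):
    """Numerically stable softmax over a list of scores."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    return [e / z for e in exps]

def label_arguments(states, predicate_idx, role_weights, roles):
    """Local inference: label every token as an argument of the given
    predicate independently, scoring each (predicate, token) pair with a
    simple weighted product of their encoder states."""
    pred = states[predicate_idx]
    labels = []
    for state in states:
        scores = [sum(w * p * s for w, p, s in zip(role_weights[r], pred, state))
                  for r in roles]
        probs = softmax(scores)
        labels.append(roles[max(range(len(roles)), key=lambda i: probs[i])])
    return labels

# Toy "BiLSTM states" for a 3-token sentence; token 2 is the predicate.
states = [[2.0, 0.0], [0.0, 2.0], [0.5, 0.5]]
role_weights = {"O": [0.6, 0.6], "A0": [1.0, 0.0], "A1": [0.0, 1.0]}
print(label_arguments(states, 2, role_weights, ["O", "A0", "A1"]))
```

Because every token is scored in isolation, the labeler never has to reconcile competing argument assignments, which is exactly what makes purely local inference cheap.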
Cross-Lingual Semantic Role Labeling with High-Quality Translated Training Corpus
Much research effort is devoted to semantic role labeling (SRL), which is
crucial for natural language understanding. Supervised approaches have achieved
impressive performance when large-scale annotated corpora are available for
resource-rich languages such as English, but for low-resource languages with no
annotated SRL data, obtaining competitive performance remains challenging.
Cross-lingual SRL is one promising way to address the problem,
which has achieved great advances with the help of model transferring and
annotation projection. In this paper, we propose a novel alternative based on
corpus translation, constructing high-quality training datasets for the target
languages from the source gold-standard SRL annotations. Experimental results
on Universal Proposition Bank show that the translation-based method is highly
effective, and the automatic pseudo datasets can improve the target-language
SRL performance significantly.
Comment: Accepted at ACL 202
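Annotation projection, mentioned above as one established route to cross-lingual SRL, transfers labels from a source sentence to its translation through word alignments. A minimal sketch of that label-transfer step; the role tags and the alignment dictionary are illustrative assumptions, not taken from the paper:

```python
def project_roles(source_roles, alignment, target_len):
    """Carry per-token SRL labels from a source sentence to its translation
    via word alignments (source index -> target index). Target tokens with
    no aligned, labeled source token default to "O"."""
    target_roles = ["O"] * target_len
    for src_idx, role in enumerate(source_roles):
        if role != "O" and src_idx in alignment:
            target_roles[alignment[src_idx]] = role
    return target_roles

# Hypothetical 4-token sentence whose translation reorders the words.
source_roles = ["A0", "V", "O", "A1"]
alignment = {0: 1, 1: 0, 3: 3}   # source position -> target position
print(project_roles(source_roles, alignment, 4))
```

The paper's corpus-translation alternative goes further (translating whole annotated corpora to build pseudo training sets), but the alignment-based transfer above is the basic mechanism such pipelines build on.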
Transition-based Semantic Role Labeling with Pointer Networks
Semantic role labeling (SRL) focuses on recognizing the predicate-argument
structure of a sentence and plays a critical role in many natural language
processing tasks such as machine translation and question answering.
Practically all available methods fall short of full SRL, since they rely on
pre-identified predicates, and most follow a pipeline strategy, using dedicated
models for one or several SRL subtasks. In addition, previous approaches depend
heavily on syntactic information to achieve state-of-the-art performance, even
though syntactic trees are themselves hard to produce reliably. These
simplifications and requirements make the majority of
SRL systems impractical for real-world applications. In this article, we
propose the first transition-based SRL approach that is capable of completely
processing an input sentence in a single left-to-right pass, with neither
leveraging syntactic information nor resorting to additional modules. Thanks to
our implementation based on Pointer Networks, full SRL can be performed
accurately and efficiently, achieving the best performance to date on the
majority of languages from the CoNLL-2009 shared task.
Comment: Final peer-reviewed manuscript accepted for publication in
Knowledge-Based System
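The single left-to-right pass with pointer-style argument selection can be illustrated with a toy decoding loop: at each predicate position, a scorer repeatedly points at argument positions until it points back at the predicate itself, a common "stop" convention. The scorer below is a hard-coded stub standing in for a trained Pointer Network, and predicate identification (which the authors' system also performs) is omitted:

```python
def decode_srl(n_tokens, predicate_positions, score_fn):
    """One left-to-right pass over the sentence. At each predicate, greedily
    pick the highest-scoring position as the next argument; pointing at the
    predicate itself ends its argument list."""
    arcs = []
    for p in range(n_tokens):
        if p not in predicate_positions:
            continue
        chosen = set()
        while True:
            candidates = [j for j in range(n_tokens) if j == p or j not in chosen]
            j = max(candidates, key=lambda j: score_fn(p, j))
            if j == p:          # stop action for this predicate
                break
            chosen.add(j)
            arcs.append((p, j))
    return arcs

# Stub attention scores standing in for the Pointer Network's output.
stub_scores = {(1, 0): 2.0, (1, 2): 1.5, (1, 1): 1.0, (1, 3): 0.1}
score_fn = lambda p, j: stub_scores.get((p, j), 0.0)
print(decode_srl(4, {1}, score_fn))   # predicate at position 1
```

Each predicate's arguments are emitted as (predicate, argument) arcs in a single sweep, with no separate argument-identification or syntactic-parsing module, which is the design point the abstract emphasizes.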