One for All: Neural Joint Modeling of Entities and Events
Previous work on event extraction has mainly focused on predicting event
triggers and argument roles, treating entity mentions as provided by human
annotators. This is unrealistic, as entity mentions are usually predicted by
existing toolkits whose errors may propagate to event trigger and argument
role recognition. Little recent work has addressed this problem by jointly
predicting entity mentions, event triggers and arguments. Moreover, such work
is limited to discrete engineered features for representing contextual
information for the individual tasks and their interactions. In this work, we
propose a novel model that jointly performs predictions for entity mentions,
event triggers and arguments based on shared hidden representations from deep
learning. The experiments demonstrate the benefits of the proposed method,
leading to state-of-the-art
performance for event extraction.
Comment: Accepted at The Thirty-Third AAAI Conference on Artificial Intelligence (AAAI-19) (Honolulu, Hawaii, USA)
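The core idea of the abstract, several task-specific prediction heads reading the same shared hidden states, can be illustrated with a minimal numpy sketch. This is not the authors' implementation: the dimensions, the random stand-in encoder states, and the plain linear-plus-softmax heads are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

# Hypothetical sizes: a 5-token sentence, 16-dim shared hidden states,
# and small label sets for the three tasks.
seq_len, hidden = 5, 16
n_entity, n_trigger, n_role = 4, 3, 6

# Shared hidden representations. In the paper these come from a deep
# encoder; random vectors stand in for them here.
H = rng.normal(size=(seq_len, hidden))

# One scoring head per task, all reading the SAME shared states, so the
# three predictions are coupled through a common representation.
W_entity  = rng.normal(size=(hidden, n_entity))
W_trigger = rng.normal(size=(hidden, n_trigger))
W_role    = rng.normal(size=(hidden, n_role))

entity_probs  = softmax(H @ W_entity)   # per-token entity-mention labels
trigger_probs = softmax(H @ W_trigger)  # per-token event-trigger labels
role_probs    = softmax(H @ W_role)     # per-token argument-role labels
```

The point of the sketch is only the wiring: because every head consumes the same `H`, training any one task updates representations used by the others, which is what distinguishes joint modeling from a pipeline of separate toolkits.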
Jointly Multiple Events Extraction via Attention-based Graph Information Aggregation
Event extraction is of practical utility in natural language processing. In
the real world, it is common for multiple events to appear in the same
sentence, and extracting them is more difficult than extracting a single
event. Previous works that model the associations between events with
sequential methods suffer from low efficiency in capturing very long-range
dependencies. In this paper, we propose a novel Jointly
Multiple Events Extraction (JMEE) framework to jointly extract multiple event
triggers and arguments by introducing syntactic shortcut arcs to enhance
information flow and attention-based graph convolution networks to model graph
information. The experimental results demonstrate that our proposed framework
achieves competitive results compared with state-of-the-art methods.
Comment: accepted by EMNLP 201
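The second abstract's key mechanism, a graph convolution whose neighbour aggregation is weighted by attention over syntactic arcs, can likewise be sketched in a few lines. This is a generic illustration, not the JMEE architecture: the toy dependency edges, the additive attention scoring, and all dimensions are assumptions introduced here.

```python
import numpy as np

rng = np.random.default_rng(1)

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

n, d = 5, 8                      # 5 tokens, 8-dim hidden states
H = rng.normal(size=(n, d))      # token representations (stand-ins)

# Adjacency from a hypothetical dependency parse, made symmetric and
# given self-loops; extra "shortcut" arcs let information skip
# intermediate tokens instead of flowing through a long chain.
A = np.eye(n)
edges = [(0, 1), (1, 2), (2, 3), (1, 4)]
for i, j in edges:
    A[i, j] = A[j, i] = 1.0

W = rng.normal(size=(d, d))      # layer transform
a = rng.normal(size=(2 * d,))    # attention parameters

def attention_gcn_layer(H, A, W, a):
    """One attention-weighted graph-convolution layer: each token
    aggregates its graph neighbours, weighted by a learned score."""
    Z = H @ W
    logits = np.full((n, n), -1e9)   # mask non-edges
    for i in range(n):
        for j in range(n):
            if A[i, j]:
                # Additive attention over the connected pair (i, j).
                logits[i, j] = np.tanh(np.concatenate([Z[i], Z[j]]) @ a)
    alpha = softmax(logits)          # attention over each token's neighbours
    return np.tanh(alpha @ Z)        # aggregated, transformed states

H_out = attention_gcn_layer(H, A, W, a)
```

Stacking a few such layers lets tokens linked by shortcut arcs exchange information in one hop, which is how graph-based aggregation sidesteps the long-range-dependency bottleneck of purely sequential models.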