Event Coreference Resolution by Iteratively Unfolding Inter-dependencies among Events
We introduce a novel iterative approach for event coreference resolution that
gradually builds event clusters by exploiting inter-dependencies among event
mentions within the same chain as well as across event chains. Among event
mentions in the same chain, we distinguish within- and cross-document event
coreference links by using two distinct pairwise classifiers, trained
separately to capture differences in feature distributions of within- and
cross-document event clusters. Our event coreference approach alternates
between WD and CD clustering and combines arguments from both event clusters
after every merge, continuing until no further merges can be made. It then
performs additional merging between event chains that are each closely related to
a common set of other event chains. Experiments on the ECB+ corpus show that our
model outperforms state-of-the-art methods on the joint task of WD and CD event
coreference resolution. Comment: EMNLP 201
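To make the alternating clustering procedure concrete, here is a minimal Python sketch of a greedy agglomerative loop that alternates WD and CD merge passes and pools cluster members after each merge. The pairwise scorers, the merge threshold, and the mention representation (a dict with a "doc" field) are hypothetical stand-ins for illustration, not the authors' trained classifiers or implementation.

    # Minimal sketch (not the authors' implementation) of an alternating
    # WD/CD agglomerative clustering loop. wd_score and cd_score stand in
    # for the two separately trained pairwise classifiers.
    from itertools import combinations

    def docs(cluster):
        return {m["doc"] for m in cluster}

    def cluster_events(mentions, wd_score, cd_score, threshold=0.5):
        """Greedily merge event clusters, alternating WD and CD passes."""
        clusters = [[m] for m in mentions]             # start from singletons
        merged = True
        while merged:                                  # stop when no merge is possible
            merged = False
            for score_fn, within_doc in ((wd_score, True), (cd_score, False)):
                best, pair = threshold, None
                for (i, a), (j, b) in combinations(enumerate(clusters), 2):
                    if (len(docs(a) | docs(b)) == 1) != within_doc:
                        continue                       # wrong link type for this pass
                    s = score_fn(a, b)                 # pairwise cluster-merge score
                    if s > best:
                        best, pair = s, (i, j)
                if pair:
                    i, j = pair
                    clusters[i].extend(clusters[j])    # merge clusters, pooling arguments
                    del clusters[j]
                    merged = True
        return clusters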
Identity and Granularity of Events in Text
In this paper we describe a method to detect event descriptions in
different news articles and to model the semantics of events and their
components using RDF representations. We compare these descriptions to solve a
cross-document event coreference task. Our component approach to event
semantics defines identity and granularity of events at different levels. It
performs close to state-of-the-art approaches on the cross-document event
coreference task, and it outperforms other work when a similar quality of event
detection is assumed. We demonstrate how granularity and identity are
interconnected, and we discuss how semantic anomaly could be used to define the
differences between coreference, subevent, and topical relations. Comment: Invited keynote speech by Piek Vossen at Cicling 201
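As an illustration of what an RDF description of an event and its components might look like, the following rdflib sketch encodes one event with actor, place, and time triples. The SEM vocabulary, the example URIs, and the property names are assumptions made for this sketch; the paper's actual representation may differ.

    # Illustrative sketch only: one way to encode an event and its components
    # as RDF triples with rdflib. The SEM vocabulary and example URIs are
    # assumptions, not necessarily the representation used in the paper.
    from rdflib import Graph, Literal, Namespace, RDF

    SEM = Namespace("http://semanticweb.cs.vu.nl/2009/11/sem/")
    EX = Namespace("http://example.org/events/")

    g = Graph()
    g.bind("sem", SEM)

    event = EX["shooting_article12_ev3"]
    g.add((event, RDF.type, SEM.Event))
    g.add((event, SEM.hasActor, EX["John_Doe"]))        # participant component
    g.add((event, SEM.hasPlace, EX["Wilmington"]))      # location component
    g.add((event, SEM.hasTime, Literal("2017-02-04")))  # time component

    # Cross-document coreference can then be assessed by comparing the
    # component sets of two such event descriptions.
    print(g.serialize(format="turtle"))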
A Mention-Ranking Model for Abstract Anaphora Resolution
Resolving abstract anaphora is an important but difficult task for text
understanding. Yet, with recent advances in representation learning this task
becomes a more tangible aim. A central property of abstract anaphora is that it
establishes a relation between the anaphor embedded in the anaphoric sentence
and its (typically non-nominal) antecedent. We propose a mention-ranking model
that learns how abstract anaphors relate to their antecedents with an
LSTM-Siamese Net. We overcome the lack of training data by generating
artificial anaphoric sentence--antecedent pairs. Our model outperforms
state-of-the-art results on shell noun resolution. We also report first
benchmark results on an abstract anaphora subset of the ARRAU corpus. This
corpus presents a greater challenge due to a mixture of nominal and pronominal
anaphors and a greater range of confounders. We found model variants that
outperform the baselines for nominal anaphors, without training on individual
anaphor data, but still lag behind for pronominal anaphors. Our model selects
syntactically plausible candidates and, if syntax is disregarded, discriminates
candidates using deeper features. Comment: In Proceedings of the 2017 Conference on Empirical Methods in Natural
Language Processing (EMNLP), Copenhagen, Denmark.
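For a concrete picture of a Siamese-LSTM mention-ranking scorer, the following PyTorch sketch encodes the anaphoric sentence and each candidate antecedent with a shared LSTM and ranks candidates by a learned score. Layer sizes, the scoring head, and the toy inputs are assumptions made for illustration and are not the authors' architecture or training setup.

    # A minimal PyTorch sketch of a Siamese-LSTM mention-ranking scorer.
    # Dimensions, the scoring head, and the toy inputs are illustrative
    # assumptions, not the architecture from the paper.
    import torch
    import torch.nn as nn

    class SiameseRanker(nn.Module):
        def __init__(self, vocab_size, emb_dim=100, hidden=128):
            super().__init__()
            self.emb = nn.Embedding(vocab_size, emb_dim)
            self.lstm = nn.LSTM(emb_dim, hidden, batch_first=True)  # shared (Siamese) encoder
            self.score = nn.Sequential(
                nn.Linear(2 * hidden, hidden), nn.ReLU(), nn.Linear(hidden, 1)
            )

        def encode(self, tokens):
            # The anaphoric sentence and each candidate antecedent pass
            # through the same LSTM; the last hidden state is the encoding.
            _, (h, _) = self.lstm(self.emb(tokens))
            return h[-1]

        def forward(self, anaphor_tokens, candidate_tokens):
            a = self.encode(anaphor_tokens)            # (batch, hidden)
            c = self.encode(candidate_tokens)          # (batch, hidden)
            return self.score(torch.cat([a, c], dim=-1)).squeeze(-1)

    # Ranking: score each candidate antecedent against the anaphoric
    # sentence and pick the highest-scoring one.
    model = SiameseRanker(vocab_size=5000)
    anaphor = torch.randint(0, 5000, (3, 12))          # same anaphor repeated per candidate
    candidates = torch.randint(0, 5000, (3, 9))        # three toy candidate antecedents
    best = model(anaphor, candidates).argmax().item()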