A Topic-Sensitive Model for Salient Entity Linking
Abstract. In recent years, the number of entities in large knowledge bases available on the Web has been increasing rapidly. Such entities can be used to bridge textual data with knowledge bases and thus help with many tasks, such as text understanding, word sense disambiguation and information retrieval. The key issue is to link the entity mentions in documents with the corresponding entities in knowledge bases, a task referred to as entity linking. In addition, for many entity-centric applications, entity salience within a document has become a very important factor. This raises a pressing need to identify the set of salient entities that are central to the input document. In this paper, we introduce the new task of salient entity linking and propose a graph-based disambiguation solution that integrates several features, in particular a topic-sensitive model based on Wikipedia categories. Experimental results show that our method significantly outperforms state-of-the-art entity linking methods in terms of precision, recall and F-measure.
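A minimal sketch of the disambiguation step this abstract describes: each mention's candidate entities are scored by mixing a local mention-entity score with a topical coherence score for the document. The candidate entities, scores, and the mixing weight alpha are illustrative assumptions, not the paper's actual model.

```python
# Hedged sketch: pick a candidate entity by combining a local
# mention-entity score with a document-level topic coherence score.
# All scores and the alpha weight below are hypothetical.

def link_mention(candidates, topic_sim, alpha=0.6):
    """candidates: list of (entity, local_score) pairs.
    topic_sim: dict mapping entity -> topical coherence with the document.
    Returns the candidate maximizing the weighted mixture."""
    best, best_score = None, float("-inf")
    for entity, local in candidates:
        score = alpha * local + (1 - alpha) * topic_sim.get(entity, 0.0)
        if score > best_score:
            best, best_score = entity, score
    return best

# "apple" in a tech article: topical coherence overrides the local score
cands = [("Apple_Inc.", 0.7), ("Apple_(fruit)", 0.8)]
topics = {"Apple_Inc.": 0.9, "Apple_(fruit)": 0.1}
print(link_mention(cands, topics))  # Apple_Inc.
```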
Same but Different: Distant Supervision for Predicting and Understanding Entity Linking Difficulty
Entity Linking (EL) is the task of automatically identifying entity mentions
in a piece of text and resolving them to a corresponding entity in a reference
knowledge base like Wikipedia. There is a large number of EL tools available
for different types of documents and domains, yet EL remains a challenging task
where the lack of precision on particularly ambiguous mentions often spoils the
usefulness of automated disambiguation results in real applications. A priori
approximations of the difficulty to link a particular entity mention can
facilitate flagging of critical cases as part of semi-automated EL systems,
while detecting latent factors that affect the EL performance, like
corpus-specific features, can provide insights on how to improve a system based
on the special characteristics of the underlying corpus. In this paper, we
first introduce a consensus-based method to generate difficulty labels for
entity mentions on arbitrary corpora. The difficulty labels are then exploited
as training data for a supervised classification task able to predict the EL
difficulty of entity mentions using a variety of features. Experiments over a
corpus of news articles show that EL difficulty can be estimated with high
accuracy, revealing also latent features that affect EL performance. Finally,
evaluation results demonstrate the effectiveness of the proposed method to
inform semi-automated EL pipelines.
Comment: Preprint of paper accepted for publication in the 34th ACM/SIGAPP Symposium On Applied Computing (SAC 2019).
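The consensus-based labeling idea above can be sketched as follows: run several independent EL systems on the same mention and call it "easy" when they agree on the target entity, "difficult" otherwise. The entity IDs and the agreement threshold are illustrative assumptions.

```python
# Hedged sketch of consensus-based difficulty labeling: a mention is
# labeled "easy" when independent EL systems agree on its target entity,
# "difficult" otherwise. The system outputs below are hypothetical.
from collections import Counter

def difficulty_label(system_predictions, agreement_threshold=1.0):
    """system_predictions: list of entity IDs, one per EL system.
    Returns "easy" if the majority entity reaches the agreement
    threshold (as a fraction of systems), else "difficult"."""
    counts = Counter(system_predictions)
    _, top_count = counts.most_common(1)[0]
    agreement = top_count / len(system_predictions)
    return "easy" if agreement >= agreement_threshold else "difficult"

print(difficulty_label(["Q312", "Q312", "Q312"]))  # easy
print(difficulty_label(["Q312", "Q89", "Q312"]))   # difficult
```

These labels could then serve as training targets for the supervised difficulty classifier the abstract mentions.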
Pair-Linking for Collective Entity Disambiguation: Two Could Be Better Than All
Collective entity disambiguation aims to jointly resolve multiple mentions by
linking them to their associated entities in a knowledge base. Previous works
are primarily based on the underlying assumption that entities within the same
document are highly related. However, the extend to which these mentioned
entities are actually connected in reality is rarely studied and therefore
raises interesting research questions. For the first time, we show that the
semantic relationships between the mentioned entities are in fact less dense
than expected. This could be attributed to several reasons such as noise, data
sparsity and knowledge base incompleteness. As a remedy, we introduce MINTREE,
a new tree-based objective for the entity disambiguation problem. The key
intuition behind MINTREE is the concept of coherence relaxation which utilizes
the weight of a minimum spanning tree to measure the coherence between
entities. Based on this new objective, we design a novel entity disambiguation
algorithm, which we call Pair-Linking. Instead of considering all the given
mentions, Pair-Linking iteratively selects a pair with the highest confidence
at each step for decision making. Via extensive experiments, we show that our
approach is not only more accurate but also surprisingly faster than many
state-of-the-art collective linking algorithms.
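The greedy pair-selection idea can be sketched as below: rather than optimizing over all mentions jointly, commit the single most confident pair of (mention, entity) assignments at each step. This is a simplified reading of Pair-Linking (it only pairs unresolved mentions and assumes at least two mentions); the candidate entities and pair scores are made up.

```python
# Hedged, simplified sketch of greedy pair-wise linking: at each step,
# commit the (mention, entity) pair assignment with the highest
# pairwise confidence. Assumes at least two mentions.
import itertools

def pair_linking(candidates, pair_score):
    """candidates: dict mention -> list of candidate entities.
    pair_score: f(m1, e1, m2, e2) -> confidence of linking both.
    Returns a dict mention -> chosen entity."""
    resolved = {}
    unresolved = set(candidates)
    while len(unresolved) >= 2:
        best = None
        for m1, m2 in itertools.combinations(sorted(unresolved), 2):
            for e1 in candidates[m1]:
                for e2 in candidates[m2]:
                    s = pair_score(m1, e1, m2, e2)
                    if best is None or s > best[0]:
                        best = (s, m1, e1, m2, e2)
        _, m1, e1, m2, e2 = best
        resolved[m1], resolved[m2] = e1, e2
        unresolved -= {m1, m2}
    # a leftover odd mention: pick its best partner among resolved ones
    for m in unresolved:
        resolved[m] = max(candidates[m],
                          key=lambda e: max(pair_score(m, e, m2, e2)
                                            for m2, e2 in resolved.items()))
    return resolved

scores = {("A", "C"): 0.9, ("A", "D"): 0.2, ("B", "C"): 0.1, ("B", "D"): 0.3}
result = pair_linking({"m1": ["A", "B"], "m2": ["C", "D"]},
                      lambda m1, e1, m2, e2: scores[(e1, e2)])
print(result)  # {'m1': 'A', 'm2': 'C'}
```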
Information structure and the referential status of linguistic expressions: workshop as part of the 23rd annual meeting of the Deutsche Gesellschaft für Sprachwissenschaft in Leipzig, Leipzig, February 28 - March 2, 2001
This volume comprises papers that were given at the workshop Information Structure and the Referential Status of Linguistic Expressions, which we organized during the Deutsche Gesellschaft für Sprachwissenschaft (DGfS) Conference in Leipzig in February 2001. At this workshop we discussed the connection between information structure and the referential interpretation of linguistic expressions, a topic mostly neglected in current linguistic research. One common aim of the papers is to find out to what extent the focus-background as well as the topic-comment structuring determine the referential interpretation of simple arguments like definite and indefinite NPs on the one hand and sentences on the other.
Focus structure and the referential status of indefinite quantificational expressions
Many authors who subscribe to some version of generative syntax account for the two readings of [...] sentences [...] in terms of LF-ambiguity. There is assumed to be covert quantifier raising (QR), which results in two distinct possibilities for the indefinite quantificational expressions involved to take scope over each other [...] In this paper, an alternative account is proposed which dispenses with the idea that there are different scope relations involved in the readings of [...] sentences [...] and, consequently, with QR as the syntactic operation to be assumed for generating the respective LFs. I argue that it is rather focus structure, in connection with type-semantic issues pertaining to the indefinite quantificational expressions involved, which results in the different readings associated with [...] sentences.
Leveraging Semantic Annotations to Link Wikipedia and News Archives
The overwhelming amount of information available online has made it difficult to retrospect on past events. We propose a novel linking problem to connect excerpts from Wikipedia summarizing events to online news articles elaborating on them. To address the linking problem, we cast it into an information retrieval task by treating a given excerpt as a user query with the goal to retrieve a ranked list of relevant news articles. We find that Wikipedia excerpts often come with additional semantics, in their textual descriptions, representing the time, geolocations, and named entities involved in the event. Our retrieval model leverages text and semantic annotations as different dimensions of an event by estimating independent query models to rank documents. In our experiments on two datasets, we compare methods that consider different combinations of dimensions and find that the approach that leverages all dimensions suits our problem best.
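The multi-dimensional ranking described above can be sketched as a weighted combination of independent per-dimension scores (text, time, geolocation, named entities). The dimension names, scores, and weights below are illustrative assumptions, not the paper's estimated query models.

```python
# Hedged sketch: rank a news article against a Wikipedia event excerpt
# by mixing independent per-dimension retrieval scores. Dimensions,
# weights, and scores are hypothetical.

def event_score(dim_scores, weights):
    """Weighted sum of per-dimension scores; missing dimensions score 0."""
    return sum(weights[d] * dim_scores.get(d, 0.0) for d in weights)

weights = {"text": 0.4, "time": 0.2, "geo": 0.2, "entity": 0.2}
doc_a = {"text": 0.8, "time": 0.9, "geo": 0.5, "entity": 0.7}  # matches all dimensions
doc_b = {"text": 0.9, "time": 0.1, "geo": 0.2, "entity": 0.3}  # text-only match
print(event_score(doc_a, weights) > event_score(doc_b, weights))  # True
```

This mirrors the abstract's finding: a document matching every dimension of the event outranks one that matches on text alone.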
NASARI: a novel approach to a Semantically-Aware Representation of items
The semantic representation of individual word senses and concepts is of fundamental importance to several applications in Natural Language Processing. To date, concept modeling techniques have in the main based their representation either on lexicographic resources, such as WordNet, or on encyclopedic resources, such as Wikipedia. We propose a vector representation technique that combines the complementary knowledge of both these types of resource. Thanks to its use of explicit semantics combined with a novel cluster-based dimensionality reduction and an effective weighting scheme, our representation attains state-of-the-art performance on multiple datasets in two standard benchmarks: word similarity and sense clustering. We are releasing our vector representations at http://lcl.uniroma1.it/nasari/
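One standard way such vector representations are used on the word-similarity benchmark mentioned above is cosine similarity between concept vectors. The sparse weighted vectors below are made up for illustration and are not actual NASARI vectors.

```python
# Hedged sketch: comparing concepts via cosine similarity of sparse
# weighted vectors, as one would with NASARI-style representations.
# The vectors below are fabricated for illustration.
import math

def cosine(u, v):
    """Cosine similarity of two sparse vectors given as dicts."""
    dot = sum(w * v.get(k, 0.0) for k, w in u.items())
    nu = math.sqrt(sum(w * w for w in u.values()))
    nv = math.sqrt(sum(w * w for w in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

car = {"vehicle": 0.9, "engine": 0.7, "road": 0.4}
truck = {"vehicle": 0.8, "engine": 0.6, "cargo": 0.5}
banana = {"fruit": 0.9, "yellow": 0.6}
print(cosine(car, truck) > cosine(car, banana))  # True
```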