Evaluation Measures for Relevance and Credibility in Ranked Lists
Recent discussions on alternative facts, fake news, and post-truth politics
have motivated research on creating technologies that allow people not only to
access information, but also to assess the credibility of the information
presented to them by information retrieval systems. Whereas technology is in
place for filtering information according to relevance and/or credibility, no
single measure currently exists for evaluating the accuracy or precision (and
more generally effectiveness) of both the relevance and the credibility of
retrieved results. One obvious way of doing so is to measure relevance and
credibility effectiveness separately, and then consolidate the two measures
into one. There are at least two problems with such an approach: (I) it is not
certain that the same criteria are applied to the evaluation of both relevance
and credibility (and applying different criteria introduces bias to the
evaluation); (II) many more and richer measures exist for assessing relevance
effectiveness than for assessing credibility effectiveness (hence risking
further bias).
Motivated by the above, we present two novel types of evaluation measures
that are designed to measure the effectiveness of both relevance and
credibility in ranked lists of retrieval results. Experimental evaluation on a
small human-annotated dataset (that we make freely available to the research
community) shows that our measures are expressive and intuitive in their
interpretation.
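The abstract does not spell out the proposed measures, but the naive consolidation it cautions against can be sketched as blending each document's graded relevance and credibility labels into a single gain inside a DCG-style discount. The `alpha` weight and the 0-2 label scale below are illustrative assumptions, not the paper's actual measures:

```python
import math

def combined_dcg(ranking, alpha=0.5, k=10):
    """DCG-style score where each document's gain is a convex blend of a
    graded relevance label and a graded credibility label (both 0-2 here).
    ranking: list of (relevance, credibility) pairs in rank order."""
    score = 0.0
    for i, (rel, cred) in enumerate(ranking[:k], start=1):
        gain = alpha * rel + (1 - alpha) * cred  # per-document blend (assumption)
        score += gain / math.log2(i + 1)         # standard log-rank discount
    return score

# toy ranked list: a perfect document, then partially good ones
run = [(2, 2), (2, 0), (0, 2), (1, 1)]
score = combined_dcg(run)
```

Such a per-document blend inherits the abstract's problem (II): the blend weight has no principled calibration across the two criteria, which is precisely why a purpose-built joint measure is attractive.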
A Coherent Unsupervised Model for Toponym Resolution
Toponym Resolution, the task of assigning a location mention in a document to
a geographic referent (i.e., latitude/longitude), plays a pivotal role in
analyzing location-aware content. However, the ambiguities of natural language
and the huge number of possible interpretations for toponyms pose
formidable hurdles for this task. In this paper, we study the problem of
toponym resolution with no additional information other than a gazetteer and no
training data. We demonstrate that a dearth of large enough annotated data
makes supervised methods less capable of generalizing. Our proposed method
estimates the geographic scope of documents and leverages the connections
between nearby place names as evidence to resolve toponyms. We explore the
interactions between multiple interpretations of mentions and the relationships
between different toponyms in a document to build a model that finds the most
coherent resolution. Our model is evaluated on three news corpora, two from the
literature and one collected and annotated by us; then, we compare our methods
to the state-of-the-art unsupervised and supervised techniques. We also examine
three commercial products including Reuters OpenCalais, Yahoo! YQL Placemaker,
and Google Cloud Natural Language API. The evaluation shows that our method
outperforms the unsupervised technique as well as Reuters OpenCalais and Google
Cloud Natural Language API on all three corpora; also, our method shows a
performance close to that of the state-of-the-art supervised method and
outperforms it when the test data has 40% or more toponyms that are not seen in
the training data.
Comment: 9 pages (+1 page of references), WWW '18: Proceedings of the 2018 World Wide Web Conference
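The coherence idea can be sketched as picking, for each toponym, the gazetteer candidate that minimizes the total pairwise geographic distance across the document. The brute-force search, the mention set, and the coordinates below are illustrative assumptions, not the paper's actual model:

```python
import itertools
import math

def haversine(p, q):
    """Great-circle distance in km between two (lat, lon) points."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*p, *q))
    a = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371 * math.asin(math.sqrt(a))

def most_coherent(candidates):
    """Choose one interpretation per toponym so the sum of pairwise
    distances is minimal (brute force; fine for a handful of mentions)."""
    best, best_cost = None, float("inf")
    for combo in itertools.product(*candidates.values()):
        cost = sum(haversine(p, q)
                   for p, q in itertools.combinations(combo, 2))
        if cost < best_cost:
            best, best_cost = combo, cost
    return dict(zip(candidates.keys(), best))

# hypothetical gazetteer entries: each mention is ambiguous
mentions = {
    "Paris":  [(48.857, 2.352), (33.661, -95.556)],   # France vs. Texas
    "London": [(51.507, -0.128), (42.984, -81.246)],  # UK vs. Ontario
}
resolved = most_coherent(mentions)
```

A document mentioning both "Paris" and "London" resolves to the European pair, since those two candidates lie far closer to each other than any cross-continental combination.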
A Survey on Retrieval of Mathematical Knowledge
We present a short survey of the literature on indexing and retrieval of
mathematical knowledge, with pointers to 72 papers and tentative taxonomies of
both retrieval problems and recurring techniques.
Comment: CICM 2015, 20 pages
280 Birds with One Stone: Inducing Multilingual Taxonomies from Wikipedia using Character-level Classification
We propose a simple, yet effective, approach towards inducing multilingual
taxonomies from Wikipedia. Given an English taxonomy, our approach leverages
the interlanguage links of Wikipedia followed by character-level classifiers to
induce high-precision, high-coverage taxonomies in other languages. Through
experiments, we demonstrate that our approach significantly outperforms the
state-of-the-art, heuristics-heavy approaches for six languages. As a
consequence of our work, we release presumably the largest and the most
accurate multilingual taxonomic resource, spanning over 280 languages.
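A character-level classifier of the flavor described can be sketched as comparing character n-gram profiles of category names against class and non-class prototypes. The toy German examples, the centroid/cosine setup, and the class-vs-topic framing are illustrative assumptions, not the paper's architecture:

```python
import math
from collections import Counter

def char_ngrams(s, n=3):
    """Character n-gram counts with ^/$ boundary markers."""
    s = f"^{s.lower()}$"
    return Counter(s[i:i + n] for i in range(len(s) - n + 1))

def centroid(examples, n=3):
    """Summed n-gram profile of a set of example strings."""
    total = Counter()
    for e in examples:
        total.update(char_ngrams(e, n))
    return total

def cosine(a, b):
    dot = sum(a[k] * b[k] for k in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# hypothetical toy training data: category names denoting classes
# (plural head nouns) vs. mere topics
classes = ["Französische Maler", "Deutsche Physiker", "Italienische Städte"]
topics = ["Geschichte Frankreichs", "Physik", "Kultur Italiens"]

def is_class(name):
    profile = char_ngrams(name)
    return cosine(profile, centroid(classes)) > cosine(profile, centroid(topics))
```

Character-level features are attractive here because they transfer across morphologically related languages without tokenizers or language-specific resources, which is what makes scaling to hundreds of languages plausible.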
Falcon 2.0: An Entity and Relation Linking Tool over Wikidata
The Natural Language Processing (NLP) community has contributed significantly to solutions for recognizing entities and relations in natural language text and linking them to their proper matches in Knowledge Graphs (KGs). With Wikidata as the background KG, however, few tools exist for linking knowledge in text to Wikidata. In this paper, we present Falcon 2.0, the first joint entity and relation linking tool over Wikidata. It receives a short natural language text in English and outputs a ranked list of entities and relations annotated with their candidate matches in Wikidata, each represented by its Internationalized Resource Identifier (IRI). Falcon 2.0 relies on English-language rules (e.g., N-Gram tiling and N-Gram splitting) for the recognition task and an optimization approach for the linking task. We have empirically studied the performance of Falcon 2.0 on Wikidata and found that it outperforms all existing baselines. Falcon 2.0 is open source and can be reused by the community; all required instructions are documented in our GitHub repository (https://github.com/SDM-TIB/falcon2.0). We also provide an online API that can be used without any technical expertise. Falcon 2.0 and its background knowledge bases are available as resources at https://labs.tib.eu/falcon/falcon2/
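N-gram tiling and splitting can be sketched as a greedy longest-match-first lookup of token n-grams against a label index. The in-memory `index` and the greedy strategy itself are illustrative assumptions rather than Falcon 2.0's actual implementation (Q76 and Q64 are, to the best of our knowledge, the real Wikidata items for Barack Obama and Berlin):

```python
def ngram_split(text, index, max_n=3):
    """Greedy longest-match-first linking: try the longest n-grams at each
    position and tile the text with non-overlapping matches.  A drastic
    simplification of N-Gram tiling/splitting, for illustration only."""
    tokens = text.lower().split()
    links, i = [], 0
    while i < len(tokens):
        for n in range(min(max_n, len(tokens) - i), 0, -1):
            span = " ".join(tokens[i:i + n])
            if span in index:       # longest span matching a known label wins
                links.append((span, index[span]))
                i += n
                break
        else:                       # no n-gram at this position matched
            i += 1
    return links

# tiny stand-in for a Wikidata label index (assumption, not Falcon's index)
index = {"barack obama": "Q76", "berlin": "Q64"}
links = ngram_split("Barack Obama visited Berlin", index)
```

Trying longer spans first is what keeps "Barack Obama" linked as one entity instead of splitting into two partial matches; a real system would then rank the surviving candidates, which is where the abstract's optimization step comes in.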