Toponym Disambiguation in Information Retrieval
In recent years, geography has acquired great importance in the context of
Information Retrieval (IR) and, in general, of the automated processing of
information in text. Mobile devices that can surf the web and at the
same time report their position are now a common reality, together
with applications that exploit this data to provide users with locally
customised information, such as directions or advertisements. It is therefore
important to deal properly with the geographic information that is
included in electronic texts. Most of this information appears
as place names, or toponyms.
Toponym ambiguity represents an important issue in Geographical Information
Retrieval (GIR), due to the fact that queries are geographically constrained.
There has been a struggle to find specific geographical IR methods
that actually outperform traditional IR techniques. Toponym ambiguity
may be a relevant factor in the inability of current GIR systems to
take advantage of geographical knowledge. Recently, some Ph.D. theses
have dealt with Toponym Disambiguation (TD) from different perspectives,
from the development of resources for the evaluation of Toponym Disambiguation
(Leidner (2007)) to the use of TD to improve geographical scope
resolution (Andogah (2010)). The Ph.D. thesis presented here introduces
a TD method based on WordNet and carries out a detailed study of the
relationship of Toponym Disambiguation to some IR applications, such as
GIR, Question Answering (QA) and Web retrieval.
The work presented in this thesis starts with an introduction to the applications
in which TD may prove useful, together with an analysis of the
ambiguity of toponyms in news collections. It would not be possible to
study the ambiguity of toponyms without studying the resources that are
used as place name repositories; these resources are the equivalent of language
dictionaries, which provide the different meanings of a given word.
Buscaldi, D. (2010). Toponym Disambiguation in Information Retrieval [Unpublished doctoral thesis]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/8912
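The gazetteer-as-dictionary analogy above can be illustrated with a minimal sketch: each ambiguous toponym maps to several candidate "senses", and a simple context-overlap heuristic (in the spirit of Lesk-style disambiguation, not the thesis's actual WordNet-based method) picks the best one. The toy gazetteer entries below are invented for illustration.

```python
# A toy gazetteer: each toponym maps to candidate referents, each with a
# small set of context words that typically co-occur with that referent.
# (Hypothetical data; a real system would draw on WordNet or GeoNames.)
TOY_GAZETTEER = {
    "Cambridge": [
        {"country": "United Kingdom", "lat": 52.205, "lon": 0.119,
         "context": {"england", "university", "cam", "uk"}},
        {"country": "United States", "lat": 42.373, "lon": -71.110,
         "context": {"massachusetts", "mit", "harvard", "boston"}},
    ],
}

def disambiguate(toponym, text):
    """Pick the candidate whose context words overlap the text the most."""
    words = set(text.lower().split())
    candidates = TOY_GAZETTEER.get(toponym, [])
    return max(candidates,
               key=lambda c: len(c["context"] & words),
               default=None)

best = disambiguate("Cambridge", "Students at Harvard often cross into Boston")
print(best["country"])  # → United States
```

The overlap count is a stand-in for the richer semantic similarity measures a WordNet-based approach would use; the structure (toponym → candidate senses → score → argmax) is the part the analogy describes.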
How can voting mechanisms improve the robustness and generalizability of toponym disambiguation?
A vast amount of geographic information exists in natural language texts,
such as tweets and news. Extracting geographic information from texts is called
Geoparsing, which includes two subtasks: toponym recognition and toponym
disambiguation, i.e., to identify the geospatial representations of toponyms.
This paper focuses on toponym disambiguation, which is usually approached by
toponym resolution and entity linking. Recently, many novel approaches have
been proposed, especially deep learning-based approaches, such as CamCoder,
GENRE, and BLINK. In this paper, a spatial clustering-based voting approach
that combines several individual approaches is proposed to improve SOTA
performance in terms of robustness and generalizability. Experiments are
conducted to compare the voting ensemble with 20 recent and commonly used
approaches based on 12 public datasets, including several highly ambiguous and
challenging datasets (e.g., WikToR and CLDW). The datasets are of six types:
tweets, historical documents, news, web pages, scientific articles, and
Wikipedia articles, containing in total 98,300 places across the world. The
results show that the voting ensemble performs the best on all the datasets,
achieving an average Accuracy@161km of 0.86, proving the generalizability and
robustness of the voting approach. Also, the voting ensemble drastically
improves the performance of resolving fine-grained places, i.e., POIs, natural
features, and traffic ways.
Comment: 32 pages, 15 figures
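The spatial clustering-based voting idea can be sketched as follows: collect the point predictions of the individual geocoders, cluster them by geographic proximity, and return a member of the largest cluster. This is a minimal leader-clustering sketch, not the paper's actual ensemble; reusing the 161 km radius of the Accuracy@161km metric as the cluster threshold is an assumption made here for illustration.

```python
import math

def haversine_km(p, q):
    """Great-circle distance in km between two (lat, lon) pairs."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*p, *q))
    a = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371.0 * math.asin(math.sqrt(a))

def vote(predictions, radius_km=161.0):
    """Assign each prediction to the first cluster whose leader is within
    radius_km (simple leader clustering), then return the leader of the
    largest cluster as the ensemble's answer."""
    clusters = []  # each cluster is a list of points; clusters[i][0] is its leader
    for p in predictions:
        for c in clusters:
            if haversine_km(p, c[0]) <= radius_km:
                c.append(p)
                break
        else:
            clusters.append([p])
    return max(clusters, key=len)[0]

# Three hypothetical geocoders agree on Paris, France; one outlier
# resolves to Paris, Texas. The majority cluster wins.
preds = [(48.857, 2.352), (48.86, 2.34), (48.85, 2.35), (33.66, -95.56)]
print(vote(preds))  # → (48.857, 2.352)
```

The intuition behind the robustness claim is visible even in this sketch: a single geocoder's gross error is outvoted as long as the other members agree within the cluster radius.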
The SpatialCIM methodology for spatial document coverage disambiguation and the entity recognition process aided by linguistic techniques.
Abstract. Nowadays it is becoming more usual for users to take into account the geographical localization of documents in the information retrieval process. However, conventional information retrieval systems based on keyword matching do not consider which words can represent geographical entities that are spatially related to other entities in the document. This paper presents the SpatialCIM methodology, which is based on three steps: pre-processing, data expansion and disambiguation. In the pre-processing step, the entity recognition process is carried out with the support of the Rembrandt tool. Additionally, a comparison is presented between the performance of the Rembrandt tool in discovering location entities in texts and the use of a controlled vocabulary of Brazilian geographic locations. For the comparison, a set of geographically labeled news articles in Portuguese covering sugar cane culture is used. The results showed an F-measure increase for the Rembrandt tool from 45% in the non-disambiguated process to 50% after disambiguation, and from 35% to 38% using the controlled vocabulary. Additionally, the results showed that the Rembrandt tool has a minimal amplitude difference between precision and recall, although the controlled vocabulary always has the highest recall values.
GeoDoc 2012, PAKDD 2012
Geocoding location expressions in Twitter messages: A preference learning method
Resolving location expressions in text to the correct physical location, also known as geocoding or grounding, is complicated by the fact that so many places around the world share the same name. Correct resolution is made even more difficult when there is little context to determine which place is intended, as in a 140-character Twitter message, or when location cues from different sources conflict, as may be the case among different metadata fields of a Twitter message. We used supervised machine learning to weigh the different fields of the Twitter message and the features of a world gazetteer to create a model that prefers the correct gazetteer candidate when resolving the extracted expression. We evaluated our model using the F1 measure and compared it to similar algorithms. Our method achieved results higher than state-of-the-art competitors.
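The candidate-ranking setup the abstract describes can be sketched as a scoring function over gazetteer candidates, combining features drawn from the gazetteer (e.g. population) and from the message's metadata fields (e.g. the profile country). The feature set and the hand-set weights below are invented stand-ins for the paper's learned preference model.

```python
# Hypothetical features for ranking gazetteer candidates against a
# location mention and the surrounding Twitter metadata.
def features(candidate, meta):
    return {
        "population": min(candidate["population"] / 1e7, 1.0),  # size prior
        "country_match": 1.0 if candidate["country"] == meta.get("profile_country") else 0.0,
        "name_exact": 1.0 if candidate["name"] == meta["mention"] else 0.0,
    }

# Assumed weights for illustration; the paper learns these from
# labeled data rather than setting them by hand.
WEIGHTS = {"population": 0.4, "country_match": 1.0, "name_exact": 0.6}

def score(candidate, meta):
    f = features(candidate, meta)
    return sum(WEIGHTS[k] * v for k, v in f.items())

candidates = [
    {"name": "London", "country": "GB", "population": 8_900_000},
    {"name": "London", "country": "CA", "population": 400_000},
]
meta = {"mention": "London", "profile_country": "CA"}
best = max(candidates, key=lambda c: score(c, meta))
print(best["country"])  # → CA
```

The example shows why metadata matters: on population alone the UK candidate wins, but the profile-country cue flips the preference to London, Ontario, which is the kind of conflict-weighing the learned model is meant to perform.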