Event summarization on social media stream: retrospective and prospective tweet summarization
User-generated content on social media, such as Twitter, often provides the latest news before traditional media, which makes it possible to obtain a retrospective summary of events and to stay updated in a timely fashion whenever a new development occurs. However, while social media is a valuable source of information, it can also be overwhelming given the volume and the velocity of published information. To shield users from irrelevant and redundant posts, retrospective summarization and prospective notification (real-time summarization) were introduced as two complementary tasks of information seeking on document streams. The former aims to select a list of relevant and non-redundant tweets that capture "what happened". In the latter, systems monitor the live post stream and push relevant and novel notifications as soon as possible.
Our work falls within these frameworks and focuses on developing tweet summarization approaches for the two aforementioned scenarios. It aims to provide summaries that capture the key aspects of the event of interest, helping users efficiently acquire information and follow the development of long ongoing events on social media. Nevertheless, the tweet summarization task faces many challenges that stem, on the one hand, from the high volume, velocity, and variety of the published information and, on the other hand, from the quality of tweets, which can vary significantly.
In prospective notification, the core task is real-time relevance and novelty detection. For timeliness, a system may choose to push new updates immediately or to trade timeliness for higher notification quality. Our contributions address these aspects. First, we introduce the Word Similarity Extended Boolean Model (WSEBM), a relevance model that does not rely on stream statistics and takes advantage of word embeddings. We use word similarity instead of traditional term-weighting techniques, which mitigates the shortness and word-mismatch issues of tweets. The intuition behind our proposition is that the context-aware similarity measure learned by word2vec can match different words with the same meaning, and hence offsets the word mismatch when computing the similarity between a tweet and a topic. Second, we propose to compute the novelty score of an incoming tweet against all words of the tweets already pushed to the user, instead of using pairwise tweet-to-tweet comparison. The proposed novelty detection method scales better and reduces execution time, which suits real-time tweet filtering. Third, we propose an adaptive learning-to-filter approach that leverages social signals as well as query-dependent features. To overcome the issue of setting a relevance threshold, we use a binary classifier that predicts the relevance of the incoming tweet. In addition, we show the gain that can be achieved by taking advantage of ongoing relevance feedback. Finally, we adopt a real-time push strategy and show that the proposed approach achieves promising performance in terms of quality (relevance and novelty) at a low cost in latency, whereas state-of-the-art approaches tend to trade latency for higher quality.
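To make the first two contributions concrete, here is a minimal sketch of embedding-based relevance scoring and vocabulary-based novelty detection. It assumes a pre-trained word2vec model loaded with gensim; the file path, function shapes, and scoring details are illustrative simplifications, not the thesis implementation.

```python
# Illustrative sketch only, not the thesis implementation.
from gensim.models import KeyedVectors

# Hypothetical path to a pre-trained word2vec model.
wv = KeyedVectors.load_word2vec_format("embeddings.bin", binary=True)

def relevance(tweet_terms, topic_terms):
    """Soft term matching: each topic term contributes its best embedding
    similarity to any tweet term, instead of a binary exact match."""
    scores = []
    for q in topic_terms:
        best = max((wv.similarity(q, t) for t in tweet_terms
                    if q in wv and t in wv), default=0.0)
        scores.append(best)
    return sum(scores) / len(scores) if scores else 0.0

pushed_vocabulary = set()  # all words of tweets already pushed to the user

def novelty(tweet_terms):
    """Fraction of tweet terms unseen in previously pushed tweets: a single
    vocabulary lookup instead of a pairwise comparison with every pushed tweet."""
    if not tweet_terms:
        return 0.0
    unseen = [t for t in tweet_terms if t not in pushed_vocabulary]
    return len(unseen) / len(tweet_terms)

def push(tweet_terms):
    """Record a pushed tweet by adding its terms to the vocabulary."""
    pushed_vocabulary.update(tweet_terms)
```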
This thesis also explores a novel approach to generating a retrospective summary that follows a different paradigm than the majority of state-of-the-art methods. We cast summary generation as an optimization problem that takes into account both topical and temporal diversity. Tweets are filtered and incrementally clustered into two cluster types, namely topical clusters based on content similarity and temporal clusters based on publication time. Summary generation is then formulated as an integer linear program in which the unknown variables are binary, the objective function is to be maximized, and the constraints ensure that at most one post per cluster is selected, subject to a predefined summary length limit.
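As an illustration of this formulation, the following sketch expresses the selection with the PuLP library. The scores, cluster assignments, and length limit are hypothetical inputs, and the simple score-sum objective stands in for the actual objective function of the thesis, which is not reproduced here.

```python
# Hedged sketch of the ILP: binary selection variables, a maximized objective,
# at most one tweet per topical/temporal cluster, and a summary length cap.
import pulp

tweets = ["t0", "t1", "t2", "t3"]                     # hypothetical candidates
score = {"t0": 0.9, "t1": 0.7, "t2": 0.8, "t3": 0.4}  # hypothetical scores
topical = {"t0": 0, "t1": 0, "t2": 1, "t3": 1}        # topical cluster ids
temporal = {"t0": 0, "t1": 1, "t2": 1, "t3": 0}       # temporal cluster ids
max_len = 2                                           # summary length limit

prob = pulp.LpProblem("tweet_summary", pulp.LpMaximize)
x = {t: pulp.LpVariable(f"x_{t}", cat="Binary") for t in tweets}

# Objective: maximize the total score of the selected tweets (stand-in).
prob += pulp.lpSum(score[t] * x[t] for t in tweets)

# At most one tweet per topical cluster and per temporal cluster.
for clusters in (topical, temporal):
    for c in set(clusters.values()):
        prob += pulp.lpSum(x[t] for t in tweets if clusters[t] == c) <= 1

# Summary length limit.
prob += pulp.lpSum(x[t] for t in tweets) <= max_len

prob.solve()
summary = [t for t in tweets if x[t].value() == 1]
print(summary)
```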
Entity-Oriented Search
This open access book covers all facets of entity-oriented search—where “search” can be interpreted in the broadest sense of information access—from a unified point of view, and provides a coherent and comprehensive overview of the state of the art. It represents the first synthesis of research in this broad and rapidly developing area. Selected topics are discussed in depth, the goal being to establish fundamental techniques and methods as a basis for future research and development. Additional topics are treated at a survey level only, containing numerous pointers to the relevant literature. A roadmap for future research, based on open issues and challenges identified along the way, rounds out the book.
The book is divided into three main parts, sandwiched between introductory and concluding chapters. The first two chapters introduce readers to the basic concepts, provide an overview of entity-oriented search tasks, and present the various types and sources of data that will be used throughout the book. Part I deals with the core task of entity ranking: given a textual query, possibly enriched with additional elements or structural hints, return a ranked list of entities. This core task is examined in a number of different variants, using both structured and unstructured data collections, and numerous query formulations. In turn, Part II is devoted to the role of entities in bridging unstructured and structured data. Part III explores how entities can enable search engines to understand the concepts, meaning, and intent behind the query that the user enters into the search box, and how they can provide rich and focused responses (as opposed to merely a list of documents)—a process known as semantic search. The final chapter concludes the book by discussing the limitations of current approaches and suggesting directions for future research.
Researchers and graduate students are the primary target audience of this book. A general background in information retrieval is sufficient to follow the material, including an understanding of basic probability and statistics concepts as well as a basic knowledge of machine learning concepts and supervised learning algorithms.
Knowledge graph exploration for natural language understanding in web information retrieval
In this thesis, we study methods to leverage information from fully-structured knowledge bases
(KBs), in particular the encyclopedic knowledge graph (KG) DBpedia, for different text-related
tasks from the area of information retrieval (IR) and natural language processing (NLP). The
key idea is to apply entity linking (EL) methods that identify mentions of KB entities in text,
and then exploit the structured information within KGs. Developing entity-centric methods for
text understanding using KG exploration is the focus of this work.
We aim to show that structured background knowledge is a means for improving performance in different IR and NLP tasks that traditionally make use only of the unstructured text input itself. Thereby, the KB entities mentioned in text act as the connection between the unstructured text and the structured KG. We focus in particular on how to best leverage the knowledge contained in fully-structured (RDF) KGs like DBpedia, with their labeled edges/predicates. This is in contrast to the previous Wikipedia-based work we build upon, which typically relies on unlabeled graphs only. The contribution of this thesis can be structured along its three parts:
In Part I, we apply EL to semantify short text snippets with KB entities. While retrieving only the types and categories of each entity from DBpedia, we are able to leverage this information
to create semantically coherent clusters of text snippets. This pipeline of connecting text to
background knowledge via the mentioned entities will be reused in all following chapters.
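As a rough illustration of this first step (not the thesis code), the types of a linked entity can be fetched from DBpedia's public SPARQL endpoint; the entity URI and endpoint availability are assumptions of this sketch.

```python
# Minimal sketch: retrieve rdf:type information for one linked entity from
# DBpedia. Illustrative only; the chosen entity is an example.
from SPARQLWrapper import SPARQLWrapper, JSON

sparql = SPARQLWrapper("https://dbpedia.org/sparql")
sparql.setQuery("""
    SELECT ?type WHERE {
        <http://dbpedia.org/resource/Barack_Obama> rdf:type ?type .
    }
""")
sparql.setReturnFormat(JSON)

results = sparql.query().convert()
types = [b["type"]["value"] for b in results["results"]["bindings"]]
print(types)  # e.g. DBpedia ontology classes such as dbo:Person
```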
In Part II, we focus on semantic similarity and extend the idea of semantifying text with entities
by proposing in Chapter 5 a model that represents whole documents by their entities. In this
model, comparing documents semantically with each other is viewed as the task of comparing
the semantic relatedness of the respective entities, which we address in Chapter 4. We propose
an unsupervised graph weighting schema and show that weighting the DBpedia KG leads to
better results on an existing entity ranking dataset. The exploration of weighted KG paths also turns out to be useful when disambiguating the entities produced by an open information extraction (OIE) system in Chapter 6. With this weighting schema, integrating KG information for computing semantic document similarity in Chapter 5 becomes the task of comparing two KG subgraphs with each other, which we address by approximate subgraph matching. On a well-established evaluation dataset for semantic document similarity, we show that our unsupervised method achieves performance competitive with other state-of-the-art methods.
Our results from this part indicate that KGs can contain helpful background knowledge, in particular
when exploring KG paths, but that selecting the relevant parts of the graph is an important
yet difficult challenge.
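A much-simplified sketch of this document-as-entities idea follows; the thesis uses weighted DBpedia paths and approximate subgraph matching, which this toy version replaces with a generic pairwise entity-relatedness function.

```python
# Toy sketch: documents are represented by their linked entities, and document
# similarity is the (symmetrized) average best relatedness of each entity to
# the other document's entities. `relatedness` is a placeholder for any
# entity-relatedness measure, e.g. one derived from a weighted KG.
def document_similarity(entities_a, entities_b, relatedness):
    def directed(src, dst):
        if not src or not dst:
            return 0.0
        return sum(max(relatedness(e, f) for f in dst) for e in src) / len(src)
    # Symmetrize the two directed scores.
    return 0.5 * (directed(entities_a, entities_b) +
                  directed(entities_b, entities_a))

# Usage with a trivial stand-in relatedness: 1.0 for identical entities.
print(document_similarity({"Berlin", "Germany"}, {"Berlin", "Hamburg"},
                          lambda e, f: 1.0 if e == f else 0.0))
```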
In Part III, we shift to the task of relevance ranking and first study in Chapter 7 how to best
retrieve KB entities for a given keyword query. Combining again text with KB information, we
extract entities from the top-k retrieved, query-specific documents and then link the documents
to two different KBs, namely Wikipedia and DBpedia. In a learning-to-rank setting, we study
extensively which features from the text, the Wikipedia KB, and the DBpedia KG can be helpful
for ranking entities with respect to the query. Experimental results on two datasets, which build
upon existing TREC document retrieval collections, indicate that the document-based mention
frequency of an entity and the Wikipedia-based query-to-entity similarity are both important
features for ranking. The KG paths in contrast play only a minor role in this setting, even when
integrated with a semantic kernel extension. In Chapter 8, we further extend the integration of
query-specific text documents and KG information by extracting not only entities but also relations
from text. In this exploratory study based on a self-created relevance dataset, we find that
not all extracted relations are relevant with respect to the query, but that they often contain information
not contained within the DBpedia KG. The main insight from the research presented in
this part is that in a query-specific setting, established IR methods for document retrieval provide
an important source of information even for entity-centric tasks, and that a close integration of
relevant text documents and background knowledge is promising.
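To make the feature setup concrete, here is a hedged sketch of the two features reported above as most important, with illustrative inputs; the token-overlap similarity is a stand-in, not the Wikipedia-based measure used in the thesis.

```python
# Sketch of two entity-ranking features: (1) how often an entity is mentioned
# across the top-k retrieved documents, and (2) a query-to-entity textual
# similarity. Inputs and the Jaccard stand-in are illustrative assumptions.
from collections import Counter

def mention_frequency(entity, doc_mentions):
    """doc_mentions: one list of recognized entity mentions per document."""
    return sum(Counter(mentions)[entity] for mentions in doc_mentions)

def query_entity_similarity(query, entity_text):
    """Token-overlap (Jaccard) similarity between query and entity text."""
    q, e = set(query.lower().split()), set(entity_text.lower().split())
    return len(q & e) / len(q | e) if q | e else 0.0

features = {
    "mention_freq": mention_frequency(
        "Barack_Obama", [["Barack_Obama", "USA"], ["Barack_Obama"]]),
    "query_sim": query_entity_similarity(
        "us president obama", "barack obama was the 44th us president"),
}
print(features)
```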
Finally, in the concluding chapter we argue that future research should further address the integration of KG information with entities and relations extracted from (specific) text documents, as their potential does not yet seem fully explored. The same also holds true for better KG exploration, which has gained some scientific interest in recent years. It seems to us that both aspects will remain interesting problems in the coming years, not least because of the growing importance of KGs for web search and knowledge modeling in industry and academia.
Survey on Evaluation Methods for Dialogue Systems
In this paper we survey the methods and concepts developed for the evaluation of dialogue systems. Evaluation is a crucial part of the development process. Often, dialogue systems are evaluated by means of human evaluations and questionnaires; however, this tends to be very costly and time-intensive. Thus, much work has been put into finding methods that reduce the need for human labour. In this survey, we present the main concepts and methods. To this end, we differentiate between the various classes of dialogue systems (task-oriented dialogue systems, conversational dialogue systems, and question-answering dialogue systems). We cover each class by introducing the main technologies developed for those dialogue systems and then presenting the evaluation methods for that class.
Evaluating Information Retrieval and Access Tasks
This open access book summarizes the first two decades of the NII Testbeds and Community for Information access Research (NTCIR). NTCIR is a series of evaluation forums run by a global team of researchers and hosted by the National Institute of Informatics (NII), Japan. The book is unique in that it discusses not just what was done at NTCIR, but also how it was done and the impact it has achieved. For example, in some chapters the reader sees the early seeds of what eventually grew to be the search engines that provide access to content on the World Wide Web, today’s smartphones that can tailor what they show to the needs of their owners, and the smart speakers that enrich our lives at home and on the move. We also get glimpses into how new search engines can be built for mathematical formulae, or for the digital record of a lived human life.
Key to the success of the NTCIR endeavor was early recognition that information access research is an empirical discipline and that evaluation therefore lay at the core of the enterprise. Evaluation is thus at the heart of each chapter in this book. They show, for example, how the recognition that some documents are more important than others has shaped thinking about evaluation design. The thirty-three contributors to this volume speak for the many hundreds of researchers from dozens of countries around the world who together shaped NTCIR as organizers and participants.
This book is suitable for researchers, practitioners, and students: anyone who wants to learn about past and present evaluation efforts in information retrieval, information access, and natural language processing, as well as those who want to participate in an evaluation task or even to design and organize one.
Knowledge-based and data-driven approaches for geographical information access
Geographical Information Access (GeoIA) can be defined as a way of retrieving information from textual collections that includes the automatic analysis and interpretation of the geographical constraints and terms present in queries and documents. This PhD thesis presents, describes, and evaluates several heterogeneous approaches to the following three GeoIA tasks: Geographical Information Retrieval (GIR), Geographical Question Answering (GeoQA), and Textual Georeferencing (TG). The GIR task deals with user queries that search over documents (e.g. "vineyards in California?") and the GeoQA task treats questions that retrieve answers (e.g. "What is the capital of France?"). TG, on the other hand, is the task of associating one or more georeferences (such as polygons or coordinates in a geodetic reference system) with electronic documents.
Current state-of-the-art AI algorithms do not yet fully understand the semantic meaning and the geographical constraints and terms present in queries and document collections. This thesis attempts to improve the effectiveness of GeoIA tasks by: 1) improving the detection, understanding, and use of part of the geographical and thematic content of queries and documents with Toponym Recognition, Toponym Disambiguation, and Natural Language Processing (NLP) techniques, and 2) combining Geographical Knowledge-Based Heuristics based on common sense with Data-Driven IR algorithms.
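To give a flavour of how such a combination can look, here is a hedged sketch that re-ranks a data-driven retrieval run with a geographical knowledge heuristic; the containment test, boost factor, and function names are illustrative assumptions, not the TALPGeoIR implementation described below.

```python
# Hedged sketch: boost documents whose recognized toponyms fall inside the
# query's geographical scope, then re-sort the data-driven ranking.
def geo_rerank(ranked_docs, query_scope, doc_toponyms, in_scope, boost=1.5):
    """ranked_docs: list of (doc_id, retrieval_score) pairs, e.g. from BM25.
    query_scope: geographical scope extracted from the query.
    doc_toponyms: mapping doc_id -> recognized, disambiguated toponyms.
    in_scope(toponym, scope): knowledge-based containment test (assumed)."""
    rescored = []
    for doc_id, score in ranked_docs:
        if any(in_scope(t, query_scope) for t in doc_toponyms.get(doc_id, [])):
            score *= boost
        rescored.append((doc_id, score))
    return sorted(rescored, key=lambda pair: pair[1], reverse=True)
```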
The main contributions of this thesis to the state-of-the-art of GeoIA tasks are:
1) The presentation of 10 novel approaches for GeoIA tasks: 3 approaches for GIR, 3 for GeoQA, and 4 for Textual Georeferencing (TG).
2) The evaluation of these novel approaches in several contexts: within official evaluation benchmarks, after the benchmarks using their test collections, and with other specific datasets. Most of these algorithms have been evaluated in international evaluations, and some of them achieved top-ranked state-of-the-art results, including top-performing results in the GIR (GeoCLEF 2007) and TG (MediaEval 2014) benchmarks.
3) The experiments reported in this PhD thesis show that the approaches can effectively combine Geographical Knowledge and NLP with Data-Driven techniques to improve the effectiveness measures of the three Geographical Information Access tasks investigated.
4) TALPGeoIR: a novel GIR approach that combines Geographical Knowledge ReRanking (GeoKR), NLP, and Relevance Feedback (RF), and which achieved state-of-the-art results in official GeoCLEF benchmarks (Ferrés and Rodríguez, 2008; Mandl et al., 2008) and posterior experiments (Ferrés and Rodríguez, 2015a). This approach has been evaluated with the full GeoCLEF corpus (100 topics), showing that GeoKR, NLP, and RF techniques, evaluated separately or in combination, improve the MAP and R-Precision effectiveness measures over the state-of-the-art IR algorithms TF-IDF, BM25, and InL2, with statistical significance in most of the experiments.
5) GeoTALP-QA: a scope-based GeoQA approach for Spanish and English, and its evaluation with a set of questions about Spanish geography (Ferrés and Rodríguez, 2006).
6) Four Textual Georeferencing approaches for informal and formal documents that achieved state-of-the-art results in evaluation benchmarks (Ferrés and Rodríguez, 2014) and posterior experiments (Ferrés and Rodríguez, 2011; Ferrés and Rodríguez, 2015b); a toy georeferencing sketch follows below.
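As a toy illustration of textual georeferencing (not the approaches of the thesis), recognized toponyms can be resolved through a gazetteer service and the document georeferenced by its most frequent toponym; the use of geopy/Nominatim here is an assumption about tooling, and real systems must also handle toponym ambiguity.

```python
# Toy sketch: georeference a document by the coordinates of its most frequent
# recognized toponym, resolved with a gazetteer lookup (assumed tooling).
from collections import Counter
from geopy.geocoders import Nominatim

def georeference(toponyms):
    """toponyms: toponym strings recognized in one document."""
    if not toponyms:
        return None
    most_common = Counter(toponyms).most_common(1)[0][0]
    geolocator = Nominatim(user_agent="georeferencing-sketch")
    location = geolocator.geocode(most_common)
    if location is None:
        return None
    return (location.latitude, location.longitude)

print(georeference(["California", "San Francisco", "California"]))
```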