9 research outputs found
Improving search effectiveness in sentence retrieval and novelty detection
In this thesis we study sentence retrieval and novelty detection in depth. We analyze the strengths and weaknesses of current state-of-the-art methods and, subsequently, propose new mechanisms to address sentence retrieval and novelty detection.
Retrieval and novelty detection are related tasks: usually, we first apply a retrieval model that properly estimates the relevance of passages (e.g. sentences) and generates a ranking of passages sorted by their relevance.
Next, this ranking is used as the input of a novelty detection module, which
tries to filter out redundant passages in the ranking.
The estimation of relevance at sentence level is difficult. Standard methods used to estimate relevance are simply based on matching query and sentence terms. However, queries usually contain two or three terms and sentences are also short. Therefore, the matching between query and sentences is poor. In order to address this problem, we study how to enrich
this process with additional information: the context. The context refers
to the information provided by the surrounding sentences or the document
where the sentence is located. Such context reduces ambiguity and supplies
additional information not included in the sentence itself. Additionally, it is important to estimate how important (central) a sentence is within the document. These two components are studied within a formal framework based on Statistical Language Models. In this respect, we demonstrate that these components yield improvements in current sentence retrieval methods.
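The idea of combining evidence from the sentence, its context, and the collection can be sketched with a standard mixture of language models. The following is a minimal illustration of the general technique, assuming Jelinek-Mercer style interpolation; it is not the thesis's exact estimator, and the mixture weights and the add-one collection floor are illustrative assumptions:

```python
import math
from collections import Counter

def lm_sentence_score(query, sentence, context, collection,
                      lam_sent=0.5, lam_ctx=0.3):
    """Score a sentence against a query with a mixture of three language
    models: the sentence itself, its context (e.g. the surrounding
    document), and the whole collection (Jelinek-Mercer interpolation)."""
    sent = Counter(sentence.lower().split())
    ctx = Counter(context.lower().split())
    coll = Counter(collection.lower().split())
    n_sent = max(sum(sent.values()), 1)
    n_ctx = max(sum(ctx.values()), 1)
    n_coll = sum(coll.values())
    lam_coll = 1.0 - lam_sent - lam_ctx
    score = 0.0
    for term in query.lower().split():
        p = (lam_sent * sent[term] / n_sent
             + lam_ctx * ctx[term] / n_ctx
             + lam_coll * (coll[term] + 1) / (n_coll + 1))  # add-one floor avoids log(0)
        score += math.log(p)
    return score
```

A sentence whose surrounding context also matches the query is rewarded even when the sentence itself is very short, which is the intuition behind context-enriched sentence retrieval.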
In this thesis we work with collections of sentences that were extracted from news articles. News not only report facts but also express opinions that people
have about a particular event or topic. Therefore, the proper estimation of
which passages are opinionated may help to further improve the estimation
of relevance for sentences. We apply a formal methodology that helps us to
incorporate opinions into standard sentence retrieval methods. Additionally,
we propose simple empirical alternatives to incorporate query-independent
features into sentence retrieval models. We demonstrate that the incorporation of opinions to estimate relevance is an important factor that makes sentence retrieval methods more effective. Throughout this study, we also analyze query-independent features based on sentence length and named entities.
The combination of the context-based approach with the incorporation
of opinion-based features is straightforward. We study how to combine these
two approaches and the impact of this combination. We demonstrate that context-based models
are implicitly promoting sentences with opinions and, therefore, opinion-
based features do not help to further improve context-based methods.
The second part of this thesis is dedicated to novelty detection at sentence level. Because novelty is actually dependent on a retrieval ranking, we con-
sider here two approaches: a) the perfect-relevance approach, which consists
of using a ranking where all sentences are relevant; and b) the non-perfect rel-
evance approach, which consists of applying first a sentence retrieval method.
We first study which baseline performs best and, next, we propose a
number of variations. One of the mechanisms proposed is based on vocab-
ulary pruning. We demonstrate that considering terms from the top ranked
sentences in the original ranking helps to guide the estimation of novelty. The
application of Language Models to support novelty detection is another challenge that we face in this thesis. We apply different smoothing methods in the
context of alternative mechanisms to detect novelty. Additionally, we test a
mechanism based on mixture models that uses the Expectation-Maximization
algorithm to automatically obtain the novelty score of a sentence.
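Whatever the underlying estimator, the filtering step takes a relevance ranking and suppresses redundancy. A minimal sketch, using a "fraction of unseen terms" novelty score as an illustrative stand-in for the thesis's language-model and mixture-model mechanisms:

```python
def novelty_filter(ranked_sentences, threshold=0.5):
    """Walk a relevance ranking in order; keep a sentence only if the
    fraction of its terms not yet seen in accepted sentences reaches
    the threshold, then add its terms to the seen vocabulary."""
    seen = set()
    kept = []
    for sent in ranked_sentences:
        terms = set(sent.lower().split())
        if not terms:
            continue
        novelty = len(terms - seen) / len(terms)  # fraction of unseen terms
        if novelty >= threshold:
            kept.append(sent)
            seen |= terms
    return kept
```

The threshold controls the trade-off between filtering aggressively (losing relevant material) and leniently (keeping redundant sentences).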
In the last part of this work we demonstrate that most novelty methods
lead to a strong re-ordering of the initial ranking. However, we show that the
top ranked sentences in the initial list are usually novel and re-ordering them
is often harmful. Therefore, we propose different mechanisms that determine
the position threshold where novelty detection should be initiated. In this
respect, we consider query-independent and query-dependent approaches.
Summing up, we identify important limitations of current sentence retrieval and novelty methods, and propose novel and effective methods.
Ranking for Web Data Search Using On-The-Fly Data Integration
Ranking - the algorithmic decision on how relevant an information artifact is for a given information need, and the sorting of artifacts by their concluded relevancy - is an integral part of every search engine. In this book we investigate how structured Web data can be leveraged for ranking, with the goal of improving the effectiveness of search. We propose new solutions for ranking using on-the-fly data integration and experimentally analyze and evaluate them against the latest baselines.
Focused Retrieval
Traditional information retrieval applications, such as Web search, return atomic units of retrieval, which are generically called "documents". Depending on the application, a document may be a Web page, an email message, a journal article, or any similar object. In contrast to this traditional approach, focused retrieval helps users better pinpoint their exact information needs by returning results at the sub-document level. These results may consist of predefined document components - such as pages, sections, and paragraphs - or they may consist of arbitrary passages, comprising any sub-string of a document. If a document is marked up with XML, a focused retrieval system might return individual XML elements or ranges of elements. This thesis proposes and evaluates a number of approaches to focused retrieval, including methods based on XML markup and methods based on arbitrary passages. It considers the best unit of retrieval, explores methods for efficient sub-document retrieval, and evaluates formulae for sub-document scoring. Focused retrieval is also considered in the specific context of Wikipedia, where methods for automatic vandalism detection and automatic link generation are developed and evaluated.
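Arbitrary-passage retrieval can be illustrated with a sliding-window scorer over a document's tokens. The term-overlap score below is a deliberately simple stand-in for the sub-document scoring formulae evaluated in the thesis; the window and step sizes are illustrative assumptions:

```python
def best_passage(doc_tokens, query_terms, window=50, step=25):
    """Slide a fixed-width window over the document and return the span
    (start, end) of the best-scoring passage together with its score,
    where the score is the number of query-term occurrences in the span."""
    qset = {t.lower() for t in query_terms}
    best_span, best_score = (0, min(window, len(doc_tokens))), -1
    for start in range(0, max(1, len(doc_tokens) - window + 1), step):
        span = doc_tokens[start:start + window]
        score = sum(1 for t in span if t.lower() in qset)
        if score > best_score:
            best_span, best_score = (start, start + len(span)), score
    return best_span, best_score
```

Overlapping steps (step < window) keep a cluster of query terms from being split across two window boundaries.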
Answering Topical Information Needs Using Neural Entity-Oriented Information Retrieval and Extraction
In the modern world, search engines are an integral part of human lives. The field of Information Retrieval (IR) is concerned with finding material (usually documents) of an unstructured nature (usually text) that satisfies an information need (query) from within large collections (usually stored on computers). The search engine then displays a ranked list of results relevant to our query. Traditional document retrieval algorithms match a query to a document using the overlap of words in both. However, the last decade has seen the focus shifting to leveraging the rich semantic information available in the form of entities. Entities are uniquely identifiable objects or things such as places, events, diseases, etc. that exist in the real or fictional world. Entity-oriented search systems leverage the semantic information associated with entities (e.g., names, types, etc.) to better match documents to queries. Web search engines would provide better search results if they understood the meaning of a query.
This dissertation advances the state-of-the-art in IR by developing novel algorithms that understand text (query, document, question, sentence, etc.) at the semantic level. To this end, this dissertation aims to understand the fine-grained meaning of entities from the context in which the entities have been mentioned, for example, “oysters” in the context of food versus ecosystems. Further, we aim to automatically learn (vector) representations of entities that incorporate this fine-grained knowledge and knowledge about the query. This work refines the automatic understanding of text passages using deep learning, a modern artificial intelligence paradigm.
This dissertation utilizes the semantic information extracted from entities to retrieve material (text and entities) relevant to a query. The interplay between text and entities is studied by addressing three related prediction problems: (1) identify entities that are relevant for the query, (2) understand an entity’s meaning in the context of the query, and (3) identify text passages that elaborate the connection between the query and an entity.
The research presented in this dissertation may be integrated into a larger system designed for answering complex topical queries such as dark chocolate health benefits, which require the search engine to automatically understand the connections between the query and the relevant material, thus transforming the search engine into an answering engine.
On Term Selection Techniques for Patent Prior Art Search
A patent is a set of exclusive rights granted to an inventor to protect an invention for a limited period of time. Patent prior art search involves finding previously granted patents, scientific articles, product descriptions, or any other published work that may be relevant to a new patent application. Many well-known information retrieval (IR) techniques (e.g., typical query expansion methods), which are proven effective for ad hoc search, are unsuccessful for patent prior art search. In this thesis, we mainly investigate the reasons that generic IR techniques are not effective for prior art search on the CLEF-IP test collection. First, we analyse the errors caused by data curation and experimental settings, such as applying the International Patent Classification codes assigned to the patent topics to filter the search results. Then, we investigate the influence of term selection on retrieval performance on the CLEF-IP prior art test collection, starting with the description section of the reference patent and using language models (LM) and BM25 scoring functions. We find that an oracular relevance feedback system, which extracts terms from the judged relevant documents, far outperforms the baseline (0.48 vs. 0.11 in mean average precision, MAP) and performs twice as well as the best participant in CLEF-IP 2010 (0.48 vs. 0.22). We find a very clear term selection value threshold for use when choosing terms. We also notice that most of the useful feedback terms are actually present in the original query and hypothesise that the baseline system can be substantially improved by removing negative query terms. We try four simple automated approaches to identify negative terms for query reduction, but we are unable to improve on the baseline performance with any of them. However, we show that a simple, minimal-feedback interactive approach, where terms are selected from only the first retrieved relevant document, outperforms the best result from CLEF-IP 2010, suggesting the promise of interactive methods for term selection in patent prior art search.
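The oracular feedback idea, extracting good query terms from judged relevant documents, can be sketched with a simple term-scoring heuristic. The log-odds style weight below is an illustrative assumption in the spirit of classic relevance-feedback term weights, not the thesis's exact selection criterion:

```python
import math
from collections import Counter

def feedback_terms(relevant_docs, collection_docs, k=10):
    """Rank candidate feedback terms by how much more frequent they are
    in the judged-relevant documents than in the collection overall,
    weighted by their relative frequency in the relevant set."""
    rel = Counter(t for d in relevant_docs for t in d.lower().split())
    col = Counter(t for d in collection_docs for t in d.lower().split())
    n_rel, n_col = sum(rel.values()), sum(col.values())
    def score(t):
        p_rel = rel[t] / n_rel
        p_col = (col[t] + 0.5) / n_col  # 0.5 smoothing for rare terms
        return p_rel * math.log(p_rel / p_col)
    return sorted(rel, key=score, reverse=True)[:k]
```

Terms frequent in the relevant set but rare in the collection rise to the top; applying a cut-off to this ranking mirrors the clear term-selection threshold reported in the thesis.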
Novelty and Diversity in Retrieval Evaluation
Queries submitted to search engines rarely provide a complete and precise
description of a user's information need.
Most queries are ambiguous to some extent, having multiple interpretations.
For example, the seemingly unambiguous query ``tennis lessons'' might be submitted
by a user interested in attending classes in her neighborhood, seeking lessons
for her child, looking for online videos lessons, or planning to start a business
teaching tennis.
Search engines face the challenging task of satisfying different groups of users
having diverse information needs associated with a given query.
One solution is to optimize ranking functions to satisfy diverse sets of information
needs.
Unfortunately, existing evaluation frameworks do not support such optimization.
Instead, ranking functions are rewarded for satisfying the most likely intent
associated with a given query.
In this thesis, we propose a framework and associated evaluation metrics that are
capable of optimizing ranking functions to satisfy diverse information needs.
Our proposed measures explicitly reward those ranking functions capable of presenting
the user with information that is novel with respect to previously viewed
documents.
Our measures reflect the quality of a ranking function by taking into account its ability to satisfy diverse users submitting a query.
Moreover, the task of identifying and establishing test frameworks to compare ranking functions at web scale can be tedious.
One reason for this problem is the dynamic nature of the web, where documents
are constantly added and updated, making it necessary for search engine developers
to seek additional human assessments.
Along with issues of novelty and diversity, we explore an approximate approach to comparing different ranking functions that overcomes the problem of incomplete human assessments.
We demonstrate that our approach is capable of accurately sorting ranking
functions based on their capability of satisfying diverse users, even in the
face of incomplete human assessments.
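A well-known measure in this family is alpha-nDCG (Clarke et al., SIGIR 2008), which discounts repeated coverage of the same query intent. Its unnormalized gain can be sketched as follows; this illustrates the family of measures rather than reproducing the thesis's exact proposals:

```python
import math

def alpha_dcg(ranking, alpha=0.5, depth=10):
    """alpha-DCG: each ranked document is represented by the set of
    query intents it covers; covering an intent already covered j times
    earns a gain of (1 - alpha) ** j, discounted by log2(rank + 1)."""
    times_covered = {}
    score = 0.0
    for rank, intents in enumerate(ranking[:depth], start=1):
        gain = sum((1 - alpha) ** times_covered.get(i, 0) for i in intents)
        score += gain / math.log2(rank + 1)
        for i in intents:
            times_covered[i] = times_covered.get(i, 0) + 1
    return score
```

Because the gain of an intent decays geometrically each time it reappears, a ranking that covers new intents outscores one that repeats an already-covered intent, which is exactly the novelty reward described above.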
A Language Model based Job Recommender
Matching candidates to job openings is a hard real-world problem of economic interest that thus far defies researchers' attempts to tackle it. Collaborative filtering methods, which have proven to be highly effective in other domains, have a difficult time finding success when applied to Human Resources. Aside from the well-known cold-start issue, there are other problems specific to the recruitment world that explain the poor results attained. In particular, fresh job openings arrive all the time and have relatively short expiration periods. In addition, there is a large volume of passive users who are not actively looking for a job, but who would consider a change if a suitable offer came their way. The two constraints combined suggest that content-based models may be advantageous. Previous attempts to attack the problem have tried to infer relevance from a variety of sources. Indirect information captured from web server and search engine logs, as well as direct feedback elicited from users or recruiters, has been used to construct models. In contrast, this thesis departs from previous methods and tries to exploit resume databases as a primary source of relevance information, a rich resource that in my view remains greatly underutilized. Relevance models are adapted for the task at hand and a formulation is derived to model job transitions as a Markov process, with the justification being based on David Ricardo's principle of comparative advantage. Empirical results are compiled following the Cranfield benchmarking methodology and compared against several standard competing algorithms.
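The Markov-process view of job transitions can be sketched directly: transition probabilities between job titles are estimated from the ordered positions listed in resumes. This is a minimal illustration; the relevance-model formulation derived in the thesis is considerably richer:

```python
from collections import Counter, defaultdict

def transition_model(careers):
    """Estimate P(next_job | current_job) from ordered job sequences
    extracted from resumes, treating careers as a first-order Markov
    process over job titles."""
    counts = defaultdict(Counter)
    for jobs in careers:
        for current, nxt in zip(jobs, jobs[1:]):
            counts[current][nxt] += 1
    return {job: {nxt: c / sum(nexts.values()) for nxt, c in nexts.items()}
            for job, nexts in counts.items()}
```

Given a candidate's current position, the learned distribution over next positions can then be used to rank open job postings.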
Adapting Information Retrieval Systems to Contexts: The Case of Difficult Queries
The field of information retrieval (IR) studies the mechanisms to find relevant information in one or more document collections in order to satisfy an information need. For an Information Retrieval System (IRS), the information to find is represented by "documents" and the information need takes the form of a "query" formulated by the user. IRS performance depends on the query. Queries for which the IRS fails (little or no relevant documents retrieved) are called "difficult queries" in the literature. This difficulty may be caused by term ambiguity, unclear query formulation, the lack of context for the information need, the nature and structure of the document collection, etc. This thesis aims at adapting IRS to contexts, particularly in the case of difficult queries. The manuscript is organized into five main chapters, besides acknowledgements, a general introduction, and conclusions and perspectives. The first chapter is an introduction to IR. We develop the concept of relevance, the retrieval models from the literature, query expansion, and the evaluation framework employed to validate our proposals. Each of the following chapters presents one of our contributions: every chapter states the research problem, surveys the related work, and presents our theoretical proposals and their validation on benchmark collections.
In chapter two, we present our research on handling ambiguous queries. Query term ambiguity can indeed lead to a poor selection of documents by the search engine. In the related work, the disambiguation methods that yield good performance are supervised; however, such methods are not applicable in a real IR context, as they require information that is normally unavailable. Moreover, in the literature, term disambiguation for IR is reported to be suboptimal. In this context, we propose an unsupervised query disambiguation method and show its effectiveness. Our approach is interdisciplinary, between the fields of natural language processing and IR. The goal of our unsupervised disambiguation method is to give more importance to the documents retrieved by the search engine that contain the query terms with the meanings identified by disambiguation. This re-ranking provides a new document list that contains more potentially relevant documents for the user. We tested this document re-ranking method after disambiguation using two different classification techniques (Naïve Bayes [Chifu and Ionescu, 2012] and spectral clustering [Chifu et al., 2015]), over three document collections and queries from the TREC competition (TREC7, TREC8, WT10G). We have shown that the disambiguation method works well in the case of poorly performing queries, where few relevant documents are retrieved by the search engine (a 7.9% improvement compared to state-of-the-art methods).
In chapter three, we present the work focused on query difficulty prediction. Indeed, if ambiguity is a difficulty factor, it is not the only one. We completed the range of difficulty predictors by relying on the state of the art. Existing predictors are not sufficiently effective, and therefore we introduce new difficulty prediction measures that combine predictors. We also propose a robust method to evaluate query difficulty predictors. Using predictor combinations on the TREC7 and TREC8 collections, we obtain an improvement of 7.1% in prediction quality compared to the state of the art [Chifu, 2013].
In the fourth chapter we focus on the application of difficulty predictors. Specifically, we propose a selective IR approach, that is to say, predictors are employed to decide which search engine, among several, would perform better for a query. The decision model is learned by an SVM (Support Vector Machine). We tested our model on TREC benchmark collections (Robust, WT10G, GOV2). The learned models classified the test queries with over 90% accuracy. Furthermore, retrieval results were improved by more than 11% in terms of performance, compared to non-selective methods [Chifu and Mothe, 2014].
In the last chapter, we treat an important issue in the field of IR: query expansion by adding terms. It is very difficult to predict the expansion parameters or to anticipate whether a query needs expansion or not. We present our contribution to optimizing the lambda parameter of RM3 (a pseudo-relevance feedback model for query expansion) on a per-query basis. We tested several hypotheses, both with and without prior information, searching for the minimum amount of information necessary for the optimization of the expansion parameter to be possible. The results are not satisfactory, even though we used a wide range of methods, such as SVMs, regression, logistic regression, and similarity measures. These findings therefore reinforce the conclusion regarding the difficulty of this optimization problem. The research was conducted not only during a three-month research stay at the Technion in Haifa, Israel, in 2013, but also thereafter, keeping in touch with the Technion team. In Haifa, we worked with Professor Oren Kurland and PhD student Anna Shtok.
In conclusion, in this thesis we propose new methods to improve the performance of IRS based on query difficulty. The results of the methods proposed in chapters two, three and four show significant improvements and open perspectives for future research. The analysis in chapter five confirms the difficulty of the optimization problem for the concerned parameter and encourages further investigation into selective query expansion.
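Post-retrieval difficulty predictors of the kind combined in chapter three often work from the distribution of retrieval scores. One well-known example from this literature is NQC (Normalized Query Commitment), introduced by Shtok, Kurland and colleagues; the sketch below assumes scores are already sorted in descending retrieval order:

```python
import math

def nqc(scores, corpus_score, k=100):
    """Normalized Query Commitment: the standard deviation of the top-k
    retrieval scores divided by the corpus score. A peaked score
    distribution (high NQC) tends to indicate an easier query."""
    top = scores[:k]
    mean = sum(top) / len(top)
    std = math.sqrt(sum((s - mean) ** 2 for s in top) / len(top))
    return std / abs(corpus_score)
```

Predictors such as this can feed the selective approach of chapter four: an SVM over predictor values decides which retrieval system to route each query to.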