118 research outputs found
Understanding and Predicting Characteristics of Test Collections in Information Retrieval
Research community evaluations in information retrieval, such as NIST's Text
REtrieval Conference (TREC), build reusable test collections by pooling
document rankings submitted by many teams. The quality of the resulting
test collection thus depends greatly on the number of participating
teams and the quality of their submitted runs. In this work, we investigate: i)
how the number of participants, coupled with other factors, affects the quality
of a test collection; and ii) whether the quality of a test collection can be
inferred prior to collecting relevance judgments from human assessors.
Experiments conducted on six TREC collections illustrate how the number of
teams interacts with various other factors to influence the resulting quality
of test collections. We also show that the reusability of a test collection can
be predicted with high accuracy when the same document collection is used for
successive years in an evaluation campaign, as is common in TREC.
Comment: Accepted as a full paper at iConference 202
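The pooling process described above can be sketched as follows. This is an illustrative sketch only: the depth-k pooling variant, the run format, and all identifiers are assumptions for illustration, not details from the paper.

```python
# Illustrative sketch of depth-k pooling: the judgment pool is the union
# of the top-k documents across all submitted runs. The run format and
# k value are assumptions, not details from the paper.
def build_pool(runs, k):
    """runs: dict mapping run id -> ranked list of document ids."""
    pool = set()
    for ranking in runs.values():
        pool.update(ranking[:k])  # top-k documents from this run
    return pool

runs = {
    "teamA": ["d1", "d2", "d3", "d4"],
    "teamB": ["d3", "d5", "d1", "d6"],
}
print(sorted(build_pool(runs, k=2)))  # ['d1', 'd2', 'd3', 'd5']
```

With more participating teams the pooled union covers more of the potentially relevant documents, which is one way the number of teams influences the resulting collection quality.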
Report on the 1st Simulation for Information Retrieval Workshop (Sim4IR 2021) at SIGIR 2021
Simulation is used as a low-cost and repeatable means of experimentation. As Information Retrieval (IR) researchers, we are no strangers to the idea of using simulation within our own field---such as the traditional means of IR system evaluation as manifested through the Cranfield paradigm. While simulation has been used in other areas of IR research (such as the study of user behaviours), we argue that the potential for using simulation has been recognised by relatively few IR researchers so far.
To this end, the Sim4IR workshop was held online on July 15th, 2021 in conjunction with ACM SIGIR 2021. Building on past efforts, the goal of the workshop was to create a forum for researchers and practitioners to promote the methodology and development of simulation for IR evaluation and to encourage its more widespread use. Around 80 participants took part over two sessions. Two keynotes, three original paper presentations, and eight 'encore talks' were presented. The main conclusions from the resulting discussion were that simulation has the potential to address the limitations of existing evaluation methodologies, but that more research is needed toward developing realistic user simulators, and that the development and sharing of simulators, in the form of toolkits and online services, is critical for successful uptake.
Pretrained Transformers for Text Ranking: BERT and Beyond
The goal of text ranking is to generate an ordered list of texts retrieved
from a corpus in response to a query. Although the most common formulation of
text ranking is search, instances of the task can also be found in many natural
language processing applications. This survey provides an overview of text
ranking with neural network architectures known as transformers, of which BERT
is the best-known example. The combination of transformers and self-supervised
pretraining has been responsible for a paradigm shift in natural language
processing (NLP), information retrieval (IR), and beyond. In this survey, we
provide a synthesis of existing work as a single point of entry for
practitioners who wish to gain a better understanding of how to apply
transformers to text ranking problems and researchers who wish to pursue work
in this area. We cover a wide range of modern techniques, grouped into two
high-level categories: transformer models that perform reranking in multi-stage
architectures and dense retrieval techniques that perform ranking directly.
There are two themes that pervade our survey: techniques for handling long
documents, beyond typical sentence-by-sentence processing in NLP, and
techniques for addressing the tradeoff between effectiveness (i.e., result
quality) and efficiency (e.g., query latency, model and index size). Although
transformer architectures and pretraining techniques are recent innovations,
many aspects of how they are applied to text ranking are relatively well
understood and represent mature techniques. However, there remain many open
research questions, and thus in addition to laying out the foundations of
pretrained transformers for text ranking, this survey also attempts to
prognosticate where the field is heading.
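As a toy illustration of the dense retrieval idea mentioned above, documents can be ranked directly by the similarity between query and document embeddings. The two-dimensional vectors below are stand-ins; in practice the embeddings would come from a pretrained transformer encoder, which this sketch does not include.

```python
# Toy sketch of dense retrieval: rank documents by the dot product
# between query and document embeddings. The 2-d vectors are stand-ins;
# real embeddings come from a pretrained transformer encoder.
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def dense_rank(query_vec, doc_vecs):
    """Return document ids sorted by descending dot-product score."""
    return sorted(doc_vecs, key=lambda d: dot(query_vec, doc_vecs[d]),
                  reverse=True)

docs = {
    "d1": [0.9, 0.1],
    "d2": [0.2, 0.8],
}
print(dense_rank([1.0, 0.0], docs))  # ['d1', 'd2']
```

A reranker in a multi-stage architecture, by contrast, would rescore a candidate list produced by a cheaper first-stage retriever rather than scoring the whole corpus this way.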
Increasing the Efficiency of High-Recall Information Retrieval
The goal of high-recall information retrieval (HRIR) is to find all,
or nearly all, relevant documents while maintaining reasonable assessment effort.
Achieving high recall is central to applications such as
electronic discovery, systematic review, and the construction of test collections for
information retrieval tasks. State-of-the-art HRIR systems commonly rely on iterative relevance feedback in which
human assessors continually assess machine learning-selected documents.
The relevance of the assessed documents is then fed back to
the machine learning model to improve its ability to select the next set of
potentially relevant documents for assessment. In many instances, thousands of human assessments may be required to achieve high recall, and these assessments represent the main cost of such HRIR
applications. The effectiveness of these applications in achieving high recall
is therefore limited by their reliance on human input for the relevance assessment of
documents. In this thesis, we test different methods to improve the effectiveness and
efficiency of finding relevant documents using a state-of-the-art HRIR
system. With regard to effectiveness, we try to build a machine-learned
model that retrieves relevant documents more accurately.
For efficiency, we try to help human assessors make
relevance assessments more easily and quickly via our HRIR system.
Furthermore, we try to establish a stopping criterion for the
assessment process so as to avoid excessive assessment.
In particular, we hypothesize that total assessment effort to achieve high
recall can be reduced by using shorter document excerpts
(e.g., extractive summaries) in place of full documents for the assessment of
relevance and using a high-recall retrieval system based on continuous active
learning (CAL). In order to test this hypothesis, we implemented a
high-recall retrieval system based on a state-of-the-art implementation of CAL. This system could display
either full documents or short document excerpts for relevance assessment.
A search engine was also integrated into our system to give
assessors the option of conducting interactive search and judging.
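The CAL-style feedback loop described above can be sketched as follows. This is a minimal sketch: the trivial term-overlap scorer stands in for the machine-learned model, and the corpus, relevance oracle, seed terms, and batch size are all illustrative assumptions rather than the thesis's actual implementation.

```python
# Minimal sketch of a continuous active learning (CAL) loop: repeatedly
# rank unjudged documents, have an assessor judge the top batch, and
# feed the judgments back to improve the next ranking. A term-overlap
# scorer stands in for the machine-learned model; all data is toy data.
def cal_loop(corpus, is_relevant, seed_terms, batch_size, max_rounds):
    """corpus: dict doc_id -> set of terms; is_relevant: assessment oracle."""
    known_terms = set(seed_terms)
    judged, found = set(), []
    for _ in range(max_rounds):
        unjudged = [d for d in corpus if d not in judged]
        if not unjudged:
            break
        # rank by overlap with terms seen in relevant documents so far
        unjudged.sort(key=lambda d: len(corpus[d] & known_terms), reverse=True)
        for doc in unjudged[:batch_size]:
            judged.add(doc)
            if is_relevant(doc):            # human assessment step
                found.append(doc)
                known_terms |= corpus[doc]  # feedback refines the scorer
    return found

corpus = {
    "d1": {"law", "case"},
    "d2": {"law", "court"},
    "d3": {"cat", "dog"},
    "d4": {"court", "case"},
}
relevant = {"d1", "d2", "d4"}
print(cal_loop(corpus, lambda d: d in relevant, {"law"},
               batch_size=2, max_rounds=3))  # ['d1', 'd2', 'd4']
```

In the thesis's setting, the assessment step inside the loop is exactly where showing a short excerpt instead of a full document can reduce per-judgment effort.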
We conducted a simulation study, and separately, a 50-person controlled user study to test our hypothesis.
The results of the simulation study show that judging even a single
extracted sentence for relevance feedback may be adequate for CAL
to achieve high recall. The results of the controlled user study
confirmed that human assessors were able to find
a significantly larger number of relevant documents within limited time when they used the
system with paragraph-length document excerpts as opposed to full documents.
In addition, we found that allowing participants to compose and execute their
own search queries did not improve their ability to find relevant
documents and, by some measures, impaired performance.
Moreover, integrating sampling methods with active
learning can yield accurate estimates of the number of relevant documents, and thus help avoid excessive assessment.
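One simple form such an estimate can take is a uniform-sample scale-up, sketched below. The corpus size, sample size, and relevance oracle are illustrative assumptions; the thesis integrates sampling with active learning in more sophisticated ways than this.

```python
# Hedged sketch of a sampling-based estimate of the number of relevant
# documents: judge a uniform random sample and scale up the relevant
# fraction. Corpus size, sample size, and the relevance oracle are
# illustrative assumptions, not the thesis's actual method.
import random

def estimate_relevant(corpus_ids, is_relevant, sample_size, seed=0):
    """Estimate total relevant docs as N * (relevant fraction in sample)."""
    rng = random.Random(seed)
    sample = rng.sample(corpus_ids, sample_size)
    hits = sum(1 for doc in sample if is_relevant(doc))
    return len(corpus_ids) * hits / sample_size

# Toy corpus: 1000 documents, of which every 10th is relevant (100 total).
corpus = [f"d{i}" for i in range(1000)]
estimate = estimate_relevant(corpus, lambda d: int(d[1:]) % 10 == 0,
                             sample_size=200)
print(estimate)  # close to the true count of 100
```

An estimate like this can underpin a stopping criterion: once the number of relevant documents found approaches the estimate, further assessment yields diminishing returns.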