On Design and Evaluation of High-Recall Retrieval Systems for Electronic Discovery
High-recall retrieval is an information retrieval task model where the goal is to
identify, for human consumption, all, or as many as practicable, documents relevant to
a particular information need.
This thesis investigates the ways in which one can evaluate high-recall retrieval
systems and explores several design considerations that should be accounted for when designing
such systems for electronic discovery.
The primary contribution of this work is a framework for conducting high-recall retrieval
experimentation in a controlled and repeatable way.
This framework builds upon lessons learned from similar tasks to facilitate the use
of retrieval systems on collections that cannot be distributed due to the sensitivity
or privacy of the material contained within.
Accordingly, a Web API is used to distribute document collections,
information needs, and corresponding relevance assessments in a one-document-at-a-time manner.
Validation is conducted through the successful deployment of this architecture in the 2015 TREC
Total Recall track over the live Web and in controlled environments.
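The one-document-at-a-time distribution protocol can be sketched as a small dispensing service: a document is released only after the previous one has been judged, so the collection itself never leaves the server. The class and method names below are illustrative, not the actual Total Recall API.

```python
class DocumentDispenser:
    """Serves one document at a time; the next document is released
    only after the current one has been judged."""

    def __init__(self, docs, relevance):
        self.docs = list(docs)        # ordered document ids
        self.relevance = relevance    # official assessments, held server-side
        self.pos = 0
        self.awaiting_judgment = False
        self.judgments = {}

    def next_doc(self):
        if self.awaiting_judgment:
            raise RuntimeError("judge the current document first")
        if self.pos >= len(self.docs):
            return None               # collection exhausted
        self.awaiting_judgment = True
        return self.docs[self.pos]

    def judge(self, doc_id, label):
        assert doc_id == self.docs[self.pos]
        self.judgments[doc_id] = label
        self.awaiting_judgment = False
        self.pos += 1
        # the server returns the official relevance assessment
        return self.relevance[doc_id]
```

A participating system would loop over `next_doc`/`judge` until the dispenser returns `None`, never seeing more than one sensitive document at a time.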
Using the runs submitted to the Total Recall track and other test collections, we explore the
efficacy of a variety of new and existing effectiveness measures for high-recall retrieval tasks.
We find that summarizing the trade-off between recall and the effort required to attain that
recall is a non-trivial task and that several measures are sensitive to properties of the test
collections themselves.
We conclude that the gain curve, a de facto standard, and its variants are the most robust to
variations in test collection properties when evaluating high-recall systems.
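As an illustration, a gain curve simply plots recall against assessment effort; a minimal computation from the ordered sequence of relevance judgments might look like the following sketch (the function name is ours, not the thesis's):

```python
def gain_curve(judgments, total_relevant):
    """Recall achieved after each assessment.
    judgments: relevance labels (0/1) in the order documents were reviewed;
    total_relevant: number of relevant documents in the collection."""
    found = 0
    curve = []
    for rel in judgments:
        found += int(rel)
        curve.append(found / total_relevant)
    return curve

# e.g. gain_curve([1, 0, 1, 1, 0], 4) -> [0.25, 0.25, 0.5, 0.75, 0.75]
```

Because the x-axis is raw effort and the y-axis is a simple proportion, the curve depends less on collection-specific properties (such as prevalence) than many single-number summaries do.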
This thesis also explores the effect that non-authoritative, surrogate assessors can have
when training machine learning algorithms.
Contrary to popular thought, we find that surrogate assessors appear to be inferior
to authoritative assessors due to differences of opinion rather than innate inferiority in
their ability to identify relevance.
Furthermore, we show that several techniques for diversifying and liberalizing a surrogate
assessor's conception of relevance can yield substantial improvement in the surrogate
and, in some cases, rival the authority.
Finally, we present the results of a user study conducted to investigate the effect that
three archetypal high-recall retrieval systems have on judging behaviour.
Compared with random and uncertainty sampling, selecting documents for training by relevance sampling
significantly decreases the probability that a user will identify a selected document as relevant.
On the other hand, no substantial difference between the test conditions is observed in the time taken to render
such assessments.
Increasing the Efficiency of High-Recall Information Retrieval
The goal of high-recall information retrieval (HRIR) is to find all,
or nearly all, relevant documents while maintaining reasonable assessment effort.
Achieving high recall is a key problem in the use of applications such as
electronic discovery, systematic review, and construction of test collections for
information retrieval tasks. State-of-the-art HRIR systems commonly rely on iterative relevance feedback in which
human assessors continually assess machine learning-selected documents.
The relevance of the assessed documents is then fed back to
the machine learning model to improve its ability to select the next set of
potentially relevant documents for assessment. In many instances, thousands of human assessments might be required to achieve high recall. These assessments represent the main cost of such HRIR
applications. Therefore, their effectiveness in achieving high recall
is limited by their reliance on human input when assessing the relevance of
documents. In this thesis, we test different methods to improve the effectiveness and
efficiency of finding relevant documents using a state-of-the-art HRIR
system. With regard to effectiveness, we try to build a machine-learned
model that retrieves relevant documents more accurately.
For efficiency, we try to help human assessors make
relevance assessments more easily and quickly via our HRIR system.
Furthermore, we try to establish a stopping criterion for the
assessment process so as to avoid excessive assessment.
In particular, we hypothesize that total assessment effort to achieve high
recall can be reduced by using shorter document excerpts
(e.g., extractive summaries) in place of full documents for the assessment of
relevance and using a high-recall retrieval system based on continuous active
learning (CAL). In order to test this hypothesis, we implemented a
high-recall retrieval system based on a state-of-the-art implementation of CAL. This system could display
either full documents or short document excerpts for relevance assessment.
A search engine was also integrated into our system to provide
assessors the option of conducting interactive search and judging.
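The iterative relevance-feedback loop at the heart of CAL can be sketched as follows. This is a deliberately simplified stand-in: the term-overlap scorer below replaces the machine-learned model (typically logistic regression) used in actual implementations.

```python
def cal_loop(docs, assess, seed, max_assessments=100):
    """Continuous active learning sketch.
    docs: {doc_id: set of terms}; assess(doc_id) -> bool is the human oracle;
    seed: a document already known to be relevant."""
    judged = {seed: True}
    relevant_terms = set(docs[seed])
    for _ in range(max_assessments):
        unjudged = [d for d in docs if d not in judged]
        if not unjudged:
            break
        # rank unjudged documents by overlap with terms seen in
        # relevant documents, and ask the assessor about the best one
        best = max(unjudged, key=lambda d: len(docs[d] & relevant_terms))
        judged[best] = assess(best)
        if judged[best]:
            # relevance feedback: fold the new judgment into the model
            relevant_terms |= docs[best]
    return judged
```

The key property is that every human judgment immediately informs the selection of the next document, which is what lets shorter excerpts (rather than full documents) suffice as feedback.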
We conducted a simulation study, and separately, a 50-person controlled user study to test our hypothesis.
The results of the simulation study show that judging even a single
extracted sentence for relevance feedback may be adequate for CAL
to achieve high recall. The results of the controlled user study
confirmed that human assessors were able to find
a significantly larger number of relevant documents within limited time when they used the
system with paragraph-length document excerpts as opposed to full documents.
In addition, we found that allowing participants to compose and execute their
own search queries did not improve their ability to find relevant
documents and, by some measures, impaired performance.
Moreover, integrating sampling methods with active
learning can yield accurate estimates of the number of relevant documents, and thus avoid excessive assessments.
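A minimal sketch of how a random-sample prevalence estimate can feed a stopping rule is given below; the 80% recall target and the function names are illustrative choices, not those of the thesis.

```python
def estimate_relevant(collection_size, sample_labels):
    """Estimate the total number of relevant documents from the
    prevalence observed in a uniform random sample of the collection."""
    prevalence = sum(sample_labels) / len(sample_labels)
    return prevalence * collection_size

def can_stop(found_so_far, estimated_relevant, target_recall=0.8):
    """Stop assessing once the target fraction of the estimated
    relevant documents has been found."""
    return found_so_far >= target_recall * estimated_relevant
```

In practice the sample judgments can double as training data for the active learner, so the estimate comes at little extra assessment cost.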
Technology Assisted Reviews: Finding the Last Few Relevant Documents by Asking Yes/No Questions to Reviewers
The goal of a technology-assisted review is to achieve high recall with low
human effort. Continuous active learning algorithms have demonstrated good
performance in locating the majority of relevant documents in a collection,
however, their performance reaches a plateau when 80%-90% of them have been
found. Finding the last few relevant documents typically requires exhaustively
reviewing the collection. In this paper, we propose a novel method to identify
these last few, but significant, documents efficiently. Our method makes the
hypothesis that entities carry vital information in documents, and that
reviewers can answer questions about the presence or absence of an entity in
the missing relevant documents. Based on this, we devise a sequential Bayesian
search method that selects the optimal sequence of questions to ask. The
experimental results show that our proposed method can greatly improve
performance while requiring less reviewing effort.
Comment: This paper is accepted by SIGIR 201
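A much-simplified sketch of such a sequential question-asking scheme is shown below: maintain a posterior over which documents are the missing relevant ones, greedily pick the entity whose presence splits the posterior mass closest to 50/50 (a proxy for expected information gain), and condition on the reviewer's yes/no answer. The noiseless-answer assumption and the greedy selection are our simplifications, not the paper's exact method.

```python
def pick_question(posterior, doc_entities):
    """Choose the entity whose presence splits the current posterior
    mass closest to 50/50."""
    entities = sorted(set().union(*doc_entities.values()))
    def mass_with(e):
        return sum(p for d, p in posterior.items() if e in doc_entities[d])
    return min(entities, key=lambda e: abs(mass_with(e) - 0.5))

def update(posterior, doc_entities, entity, answer):
    """Condition the posterior on a yes/no answer about the entity,
    assuming noiseless answers, and renormalize."""
    kept = {d: p for d, p in posterior.items()
            if (entity in doc_entities[d]) == answer}
    total = sum(kept.values())
    return {d: p / total for d, p in kept.items()}
```

Each answer halves (roughly) the candidate set, so a handful of questions can isolate the remaining relevant documents without an exhaustive review.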
Filtering News from Document Streams: Evaluation Aspects and Modeled Stream Utility
Events like hurricanes, earthquakes,
or accidents can impact a large number of people. Not only are people in the
immediate vicinity of the event affected, but concerns about their well-being are
shared by the local government and well-wishers across the world.
The latest information about news events
could be of use to government and aid agencies in order to make informed decisions on
providing necessary support, security and relief. The general public
avails of news updates via dedicated news feeds or broadcasts, and lately,
via social media services
like Facebook or Twitter.
Retrieving the latest information about newsworthy events from the world-wide web
is thus of importance to a large section of society.
As new content on a multitude of topics is continuously being published on the web,
specific event related information needs to be filtered from the resulting
stream of documents.
We present in this thesis, a user-centric evaluation measure for
evaluating systems that filter news related information from document streams.
Our proposed evaluation measure, Modeled Stream Utility (MSU), models
users accessing information from a stream of sentences
produced by a news update filtering system.
The user model allows for simulating a large number of users with different
characteristic stream browsing behavior. Through simulation,
MSU estimates the utility of a system for an
average user browsing a stream of sentences.
Our results show that system performance is sensitive to a user population's
stream browsing behavior and that
existing evaluation metrics correspond to very specific types of user behavior.
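The flavor of such a simulation can be conveyed with a toy user model in the spirit of MSU: each simulated user reads the stream sentence by sentence, continues after each sentence with some persistence probability, and accrues utility for relevant sentences minus a per-sentence reading cost. The parameter values and the utility formula below are illustrative, not those of the thesis.

```python
import random

def stream_utility(sentences, relevant, persistence=0.9, read_cost=0.1, seed=0):
    """Utility of the stream for one simulated user."""
    rng = random.Random(seed)
    read = gained = 0
    for s in sentences:
        read += 1
        gained += int(s in relevant)
        if rng.random() > persistence:
            break  # the user abandons the stream
    return gained - read_cost * read

def modeled_stream_utility(sentences, relevant, n_users=100):
    """Average utility over a population of simulated users, each with
    its own random browsing trajectory."""
    return sum(stream_utility(sentences, relevant, seed=i)
               for i in range(n_users)) / n_users
```

Varying `persistence` (or drawing it per user from a distribution) is what lets a single measure represent populations with different stream-browsing behavior.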
To evaluate systems that filter sentences from a document stream,
we need a set of judged sentences. This judged set is
a subset of all the sentences returned by all systems, and is
typically constructed by pooling
together the highest quality sentences,
as determined by respective system assigned scores for each sentence.
Sentences in the pool are manually assessed and
the resulting set of judged sentences is then used to compute system performance metrics.
In this thesis, we investigate the effect that including duplicates of
judged sentences in the judged set has on system performance evaluation. We also develop an
alternative pooling methodology that, given the MSU user model,
selects sentences for pooling based on the probability of a sentence being read by
modeled users.
Our research lays the foundation for interesting future work for utilizing
user-models in different aspects of evaluation of stream filtering systems.
The MSU measure enables incorporation of different
user models. Furthermore, the applicability of MSU could be extended through
calibration based on user behavior.
Discovering Play Store Reviews Related to Specific Android App Issues
Mobile app reviews may contain information relevant to developers. Developers can investigate these reviews to see what users of their apps are complaining about. However, the huge volume of incoming reviews is impractical to analyze manually. Existing research that attempts to extract this information suffers from two major issues: supervised machine learning methods are usually pre-trained and thus do not give developers the freedom to define the app issue they are interested in, whereas unsupervised methods do not guarantee that a particular app-issue topic will be discovered.
In this thesis, we attempt to devise a framework that would allow developers to define topics related to app issues at any time, and with minimal effort, discover as many reviews related to the issue as possible. Scalable Continuous Active Learning (S-CAL) is an algorithm that can be used to quickly train a model to retrieve documents with high recall. First, we investigate whether S-CAL can be used as a tool for training models to retrieve reviews about a specific app issue. We also investigate whether a model trained to retrieve reviews about a specific issue for one app can be used to do the same for a separate app facing the same issue. We further investigate transfer learning methods to improve retrieval performance for the separate apps.
Through a series of experiments, we show that S-CAL can be used to quickly train models that can retrieve reviews about a particular issue. We show that developers can discover relevant information during the process of training the model, and that more information is discovered this way than can be found using keyword search under similar time restrictions. Then, we show that models trained using S-CAL can indeed be reused for retrieving reviews for a separate app, and that additional training using transfer learning protocols can improve performance for models that performed below expectation.
Finally, we compare the performance of the models trained by S-CAL at retrieving reviews for a separate app against that of two state-of-the-art app review analysis methods, one of which uses supervised learning while the other uses unsupervised learning. We show that, at the task of retrieving relevant reviews about a particular topic, models trained by S-CAL consistently outperform the existing state-of-the-art methods.
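The cross-app reuse idea can be sketched as follows: a model trained on one app's judged reviews scores another app's reviews, and a handful of new judgments refines it. The centroid-of-relevant-terms "model" is a toy stand-in for the classifier S-CAL actually trains, and all names here are ours.

```python
def train(judged):
    """judged: list of (terms, label) pairs; the 'model' is simply
    the union of terms from reviews judged relevant."""
    centroid = set()
    for terms, label in judged:
        if label:
            centroid |= terms
    return centroid

def score(centroid, terms):
    """Rank a new review by term overlap with the model."""
    return len(centroid & terms)

def fine_tune(centroid, new_judged):
    """Transfer step: fold a few judgments from the new app into
    the model trained on the original app."""
    return centroid | train(new_judged)
```

The point of the sketch is only that the learned representation is app-independent enough to transfer, while a small amount of target-app feedback can recover issue vocabulary the source app never exhibited.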
Determining the Utility of Key-term Highlighting for High Recall Information Retrieval Systems
High-recall information retrieval (HRIR) is an important tool used in tasks such as electronic discovery ("eDiscovery") and systematic review of medical research. Applications of HRIR often use a human as the oracle to determine the relevance of immense numbers of documents, which is expensive in both time and money. Various methods for reducing the amount of time spent per assessment and improving the quality of assessors have been proposed to improve these systems.
For this thesis, we examine the method of presenting documents with key terms highlighted in place of plain-text documents. This is commonly accepted as a positive feature that achieves both of the previously mentioned improvements, but there is currently a lack of empirical evidence to support its effectiveness. We describe a user study in which participants are assigned to one of two variations of an HRIR system (key-term highlighting vs. plain text), followed by a post-task questionnaire. Our results failed to show a statistically significant improvement for labelling documents with key-term highlighting over plain text on any of recall, precision, and F1, and suggest that highlighting may negatively affect retention of concepts.
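For concreteness, key-term highlighting of the kind studied can be implemented with a simple pattern substitution; the `**…**` markers below are a textual stand-in for the visual highlighting in the interface, and the function is an illustrative sketch rather than the study's actual code.

```python
import re

def highlight(text, key_terms):
    """Wrap each key term in **...** markers, matching whole words
    only, case-insensitively."""
    # longest-first so multi-word terms win over their substrings
    terms = sorted(key_terms, key=len, reverse=True)
    pattern = re.compile(r"\b(" + "|".join(map(re.escape, terms)) + r")\b",
                         re.IGNORECASE)
    return pattern.sub(lambda m: "**" + m.group(0) + "**", text)
```

The word-boundary anchors matter: without them, a term like "fire" would also be marked inside "firefighters", which is one way careless highlighting can mislead an assessor.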
Our study provides empirical evidence for how the use of key-term highlighting affects an assessor's ability to label documents, and provides insight into when including this feature may be harmful rather than helpful.