Simulating Users in Interactive Web Table Retrieval
Considering the multimodal signals of search items is beneficial for
retrieval effectiveness. Especially in web table retrieval (WTR) experiments,
accounting for multimodal properties of tables boosts effectiveness. However,
it remains an open question how the individual modalities affect the user
experience. Previous work analyzed WTR performance in ad-hoc retrieval
benchmarks, which neglects interactive search behavior and limits the
conclusions that can be drawn about real-world user environments.
To this end, this work presents an in-depth evaluation of simulated
interactive WTR search sessions as a more cost-efficient and reproducible
alternative to real user studies. As a first contribution of its kind, we
introduce interactive query reformulation strategies based on Doc2Query that
incorporate cognitive states of simulated user knowledge. Our evaluations
cover two perspectives on user effectiveness by considering different cost
paradigms, namely query-wise and time-oriented measures of effort. Our
multi-perspective
evaluation scheme reveals new insights about query strategies, the impact of
modalities, and different user types in simulated WTR search sessions.Comment: 4 pages + references; accepted at CIKM'2
Evaluation Measures for Relevance and Credibility in Ranked Lists
Recent discussions on alternative facts, fake news, and post-truth politics
have motivated research on creating technologies that allow people not only to
access information, but also to assess the credibility of the information
presented to them by information retrieval systems. Whereas technology is in
place for filtering information according to relevance and/or credibility, no
single measure currently exists for evaluating the accuracy or precision (and
more generally effectiveness) of both the relevance and the credibility of
retrieved results. One obvious way of doing so is to measure relevance and
credibility effectiveness separately, and then consolidate the two measures
into one. There are at least two problems with such an approach: (I) it is not
certain that the same criteria are applied to the evaluation of both relevance
and credibility (and applying different criteria introduces bias to the
evaluation); (II) many more and richer measures exist for assessing relevance
effectiveness than for assessing credibility effectiveness (hence risking
further bias).
Motivated by the above, we present two novel types of evaluation measures
that are designed to measure the effectiveness of both relevance and
credibility in ranked lists of retrieval results. Experimental evaluation on a
small human-annotated dataset (that we make freely available to the research
community) shows that our measures are expressive and intuitive in their
interpretation.
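As a point of reference for the "consolidate the two measures into one" idea mentioned above, here is a hedged Python sketch of one obvious baseline: a convex combination of graded relevance and credibility gains inside an nDCG-style computation. This is an illustration of the consolidation approach the abstract critiques, not the measures proposed in the paper:

```python
import math

def dcg(gains):
    """Discounted cumulative gain with the usual log2 position discount."""
    return sum(g / math.log2(i + 2) for i, g in enumerate(gains))

def combined_ndcg(rel, cred, alpha=0.5):
    """nDCG over a convex combination of relevance and credibility gains.

    `rel` and `cred` are graded labels for the ranked list; `alpha`
    trades off the two criteria. Both are illustrative assumptions.
    """
    gains = [alpha * r + (1 - alpha) * c for r, c in zip(rel, cred)]
    ideal = dcg(sorted(gains, reverse=True))
    return dcg(gains) / ideal if ideal > 0 else 0.0

# Perfectly ordered for both criteria -> 1.0; mixed quality -> lower:
print(combined_ndcg(rel=[2, 1, 0], cred=[2, 1, 0]))   # 1.0
print(combined_ndcg(rel=[2, 1, 0], cred=[0, 0, 2]))   # < 1.0
```

Note how this baseline exhibits exactly problem (I) from the abstract: the single `alpha` silently assumes relevance and credibility grades are judged on comparable scales.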
Denmark's Participation in the Search Engine TREC COVID-19 Challenge: Lessons Learned about Searching for Precise Biomedical Scientific Information on COVID-19
This report describes the participation of two Danish universities,
University of Copenhagen and Aalborg University, in the international search
engine competition on COVID-19 (the 2020 TREC-COVID Challenge) organised by the
U.S. National Institute of Standards and Technology (NIST) and its Text
Retrieval Conference (TREC) division. The aim of the competition was to find
the best search engine strategy for retrieving precise biomedical scientific
information on COVID-19 from what was, at that point in time, the largest dataset of
curated scientific literature on COVID-19 -- the COVID-19 Open Research Dataset
(CORD-19). CORD-19 was the result of a call to action to the tech community by
the U.S. White House in March 2020, and was shortly thereafter posted on Kaggle
as an AI competition by the Allen Institute for AI, the Chan Zuckerberg
Initiative, Georgetown University's Center for Security and Emerging
Technology, Microsoft, and the National Library of Medicine at the US National
Institutes of Health. CORD-19 contained over 200,000 scholarly articles (more
than 100,000 of them with full text) about COVID-19, SARS-CoV-2, and
related coronaviruses, gathered from curated biomedical sources. The TREC-COVID
challenge asked for the best way to (a) retrieve accurate and precise
scientific information, in response to some queries formulated by biomedical
experts, and (b) rank this information in decreasing order of relevance to
the query.
In this document, we describe the TREC-COVID competition setup, our
participation in it, and our resulting reflections and lessons learned about
state-of-the-art technology when faced with the acute task of retrieving
precise scientific information from a rapidly growing corpus of literature, in
response to highly specialised queries, in the middle of a pandemic.
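For context, the sketch below shows the kind of simple lexical baseline commonly fielded in TREC-COVID, using the rank_bm25 package over a toy stand-in for CORD-19 metadata. The three documents and the query are illustrative placeholders; this is not the system described in the report:

```python
# Toy BM25 baseline over CORD-19-style metadata; documents and query
# are illustrative stand-ins, not part of any actual submission.
from rank_bm25 import BM25Okapi  # pip install rank-bm25

docs = [
    "coronavirus origin and transmission in humans",
    "effect of weather on covid-19 spread",
    "mask effectiveness against sars-cov-2 transmission",
]
bm25 = BM25Okapi([d.split() for d in docs])

query = "how does covid-19 spread".split()
scores = bm25.get_scores(query)

# Rank documents in decreasing order of BM25 score, as the task requires.
for rank, i in enumerate(sorted(range(len(docs)), key=lambda i: -scores[i]), 1):
    print(rank, round(float(scores[i]), 3), docs[i])
```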
- …