A words-of-interest model of sketch representation for image retrieval
In this paper we propose a method for sketch-based image retrieval. A sketch is an expressive medium capable of conveying semantic messages from the user, and retrieving images with a sketch is in accordance with the user's cognitive psychology. In order to narrow the semantic gap between the user and the images in the database, we preprocess all images into sketches with the coherent line drawing algorithm. During sketch extraction, saliency maps are used to filter out redundant background information while preserving the important semantic information. We use a variant of the Words-of-Interest (WoI) model to retrieve images relevant to the user's query. The WoI model builds on the Bag-of-Visual-Words (BoW) model, which has proven successful for information retrieval; however, BoW ignores the spatial relationships among visual words, which are important for sketch representation. Our method exploits the spatial information of the query to select words of interest. Experimental results demonstrate that our sketch-based retrieval method achieves a good trade-off between retrieval accuracy and semantic representation of the user's query.
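As a rough illustration of the "words of interest" idea (not the authors' implementation), the sketch below keeps only the visual words whose keypoints lie near the strokes of the query sketch and then ranks database sketches by histogram intersection. The keypoint coordinates, codebook assignments, and query stroke mask are assumed to come from an upstream pipeline; all names are hypothetical.

```python
import numpy as np

def words_of_interest_histogram(keypoints_xy, word_ids, query_stroke_mask, vocab_size, radius=5):
    """Build a visual-word histogram that keeps only 'words of interest':
    visual words whose keypoints lie within `radius` pixels of a query stroke.
    keypoints_xy      : (N, 2) pixel coordinates of the image's keypoints
    word_ids          : (N,) codebook index assigned to each keypoint
    query_stroke_mask : 2-D boolean array, True where the query sketch has ink
    """
    hist = np.zeros(vocab_size)
    ys, xs = np.nonzero(query_stroke_mask)
    if len(xs) == 0:
        return hist
    strokes = np.stack([xs, ys], axis=1)
    for (x, y), word in zip(keypoints_xy, word_ids):
        # keep the keypoint only if some stroke pixel is within `radius`
        d2 = np.min((strokes[:, 0] - x) ** 2 + (strokes[:, 1] - y) ** 2)
        if d2 <= radius ** 2:
            hist[word] += 1
    total = hist.sum()
    # L1-normalise so images with many keypoints are not favoured
    return hist / total if total > 0 else hist

def rank_images(query_hist, database_hists):
    """Rank database images by histogram-intersection similarity to the query."""
    sims = [np.minimum(query_hist, h).sum() for h in database_hists]
    return np.argsort(sims)[::-1]
```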
Result Diversification in Search and Recommendation: A Survey
Diversifying returned results is an important research topic in retrieval
systems in order to satisfy both the various interests of customers and the
equal market exposure of providers. There has been growing attention on
diversity-aware research during recent years, accompanied by a proliferation of
literature on methods to promote diversity in search and recommendation.
However, diversity-aware studies in retrieval systems lack a systematic
organization and are rather fragmented. In this survey, we are the first to
propose a unified taxonomy for classifying the metrics and approaches of
diversification in both search and recommendation, which are two of the most
extensively researched fields of retrieval systems. We begin the survey with a
brief discussion of why diversity is important in retrieval systems, followed
by a summary of the various diversity concerns in search and recommendation,
highlighting their relationship and differences. For the survey's main body, we
present a unified taxonomy of diversification metrics and approaches in
retrieval systems, from both the search and recommendation perspectives. In the
later part of the survey, we discuss the open research questions of
diversity-aware research in search and recommendation in an effort to inspire
future innovations and encourage the implementation of diversity in real-world
systems.
Comment: 20 pages
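As a concrete illustration of what a diversification approach looks like, here is a minimal sketch of the classic Maximal Marginal Relevance (MMR) re-ranking strategy from the literature; it is shown only as background, not as a summary of the survey's taxonomy. Relevance scores and a pairwise item-similarity matrix are assumed as inputs.

```python
import numpy as np

def mmr_rerank(relevance, similarity, k, lam=0.7):
    """Maximal Marginal Relevance re-ranking.
    relevance  : (n,) relevance score of each candidate for the query
    similarity : (n, n) pairwise similarity between candidates
    k          : number of items to return
    lam        : trade-off; 1.0 = pure relevance, 0.0 = pure diversity
    """
    n = len(relevance)
    selected, remaining = [], set(range(n))
    while remaining and len(selected) < k:
        best, best_score = None, -np.inf
        for i in remaining:
            # redundancy = similarity to the most similar already-selected item
            redundancy = max(similarity[i][j] for j in selected) if selected else 0.0
            score = lam * relevance[i] - (1 - lam) * redundancy
            if score > best_score:
                best, best_score = i, score
        selected.append(best)
        remaining.remove(best)
    return selected
```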
Cheap IR Evaluation: Fewer Topics, No Relevance Judgements, and Crowdsourced Assessments
To evaluate Information Retrieval (IR) effectiveness, a possible approach is
to use test collections, which are composed of a collection of documents, a set
of descriptions of information needs (called topics), and a set of documents
judged relevant to each topic. Test collections are typically built in a
competition scenario: for example, in the well-known TREC initiative,
participants run their own retrieval systems over a set of topics and provide a
ranked list of retrieved documents; some of the retrieved documents (usually
the highest ranked) constitute the so-called pool, and their relevance is
evaluated by human assessors; the document lists are then used to compute
effectiveness metrics and rank the participant systems. Private web search
companies also run their own in-house evaluation exercises; although the
details are mostly unknown and the aims are somewhat different, the overall
approach shares several issues with the test collection approach.
The aim of this work is to: (i) develop and improve some state-of-the-art
work on the evaluation of IR effectiveness while saving resources, and (ii)
propose a novel, more principled and engineered, overall approach to test
collection based effectiveness evaluation.
[...]
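To make the last step of the pipeline above concrete (computing effectiveness metrics from a ranked run and pooled relevance judgements), here is a minimal, self-contained sketch of Average Precision and MAP; this is standard TREC-style evaluation shown for illustration, not code from this work, and a real evaluation would normally rely on trec_eval or an equivalent tool.

```python
def average_precision(ranked_docs, relevant_docs):
    """Average Precision for one topic.
    ranked_docs   : list of document ids, in the order returned by a system
    relevant_docs : set of document ids judged relevant for the topic
    """
    if not relevant_docs:
        return 0.0
    hits, precision_sum = 0, 0.0
    for rank, doc in enumerate(ranked_docs, start=1):
        if doc in relevant_docs:
            hits += 1
            precision_sum += hits / rank  # precision at this relevant document
    return precision_sum / len(relevant_docs)

def mean_average_precision(runs, qrels):
    """MAP over topics; `runs` and `qrels` are dicts keyed by topic id."""
    return sum(average_precision(runs[t], qrels.get(t, set())) for t in runs) / len(runs)
```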
Answering Topical Information Needs Using Neural Entity-Oriented Information Retrieval and Extraction
In the modern world, search engines are an integral part of human lives. The field of Information Retrieval (IR) is concerned with finding material (usually documents) of an unstructured nature (usually text) that satisfies an information need (query) from within large collections (usually stored on computers). The search engine then displays a ranked list of results relevant to the query. Traditional document retrieval algorithms match a query to a document using the overlap of words in both. However, the last decade has seen the focus shifting to leveraging the rich semantic information available in the form of entities. Entities are uniquely identifiable objects or things such as places, events, diseases, etc. that exist in the real or fictional world. Entity-oriented search systems leverage the semantic information associated with entities (e.g., names, types, etc.) to better match documents to queries. Web search engines would provide better search results if they understood the meaning of a query.
This dissertation advances the state-of-the-art in IR by developing novel algorithms that understand text (query, document, question, sentence, etc.) at the semantic level. To this end, this dissertation aims to understand the fine-grained meaning of entities from the context in which the entities have been mentioned, for example, “oysters” in the context of food versus ecosystems. Further, we aim to automatically learn (vector) representations of entities that incorporate this fine-grained knowledge and knowledge about the query. This work refines the automatic understanding of text passages using deep learning, a modern artificial intelligence paradigm.
This dissertation utilizes the semantic information extracted from entities to retrieve materials (text and entities) relevant to a query. The interplay between text and entities is studied by addressing three related prediction problems: (1) identify entities that are relevant for the query, (2) understand an entity’s meaning in the context of the query, and (3) identify text passages that elaborate the connection between the query and an entity.
The research presented in this dissertation may be integrated into a larger system designed for answering complex topical queries such as “dark chocolate health benefits”, which require the search engine to automatically understand the connections between the query and the relevant material, thus transforming the search engine into an answering engine.
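A minimal sketch of the general entity-oriented idea described above (not the dissertation's neural models): score a document against a query by combining lexical overlap with overlap of linked entities. The entity annotations are assumed to come from an upstream entity linker; all names and the weighting are illustrative.

```python
def entity_aware_score(query_terms, query_entities, doc_terms, doc_entities, alpha=0.5):
    """Combine lexical overlap with entity overlap.
    query_terms / doc_terms       : collections of (stemmed) words
    query_entities / doc_entities : collections of entity ids from an entity linker
    alpha                         : weight on the lexical component
    """
    def jaccard(a, b):
        return len(a & b) / len(a | b) if (a | b) else 0.0

    lexical = jaccard(set(query_terms), set(doc_terms))
    semantic = jaccard(set(query_entities), set(doc_entities))
    return alpha * lexical + (1 - alpha) * semantic
```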
Selecting which Dense Retriever to use for Zero-Shot Search
We propose the new problem of choosing which dense retrieval model to use
when searching on a new collection for which no labels are available, i.e. in a
zero-shot setting. Many dense retrieval models are readily available. Each
model, however, exhibits very different search effectiveness -- not just on the
test portion of the datasets on which the dense representations were learned
but, importantly, also across datasets whose data was not used to learn the
dense representations. This is because dense retrievers typically require
training on a large amount of labeled data to achieve satisfactory search
effectiveness in a specific dataset or domain. Moreover, effectiveness gains
obtained by dense retrievers on datasets for which they are able to observe
labels during training do not necessarily generalise to datasets that have not
been observed during training. Selecting the right model is, however, a hard
problem: through empirical experimentation we show that methods inspired by
recent work on unsupervised performance evaluation in the presence of domain
shift, from the areas of computer vision and machine learning, are not
effective for choosing highly performing dense retrievers in our setup. The
availability of reliable methods for selecting dense retrieval models in
zero-shot settings, without requiring the collection of labels for evaluation,
would streamline the widespread adoption of dense retrieval. This is therefore
an important new problem that we believe the information retrieval community
should consider. Implementations of the methods, along with raw result files
and analysis scripts, are made publicly available at
https://www.github.com/anonymized
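For readers unfamiliar with the setup, the sketch below shows the two ingredients of the problem in miniature: (i) how a dense retriever ranks documents by inner product of learned embeddings, and (ii) a purely hypothetical, label-free selection heuristic (picking the retriever whose top scores are most separated from the mean). The heuristic is an invented illustration of the kind of unsupervised signal one might try; the paper reports that signals of this kind, borrowed from unsupervised performance evaluation under domain shift, do not work well for retriever selection.

```python
import numpy as np

def dense_rank(query_emb, doc_embs):
    """Rank documents by inner-product similarity of dense embeddings."""
    scores = doc_embs @ query_emb
    return np.argsort(scores)[::-1], scores

def score_gap_heuristic(query_embs, doc_embs):
    """Hypothetical label-free selection signal for one retriever:
    average gap between the best and the mean document score per query.
    Larger gaps are (naively) taken to indicate more confident rankings."""
    gaps = []
    for q in query_embs:
        scores = doc_embs @ q
        gaps.append(scores.max() - scores.mean())
    return float(np.mean(gaps))

def select_retriever(encoded_corpora):
    """encoded_corpora: dict mapping retriever name -> (query_embs, doc_embs),
    i.e. the target collection already encoded by each candidate retriever.
    Returns the retriever with the highest heuristic score (illustration only)."""
    return max(encoded_corpora, key=lambda name: score_gap_heuristic(*encoded_corpora[name]))
```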
Unconfounded Propensity Estimation for Unbiased Ranking
The goal of unbiased learning to rank (ULTR) is to leverage implicit user
feedback for optimizing learning-to-rank systems. Among existing solutions,
automatic ULTR algorithms that jointly learn user bias models (i.e., propensity
models) with unbiased rankers have received a lot of attention due to their
superior performance and low deployment cost in practice. Despite their
theoretical soundness, their effectiveness is usually justified only under a
weak logging policy, where the ranking model can barely rank documents
according to their relevance to the query. However, when the logging policy is
strong, e.g., an industry-deployed ranking policy, the reported effectiveness
cannot be reproduced. In this paper, we first investigate ULTR from a causal
perspective and uncover a negative result: existing ULTR algorithms fail to
address the issue of propensity overestimation caused by the query-document
relevance confounder. We then propose a new learning objective based on
backdoor adjustment and highlight its differences from conventional propensity
models, which reveal the prevalence of propensity overestimation. On top of
that, we introduce a novel propensity model, the Logging-Policy-aware
Propensity (LPP) model, together with a distinctive two-step optimization
strategy that allows the LPP and ranking models to be learned jointly within
the automatic ULTR framework and realizes unconfounded propensity estimation
for ULTR. Extensive experiments on two benchmarks demonstrate the effectiveness
and generalizability of the proposed method.
Comment: 11 pages, 5 figures
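As background for the propensity discussion above, here is a minimal sketch of the standard inverse-propensity-scoring (IPS) objective used in unbiased learning to rank; it is generic background material, not the LPP model proposed in the paper. Each click is reweighted by the inverse of its estimated examination propensity, which is why an inaccurate propensity model directly distorts the training signal.

```python
import torch

def ips_pointwise_loss(scores, clicks, propensities):
    """Inverse-propensity-scored pointwise loss for unbiased learning to rank.
    scores       : (n,) ranking-model scores for documents shown in a session
    clicks       : (n,) 1.0 if the document was clicked, else 0.0
    propensities : (n,) estimated probability that each position was examined
    Each clicked document is up-weighted by 1/propensity so that, in expectation
    over examination, the loss matches training on true relevance labels.
    """
    weights = clicks / propensities.clamp(min=1e-6)  # IPS weights
    # logistic loss on the clicked documents: softplus(-s) = -log(sigmoid(s))
    return (weights * torch.nn.functional.softplus(-scores)).sum()
```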
On Multilabel Classification Methods of Incompletely Labeled Biomedical Text Data
Multilabel classification is often hindered by incompletely labeled training datasets; for some items of such a dataset (or even for all of them) some labels may be omitted, so we cannot know whether any given item is labeled fully and correctly. A classifier trained directly on an incompletely labeled dataset performs poorly. To overcome this problem, we add an extra step, training set modification, before training the classifier. In this paper, we try two algorithms for training set modification: weighted k-nearest neighbors (WkNN) and soft supervised learning (SoftSL). Both approaches are based on similarity measurements between data vectors. We performed experiments on AgingPortfolio (a text dataset) and then rechecked the results on the Yeast dataset (nontext genetic data). We tried SVM and RF classifiers on the original datasets and then on the modified ones. For each dataset, our experiments demonstrated that both classification algorithms performed considerably better when preceded by the training set modification step.
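A minimal sketch of the weighted k-nearest-neighbour idea used for training set modification, simplified relative to the paper's WkNN: for each item, candidate labels are accumulated from its most similar neighbours, weighted by cosine similarity, and labels that receive enough support are added before the classifier is trained. Function names and the voting threshold are illustrative.

```python
import numpy as np

def wknn_complete_labels(X, Y, k=5, threshold=0.5):
    """Augment an incompletely labeled multilabel training set.
    X : (n, d) feature vectors (e.g. tf-idf of documents)
    Y : (n, L) binary label matrix with possibly missing (zero) labels
    Returns a copy of Y where labels supported by similar neighbours are added.
    """
    # cosine similarity between all items
    norms = np.linalg.norm(X, axis=1, keepdims=True)
    norms[norms == 0] = 1.0
    Xn = X / norms
    sim = Xn @ Xn.T
    np.fill_diagonal(sim, -np.inf)  # never use an item as its own neighbour

    Y_new = Y.copy().astype(float)
    for i in range(X.shape[0]):
        nbrs = np.argsort(sim[i])[::-1][:k]        # k most similar items
        w = np.clip(sim[i][nbrs], 0.0, None)
        if w.sum() == 0:
            continue
        # similarity-weighted vote of the neighbours' known labels
        votes = (w[:, None] * Y[nbrs]).sum(axis=0) / w.sum()
        Y_new[i] = np.maximum(Y_new[i], (votes >= threshold).astype(float))
    return Y_new
```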
Perspectives on Large Language Models for Relevance Judgment
When asked, current large language models (LLMs) like ChatGPT claim that they
can assist us with relevance judgments. Many researchers think this would not
lead to credible IR research. In this perspective paper, we discuss possible
ways for LLMs to assist human experts along with concerns and issues that
arise. We devise a human-machine collaboration spectrum that allows
categorizing different relevance judgment strategies, based on how much the
human relies on the machine. For the extreme point of "fully automated
assessment", we further include a pilot experiment on whether LLM-based
relevance judgments correlate with judgments from trained human assessors. We
conclude the paper by providing two opposing perspectives - for and against the
use of LLMs for automatic relevance judgments - and a compromise perspective,
informed by our analyses of the literature, our preliminary experimental
evidence, and our experience as IR researchers.
We hope to start a constructive discussion within the community to avoid a
stalemate during review, where work is damned if it uses LLMs for evaluation
and damned if it doesn't.
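For the "fully automated assessment" end of the spectrum, agreement between LLM-based and human relevance labels can be quantified in the usual way; a minimal sketch using Cohen's kappa is shown below. The metric choice and label scale are illustrative, not necessarily those used in the paper's pilot experiment.

```python
from sklearn.metrics import cohen_kappa_score

def judge_agreement(human_labels, llm_labels):
    """Chance-corrected agreement between human and LLM relevance labels.
    Both inputs are lists of graded relevance labels (e.g. 0-3) for the same
    query-document pairs, in the same order.
    """
    return cohen_kappa_score(human_labels, llm_labels)

# hypothetical example: labels for six query-document pairs
human = [0, 2, 1, 3, 0, 2]
llm = [0, 2, 2, 3, 0, 1]
print(f"Cohen's kappa: {judge_agreement(human, llm):.2f}")
```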
Report from Dagstuhl Seminar 23031: Frontiers of Information Access Experimentation for Research and Education
This report documents the program and the outcomes of Dagstuhl Seminar 23031
``Frontiers of Information Access Experimentation for Research and Education'',
which brought together 37 participants from 12 countries.
The seminar addressed technology-enhanced information access (information
retrieval, recommender systems, natural language processing) and specifically
focused on developing more responsible experimental practices leading to more
valid results, both for research as well as for scientific education.
The seminar brought together experts from various sub-fields of information
access, namely IR, RS, NLP, information science, and human-computer
interaction, to create a joint understanding of the problems and challenges
presented by next-generation information access systems, from both the research
and the experimentation points of view, to discuss existing solutions and
impediments, and to propose next steps to be pursued in the area in order to
improve not only our research methods and findings but also the education of
the new generation of researchers and developers.
The seminar featured a series of long and short talks delivered by
participants, who helped in setting a common ground and in letting topics of
interest emerge to be explored as the main output of the seminar. This led to
the definition of five groups, which investigated challenges, opportunities,
and next steps in the following areas: reality check, i.e. conducting
real-world studies; human-machine-collaborative relevance judgment frameworks;
overcoming methodological challenges in information retrieval and recommender
systems through awareness and education; results-blind reviewing; and guidance
for authors.
Comment: Dagstuhl Seminar 23031 report