Conversational Question Answering over Passages by Leveraging Word Proximity Networks
Question answering (QA) over text passages is a problem of long-standing
interest in information retrieval. Recently, the conversational setting has
attracted attention, where a user asks a sequence of questions to satisfy her
information needs around a topic. While this setup is a natural one and similar
to humans conversing with each other, it introduces two key research
challenges: understanding the context left implicit by the user in follow-up
questions, and dealing with ad hoc question formulations. In this work, we
demonstrate CROWN (Conversational passage ranking by Reasoning Over Word
Networks): an unsupervised yet effective system for conversational QA with
passage responses, that supports several modes of context propagation over
multiple turns. To this end, CROWN first builds a word proximity network (WPN)
from large corpora to store statistically significant term co-occurrences. At
answering time, passages are ranked by a combination of their similarity to the
question, and coherence of query terms within: these factors are measured by
reading off node and edge weights from the WPN. CROWN provides an interface
that is both intuitive for end-users, and insightful for experts for
reconfiguration to individual setups. CROWN was evaluated on TREC CAsT data,
where it achieved above-median performance in a pool of neural methods. (SIGIR 2020 demonstration.)
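The abstract's scoring idea can be sketched in a few lines: build a word proximity network (WPN) whose node weights are term frequencies and whose edge weights count windowed co-occurrences, then rank a passage by combining its term similarity to the question with the coherence of the shared question terms, both read off the network. This is a heavily simplified illustration, not the actual CROWN implementation; the tokenization, the window size, the linear combination, and all function names here are assumptions.

```python
from collections import Counter
from itertools import combinations

def build_wpn(corpus, window=3):
    """Toy word proximity network: node weights are term frequencies,
    edge weights count co-occurrences within a sliding window."""
    nodes, edges = Counter(), Counter()
    for doc in corpus:
        tokens = doc.lower().split()
        nodes.update(tokens)
        for i, t in enumerate(tokens):
            for u in tokens[i + 1 : i + 1 + window]:
                if t != u:
                    edges[frozenset((t, u))] += 1
    return nodes, edges

def score_passage(passage, question, nodes, edges, alpha=0.5):
    """Combine similarity to the question (node weights of shared terms)
    with coherence (edge weights among those shared terms)."""
    p_terms = set(passage.lower().split())
    q_terms = set(question.lower().split())
    shared = p_terms & q_terms
    sim = sum(nodes[t] for t in shared)
    coh = sum(edges[frozenset(pair)] for pair in combinations(shared, 2))
    return alpha * sim + (1 - alpha) * coh
```

Passages would then be ranked by this score in descending order; the real system supports several modes of propagating conversational context into the question term set, which this sketch omits.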
Question Answering over Curated and Open Web Sources
The last few years have seen an explosion of research on the topic of automated question answering (QA), spanning the communities of information retrieval, natural language processing, and artificial intelligence. This tutorial would cover the highlights of this very active period of growth for QA, to give the audience a grasp of the families of algorithms currently in use. We partition research contributions by the underlying source from which answers are retrieved: curated knowledge graphs, unstructured text, or hybrid corpora. We choose this dimension of partitioning as it is the most discriminative when it comes to algorithm design. Other key dimensions are covered within each sub-topic, such as the complexity of the questions addressed and the degrees of explainability and interactivity introduced in the systems. We would conclude the tutorial with the most promising emerging trends in the expanse of QA, which would help new entrants into this field make the best decisions to take the community forward. Much has changed in the community since the last tutorial on QA at SIGIR 2016, and we believe that this timely overview will benefit a large number of conference participants.
Conversational Question Answering on Heterogeneous Sources
Conversational question answering (ConvQA) tackles sequential information needs where contexts in follow-up questions are left implicit. Current ConvQA systems operate over homogeneous sources of information: either a knowledge base (KB), or a text corpus, or a collection of tables. This paper addresses the novel issue of jointly tapping into all of these together, this way boosting answer coverage and confidence. We present CONVINSE, an end-to-end pipeline for ConvQA over heterogeneous sources, operating in three stages: i) learning an explicit structured representation of an incoming question and its conversational context, ii) harnessing this frame-like representation to uniformly capture relevant evidences from KB, text, and tables, and iii) running a fusion-in-decoder model to generate the answer. We construct and release the first benchmark, ConvMix, for ConvQA over heterogeneous sources, comprising 3000 real-user conversations with 16000 questions, along with entity annotations, completed question utterances, and question paraphrases. Experiments demonstrate the viability and advantages of our method, compared to state-of-the-art baselines.
Information Retrieval: Recent Advances and Beyond
In this paper, we provide a detailed overview of the models used for
information retrieval in the first and second stages of the typical processing
chain. We discuss the current state-of-the-art models, including term-based
methods, semantic retrieval, and neural approaches. Additionally, we delve into
the key topics related to the learning process of these models. This way, the
survey offers a comprehensive understanding of the field and is of interest to
researchers and practitioners entering or working in the information retrieval
domain.
Neural Approaches to Feedback in Information Retrieval
Relevance feedback on search results indicates users' search intent and preferences. Extensive studies have shown that incorporating relevance feedback (RF) on the top k (usually 10) ranked results significantly improves the performance of re-ranking. However, most existing research on user feedback focuses on word-based retrieval models. Recently, neural retrieval models have shown their efficacy in capturing relevance matching in retrieval, but little research has been conducted on neural approaches to feedback. This leads us to study different aspects of feedback with neural approaches in this dissertation.
RF techniques are seldom used in real search scenarios since they can require significant manual effort to obtain explicit judgments for search results. However, with mobile and voice-based intelligent assistants becoming more popular, user feedback on result quality could potentially be collected during interactions with the assistants. We study both positive and negative RF to refine re-ranking performance. Positive feedback aims to find more relevant results given some known relevant results, while negative feedback targets identifying the first relevant result. In most cases, it is more beneficial to find the first relevant result than to find additional relevant results. However, negative feedback is much more challenging than positive feedback, since relevant results are usually similar to each other while non-relevant results can vary considerably.
We focus on the tasks of text retrieval and product search to study the different aspects of incorporating feedback for ranking refinement with neural approaches. Our contributions are: (1) we show that iterative relevance feedback (IRF) is more effective than top-k RF on answer passages, and we further improve IRF with neural approaches; (2) we propose an effective RF technique based on neural models for product search; (3) we study how to refine re-ranking with negative feedback for conversational product search; (4) we leverage negative feedback in user responses to ask clarifying questions in open-domain conversational search. Our research improves retrieval performance by incorporating feedback in interactive retrieval and approaches multi-turn conversational information-seeking tasks with a focus on positive and negative feedback.
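The dissertation's methods are neural, but the underlying idea of using positive and negative judgments to refine a query is classically captured by the Rocchio algorithm, which this sketch illustrates as a point of contrast: the query vector is pulled toward the centroid of judged-relevant results and pushed away from the centroid of non-relevant ones. This is the textbook baseline, not the dissertation's approach; the function name and default coefficients are assumptions.

```python
import numpy as np

def rocchio(query, relevant, nonrelevant, alpha=1.0, beta=0.75, gamma=0.15):
    """Classic Rocchio update: shift the query vector toward the centroid
    of relevant documents and away from the centroid of non-relevant ones."""
    q = alpha * np.asarray(query, dtype=float)
    if len(relevant):
        q += beta * np.mean(np.asarray(relevant, dtype=float), axis=0)
    if len(nonrelevant):
        q -= gamma * np.mean(np.asarray(nonrelevant, dtype=float), axis=0)
    return q
```

In an iterative-feedback setting, this update would be applied after each batch of judgments, re-ranking with the refreshed query vector each round; the asymmetric beta/gamma weighting reflects the observation above that negative signals are noisier than positive ones.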
UNIQORN: Unified Question Answering over RDF Knowledge Graphs and Natural Language Text
Question answering over knowledge graphs and other RDF data has been greatly advanced, with a number of good systems providing crisp answers for natural language questions or telegraphic queries. Some of these systems incorporate textual sources as additional evidence for the answering process, but cannot compute answers that are present in text alone. Conversely, systems from the IR and NLP communities have addressed QA over text, but barely utilize semantic data and knowledge. This paper presents the first QA system that can seamlessly operate over RDF datasets and text corpora, separately or both together, in a unified framework. Our method, called UNIQORN, builds a context graph on the fly, by retrieving question-relevant triples from the RDF data and/or the text corpus, where the latter case is handled by automatic information extraction. The resulting graph is typically rich but highly noisy. UNIQORN copes with this input by advanced graph algorithms for Group Steiner Trees that identify the best answer candidates in the context graph. Experimental results on several benchmarks of complex questions with multiple entities and relations show that UNIQORN, an unsupervised method with only five parameters, produces results comparable to the state-of-the-art on KGs, text corpora, and heterogeneous sources. The graph-based methodology provides user-interpretable evidence for the complete answering process.
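The Group Steiner Tree idea above can be illustrated with a minimal heuristic on an unweighted toy graph: given groups of terminal nodes (e.g., the nodes matching each question entity), find a small connected subgraph touching at least one node from every group. The sketch below uses the simple "try each root, attach the closest member of each group via a shortest path" heuristic; UNIQORN's actual algorithms operate on weighted, noisy context graphs and are considerably more advanced, and all names here are hypothetical.

```python
from collections import deque

def bfs_paths(graph, src):
    """Hop-count shortest paths from src: returns distance and
    predecessor maps over all reachable nodes."""
    prev, dist = {src: None}, {src: 0}
    queue = deque([src])
    while queue:
        v = queue.popleft()
        for w in graph[v]:
            if w not in dist:
                dist[w], prev[w] = dist[v] + 1, v
                queue.append(w)
    return dist, prev

def steiner_tree_nodes(graph, groups):
    """Greedy Group Steiner Tree heuristic: for each candidate root,
    connect the nearest member of every terminal group along a shortest
    path, then keep the smallest node set found over all roots."""
    best = None
    for root in graph:
        dist, prev = bfs_paths(graph, root)
        nodes, ok = {root}, True
        for group in groups:
            reachable = [t for t in group if t in dist]
            if not reachable:
                ok = False
                break
            t = min(reachable, key=dist.__getitem__)  # closest terminal
            while t is not None:                      # walk path to root
                nodes.add(t)
                t = prev[t]
        if ok and (best is None or len(nodes) < len(best)):
            best = nodes
    return best
```

In the QA setting, answer candidates would then be read off the nodes of the returned tree, which also serves as the user-interpretable evidence the abstract mentions.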