Query Resolution for Conversational Search with Limited Supervision
In this work we focus on multi-turn passage retrieval as a crucial component
of conversational search. One of the key challenges in multi-turn passage
retrieval comes from the fact that the current turn query is often
underspecified due to zero anaphora, topic change, or topic return. Context
from the conversational history can be used to arrive at a better expression of
the current turn query, defined as the task of query resolution. In this paper,
we model the query resolution task as a binary term classification problem: for
each term appearing in the previous turns of the conversation decide whether to
add it to the current turn query or not. We propose QuReTeC (Query Resolution
by Term Classification), a neural query resolution model based on bidirectional
transformers. We propose a distant supervision method to automatically generate
training data by using query-passage relevance labels. Such labels are often
readily available in a collection either as human annotations or inferred from
user interactions. We show that QuReTeC outperforms state-of-the-art models,
and furthermore, that our distant supervision method can be used to
substantially reduce the amount of human-curated data required to train
QuReTeC. We incorporate QuReTeC in a multi-turn, multi-stage passage retrieval
architecture and demonstrate its effectiveness on the TREC CAsT dataset.
Comment: SIGIR 2020 full conference paper
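The binary term classification and distant supervision described above can be sketched as follows. This is a minimal illustration under my own assumptions (toy tokenizer, hypothetical function names), not the QuReTeC code: each term from the conversation history gets a positive label if it occurs in a passage known to be relevant to the current query.

```python
# Hypothetical sketch of QuReTeC-style distant supervision: history terms
# that appear in a gold relevant passage become positive labels for the
# binary term-classification task. Names and tokenization are illustrative.

STOPWORDS = {"the", "a", "an", "is", "of", "to", "in", "do", "does", "did"}

def tokenize(text):
    return [t.lower().strip(".,?!") for t in text.split()]

def distant_labels(history_turns, current_query, relevant_passage):
    """Label a history term 1 if it occurs in the relevant passage
    but not already in the current turn query, else 0."""
    current = set(tokenize(current_query))
    passage = set(tokenize(relevant_passage))
    labels = {}
    for turn in history_turns:
        for term in tokenize(turn):
            if term in STOPWORDS or term in current:
                continue
            labels[term] = 1 if term in passage else 0
    return labels

history = ["Who founded the Bauhaus school?", "Where was it located?"]
query = "When did it close?"
passage = "The Bauhaus school in Dessau closed in 1933."
labels = distant_labels(history, query, passage)
# "bauhaus" and "school" occur in the passage, so they are positive
# expansion terms; "founded" and "located" do not, so they get label 0.
```

In the actual model these labels would supervise a per-token classifier over a bidirectional transformer; the point here is only how relevance labels turn into term-level training data without human annotation.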
Look before you Hop: Conversational Question Answering over Knowledge Graphs Using Judicious Context Expansion
Fact-centric information needs are rarely one-shot; users typically ask follow-up questions to explore a topic. In such a conversational setting, the user's inputs are often incomplete, with entities or predicates left out, and ungrammatical phrases. This poses a huge challenge to question answering (QA) systems that typically rely on cues in full-fledged interrogative sentences. As a solution, we develop CONVEX: an unsupervised method that can answer incomplete questions over a knowledge graph (KG) by maintaining conversation context using entities and predicates seen so far and automatically inferring missing or ambiguous pieces for follow-up questions. The core of our method is a graph exploration algorithm that judiciously expands a frontier to find candidate answers for the current question. To evaluate CONVEX, we release ConvQuestions, a crowdsourced benchmark with 11,200 distinct conversations from five different domains. We show that CONVEX: (i) adds conversational support to any stand-alone QA system, and (ii) outperforms state-of-the-art baselines and question completion strategies.
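The frontier-expansion idea at the core of CONVEX can be illustrated with a toy knowledge graph. This is an assumed simplification, not the released implementation: context entities from earlier turns form the frontier, and only edges whose predicates are cued by the follow-up question are followed.

```python
# Minimal illustration of CONVEX-style judicious context expansion
# (assumed structure, not the authors' code): keep a frontier of context
# entities and expand only edges whose predicate matches the question.

KG = {  # toy knowledge graph: entity -> [(predicate, object)]
    "Breaking_Bad": [("creator", "Vince_Gilligan"), ("genre", "Crime_drama")],
    "Vince_Gilligan": [("born_in", "Richmond")],
}

def expand_frontier(frontier, question_terms):
    """Return candidate answers reached via predicates cued by the question."""
    candidates = []
    for entity in frontier:
        for predicate, obj in KG.get(entity, []):
            # judicious expansion: follow an edge only if some question
            # term matches its predicate, instead of expanding everything
            if any(term in predicate for term in question_terms):
                candidates.append(obj)
    return candidates

# Follow-up "who created it?" with context entity Breaking_Bad in the frontier:
answers = expand_frontier({"Breaking_Bad"}, ["creat"])
```

The real algorithm scores and ranks frontier nodes with several signals rather than substring matching, but the sketch shows why an incomplete question like "who created it?" can still be answered: the missing entity is supplied by the conversation context, not the question.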
FANDA: A Novel Approach to Perform Follow-up Query Analysis
Recent work on Natural Language Interfaces to Databases (NLIDB) has attracted
considerable attention. NLIDBs allow users to search databases using natural
language instead of SQL-like query languages. While this spares users from
having to learn query languages, multi-turn interaction with an NLIDB usually
involves multiple queries, where contextual information is vital to
understanding the users' query intents. In this paper, we address a typical
contextual understanding problem, termed follow-up query analysis. In spite of its
ubiquity, follow-up query analysis has not been well studied due to two primary
obstacles: the multifarious nature of follow-up query scenarios and the lack of
high-quality datasets. Our work summarizes typical follow-up query scenarios
and provides a new FollowUp dataset with query triples on 120 tables.
Moreover, we propose a novel approach FANDA, which takes into account the
structures of queries and employs a ranking model with weakly supervised
max-margin learning. The experimental results on FollowUp demonstrate the
superiority of FANDA over multiple baselines across multiple metrics.
Comment: Accepted by AAAI 2019
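The weakly supervised max-margin learning mentioned above can be sketched with a standard hinge-style ranking loss. The scores and names below are illustrative assumptions, not FANDA's implementation: the correct fusion of a follow-up query with its precedent query should outscore incorrect alternatives by a margin.

```python
# Hedged sketch of the max-margin ranking idea behind FANDA (illustrative
# values, not the authors' model): the best-ranked candidate interpretation
# of a follow-up query must beat the others by at least a fixed margin.

def hinge_rank_loss(pos_score, neg_scores, margin=1.0):
    """Standard max-margin ranking loss over one positive and many negatives."""
    return sum(max(0.0, margin - pos_score + n) for n in neg_scores)

# Candidate fusions of a follow-up query with its precedent, with model scores:
pos = 2.5          # score of the correct fused query
negs = [0.4, 2.1]  # scores of incorrect fusions
loss = hinge_rank_loss(pos, negs)
# only the hard negative (2.1) violates the margin, contributing 0.6
```

Weak supervision enters because only the final fused query is labeled, not the structural alignment that produced it; the ranking loss lets the model learn which alignments are plausible without alignment-level annotation.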
Classification and Resolution of Non-Sentential Utterances in Dialogue
This article addresses the problems of classification and resolution of non-sentential utterances (NSUs) in dialogue. NSUs are utterances that do not have a complete sentential form but convey a full clausal meaning given the conversational context, such as “To the contrary!” or “How much?”. The presented approach builds upon the work of Fernández, Ginzburg, and Lappin (2007), who provide a taxonomy of NSUs divided into 15 classes along with a small annotated corpus extracted from dialogue transcripts. The main part of this article focuses on the automatic classification of NSUs according to these classes. We show that a combination of novel linguistic features and active learning techniques yields a significant improvement in the classification accuracy over the state-of-the-art, and is able to mitigate the scarcity of labelled data. Based on this classifier, the article also presents a novel approach for the semantic resolution of NSUs in context using probabilistic rules.
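To make the classification task concrete, here is a deliberately crude rule-based sketch. It is my own simplification, not the article's feature-based classifier: a few surface cues route a fragment to a coarse class from the taxonomy (class names like "Sluice" and "Rejection" come from that taxonomy; the rules themselves are invented for illustration).

```python
# Toy illustration (not the article's method) of routing a non-sentential
# utterance to a coarse NSU class using simple surface cues.

WH_WORDS = {"how", "what", "which", "who", "where", "when", "why"}

def classify_nsu(utterance):
    """Assign a coarse NSU class from surface cues alone."""
    u = utterance.strip().lower()
    if u.endswith("?") and u.split()[0] in WH_WORDS:
        return "Sluice"             # bare wh-fragment, e.g. "How much?"
    if u.rstrip("!.").endswith("contrary") or u.startswith("no"):
        return "Rejection"          # e.g. "To the contrary!"
    if u.startswith("yes") or u.startswith("right"):
        return "Affirmative Answer"
    return "Other"

classify_nsu("How much?")         # a wh-fragment
classify_nsu("To the contrary!")  # a rejection
```

The article's point is precisely that such surface rules are insufficient, which is why it combines richer linguistic features with active learning; the sketch only fixes the input/output shape of the task.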