An Analysis of Mixed Initiative and Collaboration in Information-Seeking Dialogues
The ability to engage in mixed-initiative interaction is one of the core
requirements for a conversational search system. How to achieve this is poorly
understood. We propose a set of unsupervised metrics, termed ConversationShape,
that highlights the role each of the conversation participants plays by
comparing the distribution of vocabulary and utterance types. Using
ConversationShape as a lens, we take a closer look at several conversational
search datasets and compare them with other dialogue datasets to better
understand the types of dialogue interaction they represent, either driven by
the information seeker or the assistant. We discover that deviations from the
ConversationShape of a human-human dialogue of the same type are predictive of
the quality of a human-machine dialogue.
Comment: SIGIR 2020 short conference paper
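The abstract does not give the exact ConversationShape formulation, but the core idea of comparing the participants' vocabulary distributions can be sketched as follows; the Jensen-Shannon divergence over per-role unigram distributions and the toy utterances are illustrative stand-ins, not the paper's exact metrics.

```python
# Illustrative sketch only: approximates "comparing the distribution of
# vocabulary" between conversation roles with Jensen-Shannon divergence.
import math
from collections import Counter

def unigram_dist(utterances):
    """Unigram probability distribution over all tokens in one participant's turns."""
    counts = Counter(tok for u in utterances for tok in u.lower().split())
    total = sum(counts.values())
    return {tok: c / total for tok, c in counts.items()}

def js_divergence(p, q):
    """Jensen-Shannon divergence between two sparse distributions (base 2, in [0, 1])."""
    vocab = set(p) | set(q)
    m = {t: 0.5 * (p.get(t, 0.0) + q.get(t, 0.0)) for t in vocab}
    def kl(a):
        return sum(a.get(t, 0.0) * math.log2(a.get(t, 0.0) / m[t])
                   for t in vocab if a.get(t, 0.0) > 0)
    return 0.5 * kl(p) + 0.5 * kl(q)

seeker = ["how do i reset my password", "which page has the settings"]
assistant = ["you can reset it from the settings page", "open the account menu first"]
shape = js_divergence(unigram_dist(seeker), unigram_dist(assistant))
print(f"vocabulary divergence between roles: {shape:.3f}")
```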
A Survey on Asking Clarification Questions Datasets in Conversational Systems
The ability to understand a user's underlying needs is critical for conversational systems, especially given the limited input users provide in a conversation. In this setting, Asking Clarification Questions (ACQs) to reveal users' true intent from their queries or utterances emerges as an essential task. However, a key limitation of existing ACQ studies is their incomparability, stemming from inconsistent use of data and from distinct experimental setups and evaluation strategies. Therefore, to assist the development of ACQ techniques, this paper comprehensively analyses the current state of ACQ research, offering a detailed comparison of publicly available datasets and a discussion of the applied evaluation metrics, together with benchmarks for multiple ACQ-related tasks. Building on this analysis, we discuss a number of corresponding research directions for the investigation of ACQs and the development of conversational systems.
ConvAI3: Generating Clarifying Questions for Open-Domain Dialogue Systems (ClariQ)
This document presents a detailed description of the challenge on clarifying
questions for dialogue systems (ClariQ). The challenge is organized as part of
the Conversational AI challenge series (ConvAI3) at Search Oriented
Conversational AI (SCAI) EMNLP workshop in 2020. The main aim of
conversational systems is to return an appropriate answer in response to user
requests. However, some user requests might be ambiguous. In IR settings such
a situation is handled mainly through diversification of the search result
page. It is much more challenging in dialogue settings with limited bandwidth.
Therefore, in this challenge, we provide a common evaluation framework to
evaluate mixed-initiative conversations. Participants are asked to rank
clarifying questions in information-seeking conversations. The challenge is
organized in two stages: in Stage 1 we evaluate submissions in an offline
setting on single-turn conversations, and top participants of Stage 1 get the
chance to have their models tested by human annotators.
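As a rough sketch of the task setup, the toy example below ranks candidate clarifying questions against an ambiguous request and scores the ranking with mean reciprocal rank (MRR); the term-overlap scorer and the data are illustrative assumptions, not the official ClariQ baseline or evaluation script.

```python
# Minimal sketch: rank clarifying questions for an ambiguous request,
# then evaluate the ranking offline with mean reciprocal rank (MRR).

def score(request, question):
    """Naive relevance score: fraction of request terms the question covers."""
    req, q = set(request.lower().split()), set(question.lower().split())
    return len(req & q) / len(req)

def mean_reciprocal_rank(ranked_lists, relevant):
    """MRR over requests; `relevant` maps a request to its good questions."""
    total = 0.0
    for request, ranked in ranked_lists.items():
        for rank, question in enumerate(ranked, start=1):
            if question in relevant[request]:
                total += 1.0 / rank
                break
    return total / len(ranked_lists)

request = "tell me about jaguar"
candidates = [
    "are you interested in the jaguar animal or the car brand",
    "what is your favourite color",
    "do you mean the jaguar car",
]
ranked = sorted(candidates, key=lambda q: score(request, q), reverse=True)
relevant = {request: {candidates[0], candidates[2]}}
print("top question:", ranked[0])
print("MRR:", mean_reciprocal_rank({request: ranked}, relevant))
```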
BERT with History Answer Embedding for Conversational Question Answering
Conversational search is an emerging topic in the information retrieval
community. One of the major challenges to multi-turn conversational search is
to model the conversation history to answer the current question. Existing
methods either prepend history turns to the current question or use complicated
attention mechanisms to model the history. We propose a conceptually simple yet
highly effective approach referred to as history answer embedding. It enables
seamless integration of conversation history into a conversational question
answering (ConvQA) model built on BERT (Bidirectional Encoder Representations
from Transformers). We first explain our view that ConvQA is a simplified but
concrete setting of conversational search, and then we provide a general
framework to solve ConvQA. We further demonstrate the effectiveness of our
approach under this framework. Finally, we analyze the impact of different
numbers of history turns under different settings to provide new insights into
conversation history modeling in ConvQA.
Comment: Accepted to SIGIR 2019 as a short paper
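A minimal sketch of the history answer embedding idea, assuming a PyTorch/Hugging Face setup: a two-entry embedding table marks whether each token occurred in a previous turn's answer, and that signal is added to BERT's input token embeddings. The module structure and the span-prediction head are illustrative assumptions, not the authors' released code.

```python
# Sketch of a ConvQA model with history answer embeddings (HAE): each token
# carries an extra learned embedding indicating membership in a prior answer.
import torch.nn as nn
from transformers import BertModel

class ConvQAWithHAE(nn.Module):
    def __init__(self, model_name="bert-base-uncased"):
        super().__init__()
        self.bert = BertModel.from_pretrained(model_name)
        hidden = self.bert.config.hidden_size
        # 0 = token not in any history answer, 1 = token in a history answer
        self.history_answer_embedding = nn.Embedding(2, hidden)
        self.span_head = nn.Linear(hidden, 2)  # start/end logits for the answer span

    def forward(self, input_ids, attention_mask, history_answer_marker):
        # Inject the HAE signal by adding it to the word embeddings; position
        # and segment embeddings are still applied inside BERT via inputs_embeds.
        token_embeds = self.bert.embeddings.word_embeddings(input_ids)
        token_embeds = token_embeds + self.history_answer_embedding(history_answer_marker)
        outputs = self.bert(inputs_embeds=token_embeds, attention_mask=attention_mask)
        start_logits, end_logits = self.span_head(outputs.last_hidden_state).split(1, dim=-1)
        return start_logits.squeeze(-1), end_logits.squeeze(-1)
```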
Neural Approaches to Feedback in Information Retrieval
Relevance feedback on search results indicates users' search intent and preferences. Extensive studies have shown that incorporating relevance feedback (RF) on the top-k (usually 10) ranked results significantly improves re-ranking performance. However, most existing research on user feedback focuses on word-based retrieval models. Recently, neural retrieval models have shown their efficacy in capturing relevance matching, but little research has been conducted on neural approaches to feedback. This leads us to study different aspects of feedback with neural approaches in this dissertation.
RF techniques are seldom used in real search scenarios since they can require significant manual effort to obtain explicit judgments for search results. However, with mobile and voice-based intelligent assistants becoming more popular, user feedback on result quality could potentially be collected during interactions with the assistants. We study both positive and negative RF to refine re-ranking performance. Positive feedback aims to find more relevant results given some known relevant results, while negative feedback targets identifying the first relevant result. In most cases, finding the first relevant result is more beneficial than finding additional relevant results. However, negative feedback is much more challenging than positive feedback, since relevant results are usually similar to each other while non-relevant results can vary considerably.
We focus on the tasks of text retrieval and product search to study the different aspects of incorporating feedback for ranking refinement with neural approaches. Our contributions are: (1) we show that iterative relevance feedback (IRF) is more effective than top-k RF on answer passages, and we further improve IRF with neural approaches; (2) we propose an effective RF technique based on neural models for product search; (3) we study how to refine re-ranking with negative feedback for conversational product search; (4) we leverage negative feedback in user responses to ask clarifying questions in open-domain conversational search. Our research improves retrieval performance by incorporating feedback in interactive retrieval and approaches multi-turn conversational information-seeking tasks with a focus on positive and negative feedback.
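The dissertation's neural feedback models are not detailed in this abstract, but the underlying intuition of using positive and negative judgments to refine re-ranking can be sketched with a Rocchio-style update in embedding space; the weights and the random toy vectors below are illustrative assumptions, not the actual models.

```python
# Embedding-space sketch of feedback-driven re-ranking: move the query vector
# toward positively judged results and away from negatively judged ones.
import numpy as np

def rocchio_update(query_vec, pos_vecs, neg_vecs, alpha=1.0, beta=0.75, gamma=0.25):
    """Return an updated query embedding from positive/negative feedback vectors."""
    updated = alpha * query_vec
    if len(pos_vecs):
        updated = updated + beta * np.mean(pos_vecs, axis=0)
    if len(neg_vecs):
        updated = updated - gamma * np.mean(neg_vecs, axis=0)
    return updated

def rerank(query_vec, doc_vecs):
    """Rank documents by cosine similarity to the (possibly updated) query."""
    sims = doc_vecs @ query_vec / (
        np.linalg.norm(doc_vecs, axis=1) * np.linalg.norm(query_vec) + 1e-9)
    return np.argsort(-sims)

rng = np.random.default_rng(0)
query = rng.normal(size=64)
docs = rng.normal(size=(10, 64))
first_pass = rerank(query, docs)
# Simulated judgments on the first pass: top result relevant, runner-up not.
updated = rocchio_update(query, docs[[first_pass[0]]], docs[[first_pass[1]]])
print("before:", first_pass[:3], "after:", rerank(updated, docs)[:3])
```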
User Intent Prediction in Information-seeking Conversations
Conversational assistants are being progressively adopted by the general
population. However, they are not capable of handling complicated
information-seeking tasks that involve multiple turns of information exchange.
Due to the limited communication bandwidth in conversational search, it is
important for conversational assistants to accurately detect and predict user
intent in information-seeking conversations. In this paper, we investigate two
aspects of user intent prediction in an information-seeking setting. First, we
extract features based on the content, structural, and sentiment
characteristics of a given utterance, and use classic machine learning methods
to perform user intent prediction. We then conduct an in-depth feature
importance analysis to identify key features in this prediction task. We find
that structural features contribute most to the prediction performance. Given
this finding, we construct neural classifiers to incorporate context
information and achieve better performance without feature engineering. Our
findings can provide insights into the important factors and effective methods
of user intent prediction in information-seeking conversations.
Comment: Accepted to CHIIR 2019
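As a rough illustration of the classic-ML pipeline the abstract describes, the sketch below derives simple content, structural, and sentiment features from utterances and trains a standard classifier; the specific features, lexicons, labels, and toy data are assumptions for illustration, not the paper's feature set.

```python
# Minimal sketch: hand-crafted content/structural/sentiment features
# fed to a classic classifier for user intent prediction.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

POSITIVE = {"thanks", "great", "perfect"}
NEGATIVE = {"wrong", "not", "no"}

def featurize(utterance, turn_position, is_starter):
    toks = utterance.lower().split()
    return [
        len(toks),                          # content: utterance length
        1.0 if "?" in utterance else 0.0,   # content: contains a question mark
        float(turn_position),               # structural: position in the dialogue
        1.0 if is_starter else 0.0,         # structural: dialogue-initial turn
        sum(t in POSITIVE for t in toks),   # sentiment: positive lexicon hits
        sum(t in NEGATIVE for t in toks),   # sentiment: negative lexicon hits
    ]

# Toy labeled utterances with intent labels such as "question" / "feedback".
data = [
    ("how do i install the driver?", 0, True, "question"),
    ("thanks, that worked great", 3, False, "feedback"),
    ("no, that is the wrong version", 2, False, "feedback"),
    ("which version do you have?", 1, False, "question"),
]
X = np.array([featurize(u, p, s) for u, p, s, _ in data])
y = [label for *_, label in data]
clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
print(clf.predict([featurize("is there a newer release?", 4, False)]))
```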