Conversational Machine Comprehension: a Literature Review
Conversational Machine Comprehension (CMC), a research track in
conversational AI, expects the machine to understand an open-domain natural
language text and thereafter engage in a multi-turn conversation to answer
questions related to the text. While most of the research in Machine Reading
Comprehension (MRC) revolves around single-turn question answering (QA),
multi-turn CMC has recently gained prominence, thanks to the advancement in
natural language understanding via neural language models such as BERT and the
introduction of large-scale conversational datasets such as CoQA and QuAC. The
rise in interest has, however, led to a flurry of concurrent publications, each
with a different yet structurally similar modeling approach and an inconsistent
view of the surrounding literature. With the volume of model submissions to
conversational datasets increasing every year, there exists a need to
consolidate the scattered knowledge in this domain to streamline future
research. This literature review attempts to provide a holistic overview of
CMC with an emphasis on the common trends across recently published models,
specifically in their approach to tackling conversational history. The review
synthesizes a generic framework for CMC models while highlighting the
differences in recent approaches and intends to serve as a compendium of CMC
for future researchers.
Comment: Accepted to COLING 202
Neural Response Ranking for Social Conversation: A Data-Efficient Approach
The overall objective of 'social' dialogue systems is to support engaging,
entertaining, and lengthy conversations on a wide variety of topics, including
social chit-chat. Apart from raw dialogue data, user-provided ratings are the
most common signal used to train such systems to produce engaging responses. In
this paper we show that social dialogue systems can be trained effectively from
raw unannotated data. Using a dataset of real conversations collected in the
2017 Alexa Prize challenge, we developed a neural ranker for selecting 'good'
system responses to user utterances, i.e. responses which are likely to lead to
long and engaging conversations. We show that (1) our neural ranker
consistently outperforms several strong baselines when trained to optimise for
user ratings; (2) when trained on larger amounts of data and only using
conversation length as the objective, the ranker performs better than the one
trained using ratings -- ultimately reaching a Precision@1 of 0.87. This
advance will make data collection for social conversational agents simpler and
less expensive in the future.
Comment: 2018 EMNLP Workshop SCAI: The 2nd International Workshop on
Search-Oriented Conversational AI. Brussels, Belgium, October 31, 201
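The ranking setup the abstract describes, scoring candidate responses and measuring how often the top-ranked one is "good", can be sketched in miniature. The following is an illustrative toy, not the authors' actual neural architecture: it uses a pointwise logistic scorer over synthetic feature vectors, with a hidden "quality" direction standing in for the conversation-length signal, and then computes Precision@1 over ranked candidate lists. All names and the feature construction are assumptions made for the sketch.

```python
import math
import random

rng = random.Random(0)
DIM = 16  # size of the toy (context, response) feature vector

def randvec():
    return [rng.gauss(0.0, 1.0) for _ in range(DIM)]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def sigmoid(z):
    # numerically stable logistic function
    if z >= 0:
        return 1.0 / (1.0 + math.exp(-z))
    ez = math.exp(z)
    return ez / (1.0 + ez)

# Hidden "quality" direction: a stand-in for whatever property of a response
# leads to a long, engaging conversation. Purely illustrative.
w_true = randvec()

def make_pair():
    # One (context, candidate-response) pair as a feature vector, with a
    # pseudo-label: 1 if it would lead to a "long conversation", else 0.
    x = randvec()
    y = 1.0 if dot(x, w_true) > 0 else 0.0
    return x, y

train = [make_pair() for _ in range(2000)]

# Pointwise logistic ranker trained with plain SGD on the binary
# conversation-length signal (in place of user ratings).
w = [0.0] * DIM
b = 0.0
lr = 0.1
for _ in range(5):
    for x, y in train:
        g = sigmoid(dot(x, w) + b) - y  # gradient of log-loss w.r.t. the logit
        for i in range(DIM):
            w[i] -= lr * g * x[i]
        b -= lr * g

def precision_at_1(n_queries=200, n_cands=10):
    """Rank candidates for each query by score; return the fraction of
    queries where the top-ranked candidate is a positive one."""
    hits = 0.0
    for _ in range(n_queries):
        cands = [make_pair() for _ in range(n_cands)]
        best = max(cands, key=lambda xy: dot(xy[0], w) + b)
        hits += best[1]
    return hits / n_queries

print(f"Precision@1: {precision_at_1():.2f}")
```

On this separable toy data the learned scorer recovers the hidden direction and Precision@1 approaches 1.0; the paper's reported 0.87 is over real, noisy Alexa Prize conversations with a neural scorer rather than a linear one.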