Predictive Engagement: An Efficient Metric For Automatic Evaluation of Open-Domain Dialogue Systems
User engagement is a critical metric for evaluating the quality of
open-domain dialogue systems. Prior work has focused on conversation-level
engagement by using heuristically constructed features such as the number of
turns and the total time of the conversation. In this paper, we investigate the
possibility and efficacy of estimating utterance-level engagement and define a
novel metric, {\em predictive engagement}, for automatic evaluation of
open-domain dialogue systems. Our experiments demonstrate that (1) human
annotators have high agreement on assessing utterance-level engagement scores;
(2) conversation-level engagement scores can be predicted from properly
aggregated utterance-level engagement scores. Furthermore, we show that the
utterance-level engagement scores can be learned from data. These scores can
improve automatic evaluation metrics for open-domain dialogue systems, as shown
by correlation with human judgements. This suggests that predictive engagement
can be used as real-time feedback for training better dialogue models.
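The abstract's core mechanism is aggregating utterance-level scores into a conversation-level score. Below is a minimal, hypothetical Python sketch of that idea, assuming scores in [0, 1] and simple mean aggregation; the paper's actual aggregator and score scale may differ.

```python
# Minimal sketch: aggregating utterance-level engagement scores into a
# conversation-level score. Mean aggregation and the [0, 1] range are
# illustrative assumptions, not the paper's exact formulation.

def conversation_engagement(utterance_scores: list[float]) -> float:
    """Aggregate per-utterance engagement scores (assumed to lie in [0, 1])."""
    if not utterance_scores:
        raise ValueError("need at least one utterance score")
    return sum(utterance_scores) / len(utterance_scores)

# Example: a dialogue whose later turns grow more engaging.
print(conversation_engagement([0.2, 0.5, 0.7, 0.9]))  # 0.575
```

In practice the per-utterance scores would come from a learned regressor rather than being given, but the aggregation step is the part the abstract highlights.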
A Neural Network Approach to Context-Sensitive Generation of Conversational Responses
We present a novel response generation system that can be trained end to end
on large quantities of unstructured Twitter conversations. A neural network
architecture is used to address sparsity issues that arise when integrating
contextual information into classic statistical models, allowing the system to
take into account previous dialog utterances. Our dynamic-context generative
models show consistent gains over both context-sensitive and
non-context-sensitive Machine Translation and Information Retrieval baselines.
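To make the idea of integrating contextual information concrete, here is a minimal, hypothetical Python sketch of folding previous dialogue utterances into a single bag-of-words context representation that a generative model could condition on; vocabulary handling and the decoder itself are omitted, and all names are illustrative.

```python
# Minimal sketch: previous utterances are collapsed into one bag-of-words
# vector that a decoder could consume alongside the current message.

from collections import Counter

def context_vector(history: list[str]) -> Counter:
    """Bag-of-words over all prior utterances in the conversation."""
    bag: Counter = Counter()
    for utterance in history:
        bag.update(utterance.lower().split())
    return bag

history = ["because of your game?", "yeah i am on my way now"]
message = "ok good luck"
print(context_vector(history + [message]).most_common(3))
```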
Dynamic Planning in Open-Ended Dialogue using Reinforcement Learning
Despite recent advances in natural language understanding and generation, and
decades of research on the development of conversational bots, building
automated agents that can carry on rich open-ended conversations with humans
"in the wild" remains a formidable challenge. In this work we develop a
real-time, open-ended dialogue system that uses reinforcement learning (RL) to
power a bot's conversational skill at scale. Our work pairs the succinct
embedding of the conversation state generated using SOTA (supervised) language
models with RL techniques that are particularly suited to a dynamic action
space that changes as the conversation progresses. Trained using crowd-sourced
data, our novel system substantially exceeds the (strong) baseline
supervised model with respect to several metrics of interest in a live
experiment with real users of the Google Assistant.
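A dynamic action space here means the set of candidate responses changes every turn. The following hypothetical Python sketch shows one common pattern for that setting, epsilon-greedy selection over freshly generated candidates scored by a value model; the scoring function below is a stand-in, not the paper's learned network.

```python
# Minimal sketch of RL over a dynamic action space: each turn the bot scores
# a fresh candidate set against the conversation state and picks one
# epsilon-greedily. score() is a stand-in for a learned Q(state, action).

import random

def score(state: str, candidate: str) -> float:
    # Toy proxy for a learned value model: word overlap with the state.
    return len(set(state.split()) & set(candidate.split()))

def select_response(state: str, candidates: list[str], epsilon: float = 0.1) -> str:
    if random.random() < epsilon:                           # explore
        return random.choice(candidates)
    return max(candidates, key=lambda c: score(state, c))  # exploit

state = "tell me about animals you like"
candidates = ["i like dolphins, do you?",
              "let's change topics",
              "animals are great, cats especially"]
print(select_response(state, candidates))
```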
X-Eval: Generalizable Multi-aspect Text Evaluation via Augmented Instruction Tuning with Auxiliary Evaluation Aspects
Natural Language Generation (NLG) typically involves evaluating the generated
text in various aspects (e.g., consistency and naturalness) to obtain a
comprehensive assessment. However, multi-aspect evaluation remains challenging
as it may require the evaluator to generalize to any given evaluation aspect
even if it is absent during training. In this paper, we introduce X-Eval, a
two-stage instruction tuning framework to evaluate the text in both seen and
unseen aspects customized by end users. X-Eval consists of two learning stages:
the vanilla instruction tuning stage that improves the model's ability to
follow evaluation instructions, and an enhanced instruction tuning stage that
exploits the connections between fine-grained evaluation aspects to better
assess text quality. To support the training of X-Eval, we collect
AspectInstruct, the first instruction tuning dataset tailored for multi-aspect
NLG evaluation spanning 27 diverse evaluation aspects with 65 tasks. To enhance
task diversity, we devise an augmentation strategy that converts human rating
annotations into diverse forms of NLG evaluation tasks, including scoring,
comparison, ranking, and Boolean question answering. Extensive experiments
across three essential categories of NLG tasks (dialogue generation,
summarization, and data-to-text), coupled with 21 aspects in meta-evaluation,
demonstrate that our X-Eval enables even a lightweight language model to
achieve a comparable, if not higher, correlation with human judgments compared
to state-of-the-art NLG evaluators such as GPT-4.
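The augmentation strategy, recasting one human rating annotation as several task formats, is the most mechanical part of the abstract. Here is a minimal, hypothetical Python sketch of that conversion; the templates, threshold, and field names are assumptions, not the released AspectInstruct schema.

```python
# Minimal sketch: one set of human ratings for an aspect is recast as four
# NLG-evaluation task formats (scoring, comparison, ranking, Boolean QA).
# Templates and the Boolean threshold are illustrative assumptions.

ratings = {"response A": 4, "response B": 2}  # human scores for one aspect

def to_tasks(aspect: str, ratings: dict[str, int], threshold: int = 3) -> list[dict]:
    (a, sa), (b, sb) = ratings.items()
    ranked = sorted(ratings, key=ratings.get, reverse=True)
    return [
        {"task": "scoring",    "input": a,                     "target": sa},
        {"task": "comparison", "input": f"{a} vs {b}",         "target": a if sa >= sb else b},
        {"task": "ranking",    "input": list(ratings),         "target": ranked},
        {"task": "boolean",    "input": f"Is '{a}' {aspect}?", "target": sa >= threshold},
    ]

for task in to_tasks("coherent", ratings):
    print(task)
```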
A multilingual neural coaching model with enhanced long-term dialogue structure
In this work we develop a fully data-driven conversational agent capable of carrying out motivational coaching sessions in Spanish, French, Norwegian, and English. Unlike most coaching and, more generally, well-being-related conversational agents in the literature, ours is not designed with hand-crafted rules. Instead, we directly model the coaching strategies that professionals apply with end users. To this end, we gather a set of virtual coaching sessions through a Wizard of Oz platform and apply state-of-the-art Natural Language Processing techniques. We employ a transfer learning approach, pretraining GPT2 neural language models and fine-tuning them on our corpus. However, since these models only take a local dialogue history as input, a simple fine-tuning procedure cannot capture the long-term dialogue strategies that appear in coaching sessions. To alleviate this issue, we first propose learning dialogue phase and scenario embeddings in the fine-tuning stage; these indicate to the model which part of the dialogue it is in and which kind of coaching session it is carrying out. Second, we develop a global deep learning system that controls the long-term structure of the dialogue. We also show that this global module can be used to visualize and interpret the decisions taken by the conversational agent, and that the learnt representations are comparable to dialogue acts. Automatic and human evaluation show that our proposals improve on the baseline models. Finally, interaction experiments with coaching experts indicate that the system is usable and gives rise to positive emotions in Spanish, French, and English, while the results in Norwegian point out that there is still work to be done in fully data-driven approaches for very low-resource languages.

This work has been partially funded by the Basque Government under grant PRE_2017_1_0357 and by the European Union's Horizon 2020 research and innovation programme under grant agreement No. 769872.
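The phase and scenario embeddings described above can be pictured as two extra learned vectors added to the token embeddings before they enter the language model. Below is a minimal, hypothetical PyTorch sketch, with sizes and the additive conditioning scheme assumed rather than taken from the paper.

```python
# Minimal sketch of phase/scenario conditioning: learned phase and scenario
# embeddings are broadcast over the token embeddings, telling the model where
# in the session it is and which kind of coaching session it is running.

import torch
import torch.nn as nn

vocab_size, n_phases, n_scenarios, d = 100, 4, 3, 16

tok_emb = nn.Embedding(vocab_size, d)
phase_emb = nn.Embedding(n_phases, d)        # e.g. opening / goal-setting / closing
scenario_emb = nn.Embedding(n_scenarios, d)  # e.g. physical-activity coaching

tokens = torch.randint(0, vocab_size, (1, 10))  # one dialogue-history window
phase = torch.tensor([2])                       # current dialogue phase id
scenario = torch.tensor([0])                    # current session type id

# Broadcast the two condition vectors over every token position.
x = (tok_emb(tokens)
     + phase_emb(phase)[:, None, :]
     + scenario_emb(scenario)[:, None, :])
print(x.shape)  # torch.Size([1, 10, 16])
```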