Keyword-Guided Neural Conversational Model
We study the problem of imposing conversational goals/keywords on open-domain
conversational agents, where the agent is required to lead the conversation to
a target keyword smoothly and quickly. Solving this problem enables the
application of conversational agents in many real-world scenarios, e.g.,
recommendation and psychotherapy. The dominant paradigm for tackling this
problem is to 1) train a next-turn keyword classifier, and 2) train a
keyword-augmented response retrieval model. However, existing approaches in
this paradigm have two limitations: 1) the training and evaluation datasets for
next-turn keyword classification are directly extracted from conversations
without human annotations; they are therefore noisy and correlate poorly with
human judgements, and 2) during keyword transition, the agents rely solely on
the similarities between word embeddings to move closer to the target keyword,
which may not reflect how humans converse. In this paper, we assume that human
conversations are grounded on commonsense and propose a keyword-guided neural
conversational model that can leverage external commonsense knowledge graphs
(CKG) for both keyword transition and response retrieval. Automatic evaluations
suggest that commonsense improves the performance of both next-turn keyword
prediction and keyword-augmented response retrieval. In addition, both
self-play and human evaluations show that our model produces responses with
smoother keyword transition and reaches the target keyword faster than
competitive baselines.
Comment: AAAI-202
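To make the contrast concrete, here is a minimal, hypothetical Python sketch of the two keyword-transition strategies the abstract compares: picking the next keyword purely by embedding similarity to the target versus restricting candidates to neighbors of the current keyword in a commonsense knowledge graph. The embeddings, the graph, and all function names are invented toy data for illustration, not the authors' model or code.

```python
import numpy as np

# Toy word embeddings (a real system would use pretrained vectors).
emb = {
    "movie":   np.array([0.9, 0.1, 0.0]),
    "film":    np.array([0.8, 0.2, 0.1]),
    "popcorn": np.array([0.7, 0.0, 0.3]),
    "ticket":  np.array([0.5, 0.4, 0.2]),
    "banana":  np.array([0.0, 0.9, 0.1]),
}

# Toy commonsense knowledge graph (CKG) adjacency, ConceptNet-style.
ckg_neighbors = {"movie": {"film", "popcorn", "ticket"}}

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def next_keyword_embedding_only(current, target):
    # Baseline strategy: choose whichever word is closest to the target in
    # embedding space, with no notion of conversational plausibility.
    candidates = [w for w in emb if w not in (current, target)]
    return max(candidates, key=lambda w: cosine(emb[w], emb[target]))

def next_keyword_ckg(current, target):
    # CKG strategy: only CKG neighbors of the current keyword are eligible,
    # so every hop corresponds to a commonsense relation.
    candidates = [w for w in ckg_neighbors.get(current, set()) if w != target]
    if not candidates:  # fall back when the graph offers no usable neighbor
        return next_keyword_embedding_only(current, target)
    return max(candidates, key=lambda w: cosine(emb[w], emb[target]))

print(next_keyword_embedding_only("movie", "popcorn"))  # unconstrained hop
print(next_keyword_ckg("movie", "popcorn"))             # stays on the graph
```

Restricting each hop to graph neighbors is what keeps the transition commonsense-plausible, which is the intuition the paper's CKG-based model builds on.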
Spot The Bot: A Robust and Efficient Framework for the Evaluation of Conversational Dialogue Systems
The lack of time-efficient and reliable evaluation methods hampers the
development of conversational dialogue systems (chatbots). Evaluations
requiring humans to converse with chatbots are time- and cost-intensive, put
high cognitive demands on the human judges, and yield low-quality results. In
this work, we introduce \emph{Spot The Bot}, a cost-efficient and robust
evaluation framework that replaces human-bot conversations with conversations
between bots. Human judges then only annotate for each entity in a conversation
whether they think it is human or not (assuming there are human participants
in these conversations). These annotations then allow us to rank chatbots
regarding their ability to mimic the conversational behavior of humans. Since
we expect that all bots are eventually recognized as such, we incorporate a
metric that measures which chatbot can uphold human-like behavior the longest,
i.e., \emph{Survival Analysis}. This metric makes it possible to correlate a
bot's performance with specific characteristics (e.g., fluency or
sensibleness), yielding interpretable results. The comparatively low cost of our
framework allows for frequent evaluations of chatbots throughout their development
cycle. We empirically validate our claims by applying \emph{Spot The Bot} to
three domains, evaluating several state-of-the-art chatbots, and drawing
comparisons to related work. The framework is released as a ready-to-use tool
- …
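The survival-analysis metric can likewise be sketched in a few lines. The snippet below is a minimal illustration (not the released Spot The Bot tool) using the lifelines library's Kaplan-Meier estimator: for each bot we record the conversation turn at which each judge flagged it, treat judges who never flagged it as censored observations, and compare median survival times. All bot names and numbers are invented.

```python
from lifelines import KaplanMeierFitter

# Hypothetical annotations: the turn at which each judge flagged the bot,
# and whether it was flagged at all before the conversation ended.
observations = {
    "bot_A": {"turns": [2, 3, 3, 5, 6], "spotted": [1, 1, 1, 1, 0]},
    "bot_B": {"turns": [4, 5, 6, 6, 6], "spotted": [1, 1, 1, 0, 0]},
}

for name, obs in observations.items():
    kmf = KaplanMeierFitter()
    # spotted == 0 marks a censored observation: the judge never caught on.
    kmf.fit(obs["turns"], event_observed=obs["spotted"], label=name)
    print(name, "median survival (turns):", kmf.median_survival_time_)
```

A bot with a longer median survival time upholds human-like behavior for more turns before being spotted, which is the ranking criterion the framework derives from these annotations.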