Deep Reinforcement Learning for Dialogue Generation
Recent neural models of dialogue generation offer great promise for
generating responses for conversational agents, but tend to be shortsighted,
predicting utterances one at a time while ignoring their influence on future
outcomes. Modeling the future direction of a dialogue is crucial to generating
coherent, interesting dialogues, a need which led traditional NLP models of
dialogue to draw on reinforcement learning. In this paper, we show how to
integrate these goals, applying deep reinforcement learning to model future
reward in chatbot dialogue. The model simulates dialogues between two virtual
agents, using policy gradient methods to reward sequences that display three
useful conversational properties: informativity (non-repetitive turns),
coherence, and ease of answering (related to forward-looking function). We
evaluate our model on diversity and length, as well as with human judges,
showing that the proposed algorithm generates more interactive responses and
fosters more sustained conversation in dialogue simulation. This work marks a
first step towards learning a neural conversational model based on the
long-term success of dialogues.
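The policy-gradient idea the abstract describes can be sketched with a toy categorical policy over candidate responses. Everything below (the three candidate responses, the reward weights, the learning rate, the baseline) is an illustrative assumption, not the paper's actual seq2seq model; only the structure — a composite reward over ease of answering, informativity and coherence, driving a REINFORCE update — follows the abstract.

```python
import numpy as np

rng = np.random.default_rng(0)

def composite_reward(ease, informativity, coherence,
                     weights=(0.25, 0.25, 0.5)):
    """Weighted sum of the three conversational properties; the weights
    here are illustrative, not the paper's."""
    w1, w2, w3 = weights
    return w1 * ease + w2 * informativity + w3 * coherence

def reinforce_step(logits, action, reward, baseline, lr=0.1):
    """One REINFORCE update: push the log-probability of the sampled
    action up or down in proportion to (reward - baseline)."""
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    grad = -probs
    grad[action] += 1.0            # d log pi(action) / d logits
    return logits + lr * (reward - baseline) * grad

logits = np.zeros(3)               # three hypothetical candidate responses
rewards = [composite_reward(0.2, 0.1, 0.1),   # dull, repetitive response
           composite_reward(0.9, 0.8, 0.9),   # engaging response
           composite_reward(0.3, 0.2, 0.2)]
for _ in range(200):
    probs = np.exp(logits) / np.exp(logits).sum()
    a = int(rng.choice(3, p=probs))
    logits = reinforce_step(logits, a, rewards[a],
                            baseline=float(np.mean(rewards)))

print(int(np.argmax(logits)))      # the engaging response wins
```

In the paper the reward is computed over simulated dialogues between two agents rather than assigned per candidate, but the gradient step has the same shape.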
Reinforcement Learning of Speech Recognition System Based on Policy Gradient and Hypothesis Selection
Speech recognition systems have achieved high recognition performance for
several tasks. However, the performance of such systems is dependent on the
tremendously costly development work of preparing vast amounts of task-matched
transcribed speech data for supervised training. The key problem here is the
cost of transcribing speech data. The cost is repeatedly required to support
new languages and new tasks. In a broad network service that transcribes
speech for many users, a system would be more self-sufficient and more useful
if it could learn from very light user feedback without annoying the users. In
this paper, we propose a general reinforcement
learning framework for speech recognition systems based on the policy gradient
method. As a particular instance of the framework, we also propose a hypothesis
selection-based reinforcement learning method. The proposed framework provides
a new view for several existing training and adaptation methods. The
experimental results show that the proposed method improves the recognition
performance compared to unsupervised adaptation.
Comment: 5 pages, 6 figures
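A hypothesis-selection policy gradient of the kind the abstract describes can be sketched as follows. The linear scorer over N-best features, the two features (stand-ins for acoustic and language-model scores), and the reward of 1 when the system's choice matches the user's selection are all simplifying assumptions for illustration, not the paper's recogniser.

```python
import numpy as np

rng = np.random.default_rng(1)

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def pg_update(w, feats, chosen, user_pick, lr=0.2):
    """REINFORCE step on the hypothesis-selection policy: reward 1 if the
    chosen hypothesis matches the user's light feedback, else 0."""
    probs = softmax(feats @ w)
    reward = 1.0 if chosen == user_pick else 0.0
    grad = feats[chosen] - probs @ feats   # d log pi(chosen) / dw
    return w + lr * reward * grad

w = np.zeros(2)                            # weights over the two scores
for _ in range(300):
    feats = rng.normal(size=(5, 2))        # a 5-best list of hypotheses
    user_pick = int(np.argmax(feats[:, 1]))  # user favours feature 1
    chosen = int(rng.choice(5, p=softmax(feats @ w)))
    w = pg_update(w, feats, chosen, user_pick)

print(round(float(w[1]), 2))               # weight on the useful feature grows
```

The point of the sketch is that a scalar "the user picked this one" signal is enough to reweight the recogniser's scoring, with no transcribed speech required.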
Evaluation of a hierarchical reinforcement learning spoken dialogue system
We describe an evaluation of spoken dialogue strategies designed using hierarchical reinforcement learning agents. The dialogue strategies were learnt in a simulated environment and tested in a laboratory setting with 32 users. These dialogues were used to evaluate three types of machine dialogue behaviour: hand-coded, fully-learnt and semi-learnt. These experiments also served to evaluate the realism of simulated dialogues using two proposed metrics contrasted with ‘Precision-Recall’. The learnt dialogue behaviours used the Semi-Markov Decision Process (SMDP) model, and we report the first evaluation of this model in a realistic conversational environment. Experimental results in the travel planning domain provide evidence to support the following claims: (a) hierarchical semi-learnt dialogue agents are a better alternative (with higher overall performance) than deterministic or fully-learnt behaviour; (b) spoken dialogue strategies learnt with highly coherent user behaviour and conservative recognition error rates (keyword error rate of 20%) can outperform a reasonable hand-coded strategy; and (c) hierarchical reinforcement learning dialogue agents are feasible and promising for the (semi) automatic design of optimized dialogue behaviours in larger-scale systems.
A retrieval-based dialogue system utilizing utterance and context embeddings
Finding semantically rich and computer-understandable representations for
textual dialogues, utterances and words is crucial for dialogue systems (or
conversational agents), as their performance mostly depends on understanding
the context of conversations. Recent research aims at finding distributed
vector representations (embeddings) for words, such that semantically similar
words are relatively close within the vector-space. Encoding the "meaning" of
text into vectors is a current trend, and text can range from words, phrases
and documents to actual human-to-human conversations. In recent research
approaches, responses have been generated utilizing a decoder architecture,
given the vector representation of the current conversation. In this paper, the
utilization of embeddings for answer retrieval is explored by using
Locality-Sensitive Hashing Forest (LSH Forest), an Approximate Nearest Neighbor
(ANN) model, to find similar conversations in a corpus and rank possible
candidates. Experimental results on the well-known Ubuntu Corpus (in English)
and a customer service chat dataset (in Dutch) show that, in combination with a
candidate selection method, retrieval-based approaches outperform generative
ones and reveal promising future research directions towards the usability of
such a system.
Comment: A shorter version is accepted at the ICMLA2017 conference;
acknowledgement added; typos corrected.
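The bucketed approximate-nearest-neighbour lookup at the heart of the abstract can be sketched with random-hyperplane hashing. This is a simplified stand-in for the LSH Forest actually used in the paper, and the random vectors below are placeholders for real conversation embeddings.

```python
import numpy as np

rng = np.random.default_rng(42)

def hash_key(vec, planes):
    """Bucket key: the sign pattern of the vector against random
    hyperplanes. Nearby vectors tend to share the same pattern."""
    return tuple(bool(b) for b in (planes @ vec) > 0)

dim, n_planes = 16, 6
planes = rng.normal(size=(n_planes, dim))
corpus = rng.normal(size=(100, dim))   # stand-in conversation embeddings

buckets = {}
for i, v in enumerate(corpus):
    buckets.setdefault(hash_key(v, planes), []).append(i)

# For demonstration the query is an embedding already in the corpus; a
# real query would be a freshly encoded conversation.
query = corpus[7]
candidates = buckets[hash_key(query, planes)]

def cosine(a, b):
    return float(a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))

# Only the (small) candidate bucket is re-ranked by exact similarity.
best = max(candidates, key=lambda i: cosine(corpus[i], query))
print(best)   # → 7
```

An LSH Forest refines this basic scheme with variable-length keys across several trees, but the retrieve-then-rank pattern — hash to a small candidate set, then score candidates exactly — is the same.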
A Personalized System for Conversational Recommendations
Searching for and making decisions about information is becoming increasingly
difficult as the amount of information and the number of choices increase.
Recommendation systems help users find items of interest of a particular type,
such as movies or restaurants, but are still somewhat awkward to use. Our
solution is to take advantage of the complementary strengths of personalized
recommendation systems and dialogue systems, creating personalized aides. We
present a system -- the Adaptive Place Advisor -- that treats item selection as
an interactive, conversational process, with the program inquiring about item
attributes and the user responding. Individual, long-term user preferences are
unobtrusively obtained in the course of normal recommendation dialogues and
used to direct future conversations with the same user. We present a novel user
model that influences both item search and the questions asked during a
conversation. We demonstrate the effectiveness of our system in significantly
reducing the time and number of interactions required to find a satisfactory
item, as compared to a control group of users interacting with a non-adaptive
version of the system.
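The kind of unobtrusively learned user model the abstract describes can be sketched as attribute-value preference counts accumulated from accepted items. The attributes, values and scoring rule below are hypothetical simplifications, not the Adaptive Place Advisor's actual model.

```python
from collections import Counter

# Items the user accepted in past dialogues (illustrative data).
accepted = [{"cuisine": "thai", "price": "low"},
            {"cuisine": "thai", "price": "mid"}]

# Preference counts per attribute, built up without asking the user anything.
prefs = {"cuisine": Counter(), "price": Counter()}
for item in accepted:
    for attr, val in item.items():
        prefs[attr][val] += 1

def score(item):
    """Sum of normalised preference weights for the item's attribute
    values; items matching past acceptances rank higher."""
    return sum(prefs[a][v] / sum(prefs[a].values())
               for a, v in item.items())

candidates = [{"cuisine": "thai", "price": "low"},
              {"cuisine": "french", "price": "high"}]
best = max(candidates, key=score)
print(best["cuisine"])   # → thai
```

The same counts can also steer the conversation: an attribute whose preference distribution is still flat is a natural one to ask the user about next.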
Model-based Bayesian Reinforcement Learning for Dialogue Management
Reinforcement learning methods are increasingly used to optimise dialogue
policies from experience. Most current techniques are model-free: they directly
estimate the utility of various actions, without an explicit model of the
interaction dynamics. In this paper, we investigate an alternative strategy
grounded in model-based Bayesian reinforcement learning. Bayesian inference is
used to maintain a posterior distribution over the model parameters, reflecting
the model uncertainty. This parameter distribution is gradually refined as more
data is collected and simultaneously used to plan the agent's actions. Within
this learning framework, we carried out experiments with two alternative
formalisations of the transition model, one encoded with standard multinomial
distributions, and one structured with probabilistic rules. We demonstrate the
potential of our approach with empirical results on a user simulator
constructed from Wizard-of-Oz data in a human-robot interaction scenario. The
results illustrate in particular the benefits of capturing prior domain
knowledge with high-level rules.
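The Dirichlet-multinomial posterior underlying the "standard multinomial" formalisation can be sketched in a few lines. The two-state, two-action toy model below is an assumption for illustration; the paper's rule-structured variant replaces the flat parameterisation with probabilistic rules, but the conjugate update is the same.

```python
import numpy as np

# Dirichlet prior pseudo-counts, indexed [state, action, next_state].
# Prior domain knowledge would enter here as larger initial counts.
alpha = np.ones((2, 2, 2))

def observe(alpha, s, a, s_next):
    """Conjugate Bayesian update: one pseudo-count per observed
    transition, yielding the posterior over model parameters."""
    alpha = alpha.copy()
    alpha[s, a, s_next] += 1
    return alpha

def posterior_mean(alpha, s, a):
    """Posterior-mean transition distribution for planning."""
    return alpha[s, a] / alpha[s, a].sum()

# Simulate: action 0 in state 0 reliably leads to state 1.
for _ in range(50):
    alpha = observe(alpha, s=0, a=0, s_next=1)

p = posterior_mean(alpha, 0, 0)
print(p[1] > 0.9)   # → True: the posterior has sharpened with data
```

The full Dirichlet (not just its mean) quantifies model uncertainty, which is what a Bayesian planner exploits when deciding whether to explore.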
Multi-session group scenarios for speech interface design
When developing adaptive speech-based multilingual interaction systems, we need representative data on the user's behaviour. In this paper we focus on a data collection method pertaining to adaptation in the user's interaction with the system. We describe a multi-session group scenario for Wizard of Oz studies with two novel features: firstly, instead of doing solo sessions with a static mailbox, our test users communicated with each other in a group of six, and secondly, the communication took place over several sessions in a period of five to eight days. The paper discusses our data collection studies using the method, concentrating on the usefulness of the method in terms of naturalness of the interaction and long-term developments.