Dynamic Planning in Open-Ended Dialogue using Reinforcement Learning
Despite recent advances in natural language understanding and generation, and
decades of research on the development of conversational bots, building
automated agents that can carry on rich open-ended conversations with humans
"in the wild" remains a formidable challenge. In this work we develop a
real-time, open-ended dialogue system that uses reinforcement learning (RL) to
power a bot's conversational skill at scale. Our work pairs the succinct
embedding of the conversation state generated using SOTA (supervised) language
models with RL techniques that are particularly suited to a dynamic action
space that changes as the conversation progresses. Trained using crowd-sourced
data, our novel system substantially exceeds the (strong) supervised baseline
model with respect to several metrics of interest in a live experiment with
real users of the Google Assistant.
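The core idea described above (a compact supervised embedding of the conversation state, paired with RL over an action set that changes every turn) can be sketched minimally. Everything below is an illustrative assumption, not the paper's system: `embed` is a toy stand-in for a supervised LM encoder, and the bilinear Q-function and candidate list are hypothetical.

```python
import hashlib
import numpy as np

EMB_DIM = 8  # illustrative embedding size
rng = np.random.default_rng(0)

def embed(text: str) -> np.ndarray:
    # Stand-in for a supervised LM encoder: deterministically map text
    # to a fixed-size vector via a stable hash-seeded draw.
    seed = int.from_bytes(hashlib.md5(text.encode()).digest()[:4], "big")
    return np.random.default_rng(seed).standard_normal(EMB_DIM)

# Toy Q-function: a bilinear score between state and action embeddings.
W = rng.standard_normal((EMB_DIM, EMB_DIM)) * 0.1

def q_value(state_emb: np.ndarray, action_emb: np.ndarray) -> float:
    return float(state_emb @ W @ action_emb)

def select_response(history: list[str], candidates: list[str]) -> str:
    # The action space is dynamic: `candidates` differs on every turn,
    # so the policy scores whatever actions are currently available.
    s = embed(" ".join(history))
    scores = [q_value(s, embed(c)) for c in candidates]
    return candidates[int(np.argmax(scores))]

history = ["user: tell me about dinosaurs"]
candidates = ["Sure! Which era interests you?",
              "Goodbye.",
              "Dinosaurs were reptiles."]
print(select_response(history, candidates))
```

Scoring candidate utterances with a learned Q-function, rather than generating word by word, is what makes a per-turn dynamic action space tractable.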
Offline Reinforcement Learning for Mixture-of-Expert Dialogue Management
Reinforcement learning (RL) has shown great promise for developing dialogue
management (DM) agents that are non-myopic, conduct rich conversations, and
maximize overall user satisfaction. Despite recent developments in RL and
language models (LMs), using RL to power conversational chatbots remains
challenging, in part because RL requires online exploration to learn
effectively, whereas collecting novel human-bot interactions can be expensive
and unsafe. This issue is exacerbated by the combinatorial action spaces facing
these algorithms, as most LM agents generate responses at the word level. We
develop a variety of RL algorithms, specialized to dialogue planning, that
leverage recent Mixture-of-Expert Language Models (MoE-LMs) -- models that
capture diverse semantics, generate utterances reflecting different intents,
and are amenable for multi-turn DM. By exploiting MoE-LM structure, our methods
significantly reduce the size of the action space and improve the efficacy of
RL-based DM. We evaluate our methods in open-domain dialogue to demonstrate
their effectiveness w.r.t. the diversity of intent in generated utterances and
overall DM performance.

Comment: Thirty-seventh Conference on Neural Information Processing Systems
(NeurIPS 2023)
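The action-space reduction argued for above can be illustrated with a minimal sketch: instead of an RL agent choosing among all word-level sequences, it chooses among a handful of expert intents, and the selected expert produces the utterance. The intent labels, templates, and epsilon-greedy selection below are all hypothetical placeholders, not the paper's MoE-LM, whose experts are learned models rather than canned templates.

```python
import numpy as np

# Hypothetical set of expert intents (the paper's MoE-LM experts are
# learned, not hand-written; this list is purely illustrative).
INTENTS = ["empathize", "ask_question", "inform", "joke"]

TEMPLATES = {  # stand-in for utterances produced by each expert head
    "empathize": "That sounds tough.",
    "ask_question": "What happened next?",
    "inform": "Here is what I know about that.",
    "joke": "Well, at least the weather is nice!",
}

rng = np.random.default_rng(0)
q_values = np.zeros(len(INTENTS))  # toy per-expert Q-values

def plan_turn(history: list[str], epsilon: float = 0.1) -> tuple[str, str]:
    # Epsilon-greedy selection over |INTENTS| actions, instead of a
    # combinatorial word-level space of size |vocab|^length.
    if rng.random() < epsilon:
        idx = int(rng.integers(len(INTENTS)))
    else:
        idx = int(np.argmax(q_values))
    intent = INTENTS[idx]
    return intent, TEMPLATES[intent]

intent, utterance = plan_turn(["user: I lost my keys"])
print(intent, "->", utterance)
```

The design point is that the dialogue manager's decision problem shrinks from generating tokens to ranking a small, semantically meaningful set of experts, which is what makes standard RL algorithms practical here.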