28,048 research outputs found
Goal-oriented Dialogue Policy Learning from Failures
Reinforcement learning methods have been used for learning dialogue policies.
However, learning an effective dialogue policy frequently requires
prohibitively many conversations. This is partly because of the sparse rewards
in dialogues and the very few successful dialogues in the early learning phase.
Hindsight experience replay (HER) enables learning from failures, but the
vanilla HER is inapplicable to dialogue learning due to the implicit goals. In
this work, we develop two complex HER methods that provide different trade-offs
between complexity and performance and, for the first time, enable HER-based
dialogue policy learning. Experiments using a realistic user simulator show
that our HER methods outperform existing experience replay methods (as
applied to deep Q-networks) in terms of learning rate.
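As a rough illustration of the idea, the sketch below adapts hindsight goal relabelling to a slot-filling dialogue, where the slots actually filled by the end of a failed dialogue stand in for the implicit goal. The `Turn` structure, `infer_goal` helper, and reward values are illustrative assumptions, not the authors' implementation.

```python
import copy
from dataclasses import dataclass

@dataclass
class Turn:
    state: dict       # dialogue state (e.g. filled slots) before the action
    action: int       # abstract dialogue action chosen by the policy
    next_state: dict  # dialogue state after the user's response

def infer_goal(dialogue):
    """Treat the slots actually filled by the end of a failed dialogue
    as the 'achieved' goal, mirroring HER's final-state relabelling."""
    return copy.deepcopy(dialogue[-1].next_state)

def relabel(dialogue, replay_buffer):
    """Store a failed dialogue a second time, rewarded as if the
    achieved goal had been the intended one."""
    achieved = infer_goal(dialogue)
    for turn in dialogue:
        done = turn is dialogue[-1]
        # Success reward on the final turn, small per-turn penalty otherwise.
        reward = 1.0 if done and turn.next_state == achieved else -0.05
        replay_buffer.append((turn.state, turn.action, reward,
                              turn.next_state, done))

# Usage: after a failed episode, store the relabelled transitions in
# addition to the original (zero-success) ones.
buffer = []
episode = [Turn({"food": None}, 3, {"food": "thai"}),
           Turn({"food": "thai"}, 7, {"food": "thai", "area": "north"})]
relabel(episode, buffer)
```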
Mapping Instructions and Visual Observations to Actions with Reinforcement Learning
We propose to directly map raw visual observations and text input to actions
for instruction execution. While existing approaches assume access to
structured environment representations or use a pipeline of separately trained
models, we learn a single model to jointly reason about linguistic and visual
input. We use reinforcement learning in a contextual bandit setting to train a
neural network agent. To guide the agent's exploration, we use reward shaping
with different forms of supervision. Our approach does not require intermediate
representations, planning procedures, or training different models. We evaluate
in a simulated environment, and show significant improvements over supervised
learning and common reinforcement learning variants.
Comment: In Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP), 201
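A minimal sketch of the training signal described above: a contextual-bandit policy-gradient update in which the immediate reward is augmented with a potential-based shaping term (here, progress toward the goal). The network, feature sizes, and shaping potential are placeholder assumptions, not the paper's model.

```python
import torch
import torch.nn as nn

# Placeholder policy: 16-dim observation features -> 4 discrete actions.
policy = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 4))
opt = torch.optim.Adam(policy.parameters(), lr=1e-3)

def act(obs):
    """Sample an action from the current policy for one observation."""
    with torch.no_grad():
        dist = torch.distributions.Categorical(logits=policy(obs))
    return dist.sample().item()

def update(obs, action, env_reward, prev_dist_to_goal, dist_to_goal):
    """One contextual-bandit policy-gradient step on a shaped reward."""
    dist = torch.distributions.Categorical(logits=policy(obs))
    # Potential-based shaping: add credit for progress toward the goal.
    r = env_reward + (prev_dist_to_goal - dist_to_goal)
    # Bandit setting: the return is just the (shaped) immediate reward.
    loss = -dist.log_prob(torch.tensor(action)) * r
    opt.zero_grad(); loss.backward(); opt.step()

obs = torch.randn(16)
a = act(obs)
update(obs, a, env_reward=0.0, prev_dist_to_goal=3.0, dist_to_goal=2.0)
```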
JoTR: A Joint Transformer and Reinforcement Learning Framework for Dialog Policy Learning
Dialogue policy learning (DPL) is a crucial component of dialogue modelling.
Its primary role is to determine the appropriate abstract response, commonly
referred to as the "dialogue action". Traditional DPL methodologies have
treated this as a sequential decision problem, using pre-defined action
candidates extracted from a corpus. However, these incomplete candidates can
significantly limit the diversity of responses and pose challenges when dealing
with edge cases, which are scenarios that occur only at extreme operating
parameters. To address these limitations, we introduce a novel framework, JoTR.
This framework is unique as it leverages a text-to-text Transformer-based model
to generate flexible dialogue actions. Unlike traditional methods, JoTR
formulates a word-level policy that allows for a more dynamic and adaptable
dialogue action generation, without the need for any action templates. This
setting enhances the diversity of responses and improves the system's ability
to handle edge cases effectively. In addition, JoTR employs reinforcement
learning with a reward-shaping mechanism to efficiently finetune the word-level
dialogue policy, which allows the model to learn from its interactions,
improving its performance over time. Our extensive evaluation shows that JoTR
achieves state-of-the-art performance on two benchmark dialogue modelling
tasks, as assessed by both user simulators and human evaluators.
Comment: Our code, models and other related resources are publicly available at https://github.com/KwanWaiChung/JoT
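The word-level policy optimisation JoTR describes can be sketched as token-by-token REINFORCE over a generated dialogue action, reinforced with a shaped return. The sketch below assumes a Hugging Face-style causal language model and tokenizer for concreteness; it is not the released JoTR implementation, and `task_reward` and `shaping_bonus` are hypothetical inputs.

```python
import torch

def reinforce_update(model, tokenizer, optimizer, dialogue_history,
                     task_reward, shaping_bonus, max_len=32):
    """Sample a dialogue action token by token, then reinforce each
    token's log-probability with the shaped return."""
    state = tokenizer.encode(dialogue_history, return_tensors="pt")
    tokens, log_probs = [], []
    for _ in range(max_len):
        logits = model(state).logits[:, -1, :]   # next-token distribution
        dist = torch.distributions.Categorical(logits=logits)
        tok = dist.sample()                      # word-level "action"
        log_probs.append(dist.log_prob(tok))
        tokens.append(tok.item())
        state = torch.cat([state, tok.unsqueeze(0)], dim=-1)
        if tok.item() == tokenizer.eos_token_id:
            break
    # Shaped return: sparse task success plus a dense shaping term,
    # e.g. a similarity score between the action and the user's intent.
    ret = task_reward + shaping_bonus
    loss = -torch.stack(log_probs).sum() * ret
    optimizer.zero_grad(); loss.backward(); optimizer.step()
    return tokenizer.decode(tokens)
```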
Causal-aware Safe Policy Improvement for Task-oriented Dialogue
The recent success of reinforcement learning (RL) in solving complex tasks
is most often attributed to its capacity to explore and exploit an environment
where it has been trained. Sample efficiency is usually not an issue, since
cheap simulators are available to sample data on-policy. On the other hand,
task-oriented dialogues are usually learnt from offline data collected from
human demonstrations. Collecting diverse demonstrations and annotating them is
expensive. Unfortunately, RL methods trained on off-policy data are prone to
issues of bias and generalization, which are further exacerbated by
stochasticity in human responses and the non-Markovian belief state of a
dialogue management system. To this end, we propose a batch RL framework for
task-oriented dialogue policy learning: causal-aware safe policy improvement
(CASPI). This method gives guarantees on the dialogue policy's performance and
also learns to shape rewards according to the intentions behind human
responses rather than just mimicking demonstration data; this, coupled with
batch RL, helps with the overall sample efficiency of the framework. We
demonstrate the effectiveness of this framework on the dialogue-context-to-text
generation and end-to-end dialogue tasks of the MultiWOZ 2.0 dataset. The
proposed method outperforms the current state of the art in both cases. In the
end-to-end case, our method trained on only 10% of the data was able to
outperform the current state of the art on three out of four evaluation metrics.
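One ingredient of such a framework, learning to shape rewards from preferences between dialogues rather than copying demonstrations, can be sketched as a Bradley-Terry style pairwise reward model. All names and feature sizes below are illustrative assumptions, not the CASPI code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Per-turn reward model over (hypothetical) 64-dim turn features.
reward_net = nn.Sequential(nn.Linear(64, 64), nn.ReLU(), nn.Linear(64, 1))
opt = torch.optim.Adam(reward_net.parameters(), lr=1e-4)

def pairwise_reward_loss(turns_a, turns_b, a_preferred):
    """Bradley-Terry loss: the preferred dialogue should receive the
    higher summed per-turn reward."""
    r_a = reward_net(turns_a).sum()
    r_b = reward_net(turns_b).sum()
    logit = r_a - r_b if a_preferred else r_b - r_a
    return -F.logsigmoid(logit)

# Usage: turns_* are [num_turns, 64] feature tensors; the preference
# label can come from task-success annotations or automatic metrics.
loss = pairwise_reward_loss(torch.randn(5, 64), torch.randn(7, 64), True)
opt.zero_grad(); loss.backward(); opt.step()
```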