Personalizing Dialogue Agents via Meta-Learning
Existing personalized dialogue models use human-designed persona descriptions
to improve dialogue consistency. Collecting such descriptions from existing
dialogues is expensive and requires hand-crafted feature designs. In this
paper, we propose to extend Model-Agnostic Meta-Learning (MAML)(Finn et al.,
2017) to personalized dialogue learning without using any persona descriptions.
Our model learns to quickly adapt to new personas by leveraging only a few
dialogue samples collected from the same user, which is fundamentally different
from conditioning the response on persona descriptions. Empirical results
on the Persona-Chat dataset (Zhang et al., 2018) indicate that our solution
outperforms non-meta-learning baselines on automatic evaluation metrics as
well as in human-evaluated fluency and consistency.
Comment: Accepted in ACL 2019. Zhaojiang Lin* and Andrea Madotto* contributed
equally to this work.
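The few-shot adaptation idea behind the paper can be illustrated with a toy sketch. The snippet below is not the paper's dialogue model; it is a minimal first-order MAML-style loop on scalar linear regression, where each "persona" is stood in by a task y = a*x with a different slope a, and adaptation uses only a handful of samples from the new task. All function names and hyperparameters here are illustrative assumptions.

```python
import numpy as np

# Illustrative sketch only: first-order MAML on toy linear tasks.
# Each task (standing in for a "persona") is y = a * x with its own slope a.
rng = np.random.default_rng(0)

def loss_grad(w, x, y):
    # MSE loss 0.5 * mean((w*x - y)^2) and its gradient w.r.t. the scalar w
    err = w * x - y
    return 0.5 * np.mean(err ** 2), np.mean(err * x)

def maml_train(task_slopes, inner_lr=0.05, outer_lr=0.1, steps=200):
    """Meta-train a scalar weight so it adapts quickly to any task."""
    w = 0.0
    for _ in range(steps):
        meta_grad = 0.0
        for a in task_slopes:
            x = rng.uniform(-1, 1, 16)
            y = a * x
            _, g = loss_grad(w, x, y)
            w_adapted = w - inner_lr * g            # inner-loop adaptation
            _, g_adapted = loss_grad(w_adapted, x, y)
            meta_grad += g_adapted                  # first-order approximation
        w -= outer_lr * meta_grad / len(task_slopes)  # outer-loop update
    return w

def adapt(w, a, k=8, inner_lr=0.5, steps=100):
    """Adapt the meta-learned w to a new task from only k samples."""
    x = rng.uniform(-1, 1, k)
    y = a * x
    for _ in range(steps):
        _, g = loss_grad(w, x, y)
        w -= inner_lr * g
    return w
```

In the paper's setting the inner loop runs on a few dialogues from one user instead of regression samples, but the structure is the same: an inner adaptation step per task and an outer update on the post-adaptation gradient.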
Dialogue State Induction Using Neural Latent Variable Models
A dialogue state module is a useful component in a task-oriented dialogue
system. Traditional methods obtain dialogue states by manually labeling training
corpora, upon which neural models are trained. However, the labeling process
can be costly, slow, error-prone, and more importantly, cannot cover the vast
range of domains in real-world dialogues for customer service. We propose the
task of dialogue state induction, building two neural latent variable models
that mine dialogue states automatically from unlabeled customer service
dialogue records. Results show that the models can effectively find meaningful
slots. In addition, when equipped with the induced dialogue states, a
state-of-the-art dialogue system performs better than it does without a
dialogue state module.
Comment: IJCAI 2020
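The core idea, treating unobserved slots as latent variables inferred from unlabeled records, can be sketched with a much simpler stand-in. The snippet below is not the paper's neural model: it fits a small Gaussian mixture by EM to unlabeled 1-D "value mentions", with each mixture component playing the role of one induced slot. All names and the toy data are illustrative assumptions.

```python
import numpy as np

# Illustrative stand-in for neural latent variable slot induction:
# a 1-D Gaussian mixture fit by EM on unlabeled points, where each
# component corresponds to one induced "slot".
rng = np.random.default_rng(1)

def em_gmm(x, k=2, iters=50):
    # Deterministic spread-out init: means at the data extremes
    mu = np.array([x.min(), x.max()], dtype=float) if k == 2 else \
         np.linspace(x.min(), x.max(), k)
    var = np.ones(k)
    pi = np.full(k, 1.0 / k)
    for _ in range(iters):
        # E-step: soft assignment of each point to each latent component
        dens = pi * np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) \
                  / np.sqrt(2 * np.pi * var)
        r = dens / dens.sum(axis=1, keepdims=True)
        # M-step: re-estimate component parameters from soft counts
        nk = r.sum(axis=0)
        mu = (r * x[:, None]).sum(axis=0) / nk
        var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / nk
        pi = nk / len(x)
    return mu, var, pi

# Unlabeled "mentions" drawn from two hidden slots centered at 0 and 5
x = np.concatenate([rng.normal(0, 0.5, 100), rng.normal(5, 0.5, 100)])
mu, var, pi = em_gmm(x)
```

After fitting, the two component means recover the two hidden clusters without any labels, which is the same unsupervised-induction principle the paper scales up with neural parameterizations and real dialogue features.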