An End-to-End Trainable Neural Network Model with Belief Tracking for Task-Oriented Dialog
We present a novel end-to-end trainable neural network model for
task-oriented dialog systems. The model is able to track the dialog state, issue
API calls to a knowledge base (KB), and incorporate structured KB query results
into system responses to successfully complete task-oriented dialogs. The
proposed model produces well-structured system responses by jointly learning
belief tracking and KB result processing conditioning on the dialog history. We
evaluate the model in a restaurant search domain using a dataset that is
converted from the second Dialog State Tracking Challenge (DSTC2) corpus.
Experimental results show that the proposed model can robustly track the dialog
state given the dialog history. Moreover, our model demonstrates promising
results in producing appropriate system responses, outperforming prior
end-to-end trainable neural network models on per-response accuracy metrics.
Comment: Published at Interspeech 201
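The control flow this abstract describes can be sketched in a few lines. This is an illustrative toy, not the paper's trained model: the slot names, KB entries, and the keyword-matching "tracker" are all invented stand-ins for the learned components (a real belief tracker outputs a distribution over slot values conditioned on the dialog history).

```python
# A tiny in-memory "knowledge base" of restaurants (invented data).
KB = [
    {"name": "Golden Wok", "food": "chinese", "area": "north"},
    {"name": "La Tasca", "food": "spanish", "area": "centre"},
]

# Slot ontology assumed for this sketch.
SLOTS = {"food": ["chinese", "spanish"], "area": ["north", "centre"]}

def track_belief(history):
    """Accumulate slot values mentioned anywhere in the dialog history.
    Stands in for the learned belief tracker."""
    belief = {}
    for utterance in history:
        for slot, values in SLOTS.items():
            for v in values:
                if v in utterance.lower():
                    belief[slot] = v
    return belief

def query_kb(belief):
    """Issue the 'API call': filter KB entries matching the belief state."""
    return [e for e in KB
            if all(e.get(s) == v for s, v in belief.items())]

def respond(history):
    """Fold the structured KB result into the system response."""
    belief = track_belief(history)
    results = query_kb(belief)
    if results:
        return f"{results[0]['name']} serves {belief.get('food', 'food')}."
    return "Sorry, no matching restaurant."

print(respond(["I want chinese food", "somewhere in the north"]))
# -> Golden Wok serves chinese.
```

The point of the end-to-end formulation is that the tracker, the KB call, and the response generator above are jointly trained rather than hand-written as here.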
Multi-level Memory for Task Oriented Dialogs
Recent end-to-end task oriented dialog systems use memory architectures to
incorporate external knowledge in their dialogs. Current work makes simplifying
assumptions about the structure of the knowledge base, such as the use of
triples to represent knowledge, and combines dialog utterances (context) as
well as knowledge base (KB) results as part of the same memory. This causes an
explosion in the memory size, and makes the reasoning over memory harder. In
addition, such a memory design forces hierarchical properties of the data to be
fit into a triple structure of memory. This requires the memory reader to infer
relationships across otherwise connected attributes. In this paper we relax the
strong assumptions made by existing architectures and separate memories used
for modeling dialog context and KB results. Instead of using triples to store
KB results, we introduce a novel multi-level memory architecture consisting of
cells for each query and their corresponding results. The multi-level memory
first addresses queries, followed by results and finally each key-value pair
within a result. We conduct detailed experiments on three publicly available
task oriented dialog data sets and we find that our method conclusively
outperforms current state-of-the-art models. We report a 15-25% increase in
both entity F1 and BLEU scores.
Comment: Accepted as full paper at NAACL 201
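The memory layout the abstract argues for can be sketched as nested cells rather than flat triples. This is an illustrative data-structure sketch, not the paper's trained reader: the queries, results, and the hard-match lookup below are invented, where the actual model uses soft attention at each level.

```python
# Multi-level memory: one cell per KB query, holding that query's
# results, each result a set of key-value pairs (invented data).
memory = [
    {
        "query": {"cuisine": "italian", "area": "centre"},
        "results": [
            {"name": "Roma", "phone": "555-0101"},
            {"name": "Pisa", "phone": "555-0102"},
        ],
    },
    {
        "query": {"cuisine": "thai", "area": "north"},
        "results": [{"name": "Bangkok House", "phone": "555-0103"}],
    },
]

def address(memory, want_cuisine, want_key):
    """Three-level addressing, in the order the abstract describes:
    first pick a query cell, then a result within it, then a key-value
    pair within that result. A neural reader would attend softly at
    each level; here we hard-match for clarity."""
    for cell in memory:                      # level 1: queries
        if cell["query"].get("cuisine") == want_cuisine:
            for result in cell["results"]:   # level 2: results
                for key, value in result.items():  # level 3: key-value pairs
                    if key == want_key:
                        return value
    return None

print(address(memory, "thai", "phone"))  # -> 555-0103
```

Keeping results nested under their query avoids flattening hierarchical KB output into triples, which is exactly the "memory explosion" the abstract says the separated design prevents.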
Learning End-to-End Goal-Oriented Dialog with Multiple Answers
In a dialog, there can be multiple valid next utterances at any point. Existing
end-to-end neural methods for dialog do not take this into account: they learn
under the assumption that at any time there is only one correct next
utterance. In this work, we focus on this problem in the goal-oriented dialog
setting where there are different paths to reach a goal. We propose a new
method that combines supervised and reinforcement learning to address this
issue. We also propose a new and more
effective testbed, permuted-bAbI dialog tasks, by introducing multiple valid
next utterances to the original-bAbI dialog tasks, which allows evaluation of
goal-oriented dialog systems in a more realistic setting. We show that there is
a significant drop in performance of existing end-to-end neural methods from
81.5% per-dialog accuracy on original-bAbI dialog tasks to 30.3% on
permuted-bAbI dialog tasks. We also show that our proposed method improves the
performance and achieves 47.3% per-dialog accuracy on permuted-bAbI dialog
tasks.
Comment: EMNLP 2018. The permuted-bAbI dialog tasks are available at
https://github.com/IBM/permuted-bAbI-dialog-task
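The evaluation change behind permuted-bAbI can be sketched directly: a predicted response counts as correct if it matches any utterance in the permitted set for that turn, and per-dialog accuracy requires every turn to be correct. The data below is invented for illustration; this is a sketch of the metric's shape, not the released evaluation code.

```python
def per_response_correct(prediction, valid_set):
    """A response is correct if it matches ANY valid next utterance."""
    return prediction in valid_set

def per_dialog_accuracy(dialogs):
    """dialogs: list of dialogs; each dialog is a list of turns,
    each turn a (prediction, set_of_valid_utterances) pair.
    A dialog counts only if every one of its turns is correct."""
    ok = sum(all(per_response_correct(pred, valid) for pred, valid in dialog)
             for dialog in dialogs)
    return ok / len(dialogs)

# Invented example: first dialog has every turn correct, second does not.
dialogs = [
    [("api_call italian rome", {"api_call italian rome",
                                "api_call rome italian"})],
    [("ok let me look", {"one moment please"})],
]
print(per_dialog_accuracy(dialogs))  # -> 0.5
```

The strictness of the per-dialog metric (one wrong turn fails the whole dialog) is what makes the reported drop from 81.5% to 30.3% so pronounced once multiple valid continuations are introduced.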