A Robust Data-Driven Approach for Dialogue State Tracking of Unseen Slot Values
A Dialogue State Tracker is a key component in dialogue systems which
estimates the beliefs of possible user goals at each dialogue turn. Deep
learning approaches using recurrent neural networks have shown state-of-the-art
performance for the task of dialogue state tracking. Generally, these
approaches assume a predefined candidate list and struggle to predict any new
dialogue state values that are not seen during training. This makes extending
the candidate list for a slot without model retraining infeasible, and also
limits modelling in low-resource domains where training data for slot values
are expensive. In this paper, we propose a novel dialogue state tracker based
on a copying mechanism that can effectively track such unseen slot values
without compromising performance on slot values seen during training. The
proposed model is also flexible in extending the candidate list without
requiring any retraining or change to the model. We evaluate the proposed
model on various benchmark datasets (DSTC2, DSTC3 and WoZ2.0) and show that
our approach outperforms other end-to-end data-driven approaches in tracking
unseen slot values and also provides significant advantages in modelling for
DST.
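The copying mechanism described in this abstract can be illustrated with a minimal pointer-style mixture. This is a hypothetical, heavily simplified stand-in for the learned tracker (all names, scores, and the `p_copy` mixing weight below are invented for illustration, not taken from the paper):

```python
import math

def softmax(scores):
    """Numerically stable softmax over a list of raw scores."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    return [e / z for e in exps]

def copy_augmented_distribution(candidate_scores, copy_scores,
                                utterance_tokens, p_copy):
    """Mix a distribution over a fixed candidate list with a copy
    distribution over tokens of the current user utterance. Tokens
    absent from the candidate list can still receive probability mass
    through the copy side, which is how a value never seen during
    training can be predicted without retraining."""
    gen = dict(zip(candidate_scores, softmax(list(candidate_scores.values()))))
    copy = softmax(copy_scores)
    # Generation side, scaled by (1 - p_copy).
    mixed = {v: (1.0 - p_copy) * p for v, p in gen.items()}
    # Copy side: add mass to each utterance token, scaled by p_copy.
    for tok, p in zip(utterance_tokens, copy):
        mixed[tok] = mixed.get(tok, 0.0) + p_copy * p
    return mixed

# The candidate list knows only two cuisines; "korean" is unseen.
dist = copy_augmented_distribution(
    candidate_scores={"italian": 1.0, "french": 0.5},
    copy_scores=[0.1, 0.1, 3.0, 0.1],  # one raw score per utterance token
    utterance_tokens=["i", "want", "korean", "food"],
    p_copy=0.6,
)
best = max(dist, key=dist.get)  # "korean", despite never being a candidate
```

Because the two components are each normalized and then convexly mixed, the result is still a proper distribution, and raising `p_copy` shifts trust from the closed candidate list to the open utterance vocabulary.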
Scalable Neural Dialogue State Tracking
A Dialogue State Tracker (DST) is a key component in a dialogue system aiming
at estimating the beliefs of possible user goals at each dialogue turn. Most of
the current DST trackers make use of recurrent neural networks and are based on
complex architectures that manage several aspects of a dialogue, including the
user utterance, the system actions, and the slot-value pairs defined in a
domain ontology. However, the complexity of such neural architectures incurs
considerable latency in dialogue state prediction, which limits the deployment
of the models in real-world applications, particularly when task scalability
(i.e. the number of slots) is a crucial factor. In this paper, we
propose an innovative neural model for dialogue state tracking, named Global
encoder and Slot-Attentive decoders (G-SAT), which can predict the dialogue
state with a very low latency time, while maintaining high-level performance.
We report experiments on three different languages (English, Italian, and
German) of the WoZ2.0 dataset, and show that the proposed approach provides
competitive advantages over state-of-the-art DST systems, both in terms of
accuracy and in terms of prediction time, being over 15 times faster than the
other systems.
Comment: 8 pages, 3 figures, Accepted at ASRU 201
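The shared-encoder, per-slot-decoder idea behind this design can be sketched in a few lines. This is a toy illustration only, assuming plain dot-product attention and hand-picked 2-d vectors; it is not the authors' G-SAT implementation:

```python
import math

def softmax(scores):
    """Numerically stable softmax over a list of raw scores."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    return [e / z for e in exps]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def attend(query, encoder_states):
    """Dot-product attention: pool the shared encoder states into one
    context vector, weighted by similarity to the slot query."""
    weights = softmax([dot(query, h) for h in encoder_states])
    dim = len(encoder_states[0])
    return [sum(w * h[i] for w, h in zip(weights, encoder_states))
            for i in range(dim)]

def predict_state(encoder_states, slot_queries, value_embeddings):
    """Encode the utterance once (encoder_states is computed a single
    time), then run one lightweight attentive decoder per slot: a
    single attention step plus one dot product per candidate value.
    Adding a slot adds only a query and its value table, which is why
    latency can scale gently with the number of slots."""
    state = {}
    for slot, query in slot_queries.items():
        ctx = attend(query, encoder_states)
        scores = {v: dot(ctx, emb)
                  for v, emb in value_embeddings[slot].items()}
        state[slot] = max(scores, key=scores.get)
    return state

# Toy 2-d "encodings" (invented numbers, purely illustrative).
enc = [[1.0, 0.0], [0.0, 1.0]]
queries = {"food": [1.0, 0.0], "area": [0.0, 1.0]}
values = {
    "food": {"italian": [1.0, 0.0], "chinese": [-1.0, 0.0]},
    "area": {"north": [0.0, 1.0], "south": [0.0, -1.0]},
}
state = predict_state(enc, queries, values)
```

The point of the sketch is the cost structure, not the numbers: the expensive encoding is shared, and each slot-attentive decoder is cheap on its own.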
An End-to-End Trainable Neural Network Model with Belief Tracking for Task-Oriented Dialog
We present a novel end-to-end trainable neural network model for
task-oriented dialog systems. The model is able to track dialog state, issue
API calls to knowledge base (KB), and incorporate structured KB query results
into system responses to successfully complete task-oriented dialogs. The
proposed model produces well-structured system responses by jointly learning
belief tracking and KB result processing conditioning on the dialog history. We
evaluate the model in a restaurant search domain using a dataset that is
converted from the second Dialog State Tracking Challenge (DSTC2) corpus.
Experiment results show that the proposed model can robustly track dialog state
given the dialog history. Moreover, our model demonstrates promising results in
producing appropriate system responses, outperforming prior end-to-end
trainable neural network models using per-response accuracy evaluation metrics.
Comment: Published at Interspeech 201
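The track-query-respond loop this abstract describes can be sketched with a rule-based stand-in for the learned components. Everything here (the slot vocabulary, the toy KB, the response templates) is hypothetical and only illustrates how belief tracking, KB calls, and response generation chain together in one turn:

```python
# Hypothetical slot vocabulary; a stand-in for the learned belief tracker.
SLOT_VALUES = {"food": ["italian", "korean"], "area": ["north", "south"]}

def dialog_turn(belief, user_utterance, kb):
    """One turn of the end-to-end loop: update the belief state from the
    utterance, issue a KB query conditioned on that belief, and form a
    system response that incorporates the structured KB results."""
    tokens = user_utterance.lower().split()
    # Belief tracking: keyword matching stands in for the neural tracker.
    for slot, values in SLOT_VALUES.items():
        for v in values:
            if v in tokens:
                belief[slot] = v
    # KB call: keep entries consistent with every tracked slot-value pair.
    results = [r for r in kb
               if all(r.get(s) == v for s, v in belief.items())]
    # Response generation, conditioned on the KB results.
    if results:
        response = f"How about {results[0]['name']}?"
    else:
        response = "Sorry, I could not find a matching restaurant."
    return belief, results, response

kb = [
    {"name": "Roma", "food": "italian", "area": "north"},
    {"name": "Seoul House", "food": "korean", "area": "south"},
]
belief, results, response = dialog_turn({}, "I want korean food in the south", kb)
```

In the paper these three stages are learned jointly and conditioned on the full dialog history; the sketch only shows the data flow between them within a single turn.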