A Multi-Task Approach to Incremental Dialogue State Tracking
Incrementality is a fundamental feature of language in real-world use. To this point, however, the vast majority of work in automated dialogue processing has treated language as turn-based. In this paper we explore the challenge of incremental dialogue state tracking through the development and analysis of a multi-task approach. We present the design of our incremental dialogue state tracker in detail and evaluate it against the well-known Dialogue State Tracking Challenge 2 (DSTC2) dataset. In addition to a standard evaluation of the tracker, we analyze the incrementality phenomenon in our model's performance by measuring how early our models can produce correct predictions and how stable those predictions are. We find that the multi-task learning-based model achieves state-of-the-art results for incremental processing.
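The "how early" and "how stable" analyses mentioned above can be illustrated with two common incremental-processing measures. This is a minimal sketch under assumed definitions (first consistently-correct prefix length, and fraction of step-to-step changes); the function names and the example predictions are illustrative, not taken from the paper.

```python
def first_correct_prefix(predictions, gold):
    """Earliest 1-based prefix length at which the prediction equals the
    gold label and stays correct for the rest of the turn; None if never."""
    for i in range(len(predictions)):
        if all(p == gold for p in predictions[i:]):
            return i + 1
    return None

def edit_overhead(predictions):
    """Fraction of steps where the prediction changed from the previous
    step; lower means more stable incremental output."""
    if len(predictions) < 2:
        return 0.0
    changes = sum(a != b for a, b in zip(predictions, predictions[1:]))
    return changes / (len(predictions) - 1)

# Token-by-token predictions for a single slot during one user turn.
preds = ["none", "italian", "indian", "indian", "indian"]
print(first_correct_prefix(preds, "indian"))  # 3
print(edit_overhead(preds))                   # 0.5
```

A tracker that commits early and revises rarely scores low on edit overhead while reaching a small first-correct prefix.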
A Robust Data-Driven Approach for Dialogue State Tracking of Unseen Slot Values
A dialogue state tracker is a key component of dialogue systems that
estimates the beliefs over possible user goals at each dialogue turn. Deep
learning approaches using recurrent neural networks have shown state-of-the-art
performance for the task of dialogue state tracking. Generally, these
approaches assume a predefined candidate list and struggle to predict any new
dialogue state values that are not seen during training. This makes extending
the candidate list for a slot infeasible without retraining the model, and it
also limits modelling in low-resource domains where training data for slot
values is expensive. In this paper, we propose a novel dialogue state tracker
based on copying mechanism that can effectively track such unseen slot values
without compromising performance on slot values seen during training. The
proposed model is also flexible in extending the candidate list without
requiring any retraining or change to the model. We evaluate the proposed model
on various benchmark datasets (DSTC2, DSTC3 and WoZ2.0) and show that our
approach outperforms other end-to-end data-driven approaches in tracking unseen
slot values and also provides significant advantages in modelling for DST.
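The copying idea described above can be sketched as a pointer over the utterance: rather than scoring a fixed candidate list, the tracker scores each input token against a slot query, so a value never seen in training can still be "copied" from the input. The one-hot vectors below are a toy stand-in for learned encoder states, not the paper's actual architecture.

```python
import math

# Toy pointer-style copy mechanism over the tokens of a user utterance.
tokens = ["i", "want", "cheap", "korean", "food"]
# One-hot "encoder states" (illustrative; a real model would use an RNN).
token_vecs = [[1.0 if i == j else 0.0 for j in range(len(tokens))]
              for i in range(len(tokens))]
# Pretend the tracker's learned slot query aligns with the token "korean".
slot_query = token_vecs[3]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

scores = [dot(v, slot_query) for v in token_vecs]   # attention scores
exps = [math.exp(s) for s in scores]
total = sum(exps)
probs = [e / total for e in exps]                   # softmax over input tokens
predicted = tokens[probs.index(max(probs))]
print(predicted)  # korean
```

Because the output distribution is over input positions rather than a closed ontology, extending the candidate list requires no change to the model.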
Dialog State Tracking: A Neural Reading Comprehension Approach
Dialog state tracking is used to estimate the current belief state of a
dialog given all the preceding conversation. Machine reading comprehension, on
the other hand, focuses on building systems that read passages of text and
answer questions that require some understanding of those passages. We
formulate dialog state tracking as a reading comprehension task: answering the
question after reading the conversational
context. In contrast to traditional state tracking methods where the dialog
state is often predicted as a distribution over a closed set of all the
possible slot values within an ontology, our method uses a simple
attention-based neural network to point to the slot values within the
conversation. Experiments on the MultiWOZ-2.0 cross-domain dialog dataset show
that our simple system obtains accuracies comparable to previous, more complex
methods. By exploiting recent advances in contextual word embeddings,
adding a model that explicitly tracks whether a slot value should be carried
over to the next turn, and combining our method with a traditional joint state
tracking method that relies on closed set vocabulary, we can obtain a
joint-goal accuracy of on the standard test split, exceeding the current
state-of-the-art by **.
Comment: 10 pages, to appear in Special Interest Group on Discourse and Dialogue (SIGDIAL) 2019 (oral)
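The carryover component mentioned in the abstract decides, at each turn, whether a slot's previous value should persist. The keyword matching below is a toy stand-in for that learned model, and the ontology slice and function names are hypothetical.

```python
# Toy carryover rule: keep the previous slot value unless the new turn
# explicitly mentions a value for that slot (stand-in for a learned gate).
KNOWN_FOOD_VALUES = {"korean", "italian", "thai"}  # hypothetical ontology slice

def update_slot(prev_value, user_turn):
    mentioned = [w for w in user_turn.lower().split() if w in KNOWN_FOOD_VALUES]
    return mentioned[-1] if mentioned else prev_value  # carry over otherwise

state = None
for turn in ["i want korean food", "something cheap please", "actually thai"]:
    state = update_slot(state, turn)
print(state)  # thai
```

On the middle turn nothing matches, so the value "korean" is carried over; the final turn overwrites it with "thai".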