Multi-level Memory for Task Oriented Dialogs
Recent end-to-end task-oriented dialog systems use memory architectures to
incorporate external knowledge into their dialogs. Current work makes simplifying
assumptions about the structure of the knowledge base, such as the use of
triples to represent knowledge, and combines dialog utterances (context) as
well as knowledge base (KB) results as part of the same memory. This causes an
explosion in memory size and makes reasoning over memory harder. In
addition, such a memory design forces hierarchical properties of the data to be
fit into a triple structure, requiring the memory reader to infer
relationships across otherwise connected attributes. In this paper we relax the
strong assumptions made by existing architectures and separate memories used
for modeling dialog context and KB results. Instead of using triples to store
KB results, we introduce a novel multi-level memory architecture consisting of
cells for each query and their corresponding results. The multi-level memory
first addresses queries, followed by results and finally each key-value pair
within a result. We conduct detailed experiments on three publicly available
task-oriented dialog datasets and find that our method conclusively
outperforms current state-of-the-art models, with a 15-25% increase in
both entity F1 and BLEU scores.
Comment: Accepted as full paper at NAACL 201
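The multi-level read described above (queries, then results, then key-value pairs within a result) can be sketched as nested soft attention, where the final read weight of a value is the product of the attention scores along its path. This is a minimal NumPy sketch with made-up shapes and random embeddings, not the paper's actual model:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

# Toy multi-level memory (all sizes and embeddings are hypothetical):
# 3 query cells, 2 results per query, 5 key-value pairs per result.
d = 4
rng = np.random.default_rng(0)
ctx = rng.normal(size=d)                 # dialog-context vector
queries = rng.normal(size=(3, d))        # level 1: query cells
results = rng.normal(size=(3, 2, d))     # level 2: results per query
kv_keys = rng.normal(size=(3, 2, 5, d))  # level 3: keys within a result
kv_vals = rng.normal(size=(3, 2, 5, d))  # values attached to those keys

a_q = softmax(queries @ ctx)             # attend over queries first
read = np.zeros(d)
for i in range(3):
    a_r = softmax(results[i] @ ctx)      # then over that query's results
    for j in range(2):
        a_kv = softmax(kv_keys[i, j] @ ctx)  # finally over key-value pairs
        # a value's read weight is the product of attentions down the levels
        read += a_q[i] * a_r[j] * (a_kv @ kv_vals[i, j])

print(read.shape)  # the memory read vector, same size as the context
```

Because each softmax sums to one at its own level, the product weights over all leaf values also sum to one, so the read vector stays a proper convex combination of values without the single flat (and much larger) memory that triple-store designs require.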
Sequential Dialogue Context Modeling for Spoken Language Understanding
Spoken Language Understanding (SLU) is a key component of goal-oriented
dialogue systems that parses user utterances into semantic frame
representations. Traditionally, SLU does not utilize the dialogue history beyond
the previous system turn, and contextual ambiguities are resolved by the
downstream components. In this paper, we explore novel approaches for modeling
dialogue context in a recurrent neural network (RNN) based language
understanding system. We propose the Sequential Dialogue Encoder Network, which
encodes context from the dialogue history in chronological order. We
compare the performance of our proposed architecture with two context models,
one that uses just the previous turn context and another that encodes dialogue
context in a memory network, but loses the order of utterances in the dialogue
history. Experiments with a multi-domain dialogue dataset demonstrate that the
proposed architecture results in reduced semantic frame error rates.
Comment: 8 + 2 pages. Updated 10/17: fixed typos in abstract. Updated 07/07: updated title and abstract, plus a few minor changes.
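The contrast the abstract draws is between a memory that loses utterance order and an encoder that consumes turns chronologically. A minimal sketch of the latter, using a plain NumPy RNN cell as a stand-in for whatever recurrent unit the paper actually uses, with hypothetical sizes and random weights:

```python
import numpy as np

# Sketch of chronological dialogue-context encoding (sizes are made up).
# Each past turn is assumed already embedded; a session-level RNN then
# reads the turn embeddings oldest-first, so utterance order is preserved,
# unlike an order-free memory over the same turns.
d = 8
rng = np.random.default_rng(1)
turn_embeddings = rng.normal(size=(5, d))  # 5 past turns, oldest first

W_h = rng.normal(size=(d, d)) * 0.1  # recurrent weights
W_x = rng.normal(size=(d, d)) * 0.1  # input weights

h = np.zeros(d)
for x in turn_embeddings:            # chronological order matters here
    h = np.tanh(W_h @ h + W_x @ x)   # simple tanh RNN cell

context = h  # fixed-size context vector for the downstream tagger
print(context.shape)
```

Feeding `context` into the current-turn understanding model alongside the user utterance is one way to realize the "previous turn plus history" setup the abstract compares against; reversing or shuffling `turn_embeddings` generally changes `context`, which is exactly the order information a bag-of-turns memory discards.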