4 research outputs found
Speaker-Sensitive Dual Memory Networks for Multi-Turn Slot Tagging
In multi-turn dialogs, natural language understanding models can make obvious
errors when they are blind to contextual information. To incorporate dialog
history, we present a neural architecture with Speaker-Sensitive Dual Memory
Networks, which encode utterances differently depending on the speaker. This
design reflects the different amounts of information available to the system:
it knows only the surface form of user utterances, but it knows the exact
semantics of its own output. We performed experiments on real user data from
Microsoft Cortana, a commercial personal assistant. The results showed a
significant performance improvement over state-of-the-art slot tagging models
that use contextual information.
Comment: 5-page conference paper accepted to IEEE ASRU 2017; to be published in December 2017.
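The speaker-split bookkeeping described above can be illustrated with a minimal sketch. The memory layout, the turn-insertion helper, and the toy `encode` function are all assumptions for illustration, not the paper's actual architecture:

```python
# Minimal sketch of speaker-sensitive memories: the system's own turns are
# stored as exact semantic frames, while user turns are stored only as
# encodings of their surface form. All names here are illustrative.

def encode(utterance):
    # Stand-in for a learned utterance encoder: a bag of lowercase tokens.
    return utterance.lower().split()

user_memory = []    # surface-form encodings of user turns
system_memory = []  # exact semantics of system turns

def add_turn(speaker, utterance, semantics=None):
    if speaker == "system":
        # The system knows the exact semantics of its own output.
        system_memory.append(semantics)
    else:
        # Only the surface form of a user utterance is available.
        user_memory.append(encode(utterance))

add_turn("user", "Find flights to Seattle")
add_turn("system", "Which date do you want to fly?",
         semantics={"act": "request", "slot": "date"})
```

A slot tagger conditioned on both memories would then see exact semantics for system turns but only encoded text for user turns, mirroring the asymmetry the abstract describes.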
Coupled Representation Learning for Domains, Intents and Slots in Spoken Language Understanding
Representation learning is an essential problem in a wide range of
applications and it is important for performing downstream tasks successfully.
In this paper, we propose a new model that learns coupled representations of
domains, intents, and slots by taking advantage of their hierarchical
dependency in a Spoken Language Understanding system. Our proposed model learns
the vector representation of intents based on the slots tied to these intents
by aggregating the representations of the slots. Similarly, the vector
representation of a domain is learned by aggregating the representations of the
intents tied to a specific domain. To the best of our knowledge, it is the
first approach to jointly learning the representations of domains, intents, and
slots using their hierarchical relationships. The experimental results
demonstrate the effectiveness of the representations learned by our model, as
evidenced by improved performance on the contextual cross-domain reranking
task.
Comment: IEEE SLT 2018.
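The hierarchical aggregation described above can be sketched in a few lines. The slot/intent/domain names and the choice of mean-pooling as the aggregation function are assumptions for illustration; the paper's model learns these representations jointly rather than composing fixed vectors:

```python
import numpy as np

# Hypothetical hierarchy: each intent maps to its slots, each domain to its
# intents. Names are illustrative, not from the paper's dataset.
SLOT_DIM = 4
rng = np.random.default_rng(0)

slot_vecs = {s: rng.normal(size=SLOT_DIM)
             for s in ["city", "date", "artist", "song"]}
intent_slots = {"get_weather": ["city", "date"],
                "play_music": ["artist", "song"]}
domain_intents = {"weather": ["get_weather"], "music": ["play_music"]}

def aggregate(vectors):
    """Mean-pool a list of vectors (one simple choice of aggregation)."""
    return np.mean(vectors, axis=0)

# An intent vector is aggregated from the vectors of its slots.
intent_vecs = {i: aggregate([slot_vecs[s] for s in slots])
               for i, slots in intent_slots.items()}
# A domain vector is aggregated from the vectors of its intents.
domain_vecs = {d: aggregate([intent_vecs[i] for i in intents])
               for d, intents in domain_intents.items()}
```

The point of the sketch is the direction of information flow: slot representations determine intent representations, which in turn determine domain representations, following the hierarchy the abstract describes.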
A Scalable Neural Shortlisting-Reranking Approach for Large-Scale Domain Classification in Natural Language Understanding
Intelligent personal digital assistants (IPDAs), a popular real-life
application with spoken language understanding capabilities, can cover
potentially thousands of overlapping domains for natural language
understanding, and the task of finding the best domain to handle an utterance
becomes a challenging problem on a large scale. In this paper, we propose a set
of efficient and scalable neural shortlisting-reranking models for large-scale
domain classification in IPDAs. The shortlisting stage focuses on efficiently
trimming all domains down to a list of k-best candidate domains, and the
reranking stage performs a list-wise reranking of the initial k-best domains
with additional contextual information. We show the effectiveness of our
approach with extensive experiments on 1,500 IPDA domains.
Comment: Accepted to NAACL 2018.
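The two-stage flow above can be sketched as follows. The random scores stand in for the outputs of the shortlisting and reranking models, which the paper implements as neural networks; the specific numbers and features here are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
N_DOMAINS, K = 1500, 5   # trim 1,500 domains down to a k-best list

# Stage 1: a cheap shortlister scores every domain for the utterance
# (random numbers stand in for the shortlisting model's scores).
shortlist_scores = rng.normal(size=N_DOMAINS)
k_best = np.argsort(shortlist_scores)[-K:][::-1]   # indices of top-k domains

# Stage 2: a more expensive list-wise reranker rescores only the k candidates,
# using additional contextual information (stubbed as a random bonus here).
context_bonus = rng.normal(size=K)
rerank_scores = shortlist_scores[k_best] + context_bonus
best_domain = k_best[np.argmax(rerank_scores)]
```

The efficiency argument is visible in the shapes: the expensive reranking work is done on K = 5 candidates rather than all 1,500 domains.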
Efficient Large-Scale Domain Classification with Personalized Attention
In this paper, we explore the task of mapping spoken language utterances to
one of thousands of natural language understanding domains in intelligent
personal digital assistants (IPDAs). This scenario is observed for many
mainstream IPDAs in industry that allow third parties to develop thousands of
new domains to augment built-in ones to rapidly increase domain coverage and
overall IPDA capabilities. We propose a scalable neural model architecture
with a shared encoder, a novel attention mechanism that incorporates
personalization information, and domain-specific classifiers, which together
solve the problem efficiently. Our architecture is designed to efficiently
accommodate new domains that appear between full model retraining cycles, via
a rapid bootstrapping mechanism that is two orders of magnitude faster than
retraining. We account for practical constraints in real-time production
systems and design the architecture to minimize memory footprint and runtime
latency. We demonstrate that
incorporating personalization results in significantly more accurate domain
classification in the setting with thousands of overlapping domains.
Comment: Accepted to ACL 2018.
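One way personalization can enter an attention mechanism is sketched below: the utterance encoding attends over embeddings of the domains a given user has enabled, and the resulting summary is appended to the classifier input. This is a minimal illustrative sketch under assumed shapes, not the paper's exact formulation:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

rng = np.random.default_rng(2)
DIM = 8
utterance_enc = rng.normal(size=DIM)          # shared-encoder output (stub)
# Embeddings of the 3 domains this user has enabled (hypothetical example).
enabled_domain_embs = rng.normal(size=(3, DIM))

# Attention: weight each enabled domain by its similarity to the utterance,
# then summarize the enabled domains into one personalization vector.
weights = softmax(enabled_domain_embs @ utterance_enc)
personalization = weights @ enabled_domain_embs

# The concatenated features would feed the domain-specific classifiers.
features = np.concatenate([utterance_enc, personalization])
```

Because the personalization vector depends only on the user's enabled domains and the shared encoder, per-domain classifiers can consume it without retraining the whole stack, which is consistent with the rapid-bootstrapping goal the abstract mentions.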