Towards Zero-Shot Frame Semantic Parsing for Domain Scaling
State-of-the-art slot filling models for goal-oriented human/machine
conversational language understanding systems rely on deep learning methods.
While multi-task training of such models alleviates the need for large
in-domain annotated datasets, bootstrapping a semantic parsing model for a new
domain using only the semantic frame, such as the back-end API or knowledge
graph schema, is still one of the holy grail tasks of language understanding
for dialogue systems. This paper proposes a deep learning-based approach that
uses only the slot description in context, without any labeled or unlabeled
in-domain examples, to quickly bootstrap a new domain. The
main idea of this paper is to leverage the encoding of the slot names and
descriptions within a multi-task deep learned slot filling model, to implicitly
align slots across domains. The proposed approach is promising for solving the
domain scaling problem and eliminating the need for any manually annotated data
or explicit schema alignment. Furthermore, our experiments on multiple domains
show that this approach results in significantly better slot-filling
performance when compared to using only in-domain data, especially in the low
data regime.
Comment: 4 pages + 1 reference
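The core idea above can be sketched in miniature: score each utterance token against a slot's natural-language description in a shared representation space, so a new domain's slots can be filled with no in-domain annotations. This is an illustrative toy, not the paper's model: the hashed character-trigram "encoder", the `tag` helper, and the threshold value are all stand-in assumptions for the learned multi-task encoder the paper describes.

```python
# Toy sketch of description-based zero-shot slot tagging.
# ASSUMPTION: char-trigram count vectors stand in for a learned encoder;
# a real system would embed tokens and slot descriptions with a deep model.
import math
from collections import Counter

def embed(text):
    """Map text to a sparse character-trigram count vector (toy encoder)."""
    padded = f"  {text.lower()}  "
    return Counter(padded[i:i + 3] for i in range(len(padded) - 2))

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[k] * b[k] for k in a if k in b)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def tag(utterance, slot_descriptions, threshold=0.3):
    """Label each token with its best-matching slot description, or 'O'.

    slot_descriptions: dict mapping slot name -> natural-language description,
    e.g. {"city": "name of a city"}. No in-domain labeled examples are used.
    """
    labels = []
    for token in utterance.split():
        scores = {slot: cosine(embed(token), embed(desc))
                  for slot, desc in slot_descriptions.items()}
        best = max(scores, key=scores.get)
        labels.append(best if scores[best] >= threshold else "O")
    return labels
```

Because slots are matched through their descriptions rather than through domain-specific label indices, adding a new domain only requires writing descriptions for its slots, which is the property the abstract highlights.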
5IDER: Unified Query Rewriting for Steering, Intent Carryover, Disfluencies, Entity Carryover and Repair
Providing voice assistants the ability to navigate multi-turn conversations
is a challenging problem. Handling multi-turn interactions requires the system
to understand various conversational use-cases, such as steering, intent
carryover, disfluencies, entity carryover, and repair. The complexity of this
problem is compounded by the fact that these use-cases mix with each other,
often appearing simultaneously in natural language. This work proposes a
non-autoregressive query rewriting architecture that can handle not only the
five aforementioned tasks, but also complex compositions of these use-cases. We
show that our proposed model has competitive single task performance compared
to the baseline approach, and even outperforms a fine-tuned T5 model in
use-case compositions, despite being 15 times smaller in parameters and 25
times faster in latency.
Comment: Interspeech 202
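A minimal sketch of why a non-autoregressive rewriter can be fast: instead of generating the rewritten query token by token, a single parallel pass assigns each source token an edit operation, and the rewrite is assembled from those tags. The `apply_edits` helper, the tag inventory, and the hand-supplied tags below are illustrative assumptions; in the paper's setting a model predicts the edits.

```python
# Toy illustration of edit-tag-based (non-autoregressive) query rewriting.
# ASSUMPTION: tags are supplied by hand here; a real model predicts one tag
# per token in parallel, handling disfluencies, carryover, repair, etc.

def apply_edits(tokens, tags):
    """Apply per-token edits: 'KEEP', 'DELETE', or ('REPLACE', new_text)."""
    out = []
    for token, tag in zip(tokens, tags):
        if tag == "KEEP":
            out.append(token)
        elif tag == "DELETE":
            continue  # drop disfluencies or repaired spans
        else:  # ('REPLACE', text), e.g. resolving "it" via entity carryover
            out.append(tag[1])
    return " ".join(out)

# Disfluency removal and entity carryover handled in one parallel pass:
tokens = ["play", "uh", "play", "it", "again"]
tags = ["DELETE", "DELETE", "KEEP", ("REPLACE", "Bohemian Rhapsody"), "KEEP"]
# apply_edits(tokens, tags) -> "play Bohemian Rhapsody again"
```

Since every tag is predicted independently in one pass, latency does not grow with output length, which is consistent with the speedup over an autoregressive T5 baseline reported above.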