Dialogue history integration into end-to-end signal-to-concept spoken language understanding systems
This work investigates embeddings for representing dialog history in
spoken language understanding (SLU) systems. We focus on the scenario where
semantic information is extracted directly from the speech signal by a
single end-to-end neural network model. We propose to integrate dialog
history into an end-to-end signal-to-concept SLU system. The dialog history is
represented in the form of dialog history embedding vectors (so-called
h-vectors) and is provided as additional information to end-to-end SLU
models in order to improve system performance. The following three types of
h-vectors are proposed and experimentally evaluated in this paper: (1)
supervised-all embeddings, which predict the bag of concepts expected in the
user's answer to the last dialog system response; (2) supervised-freq
embeddings, which predict only a selected set of semantic concepts (those
corresponding to the most frequent errors in our experiments); and (3)
unsupervised embeddings. Experiments on the MEDIA corpus for the semantic
slot filling task demonstrate that the proposed h-vectors improve model
performance. Comment: Accepted for ICASSP 2020 (Submitted: October 21, 2019)
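The core idea of conditioning an end-to-end SLU model on dialog history can be sketched as follows. This is a minimal illustrative example, not the paper's implementation: the concept inventory size, feature dimension, and function names are toy assumptions. A supervised-all h-vector is a bag-of-concepts indicator built from the expected user answer, tiled and concatenated to every acoustic feature frame before the encoder.

```python
import numpy as np

N_CONCEPTS = 8   # size of a toy semantic concept inventory (assumption)
FEAT_DIM = 40    # per-frame acoustic feature dimension (assumption)

def supervised_all_h_vector(expected_concepts, n_concepts=N_CONCEPTS):
    """Bag-of-concepts h-vector: 1 for each concept expected in the
    user's answer given the last system response (supervised-all variant)."""
    h = np.zeros(n_concepts, dtype=np.float32)
    h[list(expected_concepts)] = 1.0
    return h

def augment_frames(frames, h_vector):
    """Tile the h-vector over time and concatenate it to each acoustic frame,
    so the SLU encoder sees dialog-history context at every step."""
    tiled = np.tile(h_vector, (frames.shape[0], 1))
    return np.concatenate([frames, tiled], axis=1)

frames = np.random.randn(120, FEAT_DIM).astype(np.float32)  # 120 frames
h = supervised_all_h_vector({1, 4})   # concepts 1 and 4 expected next
augmented = augment_frames(frames, h)
print(augmented.shape)  # (120, 48)
```

The supervised-freq variant would use the same mechanism with the indicator restricted to the selected frequent-error concepts; an unsupervised variant would replace the indicator with a learned history embedding.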
Effectiveness of Text, Acoustic, and Lattice-based representations in Spoken Language Understanding tasks
In this paper, we perform an exhaustive evaluation of different
representations to address the intent classification problem in a Spoken
Language Understanding (SLU) setup. We benchmark three types of systems to
perform the SLU intent detection task: 1) text-based, 2) lattice-based, and a
novel 3) multimodal approach. Our work provides a comprehensive analysis of
what could be the achievable performance of different state-of-the-art SLU
systems under different circumstances, e.g., automatically- vs.
manually-generated transcripts. We evaluate the systems on the publicly
available SLURP spoken language resource corpus. Our results indicate that
using richer forms of Automatic Speech Recognition (ASR) outputs, namely
word-consensus-networks, allows the SLU system to improve in comparison to the
1-best setup (5.5% relative improvement). Moreover, crossmodal approaches, i.e.,
learning from acoustic and text embeddings, obtain performance similar to the
oracle setup, a relative improvement of 17.8% over the 1-best configuration,
making them a recommended alternative to overcome the limitations of working
with automatically generated transcripts. Comment: Accepted in ICASSP 202
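The crossmodal setup described above can be sketched as simple late fusion: an utterance-level acoustic embedding and a text embedding are concatenated and fed to a softmax classifier over intents. This is an illustrative sketch only; the dimensions, random weights, and number of intents are toy assumptions, not the paper's architecture.

```python
import numpy as np

ACOUSTIC_DIM, TEXT_DIM, N_INTENTS = 16, 32, 5  # toy dimensions (assumption)
rng = np.random.default_rng(0)
W = rng.standard_normal((ACOUSTIC_DIM + TEXT_DIM, N_INTENTS)) * 0.1
b = np.zeros(N_INTENTS)

def softmax(z):
    """Numerically stable softmax over intent logits."""
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def classify_intent(acoustic_emb, text_emb):
    """Fuse modalities by concatenation, then apply a linear classifier."""
    fused = np.concatenate([acoustic_emb, text_emb])
    return softmax(fused @ W + b)

probs = classify_intent(rng.standard_normal(ACOUSTIC_DIM),
                        rng.standard_normal(TEXT_DIM))
print(probs.shape)  # (5,)
```

In practice the embeddings would come from a trained acoustic encoder and a text encoder over ASR output (1-best or word consensus network), and the fusion layer would be trained jointly; the sketch only shows the fusion-then-classify structure.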