Modeling ASR Ambiguity for Dialogue State Tracking Using Word Confusion Networks
Spoken dialogue systems typically use a list of top-N ASR hypotheses for inferring the semantic meaning and tracking the state of the dialogue. However, ASR graphs, such as confusion networks (confnets), provide a compact representation of a richer hypothesis space than a top-N ASR list. In this paper, we study the benefits of using confusion networks with a state-of-the-art neural dialogue state tracker (DST). We encode the 2-dimensional confnet into a 1-dimensional sequence of embeddings using an attentional confusion network encoder which can be used with any DST system. Our confnet encoder is plugged into the state-of-the-art 'Global-Locally Self-Attentive Dialogue State Tracker' (GLAD) model for DST and obtains significant improvements in both accuracy and inference time compared to using top-N ASR hypotheses.
Comment: Accepted at Interspeech 2020
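The encoder described above treats the confusion network as a sequence of time slots, each holding several competing word hypotheses with ASR posteriors. Below is a minimal sketch of how such an attentional slot-pooling step might look; the class name, layer sizes, and the choice to feed the posterior as an extra attention feature are illustrative assumptions, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class ConfnetAttentionEncoder(nn.Module):
    """Collapse each confusion-network slot (a set of competing word
    hypotheses with posteriors) into one vector via attention, turning
    the 2-D confnet into a 1-D sequence usable by any DST encoder."""

    def __init__(self, vocab_size, emb_dim):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim, padding_idx=0)
        # Scores each hypothesis from its embedding plus its ASR posterior.
        self.score = nn.Linear(emb_dim + 1, 1)

    def forward(self, word_ids, posteriors, mask):
        # word_ids:   (batch, slots, alts)  ids of competing hypotheses
        # posteriors: (batch, slots, alts)  ASR posterior of each hypothesis
        # mask:       (batch, slots, alts)  1 for real hypotheses, 0 for padding
        emb = self.embed(word_ids)                                  # (B, S, A, D)
        feats = torch.cat([emb, posteriors.unsqueeze(-1)], dim=-1)  # (B, S, A, D+1)
        scores = self.score(feats).squeeze(-1)                      # (B, S, A)
        scores = scores.masked_fill(mask == 0, float("-inf"))
        attn = torch.softmax(scores, dim=-1)                        # over alternatives
        return (attn.unsqueeze(-1) * emb).sum(dim=2)                # (B, S, D)

# Toy usage: 2 utterances, 5 slots, up to 4 alternatives per slot.
enc = ConfnetAttentionEncoder(vocab_size=1000, emb_dim=64)
ids = torch.randint(1, 1000, (2, 5, 4))
post = torch.rand(2, 5, 4)
mask = torch.ones(2, 5, 4)
print(enc(ids, post, mask).shape)  # torch.Size([2, 5, 64])
```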
ConfNet2Seq: Full Length Answer Generation from Spoken Questions
Conversational and task-oriented dialogue systems aim to interact with the user using natural responses through multi-modal interfaces, such as text or speech. These desired responses are in the form of full-length natural answers generated over facts retrieved from a knowledge source. While the task of generating natural answers to questions from an answer span has been widely studied, there has been little research on natural sentence generation over spoken content. We propose a novel system to generate full-length natural language answers from spoken questions and factoid answers. The spoken sequence is compactly represented as a confusion network extracted from a pre-trained Automatic Speech Recognizer. To the best of our knowledge, this is the first attempt at generating full-length natural answers from a graph input (confusion network). We release a large-scale dataset of 259,788 samples of spoken questions, their factoid answers and corresponding full-length textual answers. Following our proposed approach, we achieve performance comparable to using the best ASR hypothesis.
Comment: Accepted at Text, Speech and Dialogue, 2020
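A confusion network of the kind used above can be represented very simply: an ordered list of slots, each carrying competing word hypotheses with their posteriors. The sketch below shows such a data structure and one naive way to flatten it into a token sequence for a sequence encoder; the class names, the top-k truncation, and the example utterance are illustrative assumptions, not necessarily how ConfNet2Seq consumes the graph.

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class ConfnetSlot:
    # Competing word hypotheses for one time region, with ASR posteriors.
    alternatives: List[Tuple[str, float]]

def linearize(confnet: List[ConfnetSlot], top_k: int = 3) -> List[Tuple[str, float]]:
    """Flatten a confusion network into one token stream a standard
    seq2seq encoder can read, keeping the top-k hypotheses per slot
    together with their posteriors."""
    tokens = []
    for slot in confnet:
        ranked = sorted(slot.alternatives, key=lambda wp: -wp[1])[:top_k]
        tokens.extend(ranked)
    return tokens

# Hypothetical spoken question where the recognizer is unsure in places.
confnet = [
    ConfnetSlot([("what", 0.7), ("what's", 0.3)]),
    ConfnetSlot([("is", 0.6), ("<eps>", 0.4)]),        # <eps> = "no word here"
    ConfnetSlot([("the", 0.95), ("a", 0.05)]),
    ConfnetSlot([("capital", 0.9), ("capitol", 0.1)]),
]
print(linearize(confnet, top_k=2))
```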
Confusion2vec 2.0: Enriching Ambiguous Spoken Language Representations with Subwords
Word vector representations enable machines to encode human language for spoken language understanding and processing. Confusion2vec, motivated by human speech production and perception, is a word vector representation that encodes the ambiguities present in human spoken language in addition to semantic and syntactic information. Confusion2vec provides a robust spoken language representation by considering inherent human language ambiguities. In this paper, we propose a novel word vector space estimated by unsupervised learning on lattices output by an automatic speech recognition (ASR) system. We encode each word in the Confusion2vec vector space by its constituent subword character n-grams. We show that the subword encoding helps better represent the acoustic perceptual ambiguities in human spoken language via information modeled on lattice-structured ASR output. The usefulness of the proposed Confusion2vec representation is evaluated using semantic, syntactic and acoustic analogy and word similarity tasks. We also show the benefits of subword modeling for acoustic ambiguity representation on the task of spoken language intent detection. The proposed representation significantly outperforms existing word vector representations when evaluated on erroneous ASR outputs. We demonstrate that Confusion2vec subword modeling eliminates the need for retraining or adapting natural language understanding models on ASR transcripts.
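The subword composition mentioned above can be pictured with a fastText-style scheme: a word's vector is the sum of the vectors of its character n-grams, so spellings that share substrings also share vector components. The sketch below illustrates that composition only; the dimensions, bucket hashing, and random initialization are assumptions, and it does not perform the unsupervised training on ASR lattices described in the paper.

```python
import zlib
import numpy as np

def char_ngrams(word: str, n_min: int = 3, n_max: int = 6) -> list:
    """Character n-grams with boundary markers, e.g. 'the' -> '<th', 'the', 'he>', ..."""
    marked = f"<{word}>"
    grams = []
    for n in range(n_min, n_max + 1):
        grams.extend(marked[i:i + n] for i in range(len(marked) - n + 1))
    return grams

class SubwordVectors:
    """Toy subword lookup: a word vector is the sum of its character
    n-gram vectors, hashed into a fixed number of buckets."""

    def __init__(self, dim: int = 50, buckets: int = 100_000, seed: int = 0):
        rng = np.random.default_rng(seed)
        self.table = rng.normal(scale=0.1, size=(buckets, dim))
        self.buckets = buckets

    def vector(self, word: str) -> np.ndarray:
        ids = [zlib.crc32(g.encode("utf-8")) % self.buckets for g in char_ngrams(word)]
        return self.table[ids].sum(axis=0)

vecs = SubwordVectors()
# 'there' and 'their' share several prefix n-grams, so their composed
# vectors correlate even if one never appeared in the training data.
v1, v2 = vecs.vector("there"), vecs.vector("their")
print(float(v1 @ v2 / (np.linalg.norm(v1) * np.linalg.norm(v2))))
```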
Spoken content retrieval: A survey of techniques and technologies
Speech media, that is, digital audio and video containing spoken content, has blossomed in recent years. Large collections are accruing on the Internet as well as in private and enterprise settings. This growth has motivated extensive research on techniques and technologies that facilitate reliable indexing and retrieval. Spoken content retrieval (SCR) requires the combination of audio and speech processing technologies with methods from information retrieval (IR). SCR research initially investigated planned speech structured in document-like units, but has subsequently shifted focus to more informal spoken content produced spontaneously, outside of the studio and in conversational settings. This survey provides an overview of the field of SCR encompassing component technologies, the relationship of SCR to text IR and automatic speech recognition, and user interaction issues. It is aimed at researchers with backgrounds in speech technology or IR who are seeking deeper insight into how these fields are integrated to support research and development, thus addressing the core challenges of SCR.
OLISIA: a Cascade System for Spoken Dialogue State Tracking
Though Dialogue State Tracking (DST) is a core component of spoken dialogue systems, recent work on this task mostly deals with chat corpora, disregarding the discrepancies between spoken and written language. In this paper, we propose OLISIA, a cascade system which integrates an Automatic Speech Recognition (ASR) model and a DST model. We introduce several adaptations in the ASR and DST modules to improve integration and robustness to spoken conversations. With these adaptations, our system ranked first in DSTC11 Track 3, a benchmark to evaluate spoken DST. We conduct an in-depth analysis of the results and find that normalizing the ASR outputs and adapting the DST inputs through data augmentation, along with increasing the size of the pre-trained models, all play an important role in reducing the performance discrepancy between written and spoken conversations.
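As one concrete picture of the ASR-output normalization step mentioned above, the sketch below lowercases the hypothesis, drops common disfluency fillers, and maps spelled-out numbers to digits before the text reaches the DST model. These particular rules and the `dst_model.predict` interface are illustrative assumptions, not necessarily the adaptations used in OLISIA.

```python
import re

# Hypothetical normalizations so spoken-style ASR output looks more like
# the written text a pre-trained DST model expects.
_FILLERS = re.compile(r"\b(uh|um|erm|hmm)\b", re.IGNORECASE)
_SPACES = re.compile(r"\s+")
NUMBER_WORDS = {
    "one": "1", "two": "2", "three": "3", "four": "4", "five": "5",
    "six": "6", "seven": "7", "eight": "8", "nine": "9", "ten": "10",
}

def normalize_asr(hypothesis: str) -> str:
    text = _FILLERS.sub(" ", hypothesis.lower())                    # drop fillers
    text = " ".join(NUMBER_WORDS.get(w, w) for w in text.split())   # words -> digits
    return _SPACES.sub(" ", text).strip()

def track_state(asr_hypothesis: str, history: list, dst_model) -> dict:
    """Cascade step: normalize the ASR output, then hand the turn to the
    DST model (dst_model.predict is a placeholder interface)."""
    return dst_model.predict(history + [normalize_asr(asr_hypothesis)])

print(normalize_asr("Um I need a table for Two at seven"))  # "i need a table for 2 at 7"
```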
Speaker Normalization Using Cortical Strip Maps: A Neural Model for Steady State Vowel Identification
Auditory signals of speech are speaker-dependent, but representations of language meaning are speaker-independent. Such a transformation enables speech to be understood from different speakers. A neural model is presented that performs speaker normalization to generate a pitch-independent representation of speech sounds, while also preserving information about speaker identity. This speaker-invariant representation is categorized into unitized speech items, which input to sequential working memories whose distributed patterns can be categorized, or chunked, into syllable and word representations. The proposed model fits into an emerging model of auditory streaming and speech categorization. The auditory streaming and speaker normalization parts of the model both use multiple strip representations and asymmetric competitive circuits, thereby suggesting that these two circuits arose from similar neural designs. The normalized speech items are rapidly categorized and stably remembered by Adaptive Resonance Theory circuits. Simulations use synthesized steady-state vowels from the Peterson and Barney [J. Acoust. Soc. Am. 24, 175-184 (1952)] vowel database and achieve accuracy rates similar to those achieved by human listeners. These results are compared to behavioral data and other speaker normalization models.
National Science Foundation (SBE-0354378); Office of Naval Research (N00014-01-1-0624)
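The Adaptive Resonance Theory circuits credited above with rapid, stable category learning can be illustrated with a generic fuzzy ART routine (Carpenter, Grossberg and Rosen, 1991): an input resonates with the best-matching category if it passes a vigilance test, otherwise a new category is committed. This is a minimal sketch of that generic mechanism, not the paper's specific strip-map vowel model, and the parameter values are arbitrary.

```python
import numpy as np

class FuzzyART:
    """Minimal fuzzy ART categorizer: complement-coded inputs, choice by
    the fuzzy choice function, vigilance-gated resonance, fast learning."""

    def __init__(self, dim: int, rho: float = 0.8, alpha: float = 0.001, beta: float = 1.0):
        self.dim, self.rho, self.alpha, self.beta = dim, rho, alpha, beta
        self.w = np.empty((0, 2 * dim))  # one complement-coded weight row per category

    def _code(self, x):
        x = np.asarray(x, dtype=float)
        return np.concatenate([x, 1.0 - x])  # complement coding keeps |I| constant

    def learn(self, x) -> int:
        i = self._code(x)
        if len(self.w):
            match = np.minimum(i, self.w).sum(axis=1)           # |I ^ w_j|
            choice = match / (self.alpha + self.w.sum(axis=1))  # choice function T_j
            for j in np.argsort(-choice):                       # best category first
                if match[j] / i.sum() >= self.rho:              # vigilance: resonance
                    self.w[j] = self.beta * np.minimum(i, self.w[j]) + (1 - self.beta) * self.w[j]
                    return j
        self.w = np.vstack([self.w, i])                         # commit a new category
        return len(self.w) - 1

# Toy usage on 2-D formant-like features scaled to [0, 1]:
art = FuzzyART(dim=2, rho=0.8)
for point in [[0.2, 0.1], [0.22, 0.12], [0.8, 0.9], [0.79, 0.88]]:
    print(art.learn(point))  # two clusters -> categories 0, 0, 1, 1
```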