On adaptive decision rules and decision parameter adaptation for automatic speech recognition
Recent advances in automatic speech recognition are accomplished by designing a plug-in maximum a posteriori decision rule such that the forms of the acoustic and language model distributions are specified and the parameters of the assumed distributions are estimated from a collection of speech and language training corpora. Maximum-likelihood point estimation is by far the most prevalent training method. However, due to the problems of unknown speech distributions, sparse training data, high spectral and temporal variability in speech, and possible mismatch between training and testing conditions, a dynamic training strategy is needed. To cope with changing speakers and speaking conditions in real operational conditions for high-performance speech recognition, such paradigms incorporate a small amount of speaker- and environment-specific adaptation data into the training process. Bayesian adaptive learning is an optimal way to combine prior knowledge in an existing collection of general models with a new set of condition-specific adaptation data. In this paper, the mathematical framework for Bayesian adaptation of acoustic and language model parameters is first described. Maximum a posteriori point estimation is then developed for hidden Markov models and a number of useful parameter densities commonly used in automatic speech recognition and natural language processing.
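The contrast between plug-in maximum-likelihood training and MAP adaptation described above can be written compactly; a standard formulation in generic notation (not necessarily the paper's own symbols):

```latex
\hat{\theta}_{\mathrm{ML}}  = \arg\max_{\theta} \; p(x \mid \theta)
\qquad \text{vs.} \qquad
\hat{\theta}_{\mathrm{MAP}} = \arg\max_{\theta} \; p(x \mid \theta)\, p(\theta)
```

where $x$ is the condition-specific adaptation data and the prior $p(\theta)$ encodes the knowledge carried by the existing collection of general models.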
Speech Recognition
Chapters in the first part of the book cover all the essential speech processing techniques for building robust, automatic speech recognition systems: the representation of speech signals and the methods for speech-feature extraction, acoustic and language modeling, efficient algorithms for searching the hypothesis space, and multimodal approaches to speech recognition. The last part of the book is devoted to other speech processing applications that can use the information from automatic speech recognition: speaker identification and tracking, prosody modeling in emotion-detection systems, and other speech processing applications that operate in real-world environments, such as mobile communication services and smart homes.
Textless Speech-to-Speech Translation on Real Data
We present a textless speech-to-speech translation (S2ST) system that can
translate speech from one language into another language and can be built
without the need of any text data. Different from existing work in the
literature, we tackle the challenge in modeling multi-speaker target speech and
train the systems with real-world S2ST data. The key to our approach is a
self-supervised unit-based speech normalization technique, which finetunes a
pre-trained speech encoder with paired audios from multiple speakers and a
single reference speaker to reduce the variations due to accents, while
preserving the lexical content. With only 10 minutes of paired data for speech
normalization, we obtain an average gain of 3.2 BLEU when training the S2ST
model on the VoxPopuli S2ST dataset, compared to a baseline trained on
un-normalized speech targets. We also incorporate automatically mined S2ST
data and show an
additional 2.0 BLEU gain. To our knowledge, we are the first to establish a
textless S2ST technique that can be trained with real-world data and works for
multiple language pairs. Audio samples are available at
https://facebookresearch.github.io/speech_translation/textless_s2st_real_data/index.html
Comment: Accepted to NAACL 2022 (long paper).
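Textless systems of this kind typically map continuous speech features to discrete units (e.g. by nearest-centroid quantization of pre-trained encoder outputs) and collapse consecutive repeats before unit-to-unit translation. A toy sketch of that unit-extraction step, with made-up features and centroids (the speech-normalization finetuning itself is out of scope here):

```python
import numpy as np

def quantize_to_units(features, centroids):
    """Assign each feature frame to its nearest centroid index (a discrete unit)."""
    # squared Euclidean distances, shape (frames, centroids)
    d = ((features[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=-1)
    return d.argmin(axis=1).tolist()

def collapse_repeats(units):
    """Merge runs of consecutive identical units, as is common before translation."""
    out = []
    for u in units:
        if not out or out[-1] != u:
            out.append(u)
    return out

# toy example: five 2-D "feature frames" and three centroids
feats = np.array([[0.1, 0.0], [0.2, 0.1], [1.0, 1.0], [1.1, 0.9], [0.0, 1.0]])
cents = np.array([[0.0, 0.0], [1.0, 1.0], [0.0, 1.0]])
units = quantize_to_units(feats, cents)
print(units)                    # [0, 0, 1, 1, 2]
print(collapse_repeats(units))  # [0, 1, 2]
```

In real systems the centroids come from k-means over encoder representations; the normalization technique in the abstract additionally finetunes the encoder so that units from different speakers map to a single reference speaker's unit inventory.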
Lexical Access Model for Italian -- Modeling human speech processing: identification of words in running speech toward lexical access based on the detection of landmarks and other acoustic cues to features
Modelling the process that a listener actuates in deriving the words intended
by a speaker requires setting a hypothesis on how lexical items are stored in
memory. This work aims at developing a system that imitates humans in
identifying words in running speech and, in this way, provides a framework to
better understand human speech processing. We build a speech recognizer for
Italian based on the principles of Stevens' model of Lexical Access in which
words are stored as hierarchical arrangements of distinctive features (Stevens,
K. N. (2002). "Toward a model for lexical access based on acoustic landmarks
and distinctive features," J. Acoust. Soc. Am., 111(4):1872-1891). Over the
past few decades, the Speech Communication Group at the Massachusetts Institute
of Technology (MIT) developed a speech recognition system for English based on
this approach. Italian will be the first language beyond English to be
explored; the extension to another language provides the opportunity to test
the hypothesis that words are represented in memory as a set of
hierarchically-arranged distinctive features, and reveal which of the
underlying mechanisms may have a language-independent nature. This paper also
introduces a new Lexical Access corpus, the LaMIT database, created and labeled
specifically for this work, that will be provided freely to the speech research
community. Future developments will test the hypothesis that specific acoustic
discontinuities - called landmarks - that serve as cues to features, are
language independent, while other cues may be language-dependent, with powerful
implications for understanding how the human brain recognizes speech.
Comment: Submitted to Language and Speech, 202
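As a rough illustration of what an acoustic landmark cue can look like, the sketch below flags abrupt jumps in short-time log energy, the kind of discontinuity that marks, e.g., a stop burst. This is a toy detector for intuition only, not the landmark detection system described in the paper:

```python
import numpy as np

def frame_energy(signal, frame_len=160):
    """Short-time log energy over non-overlapping frames (160 samples = 10 ms at 16 kHz)."""
    n = len(signal) // frame_len
    frames = np.reshape(signal[: n * frame_len], (n, frame_len))
    return np.log((frames ** 2).sum(axis=1) + 1e-10)  # floor avoids log(0)

def abrupt_changes(log_energy, threshold=2.0):
    """Indices of frames where log energy jumps by more than `threshold`
    relative to the previous frame -- a crude stand-in for landmark cues."""
    delta = np.diff(log_energy)
    return [i + 1 for i, d in enumerate(delta) if abs(d) > threshold]

# toy signal: 20 ms of silence followed by 20 ms of a constant-amplitude segment
sig = np.concatenate([np.zeros(320), 0.5 * np.ones(320)])
print(abrupt_changes(frame_energy(sig)))  # [2] -- the onset frame
```

Stevens' model combines many such cues (energy, spectral abruptness, periodicity) and maps them to distinctive-feature bundles; only the discontinuity-detection idea is shown here.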
Automatic Speech Recognition for ageing voices
With ageing, human voices undergo several changes which are typically characterised
by increased hoarseness, breathiness, changes in articulatory patterns and slower speaking
rate. The focus of this thesis is to understand the impact of ageing on Automatic
Speech Recognition (ASR) performance and improve the ASR accuracies for older
voices.
Baseline results on three corpora indicate that the word error rates (WER) for older
adults are significantly higher than those of younger adults, and that the degradation
is larger for male speakers than for female speakers.
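These comparisons rest on the standard word error rate metric; a minimal, self-contained sketch of the usual edit-distance computation:

```python
def wer(ref, hyp):
    """Word error rate: (substitutions + insertions + deletions) / reference length,
    computed via Levenshtein distance over word tokens."""
    r, h = ref.split(), hyp.split()
    # dp[i][j] = edit distance between r[:i] and h[:j]
    dp = [[0] * (len(h) + 1) for _ in range(len(r) + 1)]
    for i in range(len(r) + 1):
        dp[i][0] = i  # deleting all reference words
    for j in range(len(h) + 1):
        dp[0][j] = j  # inserting all hypothesis words
    for i in range(1, len(r) + 1):
        for j in range(1, len(h) + 1):
            sub = dp[i - 1][j - 1] + (r[i - 1] != h[j - 1])
            dp[i][j] = min(sub, dp[i - 1][j] + 1, dp[i][j - 1] + 1)
    return dp[len(r)][len(h)] / len(r)

print(wer("the cat sat", "the cat sat down"))  # one insertion over three words: 0.333...
```

Reported WERs are usually aggregated over all reference words in a test set rather than averaged per utterance; this per-pair version shows only the core alignment.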
Acoustic parameters such as jitter and shimmer, which measure glottal source
disfluencies, were found to be significantly higher for older adults. However, the
hypothesis that these changes explain the difference in WER between the two age
groups proved incorrect: artificially introducing glottal source disfluencies into
speech from younger adults had no significant impact on WERs. Changes in
fundamental frequency, observed quite often in older voices, have only a marginal
impact on ASR accuracies.
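For concreteness, local jitter and shimmer are commonly defined as the mean absolute difference between consecutive pitch periods (or cycle peak amplitudes) relative to their mean. A minimal sketch of those definitions, assuming the period and amplitude sequences have already been extracted from the voiced signal (the thesis's exact measurement procedure is not reproduced here):

```python
def jitter_percent(periods):
    """Local jitter: mean absolute difference between consecutive pitch
    periods, as a percentage of the mean period."""
    diffs = [abs(a - b) for a, b in zip(periods, periods[1:])]
    return 100.0 * (sum(diffs) / len(diffs)) / (sum(periods) / len(periods))

def shimmer_percent(amplitudes):
    """Local shimmer: the same measure applied to peak amplitudes per cycle."""
    diffs = [abs(a - b) for a, b in zip(amplitudes, amplitudes[1:])]
    return 100.0 * (sum(diffs) / len(diffs)) / (sum(amplitudes) / len(amplitudes))

# toy cycle-to-cycle measurements (periods in ms, amplitudes unitless)
print(jitter_percent([10.0, 11.0, 10.0]))   # ~9.68 %
print(shimmer_percent([1.0, 1.1, 1.0]))     # ~9.68 %
```

Higher values of either measure indicate a less regular glottal source, which is the property the thesis finds elevated in older voices.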
Analysis of phoneme errors between younger and older speakers shows that certain
phonemes, especially low vowels, are more affected by ageing; these changes,
however, vary across speakers. Another factor strongly associated with ageing
voices is a decrease in the rate of speech. Experiments analysing the impact of
slower speaking rate on ASR accuracies indicate that insertion errors increase
when decoding slower speech with models trained on relatively faster speech.
We then propose a way to characterise speakers in acoustic space based on speaker
adaptation transforms and observe that speakers (especially males) can be segregated
by age with reasonable accuracy. Inspired by this, we build supervised hierarchical
acoustic models based on gender and age, which achieve significant improvements in
word accuracy over the baseline results. The idea is then extended to unsupervised
hierarchical models, which also outperform the baseline models by a good margin.
Finally, we hypothesize that ASR accuracies can be improved by augmenting the
adaptation data with speech from acoustically closest speakers, and propose a
strategy for selecting these augmentation speakers. Experimental results on two
corpora indicate that the hypothesis holds true only when the available adaptation
data is limited to a few seconds. The efficacy of this speaker selection strategy
is analysed for both younger and older adults.
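The selection step above amounts to a nearest-neighbour search in the space spanned by the adaptation transforms. A minimal sketch, where the speaker names and 2-D vectors are hypothetical stand-ins for transform-derived representations (the thesis's actual features and distance measure are not specified here):

```python
import numpy as np

def closest_speakers(target_vec, speaker_vecs, k=3):
    """Rank candidate speakers by Euclidean distance between their
    adaptation-transform vectors and the target speaker's vector."""
    dists = {spk: float(np.linalg.norm(vec - target_vec))
             for spk, vec in speaker_vecs.items()}
    return sorted(dists, key=dists.get)[:k]  # nearest first

# hypothetical pool of candidate augmentation speakers
pool = {
    "spk_a": np.array([0.0, 0.0]),
    "spk_b": np.array([1.0, 1.0]),
    "spk_c": np.array([0.1, 0.1]),
}
print(closest_speakers(np.array([0.08, 0.08]), pool, k=2))  # ['spk_c', 'spk_a']
```

The adaptation data of the k nearest speakers would then be pooled with the target speaker's own few seconds of adaptation speech before re-estimating the transforms.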