Multilingual Adaptation of RNN Based ASR Systems
In this work, we focus on multilingual systems based on recurrent neural
networks (RNNs), trained using the Connectionist Temporal Classification (CTC)
loss function. Using a multilingual set of acoustic units poses difficulties.
To address this issue, we proposed Language Feature Vectors (LFVs) to train
language adaptive multilingual systems. Language adaptation, in contrast to
speaker adaptation, needs to be applied not only on the feature level, but also
to deeper layers of the network. In this work, we therefore extended our
previous approach by introducing a novel technique which we call "modulation".
Based on this method, we modulated the hidden layers of RNNs using LFVs. We
evaluated this approach in both full and low resource conditions, as well as
for grapheme and phone based systems. With modulation, lower error rates
were achieved across all conditions.

Comment: 5 pages, 1 figure, to appear in 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP 2018)
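The abstract does not spell out the modulation mechanism. As a minimal sketch, assuming it amounts to element-wise gating of hidden activations by a learned projection of the language feature vector (all names and shapes below are illustrative, not the paper's code), it might look like:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def modulate(hidden, lfv, w_mod):
    """Gate each hidden unit with a language-dependent factor derived
    from the Language Feature Vector (LFV).

    hidden: (batch, n_hidden) RNN activations at one time step
    lfv:    (n_lfv,) language feature vector
    w_mod:  (n_hidden, n_lfv) projection -- hypothetical parameter name
    """
    gate = sigmoid(w_mod @ lfv)   # one scaling factor per hidden unit
    return hidden * gate          # element-wise modulation of the layer
```

With `w_mod` all zeros the gate is 0.5 everywhere, a uniform rescaling; training would learn language-specific gates so each language reshapes the shared hidden representation.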
Cross-Domain Adaptation of Spoken Language Identification for Related Languages: The Curious Case of Slavic Languages
State-of-the-art spoken language identification (LID) systems, which are
based on end-to-end deep neural networks, have shown remarkable success not
only in discriminating between distant languages but also between
closely-related languages or even different spoken varieties of the same
language. However, it is still unclear to what extent neural LID models
generalize to speech samples with different acoustic conditions due to domain
shift. In this paper, we present a set of experiments to investigate the impact
of domain mismatch on the performance of neural LID systems for a subset of six
Slavic languages across two domains (read speech and radio broadcast) and
examine two low-level signal descriptors (spectral and cepstral features) for
this task. Our experiments show that (1) out-of-domain speech samples severely
hinder the performance of neural LID models, and (2) while both spectral and
cepstral features show comparable performance within-domain, spectral features
show more robustness under domain mismatch. Moreover, we apply unsupervised
domain adaptation to minimize the discrepancy between the two domains in our
study. We achieve relative accuracy improvements ranging from 9% to 77%,
depending on the diversity of acoustic conditions in the source domain.

Comment: To appear in INTERSPEECH 202
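The abstract does not name the unsupervised domain adaptation technique it applies. As one illustrative possibility (an assumption, not the paper's method), correlation alignment (CORAL) adapts source-domain features to the target domain by matching their first- and second-order statistics, without needing any target labels:

```python
import numpy as np

def coral(source, target, eps=1e-6):
    """Correlation alignment sketch: transform source-domain features so
    their mean and covariance match the target domain.

    source: (n, d) feature matrix from the source domain
    target: (m, d) unlabeled feature matrix from the target domain
    """
    xs = source - source.mean(axis=0)
    xt = target - target.mean(axis=0)
    cs = np.cov(xs, rowvar=False) + eps * np.eye(xs.shape[1])
    ct = np.cov(xt, rowvar=False) + eps * np.eye(xt.shape[1])
    a = np.linalg.cholesky(cs)        # cs = a @ a.T
    b = np.linalg.cholesky(ct)        # ct = b @ b.T
    m = np.linalg.inv(a.T) @ b.T      # whitens source, re-colors to target
    return xs @ m + target.mean(axis=0)
```

The transform `m` satisfies `m.T @ cs @ m = ct`, so the adapted features carry target-domain statistics while preserving the source data's relative structure.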
An Investigation of Deep Neural Networks for Multilingual Speech Recognition Training and Adaptation
Different training and adaptation techniques for multilingual Automatic Speech Recognition (ASR) are explored in the context of hybrid systems, exploiting Deep Neural Networks (DNNs) and Hidden Markov Models (HMMs). In multilingual DNN training, the hidden layers (possibly extracting bottleneck features) are usually shared across languages, while the output layer can model either multiple sets of language-specific senones or one single universal IPA-based multilingual senone set. Both architectures are investigated, exploiting and comparing different language adaptive training (LAT) techniques originating from successful DNN-based speaker adaptation. More specifically, speaker adaptive training methods such as Cluster Adaptive Training (CAT) and Learning Hidden Unit Contributions (LHUC) are considered. In addition, a language adaptive output architecture for the IPA-based universal DNN is also studied and tested. Experiments show that LAT improves performance, and that adaptation of the top layer further improves accuracy. By combining state-level minimum Bayes risk (sMBR) sequence training with LAT, we show that a language adaptively trained IPA-based universal DNN outperforms a monolingually sequence-trained model.
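The shared-hidden-layer architecture with language-specific output heads, plus LHUC-style re-scaling of hidden units, can be sketched as follows (a hedged illustration with hypothetical shapes and parameter names, not the paper's implementation):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def forward(x, shared, heads, lhuc_r, lang):
    """Multilingual hybrid DNN sketch: ReLU layers shared across all
    languages, LHUC amplitudes 2*sigmoid(r) in (0, 2) re-scaling each
    hidden unit per language, and a language-specific softmax output
    head over that language's senones. All names are illustrative.

    shared: list of (w, b) weight/bias pairs shared by all languages
    heads:  {lang: (w, b)} language-specific output layers
    lhuc_r: {lang: [r vector per shared layer]} adaptation parameters
    """
    h = x
    for (w, b), r in zip(shared, lhuc_r[lang]):
        h = np.maximum(h @ w + b, 0.0) * 2.0 * sigmoid(r)  # shared layer + LHUC
    w, b = heads[lang]
    z = h @ w + b
    e = np.exp(z - z.max(axis=-1, keepdims=True))          # stable softmax
    return e / e.sum(axis=-1, keepdims=True)               # senone posteriors
```

In LAT, only the small per-language `lhuc_r` vectors (and possibly the head) are updated for a new language, while the shared layers stay fixed; with `r = 0` the LHUC scale is exactly 1, recovering the unadapted network.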