No Need for a Lexicon? Evaluating the Value of the Pronunciation Lexica in End-to-End Models
For decades, context-dependent phonemes have been the dominant sub-word unit
for conventional acoustic modeling systems. This status quo has begun to be
challenged recently by end-to-end models which seek to combine acoustic,
pronunciation, and language model components into a single neural network. Such
systems, which typically predict graphemes or words, simplify the recognition
process since they remove the need for a separate expert-curated pronunciation
lexicon to map from phoneme-based units to words. However, there has been
little previous work comparing phoneme-based and grapheme-based sub-word
units in the end-to-end modeling framework, so it remains unclear whether the
gains from such approaches stem primarily from the new probabilistic model or
from the joint learning of the various components with grapheme-based units.
In this work, we conduct detailed experiments which are aimed at quantifying
the value of phoneme-based pronunciation lexica in the context of end-to-end
models. We examine phoneme-based end-to-end models, which are contrasted
against grapheme-based ones on a large vocabulary English Voice-search task,
where we find that graphemes do indeed outperform phonemes. We also compare
grapheme- and phoneme-based approaches on a multi-dialect English task, which
once again confirms the superiority of graphemes and greatly simplifies the
system for recognizing multiple dialects.
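
To make the contrast concrete, here is a minimal sketch (the lexicon entries
are hypothetical stand-ins, not the paper's data): grapheme targets come
directly from a word's spelling, while phoneme targets depend on an
expert-curated pronunciation lexicon and break for words it does not cover.

    # Hypothetical pronunciation lexicon; real systems need curated entries.
    LEXICON = {"voice": ["V", "OY1", "S"], "search": ["S", "ER1", "CH"]}

    def grapheme_targets(word):
        # Graphemes fall out of the spelling itself: no lexicon required.
        return list(word)            # "voice" -> ['v','o','i','c','e']

    def phoneme_targets(word):
        # Phonemes require a curated lookup.
        return LEXICON[word]         # raises KeyError for unseen words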
TranUSR: Phoneme-to-word Transcoder Based Unified Speech Representation Learning for Cross-lingual Speech Recognition
UniSpeech has achieved superior performance in cross-lingual automatic speech
recognition (ASR) by explicitly aligning latent representations to phoneme
units using multi-task self-supervised learning. While the learned
representations transfer well from high-resource to low-resource languages,
predicting words directly from these phonetic representations in downstream ASR
is challenging. In this paper, we propose TranUSR, a two-stage model comprising
a pre-trained UniData2vec and a phoneme-to-word Transcoder. Different from
UniSpeech, UniData2vec replaces the quantized discrete representations with
continuous and contextual representations from a teacher model for
phonetically-aware pre-training. Then, Transcoder learns to translate phonemes
to words with the aid of extra texts, enabling direct word generation.
Experiments on Common Voice show that UniData2vec reduces PER by 5.3% compared
to UniSpeech, while Transcoder yields a 14.4% WER reduction compared to
grapheme fine-tuning.
Comment: 5 pages, 3 figures. Accepted by INTERSPEECH 202
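
As a rough sketch of what a phoneme-to-word transcoder could look like (a
generic sequence-to-sequence Transformer in PyTorch; the architecture and
sizes here are illustrative assumptions, not TranUSR's actual design):

    import torch
    import torch.nn as nn

    class PhonemeToWordTranscoder(nn.Module):
        """Maps phoneme ID sequences to word ID sequences (toy sizes)."""
        def __init__(self, n_phonemes=100, n_words=10000, d_model=256):
            super().__init__()
            self.src_emb = nn.Embedding(n_phonemes, d_model)
            self.tgt_emb = nn.Embedding(n_words, d_model)
            self.transformer = nn.Transformer(
                d_model=d_model, nhead=4, num_encoder_layers=2,
                num_decoder_layers=2, batch_first=True)
            self.out = nn.Linear(d_model, n_words)

        def forward(self, phonemes, words_in):
            # Causal mask: each word position attends only to earlier words.
            mask = nn.Transformer.generate_square_subsequent_mask(
                words_in.size(1))
            h = self.transformer(self.src_emb(phonemes),
                                 self.tgt_emb(words_in), tgt_mask=mask)
            return self.out(h)       # (batch, tgt_len, n_words) logits

    model = PhonemeToWordTranscoder()
    logits = model(torch.randint(0, 100, (2, 50)),     # phoneme inputs
                   torch.randint(0, 10000, (2, 12)))   # shifted word targets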
Multilingual Training and Cross-lingual Adaptation on CTC-based Acoustic Model
Multilingual models for Automatic Speech Recognition (ASR) are attractive as
they have been shown to benefit from more training data, and better lend
themselves to adaptation to under-resourced languages. However, initialisation
from monolingual context-dependent models leads to an explosion of
context-dependent states. Connectionist Temporal Classification (CTC) is a
potential solution to this as it performs well with monophone labels.
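
For reference, training against monophone targets with CTC is a one-liner
with PyTorch's stock loss; the tensor shapes and phone-inventory size below
are illustrative, not taken from the paper:

    import torch
    import torch.nn as nn

    n_phones = 120                   # illustrative phone set incl. blank
    ctc = nn.CTCLoss(blank=0, zero_infinity=True)

    log_probs = torch.randn(200, 8, n_phones).log_softmax(-1)  # (T, batch, phones)
    targets = torch.randint(1, n_phones, (8, 30))     # monophone label IDs
    input_lengths = torch.full((8,), 200, dtype=torch.long)   # frames/utt
    target_lengths = torch.full((8,), 30, dtype=torch.long)   # labels/utt

    loss = ctc(log_probs, targets, input_lengths, target_lengths)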
We investigate multilingual CTC in the context of adaptation and
regularisation techniques that have been shown to be beneficial in more
conventional contexts. The multilingual model is trained to model a universal
International Phonetic Alphabet (IPA)-based phone set using the CTC loss
function. Learning Hidden Unit Contribution (LHUC) is investigated to perform
language adaptive training. In addition, dropout during cross-lingual
adaptation is also studied and tested in order to mitigate the overfitting
problem.
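
A minimal sketch of LHUC-style language-adaptive training (the module shape
and initialisation follow the standard LHUC formulation and are my
assumptions, not the paper's code): each language learns a per-unit amplitude
that rescales a hidden layer, while the shared weights stay frozen during
adaptation.

    import torch
    import torch.nn as nn

    class LHUC(nn.Module):
        def __init__(self, n_langs, hidden_dim):
            super().__init__()
            # One scaling vector per language; zeros give 2*sigmoid(0) = 1,
            # so the unadapted network is recovered at initialisation.
            self.theta = nn.Parameter(torch.zeros(n_langs, hidden_dim))

        def forward(self, h, lang_id):
            # Amplitudes lie in (0, 2); only theta[lang_id] is trained
            # when adapting to that language.
            return h * 2.0 * torch.sigmoid(self.theta[lang_id])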
Experiments show that the performance of the universal phoneme-based CTC
system can be improved by applying LHUC, and that it is extensible to new phonemes
during cross-lingual adaptation. Updating all the parameters shows consistent
improvement on limited data. Applying dropout during adaptation can further
improve the system and achieve competitive performance with Deep Neural Network
/ Hidden Markov Model (DNN/HMM) systems on limited data.
Deciphering Speech: a Zero-Resource Approach to Cross-Lingual Transfer in ASR
We present a method for cross-lingual training of an ASR system using absolutely
no transcribed training data from the target language, and with no phonetic
knowledge of the language in question. Our approach uses a novel application of
a decipherment algorithm, which operates given only unpaired speech and text
data from the target language. We apply this decipherment to phone sequences
generated by a universal phone recogniser trained on out-of-language speech
corpora, which we follow with flat-start semi-supervised training to obtain an
acoustic model for the new language. To the best of our knowledge, this is the
first practical approach to zero-resource cross-lingual ASR which does not rely
on any hand-crafted phonetic information. We carry out experiments on read
speech from the GlobalPhone corpus, and show that it is possible to learn a
decipherment model on just 20 minutes of data from the target language. When
used to generate pseudo-labels for semi-supervised training, we obtain WERs
that range from 32.5% to just 1.9% absolute worse than the equivalent fully
supervised models trained on the same data.
Comment: Submitted to Interspeech 202
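
The overall recipe can be summarised as a short pipeline; every function name
below is a hypothetical placeholder for the corresponding component of the
paper, not a real API:

    def zero_resource_bootstrap(audio, unpaired_text,
                                universal_phone_recognizer,
                                train_decipherment, flat_start_train):
        """Hypothetical sketch: bootstrap an ASR system with no
        target-language transcriptions or hand-crafted phonetics."""
        # 1. Decode target-language audio with an out-of-language recogniser.
        phone_seqs = [universal_phone_recognizer(a) for a in audio]
        # 2. Learn a decipherment from phone sequences to text, using only
        #    unpaired speech and text from the target language.
        decipher = train_decipherment(phone_seqs, unpaired_text)
        # 3. Use deciphered transcripts as pseudo-labels for flat-start
        #    semi-supervised training of a new acoustic model.
        pseudo_labels = [decipher(p) for p in phone_seqs]
        return flat_start_train(audio, pseudo_labels)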
Multilingual Adaptation of RNN Based ASR Systems
In this work, we focus on multilingual systems based on recurrent neural
networks (RNNs), trained using the Connectionist Temporal Classification (CTC)
loss function. Using a multilingual set of acoustic units poses difficulties.
To address this issue, we previously proposed Language Feature Vectors (LFVs) to
train language-adaptive multilingual systems. Language adaptation, in contrast to
speaker adaptation, needs to be applied not only on the feature level, but also
to deeper layers of the network. In this work, we therefore extended our
previous approach by introducing a novel technique which we call "modulation".
Based on this method, we modulated the hidden layers of RNNs using LFVs. We
evaluated this approach in both full and low resource conditions, as well as
for grapheme- and phone-based systems. The use of modulation lowered error
rates across all of these conditions.
Comment: 5 pages, 1 figure, to appear in 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP 2018)
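
What such modulation might look like in code (the sigmoid-gating form is my
assumption, in the spirit of feature-wise conditioning; the paper's exact
formulation may differ): an LFV is projected to per-unit gates that rescale
the RNN's hidden states at every time step.

    import torch
    import torch.nn as nn

    class LFVModulatedRNN(nn.Module):
        def __init__(self, feat_dim=40, hidden_dim=320, lfv_dim=32):
            super().__init__()
            self.rnn = nn.LSTM(feat_dim, hidden_dim, batch_first=True)
            self.gate = nn.Linear(lfv_dim, hidden_dim)

        def forward(self, feats, lfv):
            h, _ = self.rnn(feats)             # (batch, time, hidden)
            g = torch.sigmoid(self.gate(lfv))  # (batch, hidden) gates
            return h * g.unsqueeze(1)          # modulate every time step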