Multi-Dialect Speech Recognition With A Single Sequence-To-Sequence Model
Sequence-to-sequence models provide a simple and elegant solution for
building speech recognition systems by folding separate components of a typical
system, namely acoustic (AM), pronunciation (PM) and language (LM) models into
a single neural network. In this work, we look at one such sequence-to-sequence
model, namely listen, attend and spell (LAS), and explore the possibility of
training a single model to serve different English dialects, which simplifies
the process of training multi-dialect systems without the need for separate AM,
PM and LMs for each dialect. We show that simply pooling the data from all
dialects into one LAS model falls behind the performance of a model fine-tuned
on each dialect. We then incorporate dialect-specific information into the
model in two ways: by modifying the training targets, inserting the dialect
symbol at the end of the original grapheme sequence, and by feeding a 1-hot
representation of the dialect into all layers of the model.
Experimental results on seven English dialects show that our proposed system is
effective in modeling dialect variations within a single LAS model,
outperforming a LAS model trained individually on each of the seven dialects by
3.1% to 16.5% relative.
Comment: submitted to ICASSP 2018
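A minimal sketch of the two dialect-conditioning schemes the abstract describes; the dialect inventory, token format, and array shapes below are illustrative assumptions, not the paper's actual configuration:

```python
import numpy as np

# Hypothetical labels for seven English dialects (illustrative only).
DIALECTS = ["en-us", "en-gb", "en-in", "en-au", "en-ca", "en-ie", "en-nz"]

def append_dialect_token(graphemes, dialect):
    """Scheme 1: extend the target grapheme sequence with a dialect
    symbol, so the model learns to predict the dialect after the text."""
    return graphemes + [f"<{dialect}>"]

def condition_layer_input(hidden, dialect):
    """Scheme 2: concatenate a 1-hot dialect vector to a layer's input
    so every layer receives the dialect identity directly."""
    one_hot = np.zeros(len(DIALECTS), dtype=np.float32)
    one_hot[DIALECTS.index(dialect)] = 1.0
    # hidden: (time, dim) -> (time, dim + num_dialects)
    tiled = np.tile(one_hot, (hidden.shape[0], 1))
    return np.concatenate([hidden, tiled], axis=-1)

print(append_dialect_token(list("hello"), "en-gb"))
# ['h', 'e', 'l', 'l', 'o', '<en-gb>']
h = np.random.randn(5, 8).astype(np.float32)
print(condition_layer_input(h, "en-us").shape)  # (5, 15)
```

Concatenation is one common way to feed side information into a layer; the paper's exact injection mechanism may differ.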
Embedding-Based Speaker Adaptive Training of Deep Neural Networks
An embedding-based speaker adaptive training (SAT) approach is proposed and
investigated in this paper for deep neural network acoustic modeling. In this
approach, speaker embedding vectors, which are constant for a given speaker,
are mapped through a control network to layer-dependent element-wise
affine transformations to canonicalize the internal feature representations at
the output of hidden layers of a main network. The control network for
generating the speaker-dependent mappings is jointly estimated with the main
network for the overall speaker adaptive acoustic modeling. Experiments on
large vocabulary continuous speech recognition (LVCSR) tasks show that the
proposed SAT scheme can yield superior performance over the widely-used
speaker-aware training using i-vectors with speaker-adapted input features.
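A hedged sketch of the embedding-based SAT idea, assuming a PyTorch-style setup: a small control network maps a fixed speaker embedding to a per-layer element-wise scale and shift that canonicalizes the main network's hidden activations. The dimensions and the single-layer control network are illustrative assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

class SATLayer(nn.Module):
    """One hidden layer of the main network whose output is canonicalized
    by a speaker-dependent element-wise affine transform (scale, shift)."""
    def __init__(self, in_dim, out_dim, spk_dim):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)
        # Control-network branch: maps the fixed speaker embedding to
        # layer-dependent scale and shift parameters.
        self.ctrl = nn.Linear(spk_dim, 2 * out_dim)

    def forward(self, x, spk_emb):
        h = torch.relu(self.linear(x))
        scale, shift = self.ctrl(spk_emb).chunk(2, dim=-1)
        # Element-wise affine transform of the hidden activations;
        # scale is offset by 1 so a zero control output is the identity.
        return (1.0 + scale) * h + shift

# Usage: a 100-dim i-vector-style speaker embedding, a batch of frames.
layer = SATLayer(in_dim=40, out_dim=512, spk_dim=100)
feats = torch.randn(8, 40)   # (batch, feature_dim)
spk = torch.randn(8, 100)    # one embedding per utterance's speaker
out = layer(feats, spk)      # (8, 512)
```

Because the control branch's parameters sit in the same module as the main layer, both are optimized together, matching the joint estimation described above.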
Multilingual Training and Cross-lingual Adaptation on CTC-based Acoustic Model
Multilingual models for Automatic Speech Recognition (ASR) are attractive as
they have been shown to benefit from more training data, and better lend
themselves to adaptation to under-resourced languages. However, initialisation
from monolingual context-dependent models leads to an explosion of
context-dependent states. Connectionist Temporal Classification (CTC) is a
potential solution to this as it performs well with monophone labels.
We investigate multilingual CTC in the context of adaptation and
regularisation techniques that have been shown to be beneficial in more
conventional contexts. The multilingual model is trained to model a universal
International Phonetic Alphabet (IPA)-based phone set using the CTC loss
function. Learning Hidden Unit Contributions (LHUC) is investigated to perform
language adaptive training. In addition, dropout during cross-lingual
adaptation is studied in order to mitigate overfitting.
Experiments show that the performance of the universal phoneme-based CTC
system can be improved by applying LHUC, and that the system is extensible to
new phonemes during cross-lingual adaptation. Updating all the parameters
shows consistent improvement on limited data. Applying dropout during
adaptation can further improve the system, achieving performance competitive
with Deep Neural Network / Hidden Markov Model (DNN/HMM) systems on limited
data.
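A minimal sketch of LHUC-style adaptive rescaling, assuming one amplitude vector per language; the layer size, number of languages, and initialization are illustrative assumptions:

```python
import torch
import torch.nn as nn

class LHUC(nn.Module):
    """Learning Hidden Unit Contributions: a per-language amplitude for
    each hidden unit. During language adaptive training only these small
    parameter vectors need updating; the shared network can stay frozen."""
    def __init__(self, hidden_dim, num_langs):
        super().__init__()
        # One amplitude vector per language, initialized at zero so the
        # initial rescaling 2*sigmoid(0) = 1 leaves activations unchanged.
        self.r = nn.Parameter(torch.zeros(num_langs, hidden_dim))

    def forward(self, h, lang_id):
        # Element-wise rescaling in (0, 2) of the hidden activations.
        return 2.0 * torch.sigmoid(self.r[lang_id]) * h

# Usage: rescale a 512-unit hidden layer for language 3 of 10.
lhuc = LHUC(hidden_dim=512, num_langs=10)
h = torch.relu(torch.randn(16, 512))   # (batch, hidden_dim)
h_adapted = lhuc(h, lang_id=3)
```

Zero initialization makes the initial rescaling the identity, so adaptation starts from the unadapted multilingual model and moves away from it only as the adaptation data warrants.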