Multi-Dialect Speech Recognition With A Single Sequence-To-Sequence Model
Sequence-to-sequence models provide a simple and elegant solution for
building speech recognition systems by folding separate components of a typical
system, namely acoustic (AM), pronunciation (PM), and language (LM) models, into
a single neural network. In this work, we look at one such sequence-to-sequence
model, namely listen, attend and spell (LAS), and explore the possibility of
training a single model to serve different English dialects, which simplifies
the process of training multi-dialect systems without the need for separate AM,
PM and LMs for each dialect. We show that simply pooling the data from all
dialects into one LAS model falls behind the performance of a model fine-tuned
on each dialect. We then look at incorporating dialect-specific information
into the model, both by modifying the training targets (inserting the dialect
symbol at the end of the original grapheme sequence) and by feeding a 1-hot
representation of the dialect information into all layers of the model.
Experimental results on seven English dialects show that our proposed system is
effective in modeling dialect variations within a single LAS model,
outperforming a LAS model trained individually on each of the seven dialects by
3.1% to 16.5% relative.
Comment: submitted to ICASSP 2018
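To make the two conditioning schemes concrete, here is a minimal sketch in plain Python/NumPy. The dialect labels, helper names, and tensor shapes are illustrative assumptions, not the paper's actual LAS implementation.

```python
import numpy as np

# Placeholder dialect labels; the paper's seven dialects are not
# enumerated here, and these helper names are illustrative only.
DIALECTS = [f"dialect_{i}" for i in range(7)]
DIALECT_TO_ID = {d: i for i, d in enumerate(DIALECTS)}

def append_dialect_symbol(grapheme_ids, dialect, grapheme_vocab_size):
    """Scheme 1: extend the target sequence with a dialect token.

    Dialect symbols are assigned ids just past the grapheme vocabulary,
    so a target like [h, e, l, l, o] becomes [h, e, l, l, o, <dialect>].
    """
    dialect_token = grapheme_vocab_size + DIALECT_TO_ID[dialect]
    return list(grapheme_ids) + [dialect_token]

def concat_one_hot_dialect(layer_input, dialect):
    """Scheme 2: feed a 1-hot dialect vector into a layer.

    layer_input: (time, features) activations. The 1-hot vector is tiled
    across time and concatenated on the feature axis, so every layer that
    receives it is conditioned on the dialect.
    """
    one_hot = np.zeros(len(DIALECTS), dtype=layer_input.dtype)
    one_hot[DIALECT_TO_ID[dialect]] = 1.0
    tiled = np.tile(one_hot, (layer_input.shape[0], 1))
    return np.concatenate([layer_input, tiled], axis=1)
```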
Optimizing expected word error rate via sampling for speech recognition
State-level minimum Bayes risk (sMBR) training has become the de facto
standard for sequence-level training of speech recognition acoustic models. It
has an elegant formulation using the expectation semiring, and gives large
improvements in word error rate (WER) over models trained solely using
cross-entropy (CE) or connectionist temporal classification (CTC). sMBR
training optimizes the expected number of frames at which the reference and
hypothesized acoustic states differ. It may be preferable to optimize the
expected WER, but WER does not interact well with the expectation semiring, and
previous approaches based on computing expected WER exactly involve expanding
the lattices used during training. In this paper we show how to perform
optimization of the expected WER by sampling paths from the lattices used
during conventional sMBR training. The gradient of the expected WER is itself
an expectation, and so may be approximated using Monte Carlo sampling. We show
experimentally that optimizing WER during acoustic model training gives a 5%
relative improvement in WER over a well-tuned sMBR baseline on a 2-channel
query recognition task (Google Home).
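As a concrete illustration of the estimator, here is a toy Python/NumPy sketch. It treats the lattice as an explicit list of paths with precomputed score gradients, which is an assumption made for readability; a real implementation samples paths directly from the sMBR lattices, as the abstract describes.

```python
import numpy as np

def edit_distance(ref, hyp):
    """Levenshtein word-error count between reference and hypothesis."""
    d = np.zeros((len(ref) + 1, len(hyp) + 1), dtype=int)
    d[:, 0] = np.arange(len(ref) + 1)
    d[0, :] = np.arange(len(hyp) + 1)
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            d[i, j] = min(d[i - 1, j] + 1,      # deletion
                          d[i, j - 1] + 1,      # insertion
                          d[i - 1, j - 1] + int(ref[i - 1] != hyp[j - 1]))
    return d[len(ref), len(hyp)]

def sampled_wer_gradient(paths, ref_words, num_samples=64, rng=None):
    """Monte Carlo estimate of d E[word errors] / d theta.

    `paths` is a toy stand-in for a lattice: a list of
    (word_seq, log_prob, grad_log_prob) triples, where grad_log_prob is
    d log p(path) / d theta (a vector). A real system would sample paths
    directly from the decoding lattice instead of enumerating them.
    """
    if rng is None:
        rng = np.random.default_rng(0)
    log_p = np.array([lp for _, lp, _ in paths])
    probs = np.exp(log_p - log_p.max())
    probs /= probs.sum()
    idx = rng.choice(len(paths), size=num_samples, p=probs)
    errors = np.array([edit_distance(ref_words, paths[i][0]) for i in idx],
                      dtype=float)
    # Sample-mean baseline: standard variance reduction (slightly biased).
    baseline = errors.mean()
    grads = np.stack([paths[i][2] for i in idx])
    # Score-function identity: grad E[W] = E[(W - b) * grad log p(path)].
    return ((errors - baseline)[:, None] * grads).mean(axis=0)
```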
Efficient Implementation of the Room Simulator for Training Deep Neural Network Acoustic Models
In this paper, we describe how to efficiently implement an acoustic room
simulator to generate large-scale simulated data for training deep neural
networks. Even though Google Room Simulator in [1] was shown to be quite
effective in reducing the Word Error Rates (WERs) for far-field applications by
generating simulated far-field training sets, it requires a very large number
of Fast Fourier Transforms (FFTs) of large size. The Room Simulator in [1]
consumed approximately 80 percent of the Central Processing Unit (CPU) cycles
in our CPU + Graphics Processing Unit (GPU) training architecture [2]. In this
work, we implement an efficient OverLap Addition (OLA) based filtering scheme using the
open-source FFTW3 library. Further, we investigate the effects of the Room
Impulse Response (RIR) lengths. Experimentally, we conclude that we can cut the
tail portions of RIRs where the power falls more than 20 dB below the peak
power without sacrificing speech recognition accuracy. However, we observe that
cutting the RIR tail more aggressively than this threshold harms recognition
accuracy on rerecorded test sets. Using these approaches, we reduced CPU
usage for the room simulator portion down to 9.69 percent in CPU/GPU training
architecture. Profiling results show that we obtain a 22.4 times speed-up on a
single machine and a 37.3 times speed-up on Google's distributed training
infrastructure.
Comment: Published at INTERSPEECH 2018
(https://www.isca-speech.org/archive/Interspeech_2018/abstracts/2566.html)
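The two techniques lend themselves to a short sketch. The version below uses Python/NumPy in place of the paper's FFTW3-based implementation; the function names, block size, and toy usage are assumptions for illustration.

```python
import numpy as np

def truncate_rir(rir, threshold_db=20.0):
    """Cut the RIR tail where power falls more than threshold_db below
    the peak power (the 20 dB operating point found in the abstract)."""
    power = rir ** 2
    cutoff = power.max() * 10.0 ** (-threshold_db / 10.0)
    last = np.nonzero(power >= cutoff)[0][-1]
    return rir[: last + 1]

def overlap_add_filter(signal, rir, block_size=4096):
    """Overlap-add FIR filtering with a fixed, small FFT size, so long
    utterances never require one huge transform (NumPy's FFT stands in
    for FFTW3 here)."""
    m = len(rir)
    fft_size = 1
    while fft_size < block_size + m - 1:
        fft_size *= 2  # round up to a power of two for the FFT
    H = np.fft.rfft(rir, fft_size)  # filter spectrum, computed once
    out = np.zeros(len(signal) + m - 1)
    for start in range(0, len(signal), block_size):
        block = signal[start:start + block_size]
        y = np.fft.irfft(np.fft.rfft(block, fft_size) * H, fft_size)
        out[start:start + len(block) + m - 1] += y[: len(block) + m - 1]
    return out

# Toy usage: simulate a far-field utterance with a truncated RIR.
rng = np.random.default_rng(0)
rir = truncate_rir(rng.standard_normal(8000) * np.exp(-np.arange(8000) / 800.0))
wet = overlap_add_filter(rng.standard_normal(16000), rir)
```

Because `fft_size` covers `block_size + m - 1` samples, each block's circular convolution equals the linear one, and summing the overlapping tails reproduces the full filtered signal with many small FFTs instead of one large FFT per utterance.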