A Batch Noise Contrastive Estimation Approach for Training Large Vocabulary Language Models
Training large-vocabulary Neural Network Language Models (NNLMs) is difficult
because the output layer must be explicitly normalized, which typically
involves evaluating the full softmax
function over the complete vocabulary. This paper proposes a Batch Noise
Contrastive Estimation (B-NCE) approach to alleviate this problem. This is
achieved by reducing the vocabulary, at each time step, to the target words in
the batch and then replacing the softmax by the noise contrastive estimation
approach, where these words play the role of targets and noise samples at the
same time. In doing so, the proposed approach can be fully formulated and
implemented using optimal dense matrix operations. Applying B-NCE to train
different NNLMs on the Large Text Compression Benchmark (LTCB) and the One
Billion Word Benchmark (OBWB) shows a significant reduction of the training
time with no noticeable degradation in the models' performance. This paper also
presents a new baseline comparative study of different standard NNLMs on the
large OBWB on a single Titan-X GPU.
Comment: Accepted for publication at INTERSPEECH'1
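To make the mechanism concrete, here is a minimal sketch of how such a batch-restricted NCE loss could look, assuming a PyTorch-style model; the function names, the unigram noise distribution, and the choice of k are illustrative assumptions, not the paper's exact formulation.

```python
import math
import torch
import torch.nn.functional as F

def batch_nce_loss(hidden, output_emb, output_bias, targets, log_noise_prob):
    """hidden: (B, H) hidden states; targets: (B,) target word ids;
    output_emb: (V, H) output embeddings; log_noise_prob: (V,) log P_n(w)."""
    # Reduce the vocabulary to the unique target words in the batch; they act
    # as positive targets and as shared noise samples at the same time.
    uniq, inverse = torch.unique(targets, return_inverse=True)  # (U,), (B,)
    k = uniq.numel() - 1  # every other batch target serves as a noise sample
    # One dense matmul over the reduced vocabulary instead of the full softmax.
    logits = hidden @ output_emb[uniq].t() + output_bias[uniq]  # (B, U)
    # NCE scores each (context, word) pair as data-vs-noise classification.
    shifted = logits - (math.log(max(k, 1)) + log_noise_prob[uniq])
    pos_mask = F.one_hot(inverse, num_classes=uniq.numel()).bool()
    pos_loss = -F.logsigmoid(shifted[pos_mask]).sum()
    neg_loss = -F.logsigmoid(-shifted[~pos_mask]).sum()
    return (pos_loss + neg_loss) / hidden.size(0)
```

Because every score comes from a single dense matmul over the reduced batch vocabulary, the whole loss runs as optimized dense matrix operations, which is the efficiency argument the abstract makes.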
Language Modeling for Multi-Domain Speech-Driven Text Retrieval
We report experimental results associated with speech-driven text retrieval,
which facilitates retrieving information in multiple domains with spoken
queries. Since the contents users speak are related to a target collection, we
build the language models used for speech recognition from the target
collection itself, so as to improve both recognition and retrieval accuracy.
Experiments using existing test collections combined with dictated queries
showed the effectiveness of our method.
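As a toy illustration of building the recognition language model from the target collection (my own minimal sketch, not the authors' system), one could estimate an add-one-smoothed bigram model over the collection so the recognizer is biased toward in-domain vocabulary and phrasing:

```python
import math
from collections import Counter

def train_bigram_lm(collection_sentences):
    """Estimate an add-one-smoothed bigram LM from the target collection."""
    unigrams, bigrams = Counter(), Counter()
    for sent in collection_sentences:
        tokens = ["<s>"] + sent.split() + ["</s>"]
        unigrams.update(tokens[:-1])            # history counts
        bigrams.update(zip(tokens, tokens[1:]))
    vocab_size = len(unigrams) + 1              # +1 for unseen words

    def log_prob(prev, word):
        # Laplace smoothing keeps unseen in-domain bigrams from zeroing out.
        return math.log((bigrams[(prev, word)] + 1) /
                        (unigrams[prev] + vocab_size))
    return log_prob
```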
Advances in All-Neural Speech Recognition
This paper advances the design of CTC-based all-neural (or end-to-end) speech
recognizers. We propose a novel symbol inventory, and a novel iterated-CTC
method in which a second system is used to transform a noisy initial output
into a cleaner version. We present a number of stabilization and initialization
methods we have found useful in training these networks. We evaluate our system
on the commonly used NIST 2000 conversational telephony test set, and
significantly exceed the previously published performance of similar systems,
both with and without the use of an external language model and decoding
technology.
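A rough sketch of the iterated-CTC idea as I read it from the abstract: a first CTC network emits a noisy hypothesis, which a second CTC-trained network maps to a cleaner one. The greedy decoder, the network interfaces, and the re-embedding step below are my assumptions, not the paper's exact design.

```python
import torch

def ctc_greedy_decode(log_probs, blank=0):
    """log_probs: (T, V). Collapse repeated symbols, then drop blanks."""
    best = log_probs.argmax(dim=-1)  # (T,)
    return [int(s) for i, s in enumerate(best)
            if s != blank and (i == 0 or s != best[i - 1])]

def iterated_ctc(features, first_net, second_net, symbol_emb, blank=0):
    """first_net: features -> (T, V) logits; second_net: cleans a hypothesis.
    symbol_emb: an embedding layer for the intermediate symbol sequence."""
    # Pass 1: noisy initial transcript from the first CTC system.
    noisy = ctc_greedy_decode(first_net(features).log_softmax(-1), blank)
    # Pass 2: re-embed the noisy symbols and let a second CTC-trained network
    # transform them into a cleaner version.
    noisy_emb = symbol_emb(torch.tensor(noisy).unsqueeze(0))  # (1, L, H)
    cleaned_logits = second_net(noisy_emb).squeeze(0)         # (L', V)
    return ctc_greedy_decode(cleaned_logits.log_softmax(-1), blank)
```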
Improving End-to-End Speech Recognition with Policy Learning
Connectionist temporal classification (CTC) is widely used for maximum
likelihood learning in end-to-end speech recognition models. However, there is
usually a disparity between the negative log-likelihood objective and the
performance metric used in speech recognition, e.g., word error rate (WER).
This results in
a mismatch between the objective function and metric during training. We show
that the above problem can be mitigated by jointly training with maximum
likelihood and policy gradient. In particular, with policy learning we are able
to directly optimize on the (otherwise non-differentiable) performance metric.
We show that joint training improves relative performance by 4% to 13% for our
end-to-end model as compared to the same model learned through maximum
likelihood. The model achieves 5.53% WER on the Wall Street Journal dataset,
and 5.42% and 14.70% WER on the LibriSpeech test-clean and test-other sets,
respectively.
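A minimal sketch of such a joint objective, assuming PyTorch's built-in CTC loss and REINFORCE with negative WER as the reward; lambda_pg, the baseline, and the sampling/WER machinery are my assumptions rather than the paper's exact setup.

```python
import torch
import torch.nn.functional as F

def joint_loss(log_probs, targets, input_lens, target_lens,
               sample_log_prob, sampled_wer, baseline_wer, lambda_pg=0.1):
    """log_probs: (T, B, V) log-softmax outputs; sample_log_prob: (B,) total
    log-probability of hypotheses sampled from the model; sampled_wer and
    baseline_wer: (B,) WERs of the samples and of a baseline (e.g. greedy)."""
    # Maximum-likelihood term: standard CTC loss.
    ml = F.ctc_loss(log_probs, targets, input_lens, target_lens)
    # Policy-gradient term (REINFORCE): raise the log-probability of samples
    # whose WER beats the baseline, lower it otherwise. Detaching the reward
    # keeps gradients flowing only through the sample log-probabilities.
    reward = (baseline_wer - sampled_wer).detach()  # (B,) higher is better
    pg = -(reward * sample_log_prob).mean()
    return ml + lambda_pg * pg
```

The policy-gradient term is what lets training touch the otherwise non-differentiable WER: the metric enters only through the scalar reward, so no gradient of WER itself is ever needed.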