Light Gated Recurrent Units for Speech Recognition
A field that has directly benefited from the recent advances in deep learning
is Automatic Speech Recognition (ASR). Despite the great achievements of the
past decades, however, a natural and robust human-machine speech interaction
still appears to be out of reach, especially in challenging environments
characterized by significant noise and reverberation. To improve robustness,
modern speech recognizers often employ acoustic models based on Recurrent
Neural Networks (RNNs), which are naturally able to exploit large time contexts
and long-term speech modulations. It is thus of great interest to continue the
study of proper techniques for improving the effectiveness of RNNs in
processing speech signals.
In this paper, we revise one of the most popular RNN models, namely Gated
Recurrent Units (GRUs), and propose a simplified architecture that turned out
to be very effective for ASR. The contribution of this work is two-fold: First,
we analyze the role played by the reset gate, showing that a significant
redundancy with the update gate occurs. As a result, we propose to remove the
former from the GRU design, leading to a more efficient and compact single-gate
model. Second, we propose to replace hyperbolic tangent with ReLU activations.
This variation couples well with batch normalization and could help the model
learn long-term dependencies without numerical issues.
Results show that the proposed architecture, called Light GRU (Li-GRU), not
only reduces the per-epoch training time by more than 30% over a standard GRU,
but also consistently improves the recognition accuracy across different tasks,
input features, noisy conditions, as well as across different ASR paradigms,
ranging from standard DNN-HMM speech recognizers to end-to-end CTC models.
Comment: Copyright 2018 IEEE
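As an illustrative sketch of the two changes described above (no reset gate; ReLU candidate activation with batch normalization on the feed-forward terms), a single Li-GRU step might look like the following. This is a minimal NumPy rendering, with batch normalization reduced to a per-feature minibatch normalization for clarity; names and shapes are assumptions, not the authors' code.

```python
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def batch_norm(a, gamma=1.0, beta=0.0, eps=1e-5):
    # Simplified stand-in for trained batch normalization:
    # normalize each feature over the minibatch (axis 0).
    return gamma * (a - a.mean(axis=0)) / np.sqrt(a.var(axis=0) + eps) + beta

def ligru_step(x_t, h_prev, Wz, Uz, Wh, Uh):
    """One Li-GRU step for a minibatch: x_t is (B, D), h_prev is (B, H)."""
    # Single update gate z; the reset gate is removed entirely.
    z = sigmoid(batch_norm(x_t @ Wz) + h_prev @ Uz)
    # Candidate state: ReLU replaces tanh; batch norm on the
    # feed-forward (input-to-hidden) term keeps activations stable.
    h_cand = np.maximum(0.0, batch_norm(x_t @ Wh) + h_prev @ Uh)
    # Interpolate between the previous state and the candidate.
    return z * h_prev + (1.0 - z) * h_cand
```

With only one gate and no tanh, each step needs fewer matrix products and elementwise nonlinearities, which is where the reported per-epoch speed-up comes from.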
Automatic Quality Estimation for ASR System Combination
Recognizer Output Voting Error Reduction (ROVER) has been widely used for
system combination in automatic speech recognition (ASR). In order to select
the most appropriate words to insert at each position in the output
transcriptions, some ROVER extensions rely on critical information such as
confidence scores and other ASR decoder features. This information, which is
not always available, highly depends on the decoding process and sometimes
tends to overestimate the real quality of the recognized words. In this paper,
we propose a novel variant of ROVER that takes advantage of ASR quality
estimation (QE) for ranking the transcriptions at "segment level" instead of:
i) relying on confidence scores, or ii) feeding ROVER with randomly ordered
hypotheses. We first introduce an effective set of features to compensate for
the absence of ASR decoder information. Then, we apply QE techniques to perform
accurate hypothesis ranking at segment-level before starting the fusion
process. The evaluation is carried out on two different tasks, in which we
respectively combine hypotheses coming from independent ASR systems and
multi-microphone recordings. In both tasks, it is assumed that the ASR decoder
information is not available. The proposed approach significantly outperforms
standard ROVER and it is competitive with two strong oracles that exploit
prior knowledge about the real quality of the hypotheses to be combined.
Compared to standard ROVER, the absolute WER improvements in the two
evaluation scenarios range from 0.5% to 7.3%.
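A minimal sketch of the rank-then-vote idea follows. The function names, the QE scores, and the equal-length alignment are illustrative assumptions; real ROVER builds a word transition network incrementally, so the order in which hypotheses are fed in matters, which is exactly why the QE-based ranking helps.

```python
def rank_then_vote(hyps, qe_scores):
    """hyps: aligned, equal-length word lists (None marks a deletion);
    qe_scores: one estimated-quality score per hypothesis (higher = better)."""
    # 1) Rank hypotheses by estimated quality (the QE step).
    order = sorted(range(len(hyps)), key=lambda i: -qe_scores[i])
    ranked = [hyps[i] for i in order]
    # 2) Position-by-position majority vote; ties go to the word
    #    first proposed by a higher-ranked hypothesis.
    output = []
    for pos in range(len(ranked[0])):
        counts, first_rank = {}, {}
        for rank, hyp in enumerate(ranked):
            word = hyp[pos]
            if word is None:
                continue
            counts[word] = counts.get(word, 0) + 1
            first_rank.setdefault(word, rank)
        if counts:
            output.append(max(counts, key=lambda w: (counts[w], -first_rank[w])))
    return output
```

For example, with three aligned hypotheses and scores favoring the first, the majority word wins at each position, and count ties are resolved in favor of the hypothesis ranked higher by QE.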
Segmentation ART: A Neural Network for Word Recognition from Continuous Speech
The Segmentation ART (Adaptive Resonance Theory) network for word recognition from a continuous speech stream is introduced. An input sequence represents phonemes detected at a preprocessing stage. Segmentation ART is trained rapidly, and uses fast-learning fuzzy ART modules, top-down expectation, and a spatial representation of temporal order. The network performs on-line identification of word boundaries, correcting an initial hypothesis if subsequent phonemes are incompatible with a previous partition. Simulations show that the system's segmentation performance is comparable to that of TRACE, and the ability to segment a number of difficult phrases is also demonstrated.
National Science Foundation (NSF-IRI-94-01659); Office of Naval Research (N00014-95-1-0409, N00014-95-1-0G57)
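The fuzzy ART module mentioned above is a standard building block, and its fast-learning dynamics can be sketched briefly. The sketch below shows the usual choice function, vigilance test, and fast-learning update for one input presentation; the parameter values are illustrative, and the full Segmentation ART architecture (top-down expectation, temporal-order coding) is not reproduced here.

```python
import numpy as np

def fuzzy_art_step(I, weights, rho=0.75, alpha=0.001):
    """One fast-learning fuzzy ART presentation.
    I: complement-coded input in [0,1]^(2M); weights: list of category vectors."""
    # Choice function T_j = |I ^ w_j| / (alpha + |w_j|) for each category.
    scores = [np.minimum(I, w).sum() / (alpha + w.sum()) for w in weights]
    for j in np.argsort(scores)[::-1]:              # try best-matching first
        w = weights[j]
        # Vigilance test: resonance requires |I ^ w_j| / |I| >= rho.
        if np.minimum(I, w).sum() / I.sum() >= rho:
            weights[j] = np.minimum(I, w)           # fast learning
            return j
    weights.append(I.copy())                        # no resonance: new category
    return len(weights) - 1
```

The search loop over ranked categories plays the role of mismatch reset: when the best-matching category fails the vigilance test, the next one is tried, and an uncommitted category is recruited only if none resonates.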
A convolutional neural-network model of human cochlear mechanics and filter tuning for real-time applications
Auditory models are commonly used as feature extractors for automatic
speech-recognition systems or as front-ends for robotics, machine-hearing and
hearing-aid applications. Although auditory models can capture the biophysical
and nonlinear properties of human hearing in great detail, these biophysical
models are computationally expensive and cannot be used in real-time
applications. We present a hybrid approach where convolutional neural networks
are combined with computational neuroscience to yield a real-time end-to-end
model for human cochlear mechanics, including level-dependent filter tuning
(CoNNear). The CoNNear model was trained on acoustic speech material and its
performance and applicability were evaluated using (unseen) sound stimuli
commonly employed in cochlear mechanics research. The CoNNear model accurately
simulates human cochlear frequency selectivity and its dependence on sound
intensity, an essential quality for robust speech intelligibility at negative
speech-to-background-noise ratios. The CoNNear architecture is based on
parallel and differentiable computations and has the power to achieve real-time
human performance. These unique CoNNear features will enable the next
generation of human-like machine-hearing applications.
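The encoder-decoder shape of such a model can be sketched at a purely structural level: strided convolutions compress the waveform in time, and an upsampling stage maps back to one output waveform per cochlear channel. Everything below is an assumption for illustration (layer sizes, the channel count `N_CF`, the nearest-neighbour upsampling standing in for transposed convolutions); it is not the published CoNNear architecture.

```python
import numpy as np

def conv1d(x, w, stride=1):
    """Naive valid 1-D convolution: x (T, Cin), w (K, Cin, Cout) -> (T', Cout)."""
    K, _, Cout = w.shape
    T_out = (x.shape[0] - K) // stride + 1
    y = np.empty((T_out, Cout))
    for t in range(T_out):
        y[t] = np.tensordot(x[t * stride : t * stride + K], w, axes=([0, 1], [0, 1]))
    return y

rng = np.random.default_rng(0)
T, K, C, N_CF = 2048, 16, 8, 201          # N_CF: assumed number of cochlear channels
audio = rng.standard_normal((T, 1))       # single-channel input waveform

# Encoder: strided convolutions compress the waveform in time.
h = np.tanh(conv1d(audio, rng.standard_normal((K, 1, C)) * 0.1, stride=2))
h = np.tanh(conv1d(h, rng.standard_normal((K, C, C)) * 0.1, stride=2))

# Decoder: upsample back toward the audio rate, then map to one
# basilar-membrane-like output per cochlear channel.
h = np.repeat(h, 4, axis=0)               # crude stand-in for transposed convolutions
bm = np.tanh(conv1d(h, rng.standard_normal((K, C, N_CF)) * 0.1))
```

Because every operation here is a convolution or an elementwise nonlinearity, the whole mapping is differentiable and parallelizes across time, which is the property the abstract credits for real-time capability.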