End-to-end neural segmental models for speech recognition
Segmental models are an alternative to frame-based models for sequence
prediction, where hypothesized path weights are based on entire segment scores
rather than a single frame at a time. Neural segmental models are segmental
models that use neural network-based weight functions. Neural segmental models
have achieved competitive results for speech recognition, and their end-to-end
training has been explored in several studies. In this work, we review neural
segmental models, which can be viewed as consisting of a neural network-based
acoustic encoder and a finite-state transducer decoder. We study end-to-end
segmental models with different weight functions, including ones based on
frame-level neural classifiers and on segmental recurrent neural networks. We
study how reducing the search space size impacts performance under different
weight functions. We also compare several loss functions for end-to-end
training. Finally, we explore training approaches, including multi-stage vs.
end-to-end training and multitask training that combines segmental and
frame-level losses.
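To make the weight-function discussion concrete, the following is a minimal PyTorch sketch of one of the simpler variants: a segmental weight function built on a frame-level neural classifier, where a segment's weight pools the classifier's per-frame log-probabilities for the segment's label. The class, names, and pooling choice are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn as nn

class FrameClassifierSegmentScorer(nn.Module):
    """Illustrative sketch (not the paper's exact formulation): a
    segmental weight function built on a frame-level neural classifier,
    where a segment's weight pools per-frame label log-probabilities."""

    def __init__(self, feat_dim, hidden_dim, num_labels):
        super().__init__()
        self.encoder = nn.LSTM(feat_dim, hidden_dim, batch_first=True,
                               bidirectional=True)
        self.classifier = nn.Linear(2 * hidden_dim, num_labels)

    def forward(self, feats):
        # feats: (batch, time, feat_dim) acoustic features
        enc, _ = self.encoder(feats)
        # per-frame label log-probabilities: (batch, time, num_labels)
        return torch.log_softmax(self.classifier(enc), dim=-1)

    def segment_score(self, frame_logprobs, t1, t2, label):
        # weight of segment [t1, t2) carrying `label`: here the mean
        # frame log-probability over the segment's frames
        return frame_logprobs[:, t1:t2, label].mean(dim=1)
```

A segmental recurrent network variant would instead run an RNN over each segment's frames; the decoder then searches over segmentations using these segment weights.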
Equivalence of Segmental and Neural Transducer Modeling: A Proof of Concept
With the advent of direct models in automatic speech recognition (ASR), the
formerly prevalent frame-wise acoustic modeling based on hidden Markov models
(HMM) diversified into a number of modeling architectures like encoder-decoder
attention models, transducer models and segmental models (direct HMM). While
transducer models stay with a frame-level model definition, segmental models
are defined on the level of label segments directly. While
(soft-)attention-based models avoid explicit alignment, the transducer and
segmental approaches internally model alignment, either by segment hypotheses
or, more implicitly, by emitting so-called blank symbols. In this work, we
prove that the widely used class of RNN-Transducer models and segmental models
(direct HMM) are equivalent and therefore show equal modeling power. It is
shown that blank probabilities translate into segment length probabilities and
vice versa. In addition, we provide initial experiments investigating decoding
and beam pruning, comparing time-synchronous and label-/segment-synchronous
search strategies and their properties using the same underlying model.
Comment: accepted at Interspeech 2021
RASR2: The RWTH ASR Toolkit for Generic Sequence-to-sequence Speech Recognition
Modern public ASR tools usually provide rich support for training various
sequence-to-sequence (S2S) models, but only rather simple decoding support
limited to open-vocabulary scenarios. For closed-vocabulary scenarios, public tools
supporting lexical-constrained decoding are usually only for classical ASR, or
do not support all S2S models. To eliminate this restriction on research
possibilities such as modeling unit choice, we present RASR2 in this work, a
research-oriented generic S2S decoder implemented in C++. It offers strong
flexibility and compatibility across various S2S models, language models, label
units/topologies and neural network architectures. It provides efficient
decoding for both open- and closed-vocabulary scenarios based on a generalized
search framework with rich support for different search modes and settings. We
evaluate RASR2 with a wide range of experiments on both the Switchboard and
LibriSpeech corpora. Our source code is publicly available online.
Comment: accepted at Interspeech 2023
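For readers unfamiliar with closed-vocabulary decoding, the lexical constraint is typically realized with a prefix tree (lexicon tree) over label units, so that search hypotheses can only expand along paths that spell in-vocabulary words. A hedged Python sketch of that data structure (RASR2 itself is C++; all names here are illustrative):

```python
class PrefixTreeNode:
    """Node of a lexicon prefix tree over label units (hypothetical
    sketch in Python; RASR2 itself is implemented in C++)."""
    def __init__(self):
        self.children = {}  # label unit -> child node
        self.words = []     # words whose unit sequence ends here

def build_prefix_tree(lexicon):
    """lexicon: dict mapping each word to its sequence of label units."""
    root = PrefixTreeNode()
    for word, units in lexicon.items():
        node = root
        for u in units:
            node = node.children.setdefault(u, PrefixTreeNode())
        node.words.append(word)
    return root

# toy lexicon over character units; hypotheses may only expand along
# tree edges, which restricts search to in-vocabulary words
tree = build_prefix_tree({"cat": ["c", "a", "t"], "car": ["c", "a", "r"]})
```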
Multitask Learning with CTC and Segmental CRF for Speech Recognition
Segmental conditional random fields (SCRFs) and connectionist temporal
classification (CTC) are two sequence labeling methods used for end-to-end
training of speech recognition models. Both define a transcription probability
by marginalizing over decisions about latent segmentation alternatives: the
former uses a globally normalized joint
model of segment labels and durations, and the latter classifies each frame as
either an output symbol or a "continuation" of the previous label. In this
paper, we train a recognition model by optimizing an interpolation between the
SCRF and CTC losses, where the same recurrent neural network (RNN) encoder is
used for feature extraction for both outputs. We find that this multitask
objective improves recognition accuracy when decoding with either the SCRF or
CTC models. Additionally, we show that CTC can also be used to pretrain the RNN
encoder, which improves the convergence rate when learning the joint model.
Comment: 5 pages, 2 figures, camera-ready version at Interspeech 2017
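A minimal PyTorch sketch of this shared-encoder multitask setup, with the SCRF term left as a user-supplied callable since its forward-algorithm marginalization is beyond a short example (all names are illustrative assumptions, not the paper's code):

```python
import torch
import torch.nn as nn

class SharedEncoderMultitask(nn.Module):
    """Sketch of the multitask setup: one shared RNN encoder feeds both
    a CTC head and a segmental-CRF loss, and training minimizes an
    interpolation of the two. The SCRF term is a user-supplied callable,
    since its forward-algorithm marginalization is beyond this sketch."""

    def __init__(self, feat_dim, hidden_dim, num_labels,
                 scrf_loss_fn, alpha=0.5):
        super().__init__()
        self.encoder = nn.LSTM(feat_dim, hidden_dim, batch_first=True,
                               bidirectional=True)
        self.ctc_head = nn.Linear(2 * hidden_dim, num_labels + 1)  # +1 blank
        self.ctc_loss = nn.CTCLoss(blank=num_labels)
        self.scrf_loss_fn = scrf_loss_fn
        self.alpha = alpha  # interpolation weight between the two losses

    def forward(self, feats, feat_lens, targets, target_lens):
        enc, _ = self.encoder(feats)  # shared features: (B, T, 2H)
        log_probs = torch.log_softmax(self.ctc_head(enc), dim=-1)
        ctc = self.ctc_loss(log_probs.transpose(0, 1),  # CTC wants (T, B, C)
                            targets, feat_lens, target_lens)
        scrf = self.scrf_loss_fn(enc, targets, feat_lens, target_lens)
        return self.alpha * ctc + (1.0 - self.alpha) * scrf
```

Setting alpha to 1.0 recovers pure CTC training, which is how the CTC pretraining stage of the paper's recipe can be mimicked before switching to the interpolated objective.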