Very Deep Convolutional Neural Networks for Robust Speech Recognition
This paper describes the extension and optimization of our previous work on
very deep convolutional neural networks (CNNs) for effective recognition of
noisy speech in the Aurora 4 task. The appropriate number of convolutional
layers, the sizes of the filters, pooling operations and input feature maps are
all modified: the filter and pooling sizes are reduced and the dimensions of the
input feature maps are extended to allow more convolutional layers to be added.
Furthermore, appropriate input padding and input feature map selection
strategies are developed. In addition, an adaptation framework using joint
training of the very deep CNN with auxiliary i-vector and fMLLR features
is developed. These modifications give substantial word error rate reductions
over the standard CNN used as the baseline. Finally, the very deep CNN is combined
with an LSTM-RNN acoustic model, and it is shown that state-level weighted log
likelihood score combination in a joint acoustic model decoding scheme is very
effective. On the Aurora 4 task, the very deep CNN achieves a WER of 8.81%,
which is reduced to 7.99% with auxiliary-feature joint training and to 7.09%
with LSTM-RNN joint decoding.
Comment: accepted by SLT 201
Label-Synchronous Neural Transducer for Adaptable Online E2E Speech Recognition
Although end-to-end (E2E) automatic speech recognition (ASR) has shown
state-of-the-art recognition accuracy, it tends to be implicitly biased towards
the training data distribution, which can degrade generalisation. This paper
proposes a label-synchronous neural transducer (LS-Transducer), which provides
a natural approach to domain adaptation based on text-only data. The
LS-Transducer extracts a label-level encoder representation before combining it
with the prediction network output. Since blank tokens are no longer needed,
the prediction network performs as a standard language model, which can be
easily adapted using text-only data. An Auto-regressive Integrate-and-Fire
(AIF) mechanism is proposed to generate the label-level encoder representation
while retaining low-latency operation suitable for streaming. In
addition, a streaming joint decoding method is designed to improve ASR accuracy
while retaining synchronisation with AIF. Experiments show that compared to
standard neural transducers, the proposed LS-Transducer gave a 12.9% relative
WER reduction (WERR) for intra-domain LibriSpeech data, as well as 21.4% and
24.6% relative WERRs on cross-domain TED-LIUM 2 and AESRC2020 data with an
adapted prediction network.
Comment: This work has been submitted to the IEEE for possible publication.
Copyright may be transferred without notice, after which this version may no
longer be accessible.
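The abstract describes the AIF mechanism only at a high level. The sketch below shows the generic integrate-and-fire idea it builds on: weighted frame-level encoder outputs are accumulated until the running weight reaches a threshold, at which point one label-level vector is fired. How AIF produces the per-frame weights auto-regressively is omitted, and all names, shapes and the threshold value are assumptions, not the paper's implementation.

```python
import torch

def integrate_and_fire(enc_out, alphas, threshold=1.0):
    """Generic integrate-and-fire over frame-level encoder outputs.

    enc_out: (T, D) frame-level encoder representations.
    alphas:  (T,) non-negative per-frame weights; in AIF these would be
             produced auto-regressively, which is omitted here.
    Accumulates weighted frames until the running weight reaches `threshold`,
    then fires one label-level vector and resets the accumulator.
    """
    label_reps = []
    acc_weight = 0.0
    acc_state = enc_out.new_zeros(enc_out.size(1))
    for t in range(enc_out.size(0)):
        a = float(alphas[t])
        if acc_weight + a < threshold:
            acc_weight += a
            acc_state = acc_state + a * enc_out[t]
        else:
            fire_part = threshold - acc_weight      # weight spent to reach the boundary
            label_reps.append(acc_state + fire_part * enc_out[t])
            acc_weight = a - fire_part              # leftover weight starts the next label
            acc_state = acc_weight * enc_out[t]
    if label_reps:
        return torch.stack(label_reps)              # (num_labels, D)
    return enc_out.new_zeros(0, enc_out.size(1))
```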
Label-Synchronous Neural Transducer for End-to-End ASR
Neural transducers provide a natural approach to streaming ASR. However, they
augment output sequences with blank tokens which leads to challenges for domain
adaptation using text data. This paper proposes a label-synchronous neural
transducer (LS-Transducer), which extracts a label-level encoder representation
before combining it with the prediction network output. Hence blank tokens are
no longer needed and the prediction network can be easily adapted using text
data. An Auto-regressive Integrate-and-Fire (AIF) mechanism is proposed to
generate the label-level encoder representation while retaining the streaming
property. In addition, a streaming joint decoding method is designed to improve
ASR accuracy. Experiments show that compared to standard neural transducers,
the proposed LS-Transducer gave a 10% relative WER reduction (WERR) for
intra-domain LibriSpeech-100h data, as well as 17% and 19% relative WERRs on
cross-domain TED-LIUM 2 and AESRC2020 data with an adapted prediction network.
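Because the LS-Transducer removes blank tokens, its prediction network behaves as a standard language model, so text-only domain adaptation can be pictured as ordinary next-token cross-entropy fine-tuning. The sketch below is hypothetical: `pred_net`, `output_layer`, the optimiser and the hyper-parameters are assumed stand-ins, not components taken from the paper.

```python
import torch
import torch.nn as nn

def adapt_prediction_network(pred_net, output_layer, text_batches, epochs=1, lr=1e-4):
    """Hypothetical text-only adaptation of the prediction network as an LM.

    pred_net:     assumed module mapping (B, L) token ids to (B, L, H) states.
    output_layer: assumed linear projection from H to the vocabulary size V.
    text_batches: iterable of (B, L) long tensors of target-domain token ids.
    """
    params = list(pred_net.parameters()) + list(output_layer.parameters())
    optimiser = torch.optim.Adam(params, lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for tokens in text_batches:
            inputs, targets = tokens[:, :-1], tokens[:, 1:]
            logits = output_layer(pred_net(inputs))          # (B, L-1, V)
            loss = loss_fn(logits.reshape(-1, logits.size(-1)),
                           targets.reshape(-1))
            optimiser.zero_grad()
            loss.backward()
            optimiser.step()
```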
Integrating Emotion Recognition with Speech Recognition and Speaker Diarisation for Conversations
Although automatic emotion recognition (AER) has recently drawn significant
research interest, most current AER studies use manually segmented utterances,
which are usually unavailable for dialogue systems. This paper proposes
integrating AER with automatic speech recognition (ASR) and speaker diarisation
(SD) in a jointly-trained system. Distinct output layers are built on a shared
encoder for four sub-tasks: AER, ASR, voice activity detection and speaker
classification. Taking the audio of a conversation as
input, the integrated system finds all speech segments and transcribes the
corresponding emotion classes, word sequences, and speaker identities. Two
metrics are proposed to evaluate AER performance with automatic segmentation
based on time-weighted emotion and speaker classification errors. Results on
the IEMOCAP dataset show that the proposed system consistently outperforms two
baselines with separately trained single-task systems on AER, ASR and SD.
Comment: Interspeech 202
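A rough sketch of the shared-encoder, four-head layout described in this abstract is given below; the encoder type, layer sizes and single-linear-layer heads are assumptions for illustration, and details such as segment-level pooling and the joint training losses are not reproduced.

```python
import torch.nn as nn

class JointAERASRSDModel(nn.Module):
    """Hypothetical shared-encoder model with four task-specific output layers."""

    def __init__(self, feat_dim, enc_dim, vocab_size, num_emotions, num_speakers):
        super().__init__()
        self.encoder = nn.LSTM(feat_dim, enc_dim, num_layers=4, batch_first=True)
        self.asr_head = nn.Linear(enc_dim, vocab_size)    # word/sub-word outputs
        self.aer_head = nn.Linear(enc_dim, num_emotions)  # emotion classes
        self.vad_head = nn.Linear(enc_dim, 2)             # speech / non-speech
        self.spk_head = nn.Linear(enc_dim, num_speakers)  # speaker identities

    def forward(self, feats):
        enc, _ = self.encoder(feats)                      # (B, T, enc_dim)
        return {
            "asr": self.asr_head(enc),
            "aer": self.aer_head(enc),
            "vad": self.vad_head(enc),
            "spk": self.spk_head(enc),
        }
```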