Improving sequence-to-sequence speech recognition training with on-the-fly data augmentation
Sequence-to-Sequence (S2S) models recently started to show state-of-the-art
performance for automatic speech recognition (ASR). With these large and deep
models, overfitting remains the largest problem, outweighing the performance
improvements that can be obtained from better architectures. One solution to
the overfitting problem is increasing the amount of available training data and
the variety exhibited by the training data with the help of data augmentation.
In this paper, we examine the influence of three data augmentation methods on
the performance of two S2S model architectures. One of the methods comes from
the literature, while the other two are our own developments: a time
perturbation in the frequency domain and sub-sequence sampling. Our
experiments on Switchboard and Fisher data show state-of-the-art performance
for S2S models that are trained solely on the speech training data and do not
use additional text data.
Comment: To appear in ICASSP 2020
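As an illustration of the two in-house augmentation methods, the sketch below gives one plausible reading of a time perturbation applied directly to the feature frames and of sub-sequence sampling. The function names, the use of NumPy feature matrices, and the availability of word-level frame alignments (`frame_spans`) are assumptions made for illustration, not the paper's implementation.

```python
import numpy as np

def time_stretch_spectrogram(feats, low=0.8, high=1.2, rng=np.random):
    """Time perturbation on a (frames x bins) feature matrix: frames are
    re-indexed along the time axis with a random stretch factor, which
    approximates a speed change in the frequency domain (sketch only)."""
    factor = rng.uniform(low, high)
    n_frames = feats.shape[0]
    new_len = max(1, int(round(n_frames * factor)))
    # Nearest-neighbour re-indexing of frames; interpolation would also work.
    idx = np.minimum((np.arange(new_len) / factor).astype(int), n_frames - 1)
    return feats[idx]

def sample_sub_sequence(feats, words, frame_spans, rng=np.random):
    """Sub-sequence sampling: cut out a contiguous span of words together with
    the corresponding feature frames, yielding a new, shorter example.
    frame_spans[i] = (start, end) gives the frame boundaries of word i,
    e.g. from a forced alignment (an assumption of this sketch)."""
    n = len(words)
    i = rng.randint(0, n)        # index of the first word in the span
    j = rng.randint(i, n)        # index of the last word (inclusive)
    start, end = frame_spans[i][0], frame_spans[j][1]
    return feats[start:end], words[i:j + 1]
```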
Relative Positional Encoding for Speech Recognition and Direct Translation
Transformer models are powerful sequence-to-sequence architectures that are
capable of directly mapping speech inputs to transcriptions or translations.
However, the position modeling mechanism in this architecture was tailored to
text and is therefore less suited to acoustic inputs. In this work, we adapt
the relative position encoding scheme to the Speech Transformer, where the key
addition is the relative distance between input states in the self-attention
network. As a result, the network can better adapt to the
variable distributions present in speech data. Our experiments show that our
resulting model achieves the best recognition result on the Switchboard
benchmark in the non-augmentation condition, and the best published result on
the MuST-C speech translation benchmark. We also show that this model is able
to better utilize synthetic data than the Transformer, and adapts better to
variable sentence segmentation quality for speech translation.
Comment: Submitted to Interspeech 2020
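The abstract describes adding relative distances between input states to the self-attention network. Below is a minimal single-head sketch in the style of Shaw et al.'s relative position encoding; the class name, the clipping distance `max_dist`, and the exact way the distance term enters the attention logits are assumptions and may differ from the paper's formulation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class RelativeSelfAttention(nn.Module):
    """Single-head self-attention with a relative-distance term (sketch).

    The attention logits receive an extra contribution that depends only on
    the clipped relative offset between query and key positions, so the model
    sees distances between input states rather than absolute frame indices.
    """
    def __init__(self, d_model, max_dist=64):
        super().__init__()
        self.q = nn.Linear(d_model, d_model)
        self.k = nn.Linear(d_model, d_model)
        self.v = nn.Linear(d_model, d_model)
        self.max_dist = max_dist
        # One learned embedding per clipped offset in [-max_dist, max_dist].
        self.rel_emb = nn.Embedding(2 * max_dist + 1, d_model)

    def forward(self, x):                      # x: (batch, time, d_model)
        b, t, d = x.shape
        q, k, v = self.q(x), self.k(x), self.v(x)
        pos = torch.arange(t, device=x.device)
        rel = (pos[None, :] - pos[:, None]).clamp(-self.max_dist, self.max_dist)
        r = self.rel_emb(rel + self.max_dist)  # (time, time, d_model)
        # Content-content term plus content-position term.
        logits = torch.einsum("btd,bsd->bts", q, k)
        logits = logits + torch.einsum("btd,tsd->bts", q, r)
        attn = F.softmax(logits / d ** 0.5, dim=-1)
        return torch.bmm(attn, v)
```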
Low-Latency Sequence-to-Sequence Speech Recognition and Translation by Partial Hypothesis Selection
Encoder-decoder models provide a generic architecture for
sequence-to-sequence tasks such as speech recognition and translation. While
offline systems are often evaluated on quality metrics like word error rates
(WER) and BLEU, latency is also a crucial factor in many practical use-cases.
We propose three latency reduction techniques for chunk-based incremental
inference and evaluate their efficiency in terms of accuracy-latency trade-off.
On the 300-hour How2 dataset, we reduce latency by 83% to 0.8 seconds while
sacrificing 1% WER (6% rel.) compared to offline transcription. Although our
experiments use the Transformer, the hypothesis selection strategies are
applicable to other encoder-decoder models. To avoid expensive re-computation,
we use a unidirectionally-attending encoder. After an adaptation procedure to
partial sequences, the unidirectional model performs on-par with the original
model. We further show that our approach is also applicable to low-latency
speech translation. On How2 English-Portuguese speech translation, we reduce
latency to 0.7 seconds (-84% rel.) while incurring a loss of 2.4 BLEU points
(5% rel.) compared to the offline system.
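One way chunk-based incremental inference with partial hypothesis selection can work is to re-decode the growing audio prefix and commit only tokens that have stabilized. The sketch below combines a local-agreement check with a hold-back of the last few tokens; the function names and this particular selection rule are illustrative assumptions, not necessarily the three strategies evaluated in the paper.

```python
def common_prefix(a, b):
    """Longest shared prefix of two token lists."""
    out = []
    for x, y in zip(a, b):
        if x != y:
            break
        out.append(x)
    return out

def incremental_decode(chunks, decode, hold_back=2):
    """Chunk-based incremental inference with a simple selection rule (sketch).

    `decode(audio_prefix)` stands for any full-sequence encoder-decoder call.
    After each new chunk, only tokens that agree with the previous hypothesis
    and are not among the last `hold_back` tokens are committed, since the
    tail of the hypothesis is still likely to change as more audio arrives.
    """
    committed, prev_hyp, audio = [], [], []
    for chunk in chunks:
        audio.extend(chunk)
        hyp = decode(audio)
        stable = common_prefix(prev_hyp, hyp)
        stable = stable[: max(0, len(stable) - hold_back)]
        # Emit only the newly stabilized tokens.
        new_tokens = stable[len(committed):]
        committed.extend(new_tokens)
        yield new_tokens
        prev_hyp = hyp
```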
Make More of Your Data: Minimal Effort Data Augmentation for Automatic Speech Recognition and Translation
Data augmentation is a technique to generate new training data based on
existing data. We evaluate the simple and cost-effective method of
concatenating the original data examples to build new training instances.
Continued training with such augmented data is able to improve off-the-shelf
Transformer and Conformer models that were optimized on the original data only.
We demonstrate considerable improvements on the LibriSpeech-960h test sets (WER
2.83 and 6.87 for test-clean and test-other), which carry over to models
combined with shallow fusion (WER 2.55 and 6.27). Our method of continued
training also leads to improvements of up to 0.9 WER on the ASR part of
CoVoST-2 for four non-English languages, and we observe that the gains are
highly dependent on the size of the original training data. We compare
different concatenation strategies and find that our method does not need
speaker information to achieve its improvements. Finally, we demonstrate on two
datasets that our method also works for speech translation tasks.
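The augmentation itself amounts to concatenating existing training examples. A minimal sketch follows, assuming (features, transcript) pairs stored as Python sequences; the function name and the random pairing policy are assumptions made for illustration, not the paper's exact procedure.

```python
import random

def concat_augment(dataset, num_new, max_pairs=2, rng=random):
    """Build new training instances by concatenating existing ones (sketch).

    `dataset` is a list of (features, transcript) pairs, where features is a
    sequence of frames and transcript is a token list. Each new example joins
    `max_pairs` randomly chosen utterances back to back. Speaker identity is
    ignored, since the abstract reports speaker information is not needed.
    """
    augmented = []
    for _ in range(num_new):
        parts = [rng.choice(dataset) for _ in range(max_pairs)]
        feats = [frame for feat, _ in parts for frame in feat]
        text = [tok for _, trans in parts for tok in trans]
        augmented.append((feats, text))
    return augmented
```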