Biased Self-supervised learning for ASR
Self-supervised learning via masked prediction pre-training (MPPT) has shown
impressive performance on a range of speech-processing tasks. This paper
proposes a method to bias self-supervised learning towards a specific task. The
core idea is to slightly fine-tune the model that is used to obtain the target
sequence. This leads to better performance and a substantial increase in
training speed. Furthermore, this paper proposes a variant of MPPT that allows
low-footprint streaming models to be trained effectively by computing the MPPT
loss on masked and unmasked frames. These approaches are evaluated for
automatic speech recognition on the LibriSpeech corpus, with 100 hours of data
serving as the labelled set and 860 hours as the unlabelled set. The biased
training outperforms the unbiased training by 15.5% after 250k updates and
23.8% after 100k updates on test-other. For the streaming models, the
pre-training approach yields a reduction in word error rate of 44.1%.

Comment: Submitted to ICASSP 202
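To make the biasing idea concrete, below is a minimal, hypothetical PyTorch sketch of how the target-generating model could be slightly fine-tuned on the labelled data before extracting frame-level MPPT targets for the unlabelled data. The model architecture, dimensions, learning rate, and variable names are illustrative assumptions, not the paper's implementation.

```python
# Hypothetical sketch of biased target generation (assumed setup, not the
# paper's actual code): `teacher` stands in for the model whose outputs
# define the MPPT target sequence.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy teacher: maps 80-dim frames to 500 target classes (dimensions are
# illustrative placeholders).
teacher = nn.Sequential(nn.Linear(80, 256), nn.ReLU(), nn.Linear(256, 500))
optim = torch.optim.Adam(teacher.parameters(), lr=1e-5)  # small LR: "slightly" fine-tune

# Step 1: briefly fine-tune the teacher on the labelled (100 h) data so that
# the targets it produces are biased towards the ASR task. Dummy batch here.
for feats, labels in [(torch.randn(4, 100, 80), torch.randint(0, 500, (4, 100)))]:
    loss = nn.functional.cross_entropy(teacher(feats).transpose(1, 2), labels)
    optim.zero_grad()
    loss.backward()
    optim.step()

# Step 2: extract frame-level target sequences from the fine-tuned teacher
# on the unlabelled (860 h) data; these become the MPPT prediction targets.
with torch.no_grad():
    unlabelled = torch.randn(4, 100, 80)
    targets = teacher(unlabelled).argmax(dim=-1)  # shape (batch, frames)
```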
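Similarly, here is a minimal sketch of computing the MPPT loss on both masked and unmasked frames, the variant proposed for the low-footprint streaming models. The function name, tensor shapes, and the mixing weight `alpha` are assumptions for illustration.

```python
# Minimal sketch of an MPPT loss over masked AND unmasked frames (assumed
# formulation; `alpha` and all names are illustrative).
import torch
import torch.nn.functional as F

def mppt_loss(logits, targets, mask, alpha=0.5):
    """logits: (B, T, C) student predictions; targets: (B, T) teacher labels;
    mask: (B, T) bool, True where the input frames were masked."""
    per_frame = F.cross_entropy(logits.transpose(1, 2), targets,
                                reduction="none")  # (B, T)
    masked_loss = per_frame[mask].mean()
    unmasked_loss = per_frame[~mask].mean()
    # Weighting both terms lets the model learn from every frame,
    # not only the masked ones.
    return alpha * masked_loss + (1 - alpha) * unmasked_loss

# Toy usage with random data.
B, T, C = 2, 50, 500
logits = torch.randn(B, T, C)
targets = torch.randint(0, C, (B, T))
mask = torch.rand(B, T) < 0.3  # ~30% of frames masked
loss = mppt_loss(logits, targets, mask)
```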