Self-supervised learning (SSL) has achieved great success in speech recognition,
but has seen only limited exploration for other speech processing tasks.
As the speech signal contains multi-faceted information, including speaker identity,
paralinguistics, and spoken content, learning universal representations for
all speech tasks is challenging. To tackle this problem, we propose a new
pre-trained model, WavLM, to solve full-stack downstream speech tasks. WavLM
jointly learns masked speech prediction and denoising during pre-training. In this
way, WavLM not only retains the speech content modeling capability through
masked speech prediction, but also improves its potential for non-ASR tasks
through speech denoising. In addition, WavLM employs a gated relative position bias
for the Transformer structure to better capture the sequence ordering of input
speech. We also scale up the training dataset from 60k hours to 94k hours.
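
As an illustration of the idea, below is a minimal sketch of a gated relative position bias for self-attention: a learned per-head bias indexed by the clipped relative distance between positions is scaled by a content-dependent gate computed from the query vectors. The class name, clipping distance, and gate parametrization are simplified assumptions for illustration and are not WavLM's exact formulation.

```python
import torch
import torch.nn as nn

class GatedRelativePositionBias(nn.Module):
    """Illustrative gated relative position bias for self-attention.
    A learned per-head bias indexed by the clipped relative distance i - j
    is scaled by a gate computed from the query content, so how much
    positional information is injected depends on the input itself."""

    def __init__(self, head_dim: int, num_heads: int, max_distance: int = 128):
        super().__init__()
        self.max_distance = max_distance
        # One learnable bias per (clipped relative distance, attention head).
        self.rel_bias = nn.Embedding(2 * max_distance + 1, num_heads)
        # Projects each query vector to a scalar gate per head.
        self.gate_proj = nn.Linear(head_dim, 1)

    def forward(self, q: torch.Tensor) -> torch.Tensor:
        # q: (batch, num_heads, seq_len, head_dim) query vectors.
        seq_len = q.size(2)
        pos = torch.arange(seq_len, device=q.device)
        rel = (pos[None, :] - pos[:, None]).clamp(-self.max_distance, self.max_distance)
        bias = self.rel_bias(rel + self.max_distance)      # (seq, seq, heads)
        bias = bias.permute(2, 0, 1).unsqueeze(0)          # (1, heads, seq, seq)
        # Content-dependent gate in (0, 1) for every query position and head.
        gate = torch.sigmoid(self.gate_proj(q))            # (batch, heads, seq, 1)
        # The result would be added to the attention logits before the softmax.
        return gate * bias                                  # (batch, heads, seq, seq)
```

In a Transformer layer, the returned tensor would be added to the raw attention scores of each head before the softmax, so the positional term varies with the input content rather than being fixed.
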
WavLM Large achieves state-of-the-art performance on the SUPERB benchmark, and
brings significant improvements for various speech processing tasks on their
representative benchmarks. The code and pre-trained models are available at
https://aka.ms/wavlm.
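
To make the joint masked speech prediction and denoising objective more concrete, the following is a minimal sketch of how a noisy or overlapped input might be simulated during pre-training: with some probability, an interfering clip (noise or speech from another utterance) is mixed into part of the clean waveform at a random energy ratio, while the masked-prediction targets would still be derived from the clean signal. The function name, mixing probability, and SNR range are illustrative assumptions, not WavLM's exact data pipeline.

```python
import random
import torch
import torch.nn.functional as F

def simulate_noisy_input(wave, interferers, mix_prob=0.2, snr_db_range=(-5.0, 5.0)):
    """Overlay a randomly chosen interfering clip onto a random segment of
    `wave` at a random signal-to-noise ratio. The clean `wave` is kept for
    deriving masked-prediction targets, so the model has to denoise while
    predicting masked content. Hyperparameters here are illustrative."""
    if not interferers or random.random() > mix_prob:
        return wave
    noise = random.choice(interferers)
    # Pick a random segment of the target utterance to corrupt.
    seg_len = random.randint(1, wave.numel())
    start = random.randint(0, wave.numel() - seg_len)
    # Crop or zero-pad the interferer to the segment length.
    if noise.numel() >= seg_len:
        noise = noise[:seg_len]
    else:
        noise = F.pad(noise, (0, seg_len - noise.numel()))
    # Scale the interferer so the segment-level SNR matches a random draw.
    snr_db = random.uniform(*snr_db_range)
    signal_power = wave[start:start + seg_len].pow(2).mean().clamp_min(1e-8)
    noise_power = noise.pow(2).mean().clamp_min(1e-8)
    scale = torch.sqrt(signal_power / (noise_power * 10.0 ** (snr_db / 10.0)))
    mixed = wave.clone()
    mixed[start:start + seg_len] += scale * noise
    return mixed
```

In such a setup, the model would receive the mixed waveform as input while the masked-prediction targets come from the original utterance, which is what couples content modeling with denoising.
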