Weakly-Supervised Speech Pre-training: A Case Study on Target Speech Recognition
Self-supervised learning (SSL) based speech pre-training has attracted much
attention for its ability to extract rich representations from massive
unlabeled data. In contrast, the use of weakly-supervised data is
less explored for speech pre-training. To fill this gap, we propose a
weakly-supervised speech pre-training method based on speaker-aware speech
data. It adopts a similar training procedure to the widely-used masked speech
prediction based SSL framework, while incorporating additional target-speaker
enrollment information as an auxiliary input. In this way, the learned
representation is steered towards the target speaker even in the presence of
highly overlapping interference, allowing potential applications to tasks such
as target speech recognition. Our experiments on Libri2Mix and WSJ0-2mix
datasets show that the proposed model achieves significantly better ASR
performance compared to WavLM, the state-of-the-art SSL model with denoising
capability.

Comment: Accepted by Interspeech; 5 pages, 1 figure, 3 tables
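To make the training setup described in the abstract concrete, here is a minimal sketch of a masked speech prediction model conditioned on a target-speaker enrollment embedding. All layer sizes, the Transformer backbone, the additive speaker conditioning, and the discrete-target prediction head are illustrative assumptions, not the authors' architecture:

```python
import torch
import torch.nn as nn

class SpeakerConditionedMaskedPredictor(nn.Module):
    """Sketch: masked speech prediction steered by a target-speaker
    enrollment embedding. Dimensions and backbone are hypothetical."""

    def __init__(self, feat_dim=80, spk_dim=256, hidden=512, n_targets=504):
        super().__init__()
        self.frame_proj = nn.Linear(feat_dim, hidden)   # project input frames
        self.spk_proj = nn.Linear(spk_dim, hidden)      # project enrollment embedding
        layer = nn.TransformerEncoderLayer(d_model=hidden, nhead=8,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=4)
        self.mask_embed = nn.Parameter(torch.zeros(hidden))  # learned [MASK] vector
        self.head = nn.Linear(hidden, n_targets)  # predicts discrete frame targets

    def forward(self, feats, spk_embed, mask):
        # feats: (B, T, feat_dim) mixture features
        # spk_embed: (B, spk_dim) target-speaker enrollment embedding
        # mask: (B, T) boolean, True at masked frames
        x = self.frame_proj(feats)
        x[mask] = self.mask_embed                       # replace masked frames
        x = x + self.spk_proj(spk_embed).unsqueeze(1)   # steer toward target speaker
        x = self.encoder(x)
        return self.head(x[mask])                       # logits at masked positions only

# Toy usage: random features stand in for an overlapped mixture.
model = SpeakerConditionedMaskedPredictor()
feats = torch.randn(2, 100, 80)
spk = torch.randn(2, 256)                 # enrollment embedding of target speaker
mask = torch.rand(2, 100) < 0.08          # mask ~8% of frames
logits = model(feats, spk, mask)
targets = torch.randint(0, 504, (int(mask.sum()),))  # placeholder discrete labels
loss = nn.functional.cross_entropy(logits, targets)
```

The key design point the abstract implies is that the objective stays the standard masked-prediction loss; only the auxiliary enrollment input changes, which biases the learned representation toward the target speaker without altering the SSL training recipe.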