The application of speech self-supervised learning (SSL) models has achieved
remarkable performance in speaker verification (SV). However, their high
computational cost hinders both development and deployment. Several studies
have compressed SSL models through knowledge distillation (KD) without
considering the target task; consequently, these methods cannot extract
SV-tailored features. This paper proposes One-Step Knowledge Distillation and
Fine-Tuning (OS-KDFT), which integrates KD and fine-tuning (FT) into a single
training stage. We optimize the student model for SV during KD training to
avoid distilling information that is irrelevant to SV.
OS-KDFT reduces the size of a Wav2Vec 2.0 (W2V2)-based ECAPA-TDNN by
approximately 76.2% and the SSL model's inference time by 79%, while achieving
an EER of 0.98%. The proposed OS-KDFT is validated on the VoxCeleb1 and
VoxCeleb2 datasets with the W2V2 and HuBERT SSL models. Our experimental code
is available on our GitHub.
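The joint training idea can be sketched as a weighted sum of a distillation
loss and an SV task loss optimized together in one step. The specific losses
(MSE on embeddings, cross-entropy on speaker logits), the weight `alpha`, and
the function names below are illustrative assumptions, not the paper's exact
formulation:

```python
import math

def mse(teacher_emb, student_emb):
    """Distillation (KD) loss: mean-squared error between teacher and student embeddings."""
    return sum((t - s) ** 2 for t, s in zip(teacher_emb, student_emb)) / len(teacher_emb)

def cross_entropy(logits, label):
    """SV task loss: softmax cross-entropy over speaker-classification logits."""
    m = max(logits)  # subtract max for numerical stability
    exps = [math.exp(z - m) for z in logits]
    return -math.log(exps[label] / sum(exps))

def os_kdft_loss(teacher_emb, student_emb, logits, label, alpha=0.5):
    """Hypothetical one-step objective: alpha * KD loss + (1 - alpha) * SV loss.

    Minimizing this jointly fine-tunes the student for SV while it is being
    distilled, rather than distilling first and fine-tuning afterwards.
    """
    return alpha * mse(teacher_emb, student_emb) + (1 - alpha) * cross_entropy(logits, label)

# Toy example: tiny embeddings and a 3-speaker logit vector.
loss = os_kdft_loss([0.2, -0.1, 0.4], [0.1, 0.0, 0.5], [2.0, 0.5, -1.0], label=0)
```

Setting `alpha` closer to 1 emphasizes matching the teacher, while values
closer to 0 emphasize the SV objective; balancing the two is the core
trade-off a joint KD-and-FT scheme must manage.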