Self-supervised models have recently emerged as popular
foundation blocks in speech processing pipelines. These models are pre-trained
on unlabeled audio data and then used in downstream speech processing tasks
such as automatic speech recognition (ASR) or speech translation (ST). Since
these models are now used in research and industrial systems alike, it becomes
necessary to understand the impact of factors such as the gender
distribution of the pre-training data. Using French as our investigation
language, we train gender-specific wav2vec 2.0 models and compare them against
models pre-trained with different degrees of gender balance.
The comparison is performed by applying these models to two
speech-to-text downstream tasks: ASR and ST. Our results show that the type of
downstream integration matters. We observe lower overall performance using
gender-specific pre-training before fine-tuning an end-to-end ASR system.
However, when self-supervised models are used as feature extractors, the
overall ASR and ST results follow more complex patterns, in which the balanced
pre-trained model is not necessarily the best option. Lastly, our crude
'fairness' metric, the relative performance difference measured between the female
and male test sets, does not vary strongly between balanced and
gender-specific pre-trained wav2vec 2.0 models.
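
As a minimal sketch, and assuming the 'fairness' metric is computed from a per-group error rate such as WER (the exact formulation used in the paper may differ), the relative performance difference between the female and male test sets can be written as

\[
\Delta_{\mathrm{rel}} = \frac{\mathrm{WER}_{\mathrm{female}} - \mathrm{WER}_{\mathrm{male}}}{\mathrm{WER}_{\mathrm{male}}},
\]

where a value close to zero indicates similar performance for both groups.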