To extract robust deep representations from long sequences of speech data, we
propose a self-supervised learning approach, namely Contrastive Separative
Coding (CSC). Our key idea is to learn such representations by
separating the target signal from contrastive interfering signals. First, a
multi-task separative encoder is built to extract shared separable and
discriminative embeddings; second, we propose a powerful cross-attention
mechanism over speaker representations across various interfering conditions
(sketched below), allowing the model to focus on and globally aggregate the
most critical information to answer the "query" (the current bottom-up
embedding) while paying less attention to interfering, noisy, or irrelevant
parts; finally, we form a new probabilistic contrastive loss that estimates and
maximizes the mutual information between the representations and the global
speaker vector.
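As a rough, hypothetical sketch (not the authors' exact architecture), the cross-attention step can be read as scaled dot-product attention in which the current bottom-up embedding acts as the query over speaker representations drawn from different interfering conditions. The function name, tensor shapes, and the plain dot-product scoring below are all illustrative assumptions:

```python
import torch
import torch.nn.functional as F

def cross_attention_pool(query, keys, values):
    """Aggregate speaker representations across interfering conditions.

    query:  (B, D)    current bottom-up embedding (the "query")
    keys:   (B, N, D) speaker representations, one per condition
    values: (B, N, D) the same representations (or a projection of them)
    Returns a (B, D) globally aggregated speaker vector.
    """
    d = query.size(-1)
    # Scaled dot-product score between the query and each condition.
    scores = torch.einsum("bd,bnd->bn", query, keys) / d ** 0.5
    # Softmax turns scores into attention weights, so interfering,
    # noisy, or irrelevant conditions receive low weight.
    weights = F.softmax(scores, dim=-1)              # (B, N)
    # Weighted sum over conditions yields the global speaker vector.
    return torch.einsum("bn,bnd->bd", weights, values)

# Toy usage: batch of 2, 5 interfering conditions, 128-dim embeddings.
q = torch.randn(2, 128)
kv = torch.randn(2, 5, 128)
g = cross_attention_pool(q, kv, kv)                  # (2, 128)
```

The softmax weighting is what lets the model down-weight interfering or irrelevant conditions while globally aggregating the informative ones.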
While most prior unsupervised methods have focused on predicting future,
neighboring, or missing samples, we take a different perspective: predicting
the interfered samples. Moreover, our contrastive separative loss is free from
negative sampling. Experiments demonstrate that our approach learns useful
representations, achieving strong speaker verification performance in
adverse conditions.
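The abstract does not spell out the loss itself; the following is a speculative sketch that assumes the contrast comes from the separated interferer streams of the same mixture, which would explain why no external negative sampling is needed. The names, shapes, and the InfoNCE-style form are assumptions, not the paper's definition:

```python
import torch
import torch.nn.functional as F

def contrastive_separative_loss(target_repr, interferer_reprs, speaker_vec):
    """Speculative sketch of a negative-sampling-free contrastive loss.

    target_repr:      (B, D)    representation of the target stream
    interferer_reprs: (B, M, D) representations of separated interferers
    speaker_vec:      (B, D)    global speaker vector (e.g. from the
                                cross-attention aggregation above)
    """
    # Similarity of the speaker vector to the target stream...
    pos = torch.einsum("bd,bd->b", speaker_vec, target_repr)         # (B,)
    # ...and to each co-occurring interferer stream (the "contrast",
    # so no negatives have to be sampled from other utterances).
    neg = torch.einsum("bd,bmd->bm", speaker_vec, interferer_reprs)  # (B, M)
    logits = torch.cat([pos.unsqueeze(-1), neg], dim=-1)             # (B, 1+M)
    # Picking the target among the 1+M streams is an InfoNCE-style
    # lower bound on the mutual information between the representations
    # and the global speaker vector.
    labels = torch.zeros(logits.size(0), dtype=torch.long)
    return F.cross_entropy(logits, labels)
```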