Speaker verification using sequence discriminant support vector machines
This paper presents a text-independent speaker verification system using support vector machines (SVMs) with score-space kernels. Score-space kernels generalize Fisher kernels and are based on underlying generative models such as Gaussian mixture models (GMMs). This approach discriminates directly between whole sequences, in contrast with the frame-level approaches at the heart of most current systems. The resultant SVMs have a very high dimensionality, since it is tied to the number of parameters in the underlying generative model. To address the optimization problems this causes, we introduce a technique called spherical normalization that preconditions the Hessian matrix. We have performed speaker verification experiments using the PolyVar database. The SVM system presented here reduces the relative error rate by 34% compared to a GMM likelihood-ratio system.
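The score-space idea can be pictured in a few lines: the feature for a whole sequence is the gradient of its GMM log-likelihood with respect to the model parameters (here, just the means), so its dimensionality scales with the number of generative-model parameters; spherical normalization is approximated by projecting that vector onto the unit sphere. All names below are illustrative, a minimal sketch rather than the paper's implementation.

```python
import numpy as np

def fisher_score(X, weights, means, variances):
    """Score-space feature for a whole sequence X (T x D): the gradient of
    sum_t log p(x_t) w.r.t. the GMM means (diagonal covariances)."""
    diff = X[:, None, :] - means[None, :, :]                        # (T, K, D)
    log_comp = (np.log(weights)[None, :]
                - 0.5 * np.sum(np.log(2 * np.pi * variances), axis=1)[None, :]
                - 0.5 * np.sum(diff ** 2 / variances[None, :, :], axis=2))
    log_comp -= log_comp.max(axis=1, keepdims=True)                 # stabilize
    resp = np.exp(log_comp)
    resp /= resp.sum(axis=1, keepdims=True)                         # responsibilities (T, K)
    grad = np.sum(resp[:, :, None] * diff / variances[None, :, :], axis=0)
    return grad.ravel()                                             # one fixed-length vector per sequence

def spherical_normalize(v, eps=1e-12):
    """Map the score vector onto the unit sphere -- a simple stand-in for the
    paper's spherical normalization, which preconditions the kernel."""
    return v / (np.linalg.norm(v) + eps)
```

A linear SVM trained on such normalized score vectors then classifies whole sequences directly instead of averaging frame-level scores.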
Discriminative Speaker Representation via Contrastive Learning with Class-Aware Attention in Angular Space
The challenges in applying contrastive learning to speaker verification (SV) are that the softmax-based contrastive loss lacks discriminative power and that hard negative pairs can easily influence learning. To overcome the first challenge, we propose a contrastive learning SV framework that incorporates an additive angular margin into the supervised contrastive loss, where the margin improves the discrimination ability of the speaker representation. For the second challenge, we introduce a class-aware attention mechanism through which hard negative samples contribute less significantly to the supervised contrastive loss. We also employ gradient-based multi-objective optimization to balance the classification and contrastive losses. Experimental results on CN-Celeb and VoxCeleb1 show that this new learning objective leads the encoder to an embedding space with strong speaker discrimination across languages.

Comment: Accepted by ICASSP 2023, 5 pages, 2 figures
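One way to picture the additive angular margin in a supervised contrastive loss (function and variable names below are assumptions for this sketch, not the authors' code): each positive pair is scored as cos(theta + m) instead of cos(theta), so same-speaker embeddings must close an extra angular gap of m before the loss is satisfied.

```python
import numpy as np

def supcon_angular_margin(z, labels, margin=0.2, temperature=0.1):
    """Supervised contrastive loss with an additive angular margin on
    positive pairs -- a simplified sketch of the idea in the abstract."""
    z = z / np.linalg.norm(z, axis=1, keepdims=True)      # unit-norm embeddings
    cos = np.clip(z @ z.T, -1 + 1e-7, 1 - 1e-7)
    theta = np.arccos(cos)
    n = len(z)
    self_mask = np.eye(n, dtype=bool)
    pos_mask = (labels[:, None] == labels[None, :]) & ~self_mask
    # positives are penalized by the margin; negatives keep their raw cosine
    sim = np.where(pos_mask, np.cos(theta + margin), cos) / temperature
    sim[self_mask] = -np.inf                              # drop self-pairs
    m = sim.max(axis=1, keepdims=True)                    # stable log-sum-exp
    log_prob = sim - (m + np.log(np.exp(sim - m).sum(axis=1, keepdims=True)))
    per_anchor = (np.where(pos_mask, log_prob, 0.0).sum(axis=1)
                  / np.maximum(pos_mask.sum(axis=1), 1))
    return -per_anchor[pos_mask.any(axis=1)].mean()
```

The class-aware attention described in the abstract would additionally down-weight the hardest negative terms; in this sketch every negative contributes equally.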
- …