Robust language recognition via adaptive language factor extraction
This paper presents a technique to adapt an acoustically based
language classifier to the background conditions and speaker
accents. This adaptation improves language classification on
a broad spectrum of TV broadcasts. The core of the system
consists of an iVector-based setup in which language and channel
variabilities are modeled separately. The subsequent language
classifier (the backend) operates on the language factors,
i.e. those features in the extracted iVectors that explain the observed
language variability. The proposed technique adapts the
language variability model to the background conditions and
to the speaker accents present in the audio. The effect of the
adaptation is evaluated on a 28-hour corpus composed of documentaries and monolingual as well as multilingual broadcast news shows. Consistent improvements in the automatic identification of Flemish (Belgian Dutch), English and French are demonstrated for all broadcast types.
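The language-factor idea above can be sketched as a simple linear-Gaussian projection: an iVector is modeled as a language component lying in a low-dimensional subspace plus residual channel variability, and the backend classifies only the recovered language factors. The matrix names, dimensions, and noise model below are illustrative assumptions, not the paper's actual configuration.

```python
import numpy as np

def extract_language_factors(ivector, L, noise_var=1.0):
    """MAP estimate of language factors y from an iVector (sketch).

    Assumed model: ivector = L @ y + residual, where the columns of L
    span the language variability subspace and y has a standard-normal
    prior. The MAP point estimate is then:
        y = (I + L^T L / noise_var)^{-1} (L^T ivector / noise_var)
    """
    d = L.shape[1]
    precision = np.eye(d) + L.T @ L / noise_var
    return np.linalg.solve(precision, L.T @ ivector / noise_var)

# Hypothetical 400-dimensional iVectors with a 3-dimensional language subspace.
rng = np.random.default_rng(0)
L = rng.standard_normal((400, 3))
y_true = np.array([1.0, -0.5, 2.0])
iv = L @ y_true + 0.1 * rng.standard_normal(400)
y_hat = extract_language_factors(iv, L, noise_var=0.01)
```

Adapting the language variability model to background conditions and accents, as the paper proposes, would amount to re-estimating `L` on the target audio before extracting the factors.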
NPLDA: A Deep Neural PLDA Model for Speaker Verification
The state-of-the-art approach for speaker verification consists of a neural
network based embedding extractor along with a backend generative model such as
the Probabilistic Linear Discriminant Analysis (PLDA). In this work, we propose
a neural network approach for backend modeling in speaker recognition. The
likelihood ratio score of the generative PLDA model is posed as a
discriminative similarity function and the learnable parameters of the score
function are optimized using a verification cost. The proposed model, termed neural PLDA (NPLDA), is initialized using the generative PLDA model parameters.
The loss function for the NPLDA model is an approximation of the minimum
detection cost function (DCF). The speaker recognition experiments using the
NPLDA model are performed on the speaker verification task in the VOiCES
datasets as well as the SITW challenge dataset. In these experiments, the NPLDA
model optimized using the proposed loss function improves significantly over
the state-of-the-art PLDA based speaker verification system.
Comment: Published in Odyssey 2020, the Speaker and Language Recognition Workshop (VOiCES Special Session). Link to GitHub Implementation: https://github.com/iiscleap/NeuralPlda. arXiv admin note: substantial text overlap with arXiv:2001.0703
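The key trick in the abstract, approximating the minimum detection cost function with a differentiable surrogate, can be sketched by replacing the hard miss/false-alarm counting step with a sigmoid. The steepness, threshold, and cost parameters below are illustrative; the paper's actual loss and its hyperparameters may differ.

```python
import numpy as np

def soft_detection_cost(scores, labels, threshold=0.0, alpha=10.0,
                        c_miss=1.0, c_fa=1.0, p_target=0.05):
    """Differentiable approximation of a NIST-style detection cost.

    labels: 1 for target trials, 0 for non-target trials.
    The indicator 1[score > threshold] is smoothed into a sigmoid of
    steepness alpha, so miss and false-alarm probabilities (and hence
    the weighted cost) become differentiable in the scores, in the
    spirit of the NPLDA training objective.
    """
    sig = 1.0 / (1.0 + np.exp(-alpha * (scores - threshold)))
    p_miss = np.sum((1 - sig) * labels) / max(np.sum(labels), 1)
    p_fa = np.sum(sig * (1 - labels)) / max(np.sum(1 - labels), 1)
    return c_miss * p_target * p_miss + c_fa * (1 - p_target) * p_fa

# Well-separated toy scores: targets above threshold, non-targets below.
scores = np.array([2.5, 1.0, -1.5, -3.0])
labels = np.array([1, 1, 0, 0])
cost = soft_detection_cost(scores, labels)
```

In a full NPLDA system this cost would be minimized by gradient descent over the parameters of the discriminative PLDA-style scoring function, initialized from the generative PLDA.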
MCE 2018: The 1st Multi-target Speaker Detection and Identification Challenge Evaluation
The Multi-target Challenge aims to assess how well current speech technology
is able to determine whether or not a recorded utterance was spoken by one of a
large number of blacklisted speakers. It is a form of multi-target speaker
detection based on real-world telephone conversations. Data recordings are
generated from call center customer-agent conversations. The task is to measure
how accurately one can detect 1) whether a test recording is spoken by a
blacklisted speaker, and 2) which specific blacklisted speaker was talking.
This paper outlines the challenge and provides its baselines, results, and
discussions.
Comment: http://mce.csail.mit.edu . arXiv admin note: text overlap with arXiv:1807.0666
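The two sub-tasks above (detect whether the speaker is blacklisted, then identify which one) can be sketched with a simple cosine-scoring baseline over speaker embeddings. The embeddings, threshold, and scoring rule here are illustrative assumptions, not the challenge's official baseline.

```python
import numpy as np

def detect_blacklist(test_emb, blacklist_embs, threshold=0.5):
    """Two-stage multi-target speaker detection sketch.

    Scores a test embedding against every blacklisted speaker by cosine
    similarity. The maximum score decides detection (task 1) and its
    argmax gives the identified blacklisted speaker (task 2).
    Returns (is_blacklisted, speaker_index, top_score).
    """
    bl = blacklist_embs / np.linalg.norm(blacklist_embs, axis=1, keepdims=True)
    t = test_emb / np.linalg.norm(test_emb)
    scores = bl @ t
    top = int(np.argmax(scores))
    return scores[top] >= threshold, top, float(scores[top])

# Toy 3-dimensional embeddings: two blacklisted speakers, one test utterance
# that closely matches blacklisted speaker 1.
blacklist = np.array([[1.0, 0.0, 0.0],
                      [0.0, 1.0, 0.0]])
detected, speaker, score = detect_blacklist(np.array([0.1, 0.99, 0.0]), blacklist)
```

With thousands of blacklisted speakers, as in the challenge, the interesting difficulty is that the maximum over many non-matching speakers inflates the detection score, which is why dedicated multi-target calibration is needed.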
Cross-Lingual Speaker Verification with Domain-Balanced Hard Prototype Mining and Language-Dependent Score Normalization
In this paper we describe the top-scoring IDLab submission for the
text-independent task of the Short-duration Speaker Verification (SdSV)
Challenge 2020. The main difficulty of the challenge lies in the large degree of varying phonetic overlap between the potentially cross-lingual trials, along
with the limited availability of in-domain DeepMine Farsi training data. We
introduce domain-balanced hard prototype mining to fine-tune the
state-of-the-art ECAPA-TDNN x-vector based speaker embedding extractor. The
sample mining technique efficiently exploits speaker distances between the
speaker prototypes of the popular AAM-softmax loss function to construct
challenging training batches that are balanced on the domain-level. To enhance
the scoring of cross-lingual trials, we propose a language-dependent s-norm
score normalization. The impostor cohort only contains data from the Farsi target-domain, which simulates the enrollment data always being Farsi. In case a Gaussian-Backend language model detects the test speaker embedding to contain English, a cross-language compensation offset determined on the AAM-softmax speaker prototypes is subtracted from the maximum expected impostor mean score. A fusion of five systems with minor topological tweaks resulted in a final MinDCF and EER of 0.065 and 1.45% respectively on the SdSVC evaluation set.
Comment: proceedings of INTERSPEECH 202
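The language-dependent s-norm described above can be sketched as standard symmetric score normalization with an extra offset applied to the test-side impostor mean when the test utterance is detected as English. The cohort handling and the offset value below are illustrative assumptions; the paper derives its offset from AAM-softmax speaker prototypes.

```python
import numpy as np

def language_dependent_s_norm(raw_score, enroll_cohort_scores,
                              test_cohort_scores, test_is_english=False,
                              lang_offset=0.2):
    """Adaptive s-norm with a language-dependent offset (sketch).

    enroll_cohort_scores: scores of the enrollment embedding against a
    Farsi impostor cohort; test_cohort_scores likewise for the test
    embedding. When the test side is detected as English, a hypothetical
    compensation offset is subtracted from the test-side impostor mean,
    raising cross-lingual scores that the cohort statistics would
    otherwise leave deflated.
    """
    mu_e, sd_e = np.mean(enroll_cohort_scores), np.std(enroll_cohort_scores)
    mu_t, sd_t = np.mean(test_cohort_scores), np.std(test_cohort_scores)
    if test_is_english:
        mu_t -= lang_offset
    return 0.5 * ((raw_score - mu_e) / sd_e + (raw_score - mu_t) / sd_t)

cohort = np.array([0.0, 0.1, -0.1, 0.2, -0.2])  # toy Farsi impostor scores
s_farsi = language_dependent_s_norm(0.5, cohort, cohort, test_is_english=False)
s_english = language_dependent_s_norm(0.5, cohort, cohort, test_is_english=True)
```

Restricting the cohort to Farsi data mirrors the paper's assumption that enrollment is always Farsi, so only the test side ever needs the cross-language compensation.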
A Speaker Verification Backend with Robust Performance across Conditions
In this paper, we address the problem of speaker verification in conditions
unseen or unknown during development. A standard method for speaker
verification consists of extracting speaker embeddings with a deep neural
network and processing them through a backend composed of probabilistic linear
discriminant analysis (PLDA) and global logistic regression score calibration.
This method is known to result in systems that work poorly on conditions
different from those used to train the calibration model. We propose to modify
the standard backend, introducing an adaptive calibrator that uses duration and
other automatically extracted side-information to adapt to the conditions of
the inputs. The backend is trained discriminatively to optimize binary
cross-entropy. When trained on a number of diverse datasets that are labeled
only with respect to speaker, the proposed backend consistently and, in some
cases, dramatically improves calibration, compared to the standard PLDA
approach, on a number of held-out datasets, some of which are markedly
different from the training data. Discrimination performance is also
consistently improved. We show that joint training of the PLDA and the adaptive
calibrator is essential -- the same benefits cannot be achieved when freezing
PLDA and fine-tuning the calibrator. To our knowledge, the results in this
paper are the first evidence in the literature that it is possible to develop a
speaker verification system with robust out-of-the-box performance on a large
variety of conditions.
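The adaptive calibrator described above can be sketched as a logistic-regression-style affine map whose parameters depend on side-information such as utterance duration. The functional form and weights below are hypothetical placeholders; in the paper the calibrator is trained jointly with PLDA to minimize binary cross-entropy rather than set by hand.

```python
import numpy as np

def adaptive_calibrate(raw_score, duration, w_score=1.0, w_dur=0.1, bias=-0.5):
    """Side-information-dependent score calibration (illustrative sketch).

    A global calibrator maps raw_score -> a * raw_score + b with fixed
    a, b. The adaptive variant lets the mapping depend on automatically
    extracted side-information; here the bias shifts with log duration,
    reflecting that short utterances yield less reliable scores. All
    weight values are hypothetical, not trained parameters.
    """
    return w_score * raw_score + w_dur * np.log(duration) + bias

# Same raw score, longer test utterance -> higher calibrated log-likelihood ratio.
llr_short = adaptive_calibrate(1.0, duration=2.0)
llr_long = adaptive_calibrate(1.0, duration=10.0)
```

The paper's central finding, that freezing PLDA and tuning only the calibrator loses most of the benefit, suggests the score weight and side-information terms must be optimized together with the embedding backend rather than in a separate post-hoc step.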