    On the use of i-vector posterior distributions in Probabilistic Linear Discriminant Analysis

    The i-vector extraction process is affected by several factors such as the noise level, the acoustic content of the observed features, the channel mismatch between the training conditions and the test data, and the duration of the analyzed speech segment. These factors influence both the i-vector estimate and its uncertainty, represented by the i-vector posterior covariance. This paper presents a new PLDA model that, unlike the standard one, exploits the intrinsic i-vector uncertainty. Since the recognition accuracy is known to decrease for short speech segments, and their duration is one of the main factors affecting the i-vector covariance, we designed a set of experiments comparing the standard and the new PLDA models on short speech cuts of variable duration, randomly extracted from the conversations included in the NIST SRE 2010 extended dataset, both from interviews and telephone conversations. Our results on the NIST SRE 2010 evaluation data show that, across different conditions, the new model outperforms the standard PLDA by more than 10% relative when tested on short segments with duration mismatches, and matches the accuracy of the standard model on sufficiently long speaker segments. This technique has also been successfully tested in the NIST SRE 2012 evaluation.
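    To make the role of the posterior covariance concrete, the sketch below shows one simple way such uncertainty can enter verification scoring. It is a minimal illustration, not the authors' implementation: it assumes a simplified two-covariance PLDA (between-speaker covariance B, within-speaker covariance W, mean mu), and the helper name llr_with_uncertainty and all toy values are hypothetical.

    import numpy as np
    from scipy.stats import multivariate_normal

    def llr_with_uncertainty(phi1, phi2, C1, C2, mu, B, W):
        """Same-speaker vs. different-speaker log-likelihood ratio.

        C1 and C2 are the i-vector posterior covariances of the two utterances;
        adding them to the within-speaker covariance makes noisy (e.g. short)
        segments contribute a flatter, less confident likelihood.
        """
        d = len(mu)
        obs = np.concatenate([phi1, phi2])
        joint_mean = np.concatenate([mu, mu])
        # Same speaker: the latent speaker factor is shared, so the two
        # i-vectors are correlated through B.
        cov_same = np.block([[B + W + C1, B],
                             [B, B + W + C2]])
        # Different speakers: the two i-vectors are independent.
        cov_diff = np.block([[B + W + C1, np.zeros((d, d))],
                             [np.zeros((d, d)), B + W + C2]])
        return (multivariate_normal.logpdf(obs, joint_mean, cov_same)
                - multivariate_normal.logpdf(obs, joint_mean, cov_diff))

    # Toy usage: the same trial scored with small (long segment) and large
    # (short segment) posterior covariances.
    rng = np.random.default_rng(0)
    d = 4
    mu, B, W = np.zeros(d), 0.5 * np.eye(d), 0.3 * np.eye(d)
    phi1, phi2 = rng.normal(size=d), rng.normal(size=d)
    print(llr_with_uncertainty(phi1, phi2, 0.01 * np.eye(d), 0.01 * np.eye(d), mu, B, W))
    print(llr_with_uncertainty(phi1, phi2, 0.5 * np.eye(d), 0.5 * np.eye(d), mu, B, W))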

    Pairwise Discriminative Speaker Verification in the I-Vector Space

    This work presents a new and efficient approach to discriminative speaker verification in the i-vector space. We illustrate the development of a linear discriminative classifier that is trained to discriminate between the hypothesis that a pair of feature vectors in a trial belongs to the same speaker and the hypothesis that it belongs to different speakers. This approach is an alternative to the usual discriminative setup that discriminates between a speaker and all the other speakers. We use a discriminative classifier based on a Support Vector Machine (SVM) that is trained to estimate the parameters of a symmetric quadratic function approximating a log-likelihood ratio score, without explicit modeling of the i-vector distributions as in the generative Probabilistic Linear Discriminant Analysis (PLDA) models. Training these models is feasible because it is not necessary to expand the i-vector pairs, which would be expensive or even impossible for medium-sized training sets. The results of experiments performed on the tel-tel extended core condition of the NIST 2010 Speaker Recognition Evaluation are competitive with those obtained by generative models, in terms of normalized Detection Cost Function and Equal Error Rate. Moreover, we show that it is possible to train a gender-independent discriminative model that achieves state-of-the-art accuracy, comparable to that of a gender-dependent system, saving memory and execution time both in training and in testing.
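    The sketch below illustrates the shape of the scoring function described above: a quadratic form that is symmetric in the two i-vectors of a trial. It is only a schematic, not the paper's code; the parameters Lambda, Gamma, c and k are illustrative stand-ins for what the pairwise SVM would estimate from same-speaker and different-speaker training pairs.

    import numpy as np

    def pairwise_score(phi1, phi2, Lambda, Gamma, c, k):
        """Symmetric quadratic approximation of a PLDA-like log-likelihood ratio."""
        cross = phi1 @ Lambda @ phi2 + phi2 @ Lambda @ phi1   # cross-vector term
        direct = phi1 @ Gamma @ phi1 + phi2 @ Gamma @ phi2    # per-vector term
        linear = c @ (phi1 + phi2)                            # linear term
        return cross + direct + linear + k

    # Toy usage with random "trained" parameters; symmetric matrices keep the
    # score invariant to swapping the two sides of the trial.
    rng = np.random.default_rng(0)
    d = 5
    A, B = rng.normal(size=(d, d)), rng.normal(size=(d, d))
    Lambda, Gamma = (A + A.T) / 2, (B + B.T) / 2
    c, k = rng.normal(size=d), 0.0
    phi1, phi2 = rng.normal(size=d), rng.normal(size=d)
    assert np.isclose(pairwise_score(phi1, phi2, Lambda, Gamma, c, k),
                      pairwise_score(phi2, phi1, Lambda, Gamma, c, k))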

    PROBABILISTIC LINEAR DISCRIMINANT ANALYSIS OF I-VECTOR POSTERIOR DISTRIBUTIONS

    The i-vector extraction process is affected by several factors such as the noise level, the acoustic content of the observed features, and the duration of the analyzed speech segment. These factors influence both the i-vector estimate and its uncertainty, represented by the i-vector posterior covariance. This paper presents a new PLDA model that, unlike the standard one, exploits the intrinsic i-vector uncertainty. Since short segments are known to decrease recognition accuracy, and segment duration is the main factor affecting the i-vector covariance, we designed a set of experiments comparing the standard and the new PLDA models on short speech cuts of variable duration, randomly extracted from the conversations included in the NIST SRE 2010 female telephone extended core condition. Our results show that the new model outperforms the standard PLDA when tested on short segments, and keeps the accuracy of the latter for long enough utterances. In particular, the relative improvement is up to 13% for the EER, 5% for DCF08, and 2.5% for DCF10.
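    Schematically, and with illustrative notation rather than the paper's exact parameterization, standard PLDA models every i-vector \phi_i of a speaker s as

        \phi_i = m + U y_s + \epsilon_i, \qquad \epsilon_i \sim \mathcal{N}(0, \Lambda^{-1}),

    with a residual covariance shared by all utterances, whereas an uncertainty-aware variant lets each utterance carry its own i-vector posterior covariance \Gamma_i, so that

        \epsilon_i \sim \mathcal{N}(0, \Lambda^{-1} + \Gamma_i).

    Short segments yield a large \Gamma_i and therefore contribute less sharply to both enrollment and scoring.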

    Independent Component Analysis and MLLR Transforms for Speaker Identification

    In this paper, we explore the use of Independent Component Analysis (ICA) and Principal Component Analysis (PCA) techniques to reduce the dimensionality of high-level LVCSR features and, at the same time, to enable modeling them with state-of-the-art techniques such as Probabilistic Linear Discriminant Analysis (PLDA) or Pairwise Support Vector Machines (PSVM). The high-level features are the coefficients of Constrained Maximum-Likelihood Linear Regression (CMLLR) and Maximum-Likelihood Linear Regression (MLLR) transforms estimated in an Automatic Speech Recognition (ASR) system. We also compare the classical approach of modeling every speaker with a single SVM classifier against these recent state-of-the-art modeling techniques for Speaker Identification. We report the performance of the systems and of their score-level combination with a current state-of-the-art acoustic i-vector system on the NIST SRE 2010 dataset.
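    As a rough illustration of this kind of front-end, the sketch below reduces flattened MLLR/CMLLR transform coefficients with PCA followed by ICA before handing them to a PLDA or PSVM back-end. It is a minimal sketch under assumed shapes and component counts, not the paper's pipeline.

    import numpy as np
    from sklearn.decomposition import PCA, FastICA

    rng = np.random.default_rng(0)
    # Pretend we have 1000 utterances, each described by a flattened MLLR/CMLLR
    # transform of 40 x 41 = 1640 coefficients (dimensions are illustrative).
    X = rng.normal(size=(1000, 1640))

    # PCA first: decorrelate and drop low-variance directions.
    pca = PCA(n_components=200, whiten=True, random_state=0)
    X_pca = pca.fit_transform(X)

    # ICA on the PCA output: rotate towards statistically independent directions.
    ica = FastICA(n_components=100, random_state=0, max_iter=500)
    X_ica = ica.fit_transform(X_pca)

    print(X_ica.shape)  # (1000, 100): low-dimensional features for a PLDA/PSVM back-end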

    Regularized subspace n-gram model for phonotactic iVector extraction

    Phonotactic language identification (LID) by means of n-gram statistics and discriminative classifiers is a popular approach to the LID problem. A low-dimensional representation of the n-gram statistics enables the use of more diverse and efficient machine learning techniques for LID. Recently, we proposed the phonotactic iVector as such a low-dimensional representation of the n-gram statistics. In this work, an enhanced model of the n-gram probabilities, along with regularized parameter estimation, is proposed. The proposed model consistently improves LID performance across all conditions, by up to 15% relative to the previous state-of-the-art system. The new model also reduces the memory requirements of iVector extraction and helps to speed up subspace training. Results are reported in terms of Cavg on the NIST LRE 2009 evaluation set.
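    The sketch below shows the general flavour of a regularized subspace multinomial model for phonotactic iVector extraction: an utterance's n-gram probabilities are modeled as softmax(m + T w), and the low-dimensional vector w is estimated by gradient ascent on an L2-regularized multinomial log-likelihood of the observed counts. The matrices, counts and step size are illustrative; the paper's actual model and optimizer may differ.

    import numpy as np

    def softmax(z):
        z = z - z.max()
        e = np.exp(z)
        return e / e.sum()

    def extract_ivector(counts, m, T, reg=1.0, lr=0.01, n_iter=200):
        """Estimate the subspace vector w from one utterance's n-gram counts."""
        w = np.zeros(T.shape[1])
        N = counts.sum()
        for _ in range(n_iter):
            p = softmax(m + T @ w)
            # Gradient of sum_k counts_k * log p_k, minus the L2 penalty reg * w.
            grad = T.T @ (counts - N * p) - reg * w
            w += lr * grad
        return w

    # Toy usage: 500 n-gram types, a 50-dimensional subspace.
    rng = np.random.default_rng(0)
    V, d = 500, 50
    m = np.log(np.full(V, 1.0 / V))     # flat background log-probabilities
    T = 0.1 * rng.normal(size=(V, d))   # subspace (loading) matrix
    counts = rng.poisson(2.0, size=V).astype(float)
    w = extract_ivector(counts, m, T)
    print(w[:5])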
