
    Generative Modelling for Unsupervised Score Calibration

    Score calibration enables automatic speaker recognizers to make cost-effective accept/reject decisions. Traditional calibration requires supervised data, which is an expensive resource. We propose a two-component GMM for unsupervised calibration and demonstrate good performance relative to a supervised baseline on NIST SRE'10 and SRE'12. A Bayesian analysis demonstrates that the uncertainty associated with the unsupervised calibration parameter estimates is surprisingly small. Comment: Accepted for ICASSP 2014.
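A minimal sketch of the idea behind this abstract (not the authors' implementation): fit a two-component Gaussian mixture to unlabelled scores with EM, treating the higher-mean component as the target-trial distribution, then read off a calibrated log-likelihood-ratio from the two fitted components. The initialization, iteration count, and Gaussian component assumption are all illustrative choices here.

```python
import numpy as np

def fit_two_component_gmm(scores, n_iter=50):
    """EM for a 1-D, two-component Gaussian mixture (illustrative sketch)."""
    s = np.asarray(scores, dtype=float)
    # Crude initialization: components anchored at the score extremes.
    mu = np.array([s.min(), s.max()])
    var = np.array([s.var(), s.var()]) + 1e-6
    pi = np.array([0.5, 0.5])
    for _ in range(n_iter):
        # E-step: per-score component responsibilities.
        log_pdf = -0.5 * ((s[:, None] - mu) ** 2 / var + np.log(2 * np.pi * var))
        log_r = np.log(pi) + log_pdf
        r = np.exp(log_r - log_r.max(axis=1, keepdims=True))
        r /= r.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means, variances.
        nk = r.sum(axis=0)
        pi = nk / len(s)
        mu = (r * s[:, None]).sum(axis=0) / nk
        var = (r * (s[:, None] - mu) ** 2).sum(axis=0) / nk + 1e-6
    return pi, mu, var

def llr(scores, mu, var):
    """Calibrated LLR: log p(score | target) - log p(score | non-target),
    taking component 1 (higher mean) as the target component."""
    s = np.asarray(scores, dtype=float)
    def logpdf(x, m, v):
        return -0.5 * ((x - m) ** 2 / v + np.log(2 * np.pi * v))
    return logpdf(s, mu[1], var[1]) - logpdf(s, mu[0], var[0])
```

Because the mixture is fit to the pooled scores alone, no trial labels are needed, which is what makes the calibration unsupervised.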

    A comparison of linear and non-linear calibrations for speaker recognition

    In recent work on both generative and discriminative score-to-log-likelihood-ratio calibration, it was shown that linear transforms give good accuracy only over a limited range of operating points. Moreover, these methods required tailoring of the calibration training objective functions in order to target the desired region of best accuracy. Here, we generalize the linear recipes to non-linear ones. We experiment with a non-linear, non-parametric, discriminative PAV solution, as well as parametric, generative, maximum-likelihood solutions that use Gaussian, Student's t and normal-inverse-Gaussian score distributions. Experiments on NIST SRE'12 scores suggest that the non-linear methods provide wider ranges of optimal accuracy and can be trained without resorting to objective-function tailoring. Comment: Accepted for Odyssey 2014: The Speaker and Language Recognition Workshop.
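The PAV (pool-adjacent-violators) solution mentioned above is isotonic regression: sort trials by score, then fit the closest non-decreasing step function to the labels, which yields a non-parametric, monotone score-to-posterior mapping. A minimal sketch under that reading (function names are illustrative, not from the paper):

```python
def pav(y):
    """Pool-adjacent-violators: least-squares non-decreasing fit to y."""
    # Each stack entry is a merged block: [sum of values, count].
    stack = []
    for v in y:
        stack.append([float(v), 1])
        # Merge while the previous block's mean violates monotonicity.
        while len(stack) > 1 and stack[-2][0] / stack[-2][1] > stack[-1][0] / stack[-1][1]:
            s, c = stack.pop()
            stack[-1][0] += s
            stack[-1][1] += c
    fit = []
    for s, c in stack:
        fit.extend([s / c] * c)  # each block contributes its mean
    return fit

def pav_calibrate(scores, labels):
    """Monotone score-to-posterior calibration map as (score, posterior) pairs,
    defined at the sorted training scores (labels: 1 = target, 0 = non-target)."""
    order = sorted(range(len(scores)), key=lambda i: scores[i])
    fit = pav([labels[i] for i in order])
    return [(scores[i], p) for i, p in zip(order, fit)]
```

The resulting step function is exactly the "non-linear, non-parametric" calibration family: no functional form is imposed beyond monotonicity, which is why its accuracy is not tied to one operating region.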

    Performance of likelihood ratios considering bounds on the probability of observing misleading evidence

    This is a pre-copyedited, author-produced version of an article accepted for publication in Law, Probability & Risk following peer review. The version of record (Jose Juan Lucena-Molina, Daniel Ramos-Castro, Joaquin Gonzalez-Rodriguez; Performance of likelihood ratios considering bounds on the probability of observing misleading evidence. Law, Probability and Risk 2015; 14 (3): 175-192) is available online at: http://dx.doi.org/10.1093/lpr/mgu022
    In this article, we introduce a new tool, 'Limit Tippett Plots', to assess the performance of likelihood ratios in evidence evaluation, including theoretical bounds on the probability of observing misleading evidence. To do that, we first review previous work about such bounds. We then derive 'Limit Tippett Plots', which complement Tippett plots with information about the limits on the probability of observing misleading evidence, taken as a reference. This gives a much richer way to measure the performance of likelihood ratios. Finally, we present an experimental example in forensic automatic speaker recognition following the protocols of the Acoustics Laboratory of Guardia Civil, where it can be seen that 'Limit Tippett Plots' help to detect problems in the calculation of likelihood ratios.
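For orientation, the empirical quantities that a Tippett plot displays can be sketched as follows (the Limit Tippett Plots of the paper additionally overlay theoretical bounds, which are not reproduced here; names and conventions below are illustrative, with misleading evidence read as target-trial LLRs below 0 and non-target-trial LLRs above 0):

```python
def tippett(llrs, thresholds):
    """One Tippett curve: proportion of LLRs exceeding each threshold."""
    n = len(llrs)
    return [sum(1 for x in llrs if x > t) / n for t in thresholds]

def misleading_rates(target_llrs, nontarget_llrs):
    """Empirical rates of misleading evidence: LLRs pointing the wrong way."""
    p_mis_target = sum(1 for x in target_llrs if x < 0) / len(target_llrs)
    p_mis_nontarget = sum(1 for x in nontarget_llrs if x > 0) / len(nontarget_llrs)
    return p_mis_target, p_mis_nontarget
```

Plotting `tippett` over a grid of thresholds for the target and non-target LLR sets gives the two familiar Tippett curves; the misleading-evidence rates are the values those curves take at threshold 0.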

    Evaluation of Scientific Evidence: A Proposal on Ontological and Epistemological Bases, and Some Statistical Applications
