
    Generative Modelling for Unsupervised Score Calibration

    Score calibration enables automatic speaker recognizers to make cost-effective accept/reject decisions. Traditional calibration requires supervised data, which is an expensive resource. We propose a 2-component GMM for unsupervised calibration and demonstrate good performance relative to a supervised baseline on NIST SRE'10 and SRE'12. A Bayesian analysis demonstrates that the uncertainty associated with the unsupervised calibration parameter estimates is surprisingly small.
    Comment: Accepted for ICASSP 201
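
    As a rough illustration of the unsupervised calibration idea, the sketch below fits a two-component GMM to a pool of unlabelled trial scores and reads calibrated log-likelihood ratios off the two component densities. It is only a sketch under assumptions the abstract does not spell out (an sklearn EM fit, one Gaussian per class, and the heuristic that the higher-mean component corresponds to target trials); the paper's generative model and its Bayesian treatment of the calibration parameters are not reproduced here.

        import numpy as np
        from scipy.stats import norm
        from sklearn.mixture import GaussianMixture

        def fit_unsupervised_calibrator(scores):
            """Fit a 2-component GMM to pooled, unlabelled trial scores via EM."""
            gmm = GaussianMixture(n_components=2, covariance_type="diag",
                                  random_state=0).fit(scores.reshape(-1, 1))
            # Assumption: the component with the higher mean models target trials.
            tar, non = np.argsort(gmm.means_.ravel())[::-1]
            return dict(mu_tar=gmm.means_[tar, 0], sd_tar=np.sqrt(gmm.covariances_[tar, 0]),
                        mu_non=gmm.means_[non, 0], sd_non=np.sqrt(gmm.covariances_[non, 0]))

        def calibrated_llr(scores, p):
            """Map raw scores to log-likelihood ratios under the two fitted components."""
            return (norm.logpdf(scores, p["mu_tar"], p["sd_tar"])
                    - norm.logpdf(scores, p["mu_non"], p["sd_non"]))

        # Synthetic pool: mostly non-target scores plus a smaller target cluster.
        rng = np.random.default_rng(0)
        scores = np.concatenate([rng.normal(0.0, 1.0, 9000), rng.normal(3.0, 1.2, 1000)])
        p = fit_unsupervised_calibrator(scores)
        print(calibrated_llr(np.array([-1.0, 1.5, 4.0]), p))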

    Rectified Gaussian Scale Mixtures and the Sparse Non-Negative Least Squares Problem

    In this paper, we develop a Bayesian evidence maximization framework to solve the sparse non-negative least squares (S-NNLS) problem. We introduce a family of probability densities referred to as the Rectified Gaussian Scale Mixture (R-GSM) to model the sparsity-enforcing prior distribution for the solution. The R-GSM prior encompasses a variety of heavy-tailed densities such as the rectified Laplacian and rectified Student-t distributions with a proper choice of the mixing density. We utilize the hierarchical representation induced by the R-GSM prior and develop an evidence maximization framework based on the Expectation-Maximization (EM) algorithm. Using the EM-based method, we estimate the hyper-parameters and obtain a point estimate for the solution. We refer to the proposed method as rectified sparse Bayesian learning (R-SBL). We provide four R-SBL variants that offer a range of options for computational complexity and the quality of the E-step computation. These methods include Markov chain Monte Carlo EM, linear minimum mean-square-error estimation, approximate message passing, and a diagonal approximation. Using numerical experiments, we show that the proposed R-SBL method outperforms existing S-NNLS solvers in terms of both signal and support recovery performance, and is also very robust against the structure of the design matrix.
    Comment: Under review by IEEE Transactions on Signal Processing
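
    For context, the sketch below sets up the S-NNLS problem the paper targets (recover a sparse, non-negative x from y = Ax + noise) and solves it with a plain NNLS baseline from SciPy. The synthetic dimensions and noise level are illustrative assumptions, and the proposed R-SBL algorithm itself is not implemented here; this only shows the kind of problem instance on which such solvers are compared.

        import numpy as np
        from scipy.optimize import nnls

        rng = np.random.default_rng(1)
        n, m, k = 64, 256, 8                        # measurements, dictionary atoms, sparsity
        A = rng.standard_normal((n, m))             # design (dictionary) matrix
        x_true = np.zeros(m)
        support = rng.choice(m, size=k, replace=False)
        x_true[support] = rng.uniform(0.5, 2.0, size=k)     # non-negative amplitudes
        y = A @ x_true + 0.01 * rng.standard_normal(n)      # noisy measurements

        x_hat, _ = nnls(A, y)                       # baseline solver, no sparsity prior
        print("support hits:", int(np.sum(x_hat[support] > 1e-3)), "of", k)
        print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))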

    A Bayesian predictive classification approach to robust speech recognition

    We introduce a new Bayesian predictive classification (BPC) approach to robust speech recognition and apply the BPC framework to speech recognition based on Gaussian mixture continuous density hidden Markov models. We propose and focus on one of the approximate BPC approaches, called quasi-Bayesian predictive classification (QBPC). In comparison with standard plug-in maximum a posteriori decoding, when the QBPC method is applied to speaker-independent recognition of a confusable vocabulary, namely the 26 English letters, where a broad range of mismatches between training and testing conditions exists, QBPC achieves around 14% relative reduction in recognition error rate. When the QBPC method is applied to cross-gender testing on a less confusable vocabulary, namely 20 English digits and commands, it achieves around 24% relative reduction in recognition error rate.
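
    In symbols, the contrast the abstract draws is between plug-in decoding with a point estimate of the model parameters and predictive decoding that averages the likelihood over the parameter posterior. The notation below (word hypothesis W, HMM parameters Lambda, observation sequence X, training data D) is ours, a generic statement of the BPC idea rather than the paper's exact formulation:

        % Plug-in MAP decoding: score each hypothesis with a point estimate of its parameters
        \hat{W}_{\mathrm{plug\text{-}in}} = \arg\max_{W}\; p(X \mid \hat{\Lambda}_{W}),
        \qquad \hat{\Lambda}_{W} = \arg\max_{\Lambda}\; p(\Lambda \mid \mathcal{D}, W)

        % BPC: score each hypothesis with its predictive density, integrating out the parameters
        \hat{W}_{\mathrm{BPC}} = \arg\max_{W}\; \int p(X \mid \Lambda_{W})\, p(\Lambda_{W} \mid \mathcal{D}, W)\, d\Lambda_{W}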

    A Bayesian predictive classification approach to robust speech recognition

    We introduce a new decision strategy called Bayesian predictive classification (BPC) for robust speech recognition where an unknown mismatch exists between the training and testing conditions. We then propose and focus on one of the approximate BPC approaches, called quasi-Bayes predictive classification (QBPC). In a series of comparative experiments where the mismatch is caused by additive white Gaussian noise, we show that the proposed QBPC approach achieves a considerable improvement over the conventional plug-in MAP decision rule.

    Advances in Probabilistic Deep Learning

    This thesis is concerned with methodological advances in probabilistic inference and their application to core challenges in machine perception and AI. Inferring a posterior distribution over the parameters of a model given some data is a central challenge that occurs in many fields, ranging from finance and artificial intelligence to physics. Exact calculation is impossible in all but the simplest cases, and a rich field of approximate inference has been developed to tackle this challenge. This thesis develops both an advance in approximate inference and an application of these methods to the problem of speech synthesis. In the first section of this thesis we develop a novel framework for constructing Markov chain Monte Carlo (MCMC) kernels that can efficiently sample from high-dimensional distributions, such as the posteriors that frequently occur in machine perception. We provide a specific instance of this framework and demonstrate that it can match or exceed the performance of Hamiltonian Monte Carlo without requiring gradients of the target distribution. In the second section of the thesis we focus on the application of approximate inference techniques to the task of synthesising human speech from text. By using advances in neural variational inference we are able to construct a state-of-the-art speech synthesis system in which it is possible to control aspects of prosody, such as emotional expression, from significantly less supervised data than previously existing state-of-the-art methods require.
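
    To make the MCMC terminology concrete, the sketch below implements the simplest gradient-free transition kernel, random-walk Metropolis, targeting an unnormalised log-density. It is only a baseline illustration of what an MCMC kernel is (a transition that leaves the target distribution invariant), not the thesis's proposed kernel; the target, step size, and dimensions are arbitrary assumptions.

        import numpy as np

        def rw_metropolis_kernel(x, log_target, step, rng):
            """One gradient-free MCMC transition: propose, then accept or reject."""
            proposal = x + step * rng.standard_normal(x.shape)
            log_alpha = log_target(proposal) - log_target(x)   # Metropolis acceptance ratio
            return proposal if np.log(rng.uniform()) < log_alpha else x

        def log_target(x):
            # Stand-in for an unnormalised log-posterior: a standard Gaussian in 10-D.
            return -0.5 * np.sum(x ** 2)

        rng = np.random.default_rng(0)
        x, samples = np.zeros(10), []
        for _ in range(5000):
            x = rw_metropolis_kernel(x, log_target, step=0.5, rng=rng)
            samples.append(x)
        samples = np.asarray(samples[1000:])        # discard burn-in
        print("posterior mean estimate:", samples.mean(axis=0))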

    Structure Learning in Audio
