
    A Constrained EM Algorithm for Independent Component Analysis

    We introduce a novel way of performing independent component analysis using a constrained version of the expectation-maximization (EM) algorithm. The source distributions are modeled as D one-dimensional mixtures of gaussians. The observed data are modeled as linear mixtures of the sources with additive, isotropic noise. This generative model is fit to the data using constrained EM. The simpler “soft-switching” approach is introduced, which uses only one parameter to decide on the sub- or supergaussian nature of the sources. We explain how our approach relates to independent factor analysis.
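
    A minimal sketch of the generative model described above: D sources drawn from one-dimensional two-Gaussian mixtures, mixed linearly with additive isotropic noise. The `sample_source` helper and its single "switch" weight are hypothetical illustrations of the soft-switching idea, not the paper's exact parameterization.

```python
import numpy as np

rng = np.random.default_rng(0)
D, N = 3, 5000  # number of sources, number of samples

def sample_source(n, switch):
    """Draw n samples from a 1-D two-Gaussian mixture whose shape is
    controlled by one weight: switch near 1 keeps both components centred
    with unequal scales (supergaussian, peaked); switch near 0 pushes the
    components apart (subgaussian, bimodal). Hypothetical parameterization."""
    comp = rng.random(n) < 0.5
    mu = (1.0 - switch) * np.where(comp, -1.5, 1.5)
    sd = np.where(comp, 0.3 + 0.7 * switch, 1.0)
    return rng.normal(mu, sd)

S = np.stack([sample_source(N, sw) for sw in (0.9, 0.5, 0.1)])  # D x N sources
A = rng.normal(size=(D, D))                  # linear mixing matrix
sigma = 0.1                                  # isotropic noise level
X = A @ S + sigma * rng.normal(size=(D, N))  # observed linear mixtures
```

    Constrained EM would then fit A, sigma, and the per-source mixture parameters from X alone.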

    Sequential Monte Carlo EM for multivariate probit models

    Multivariate probit models (MPM) have the appealing feature of capturing some of the dependence structure between the components of multidimensional binary responses. The key to the dependence modelling is the covariance matrix of an underlying latent multivariate Gaussian. Most approaches to MLE in multivariate probit regression rely on MCEM algorithms to avoid computationally intensive evaluations of multivariate normal orthant probabilities. As an alternative to the much-used Gibbs sampler, a new SMC sampler for truncated multivariate normals is proposed. The algorithm proceeds in two stages: samples are first drawn from truncated multivariate Student t distributions and then further evolved towards a Gaussian. The sampler is then embedded in an MCEM algorithm. The sequential nature of SMC methods can be exploited to design a fully sequential version of the EM, where the samples are simply updated from one iteration to the next rather than resampled from scratch. Recycling the samples in this manner significantly reduces the computational cost. An alternative view of the standard conditional maximisation step provides the basis for an iterative procedure to fully perform the maximisation needed in the EM algorithm. The identifiability of MPM is also thoroughly discussed. In particular, the likelihood invariance can be embedded in the EM algorithm to ensure that constrained and unconstrained maximisation are equivalent. A simple iterative procedure is then derived for either maximisation which takes effectively no computational time. The method is validated by applying it to the widely analysed Six Cities dataset and to a higher-dimensional simulated example. Previous approaches to the Six Cities data overly restrict the parameter space but, by considering the correct invariance, the maximum likelihood is quite naturally improved when treating the full unrestricted model.
    Comment: 26 pages, 2 figures. In press, Computational Statistics & Data Analysis
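
    The abstract contrasts the proposed SMC sampler with the much-used Gibbs sampler for truncated multivariate normals. Below is a minimal sketch of that Gibbs baseline, assuming a zero-mean latent Gaussian truncated to the orthant implied by the binary responses; the function name and signature are hypothetical.

```python
import numpy as np
from scipy.stats import truncnorm

def tmvn_gibbs(Sigma, signs, n_iter=2000, rng=None):
    """Gibbs sampler for a zero-mean multivariate normal truncated to the
    orthant given by `signs` (+1 -> coordinate > 0, -1 -> coordinate < 0).
    This is the standard baseline the abstract contrasts with SMC."""
    if rng is None:
        rng = np.random.default_rng()
    d = Sigma.shape[0]
    P = np.linalg.inv(Sigma)        # precision matrix
    z = signs.astype(float)         # feasible starting point inside the orthant
    draws = np.empty((n_iter, d))
    for t in range(n_iter):
        for i in range(d):
            # Conditional of z_i given the rest is univariate normal with
            # variance 1/P_ii and mean -(1/P_ii) * sum_{j != i} P_ij z_j.
            others = np.delete(np.arange(d), i)
            cond_var = 1.0 / P[i, i]
            cond_mean = -cond_var * (P[i, others] @ z[others])
            sd = np.sqrt(cond_var)
            # Truncate to the correct half-line for this orthant.
            if signs[i] > 0:
                a, b = (0.0 - cond_mean) / sd, np.inf
            else:
                a, b = -np.inf, (0.0 - cond_mean) / sd
            z[i] = truncnorm.rvs(a, b, loc=cond_mean, scale=sd,
                                 random_state=rng)
        draws[t] = z
    return draws
```

    In an MCEM loop, draws like these supply the E-step expectations that would otherwise require evaluating multivariate normal orthant probabilities.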

    A data driven equivariant approach to constrained Gaussian mixture modeling

    Maximum likelihood estimation of Gaussian mixture models with different class-specific covariance matrices is known to be problematic. This is due to the unboundedness of the likelihood, together with the presence of spurious maximizers. Existing methods to bypass this obstacle are based on the fact that unboundedness is avoided if the eigenvalues of the covariance matrices are bounded away from zero. This can be done by imposing constraints on the covariance matrices, i.e. by incorporating a priori information on the covariance structure of the mixture components. The present work introduces a constrained equivariant approach, where the class-conditional covariance matrices are shrunk towards a pre-specified matrix Psi. Data-driven choices of the matrix Psi, when a priori information is not available, and the optimal amount of shrinkage are investigated. The effectiveness of the proposal is evaluated on the basis of a simulation study and an empirical example.
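
    A hedged sketch of the shrinkage idea in the M-step, assuming plain linear shrinkage of each weighted sample covariance towards Psi: the helper name and the single shrinkage weight `lam` are illustrative assumptions, while the data-driven choices of Psi and of the amount of shrinkage are exactly what the paper investigates.

```python
import numpy as np

def shrunk_covariances(X, resp, Psi, lam):
    """M-step update for class covariances with shrinkage towards Psi.
    X: (n, p) data; resp: (n, K) EM responsibilities; lam in [0, 1].
    Hypothetical helper; the paper's exact constraint form may differ."""
    n, p = X.shape
    K = resp.shape[1]
    covs = np.empty((K, p, p))
    for k in range(K):
        w = resp[:, k]
        mu = (w @ X) / w.sum()
        Xc = (X - mu) * np.sqrt(w)[:, None]
        S_k = Xc.T @ Xc / w.sum()              # weighted sample covariance
        covs[k] = (1 - lam) * S_k + lam * Psi  # shrink towards Psi
    return covs
```

    With lam > 0 the eigenvalues of each covariance stay bounded away from zero (for positive definite Psi), which is what removes the unboundedness of the likelihood.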

    Dealing with Label Switching in Mixture Models Under Genuine Multimodality

    The fitting of finite mixture models is an ill-defined estimation problem, as completely different parameterizations can induce similar mixture distributions. This leads to multiple modes in the likelihood, which is a problem for frequentist maximum likelihood estimation and complicates statistical inference on Markov chain Monte Carlo draws in Bayesian estimation. For the analysis of the posterior density of these draws, a suitable separation into different modes is desirable. In addition, a unique labelling of the component-specific estimates is necessary to solve the label switching problem. This paper presents and compares two approaches to achieve these goals: relabelling under multimodality and constrained clustering. The algorithmic details are discussed and their application is demonstrated on artificial and real-world data.
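
    One common relabelling strategy of the kind the paper compares is to permute each MCMC draw so that its component parameters best match a reference labelling, solved as an assignment problem. The function below is a sketch under that assumption; the name and the squared-distance cost are illustrative choices, not the paper's exact procedure.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def relabel_draws(means, ref):
    """Relabel MCMC draws of component means against a reference ordering.
    means: (T, K, p) draws; ref: (K, p) reference (e.g. means at a mode).
    Illustrative sketch of one relabelling approach."""
    T, K, _ = means.shape
    relabelled = np.empty_like(means)
    for t in range(T):
        # Cost of assigning draw component i to reference component j.
        cost = ((means[t][:, None, :] - ref[None, :, :]) ** 2).sum(-1)
        rows, cols = linear_sum_assignment(cost)  # optimal permutation
        relabelled[t, cols] = means[t, rows]      # permute to match reference
    return relabelled
```

    Under genuine multimodality, the draws would first be separated into modes and a reference chosen per mode before applying a permutation step like this one.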

    A Unifying review of linear gaussian models

    Factor analysis, principal component analysis, mixtures of gaussian clusters, vector quantization, Kalman filter models, and hidden Markov models can all be unified as variations of unsupervised learning under a single basic generative model. This is achieved by collecting together disparate observations and derivations made by many previous authors and introducing a new way of linking discrete and continuous state models using a simple nonlinearity. Through the use of other nonlinearities, we show how independent component analysis is also a variation of the same basic generative model. We show that factor analysis and mixtures of gaussians can be implemented in autoencoder neural networks and learned using squared error plus the same regularization term. We introduce a new model for static data, known as sensible principal component analysis, as well as a novel concept of spatially adaptive observation noise. We also review some of the literature involving global and local mixtures of the basic models and provide pseudocode for inference and learning for all the basic models.
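
    A minimal sketch of the single basic generative model (a linear-Gaussian state-space model), with comments noting which settings recover some of the special cases listed above; the dimensions and parameter values are arbitrary illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
k, p, T = 2, 5, 200     # latent dimension, observed dimension, time steps

A = 0.9 * np.eye(k)     # state dynamics; A = 0 gives a static (iid) model
C = rng.normal(size=(p, k))              # observation / loading matrix
Q = 0.1 * np.eye(k)                      # state noise covariance
R = np.diag(rng.uniform(0.1, 0.5, p))    # diagonal R + A = 0 -> factor analysis;
                                         # spherical R (alpha * I) -> PCA-like

z = np.zeros(k)
X = np.empty((T, p))
for t in range(T):
    z = A @ z + rng.multivariate_normal(np.zeros(k), Q)     # z_t = A z_{t-1} + w
    X[t] = C @ z + rng.multivariate_normal(np.zeros(p), R)  # x_t = C z_t + v
```

    With dynamics retained this is the Kalman filter model; discretizing the state through the simple nonlinearity the paper describes is what links these continuous models to vector quantization and hidden Markov models.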

    A general framework for online audio source separation

    We consider the problem of online audio source separation. Existing algorithms adopt either a sliding-block approach or a stochastic gradient approach, which is faster but less accurate. Also, they rely either on spatial cues or on spectral cues and cannot separate certain mixtures. In this paper, we design a general online audio source separation framework that combines both approaches and both types of cues. The model parameters are estimated in the Maximum Likelihood (ML) sense using a Generalised Expectation Maximisation (GEM) algorithm with multiplicative updates. The separation performance is evaluated as a function of the block size and the step size and compared to that of an offline algorithm.
    Comment: International Conference on Latent Variable Analysis and Signal Separation (2012)
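
    To illustrate the flavor of multiplicative updates used within such a GEM algorithm, here is a deliberately simplified sketch that fits only a spectral NMF model with the classical Euclidean multiplicative rules; the paper's framework additionally handles spatial cues, block-wise online processing, and the full ML criterion, none of which appear here.

```python
import numpy as np

def nmf_mu(V, K, n_iter=100, eps=1e-9, rng=None):
    """Multiplicative updates for a nonnegative spectral model V ~ W H.
    V: (F, N) magnitude or power spectrogram; K: number of components.
    Simplified stand-in for the spectral part of an audio separation model."""
    if rng is None:
        rng = np.random.default_rng()
    F, N = V.shape
    W = rng.random((F, K)) + eps     # spectral dictionary
    H = rng.random((K, N)) + eps     # activations over time
    for _ in range(n_iter):
        # Classical Euclidean multiplicative rules; audio work often uses
        # Itakura-Saito divergences instead, but the update style is the same.
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H
```

    In an online setting one would apply such updates block by block, estimating activations for incoming frames while adapting the dictionary with a step size, which is where the block-size/step-size trade-off evaluated in the paper arises.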