
    Bayesian source separation with mixture of Gaussians prior for sources and Gaussian prior for mixture coefficients

    In this contribution, we present new algorithms for source separation in the case of a noisy instantaneous linear mixture, within the Bayesian statistical framework. The source distributions are modeled by a mixture of Gaussians [Moulines97] and the mixing matrix elements by Gaussian priors [Djafari99a]. We model the mixture of Gaussians hierarchically by means of hidden variables representing the labels of the mixture. We then consider the joint a posteriori distribution of the sources, the mixing matrix elements, the mixture labels, and the other mixture parameters, with appropriate prior probability laws chosen to eliminate the degeneracy of the likelihood function in the variance parameters, and we propose two iterative algorithms to estimate the sources, the mixing matrix, and the hyperparameters jointly: a joint MAP (maximum a posteriori) algorithm and a penalized EM algorithm. The illustrative example is taken from [Macchi99] to allow comparison with other algorithms proposed in the literature. Keywords: source separation, Gaussian mixture, classification, JMAP algorithm, penalized EM algorithm.
    Comment: Presented at MaxEnt00. Appeared in Bayesian Inference and Maximum Entropy Methods, Ali Mohammad-Djafari (Ed.), AIP Proceedings (http://proceedings.aip.org/proceedings/confproceed/568.jsp).
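
    The abstract does not spell out the update equations. The following is a minimal numpy sketch of what one JMAP-style alternating sweep could look like for the model x = A s + n, with a shared K-component Gaussian-mixture prior on the sources and a Gaussian prior on A; the variable names, the shared-component simplification, and the update order are illustrative assumptions, not the authors' exact scheme.

    import numpy as np

    def jmap_step(x, A, s, mus, vars_, pis, noise_var, prior_var_A):
        """One illustrative JMAP-style sweep for x = A s + n (hypothetical sketch)."""
        n_src, T = s.shape
        # 1) Update hidden labels: classify each source sample to a mixture component.
        z = np.empty((n_src, T), dtype=int)
        for i in range(n_src):
            # Log-posterior of each of the K components for every sample of source i.
            logp = (np.log(pis)[:, None]
                    - 0.5 * np.log(vars_)[:, None]
                    - 0.5 * (s[i] - mus[:, None]) ** 2 / vars_[:, None])
            z[i] = np.argmax(logp, axis=0)
        # 2) Update sources: MAP combination of the Gaussian likelihood and the
        #    Gaussian component selected by the current labels.
        for t in range(T):
            prior_prec = np.diag(1.0 / vars_[z[:, t]])
            lhs = A.T @ A / noise_var + prior_prec
            rhs = A.T @ x[:, t] / noise_var + prior_prec @ mus[z[:, t]]
            s[:, t] = np.linalg.solve(lhs, rhs)
        # 3) Update mixing matrix: ridge-regularized least squares, i.e. the MAP
        #    estimate under the Gaussian prior on the elements of A.
        lhs = s @ s.T / noise_var + np.eye(n_src) / prior_var_A
        A = np.linalg.solve(lhs, s @ x.T / noise_var).T
        return A, s, z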

    Dimensionality reduction of clustered data sets

    We present a novel probabilistic latent variable model that performs linear dimensionality reduction on data sets that contain clusters. We prove that the maximum likelihood solution of the model is an unsupervised generalisation of linear discriminant analysis. This provides a completely new approach to one of the most established and widely used classification algorithms. The performance of the model is then demonstrated on a number of real and artificial data sets.
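
    The abstract does not give the model's equations. For orientation, here is a minimal numpy sketch of the classical supervised Fisher LDA projection that the paper's maximum likelihood solution is said to generalise; the function name and interface are hypothetical.

    import numpy as np

    def lda_projection(X, y, n_components):
        """Classical Fisher LDA directions (the supervised case the model generalises)."""
        classes = np.unique(y)
        mean = X.mean(axis=0)
        d = X.shape[1]
        Sw = np.zeros((d, d))  # within-class scatter
        Sb = np.zeros((d, d))  # between-class scatter
        for c in classes:
            Xc = X[y == c]
            mc = Xc.mean(axis=0)
            Sw += (Xc - mc).T @ (Xc - mc)
            diff = (mc - mean)[:, None]
            Sb += len(Xc) * (diff @ diff.T)
        # Leading eigenvectors of Sw^{-1} Sb give the discriminant directions.
        eigvals, eigvecs = np.linalg.eig(np.linalg.solve(Sw, Sb))
        order = np.argsort(eigvals.real)[::-1]
        return eigvecs[:, order[:n_components]].real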

    Wavelet Domain Image Separation

    In this paper, we consider the problem of blind signal and image separation using a sparse representation of the images in the wavelet domain. We work in a Bayesian estimation framework, exploiting the fact that the distribution of the wavelet coefficients of real-world images is naturally modeled by an exponential power probability density function. The Bayesian approach, which has been used with success in blind source separation, also makes it possible to include any prior information we may have on the mixing matrix elements as well as on the hyperparameters (the parameters of the prior laws of the noise and the sources). We consider two cases: first, the case where the wavelet coefficients are assumed to be i.i.d., and second, the case where the correlation between the coefficients of two adjacent scales is modeled by a first-order Markov chain. This paper reports only on the first case; results for the second case will be reported in the near future. The estimation computations are done via a Markov chain Monte Carlo (MCMC) procedure, and simulations show the performance of the proposed method. Keywords: blind source separation, wavelets, Bayesian estimation, MCMC, Metropolis-Hastings algorithm.
    Comment: Presented at MaxEnt2002, the 22nd International Workshop on Bayesian and Maximum Entropy Methods (Aug. 3-9, 2002, Moscow, Idaho, USA). To appear in Proceedings of the American Institute of Physics.
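
    As a rough illustration of the i.i.d. case, here is a minimal numpy sketch of a single random-walk Metropolis-Hastings update for the source wavelet coefficients at one location, combining a Gaussian likelihood for x = A w + n with an exponential power (generalised Gaussian) prior. The parameterisation, proposal, and step size are assumptions, not the paper's exact sampler.

    import numpy as np

    rng = np.random.default_rng(0)

    def log_gg_prior(w, alpha, beta):
        """Log of an exponential power (generalised Gaussian) density, up to a constant."""
        return -np.sum((np.abs(w) / alpha) ** beta)

    def mh_step(x, w, A, noise_var, alpha, beta, step=0.05):
        """One random-walk Metropolis-Hastings update of the source wavelet
        coefficients w at a single location, targeting
        p(w | x) proportional to N(x; A w, noise_var I) * GG(w; alpha, beta)."""
        def log_post(w_):
            resid = x - A @ w_
            return -0.5 * np.sum(resid ** 2) / noise_var + log_gg_prior(w_, alpha, beta)
        proposal = w + step * rng.standard_normal(w.shape)
        accept = np.log(rng.uniform()) < log_post(proposal) - log_post(w)
        return (proposal, True) if accept else (w, False)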

    Hierarchically Clustered Representation Learning

    The joint optimization of representation learning and clustering in the embedding space has seen a breakthrough in recent years. Despite this advance, clustering with representation learning has been limited to flat-level categories, which often involves cohesive clustering with a focus on instance relations. To overcome the limitations of flat clustering, we introduce hierarchically clustered representation learning (HCRL), which simultaneously optimizes representation learning and hierarchical clustering in the embedding space. Compared with the few prior works, HCRL is the first to consider generating deep embeddings from every component of the hierarchy, not just the leaf components. In addition to obtaining hierarchically clustered embeddings, we can reconstruct data at various abstraction levels, infer the intrinsic hierarchical structure, and learn the level-proportion features. We conducted evaluations on image and text domains, and our quantitative analyses showed competitive likelihoods and the best accuracies compared with the baselines.
    Comment: 10 pages, 7 figures, under review as a conference paper.
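
    The abstract does not define HCRL's objective. For contrast, here is a minimal scipy sketch of the two-stage baseline that joint approaches like HCRL aim to improve on: embed first, then hierarchically cluster the embeddings as a separate post-hoc step. The embedding data here is a hypothetical stand-in for an encoder's output.

    import numpy as np
    from scipy.cluster.hierarchy import linkage, fcluster

    # Stand-in for encoder output; in the two-stage baseline these would come
    # from a separately trained representation model (random data here).
    rng = np.random.default_rng(0)
    embeddings = rng.standard_normal((500, 32))

    Z = linkage(embeddings, method="ward")            # agglomerative tree over embeddings
    labels = fcluster(Z, t=10, criterion="maxclust")  # cut the tree into 10 flat clusters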

    A Unifying Review of Linear Gaussian Models

    Factor analysis, principal component analysis, mixtures of Gaussian clusters, vector quantization, Kalman filter models, and hidden Markov models can all be unified as variations of unsupervised learning under a single basic generative model. This is achieved by collecting together disparate observations and derivations made by many previous authors and by introducing a new way of linking discrete and continuous state models using a simple nonlinearity. Through the use of other nonlinearities, we show how independent component analysis is also a variation of the same basic generative model. We show that factor analysis and mixtures of Gaussians can be implemented in autoencoder neural networks and learned using squared error plus the same regularization term. We introduce a new model for static data, known as sensible principal component analysis, as well as a novel concept of spatially adaptive observation noise. We also review some of the literature involving global and local mixtures of the basic models and provide pseudocode for inference and learning for all the basic models.
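
    A minimal numpy sketch of the single basic generative model around which the review is organised, a linear Gaussian state-space model; the comments note how some of the listed special cases arise. The parameter names are illustrative.

    import numpy as np

    rng = np.random.default_rng(0)

    def sample_lgm(A, C, Q, R, x0, T):
        """Sample from the basic linear Gaussian generative model:
            x_{t+1} = A x_t + w_t,  w_t ~ N(0, Q)   (state dynamics)
            y_t     = C x_t + v_t,  v_t ~ N(0, R)   (observations)
        Per the review: with A = 0 and Q = I this is factor analysis when R is
        diagonal, SPCA when R is spherical, and PCA in the limit R -> 0; with
        nonzero A and continuous states it is the Kalman filter model; discrete
        states (via a winner-take-all nonlinearity) give Gaussian mixtures and HMMs.
        """
        k, d = x0.shape[0], C.shape[0]
        xs, ys = np.empty((T, k)), np.empty((T, d))
        x = x0
        for t in range(T):
            xs[t] = x
            ys[t] = C @ x + rng.multivariate_normal(np.zeros(d), R)
            x = A @ x + rng.multivariate_normal(np.zeros(k), Q)
        return xs, ys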