
    Deep Gaussian Mixture Models

    Deep learning is a hierarchical inference method built from multiple successive layers of learning, which together describe complex relationships more efficiently. In this work, Deep Gaussian Mixture Models are introduced and discussed. A Deep Gaussian Mixture Model (DGMM) is a network of multiple layers of latent variables in which, at each layer, the variables follow a mixture of Gaussian distributions. The deep mixture model thus consists of a set of nested mixtures of linear models, which globally provide a nonlinear model able to describe the data in a very flexible way. To avoid overparameterized solutions, dimension reduction by factor models can be applied at each layer of the architecture, resulting in deep mixtures of factor analysers. Comment: 19 pages, 4 figures
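    To make the "nested mixtures of linear models" idea concrete, the sketch below draws samples from a small two-layer deep Gaussian mixture: at each layer a component is selected and the latent variable is pushed through a component-specific linear-Gaussian map. The layer sizes, weights, and parameter values are illustrative assumptions, not settings from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative two-layer DGMM: each layer is a list of components, each component
# a mixing weight w, an offset mu, a loading matrix A, and a noise covariance.
layers = [
    # layer 1 (closest to the data): two components mapping a 2-d latent to 2-d output
    [dict(w=0.5, mu=np.zeros(2), A=np.eye(2),       cov=0.1 * np.eye(2)),
     dict(w=0.5, mu=np.ones(2),  A=0.5 * np.eye(2), cov=0.2 * np.eye(2))],
    # layer 2 (deepest): two components acting on the top-level latent
    [dict(w=0.3, mu=-np.ones(2), A=np.eye(2),       cov=0.1 * np.eye(2)),
     dict(w=0.7, mu=np.ones(2),  A=2.0 * np.eye(2), cov=0.1 * np.eye(2))],
]

def sample_dgmm(layers, n):
    """Draw n samples by propagating a standard-normal latent down the layers."""
    draws = []
    for _ in range(n):
        z = rng.standard_normal(2)          # top-level latent variable
        for comps in reversed(layers):      # deepest layer first, data layer last
            weights = [c["w"] for c in comps]
            c = comps[rng.choice(len(comps), p=weights)]
            # component-specific linear-Gaussian transformation of the latent
            z = c["mu"] + c["A"] @ z + rng.multivariate_normal(np.zeros(2), c["cov"])
        draws.append(z)
    return np.array(draws)

x = sample_dgmm(layers, 1000)   # 1000 draws from the resulting nonlinear mixture
```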

    Multidimensional Membership Mixture Models

    We present the multidimensional membership mixture (M3) models, in which every dimension of the membership represents an independent mixture model and each data point is generated jointly from the selected mixture components. This is helpful when the data have a certain shared structure. For example, three unique means and three unique variances can effectively form a Gaussian mixture model with nine components while requiring only six parameters to fully describe it. In this paper, we present three instantiations of M3 models (together with the learning and inference algorithms): infinite, finite, and hybrid, depending on whether the number of mixtures is fixed or not. They are built upon Dirichlet process mixture models, latent Dirichlet allocation, and a combination of the two, respectively. We then consider two applications: topic modeling and learning 3D object arrangements. Our experiments show that our M3 models achieve better performance using fewer topics than many classic topic models. We also observe that topics from the different dimensions of M3 models are meaningful and orthogonal to each other. Comment: 9 pages, 7 figures
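    The nine-components-from-six-parameters example in the abstract can be spelled out directly: one membership dimension picks a mean, another independently picks a variance, and the joint component weights factorise. The sketch below is a minimal illustration of that counting argument; the numeric values and weights are assumptions, not taken from the paper.

```python
import itertools
import numpy as np

# Three candidate means and three candidate variances (six parameters) induce a
# 3 x 3 = 9 component univariate Gaussian mixture with factorised membership.
means = np.array([-3.0, 0.0, 3.0])
variances = np.array([0.25, 1.0, 4.0])
mean_weights = np.array([0.2, 0.5, 0.3])   # weights of the "mean" dimension
var_weights = np.array([0.4, 0.4, 0.2])    # weights of the "variance" dimension

def m3_density(x):
    """Density of the nine-component mixture with weights factorised across dimensions."""
    dens = 0.0
    for (i, mu), (j, var) in itertools.product(enumerate(means), enumerate(variances)):
        w = mean_weights[i] * var_weights[j]   # joint weight is a product of per-dimension weights
        dens += w * np.exp(-0.5 * (x - mu) ** 2 / var) / np.sqrt(2.0 * np.pi * var)
    return dens

print(m3_density(0.0))
```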

    Local mixture models of exponential families

    Exponential families are the workhorses of parametric modelling theory. One reason for their popularity is their associated inference theory, which is very clean from both a theoretical and a computational point of view. One way in which this set of tools can be enriched in a natural and interpretable way is through mixing. This paper develops and applies the idea of local mixture modelling to exponential families. It shows that the highly interpretable and flexible models which result have enough structure to retain the attractive inferential properties of exponential families. In particular, results on identification, parameter orthogonality and log-concavity of the likelihood are proved. Comment: Published at http://dx.doi.org/10.3150/07-BEJ6170 in the Bernoulli (http://isi.cbs.nl/bernoulli/) by the International Statistical Institute/Bernoulli Society (http://isi.cbs.nl/BS/bshome.htm)
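    For readers unfamiliar with the term, the local-mixture literature (Marriott's work) usually writes a local mixture of a base density as that density perturbed by its own derivatives with respect to the parameter. The display below is that standard form, shown only as context; it is not a formula quoted from this paper, and the constraints on the lambda coefficients that keep g a density are omitted.

```latex
% Standard order-r local mixture of a base density f(x;\mu) (Marriott-style notation);
% \lambda_2,\dots,\lambda_r are constrained so that g(x;\mu,\lambda) remains a density.
g(x;\mu,\lambda) \;=\; f(x;\mu) \;+\; \sum_{k=2}^{r} \lambda_k \, \frac{\partial^k f(x;\mu)}{\partial \mu^k}
```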

    When Do Phylogenetic Mixture Models Mimic Other Phylogenetic Models?

    Phylogenetic mixture models, in which the sites in sequences undergo different substitution processes along the same or different trees, allow the description of heterogeneous evolutionary processes. As data sets consisting of longer sequences become available, it is important to understand such models, both for theoretical insight and for use in statistical analyses. Some recent articles have highlighted disturbing "mimicking" behavior, in which a distribution from a mixture model is identical to one arising on a different tree or trees. Other works have indicated such problems are unlikely to occur in practice, as they require very special parameter choices. After surveying some of these works on mixture models, we give several new results. In general, if the number of components in a generating mixture is not too large and zero or infinite branch lengths are disallowed, then it cannot mimic the behavior of a non-mixture on a different tree. On the other hand, if the mixture model is locally over-parameterized, it is possible for a phylogenetic mixture model to mimic distributions of another tree model. Though theoretical questions remain, these sorts of results can serve as a guide to when the use of mixture models in either ML or Bayesian frameworks is likely to lead to statistically consistent inference, and when mimicking due to heterogeneity should be considered a realistic possibility. Comment: 21 pages, 1 figure; revised to expand commentary; Mittag-Leffler Institute, Spring 201

    Identifiability of multivariate logistic mixture models

    Mixture models have been widely used in modeling continuous observations. Identifiability is a necessary condition for the parameters of a mixture model to be estimable consistently from observations of the mixture. In this study, we give some results on the identifiability of multivariate logistic mixture models.

    Relabelling Algorithms for Large Dataset Mixture Models

    Mixture models are flexible tools in density estimation and classification problems. Bayesian estimation of such models typically relies on sampling from the posterior distribution using Markov chain Monte Carlo. Label switching arises because the posterior is invariant to permutations of the component parameters. Methods for dealing with label switching have been studied fairly extensively in the literature, the most popular approaches being those based on loss functions. However, many of these algorithms turn out to be too slow in practice and can become infeasible as the size and dimension of the data grow. In this article, we review earlier solutions that scale well to large data sets and compare their performance on simulated and real datasets. In addition, we propose a new and computationally efficient algorithm based on a loss-function interpretation, and show that it scales well to larger problems. We conclude with some discussion of, and recommendations for, all the methods studied.
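    As background on what a loss-based relabelling step looks like, the sketch below permutes the component labels of each MCMC draw to minimise the squared distance to a running reference; the permutation is found exactly with the Hungarian algorithm. This is a generic illustration of the loss-function idea, not the specific algorithm proposed in the paper.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def relabel(draws):
    """Relabel MCMC output; draws has shape (n_iter, K, d): K component parameter vectors per iteration."""
    draws = np.asarray(draws, dtype=float)
    reference = draws[0].copy()
    relabelled = np.empty_like(draws)
    for t, theta in enumerate(draws):
        # cost[i, j] = squared distance between reference component i and drawn component j
        cost = ((reference[:, None, :] - theta[None, :, :]) ** 2).sum(axis=-1)
        _, perm = linear_sum_assignment(cost)   # optimal label permutation for this draw
        relabelled[t] = theta[perm]
        # update the reference as a running mean of the relabelled draws
        reference = (t * reference + relabelled[t]) / (t + 1)
    return relabelled

# Example: 500 iterations, 3 components, 2-dimensional parameters (synthetic input).
out = relabel(np.random.default_rng(0).normal(size=(500, 3, 2)))
```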

    Mixture Models and Convergence Clubs

    In this paper we argue that modeling the cross-country distribution of per capita income as a mixture distribution provides a natural framework for the detection of convergence clubs. The framework yields tests for the number of component distributions that are likely to have more power than "bump hunting" tests, and it includes a natural method of assessing the cross-component immobility necessary to imply a correspondence between components and convergence clubs. Applying the mixture approach to cross-country per capita income data for the period 1960 to 2000, we find evidence of three component densities in each of the nine years that we examine. We find little cross-component mobility and so interpret the multiple mixture components as representing convergence clubs. We document a pronounced tendency for the strength of the bonds between countries and clubs to increase. We show that the well-known "hollowing out" of the middle of the distribution is largely attributable to the increased concentration of the rich countries around their component means. This increased concentration, as well as that of the poor countries around their component mean, produces a rise in polarization in the distribution over the sample period.
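    The kind of exercise described above amounts to fitting finite Gaussian mixtures to (log) per capita income and asking how many components the data support. The sketch below does this on synthetic data and uses BIC as a rough stand-in for the paper's formal tests on the number of components; the three "clubs" and all numeric values are illustrative assumptions.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Synthetic log per-capita incomes from three illustrative "clubs".
rng = np.random.default_rng(1)
log_income = np.concatenate([rng.normal(7.0, 0.4, 60),    # poor club
                             rng.normal(8.5, 0.3, 50),    # middle club
                             rng.normal(10.0, 0.3, 40)])  # rich club
X = log_income.reshape(-1, 1)

# Fit mixtures with 1-5 components and compare them by BIC.
fits = {k: GaussianMixture(n_components=k, random_state=0).fit(X) for k in range(1, 6)}
best_k = min(fits, key=lambda k: fits[k].bic(X))
print("components preferred by BIC:", best_k)
print("component means:", fits[best_k].means_.ravel())
```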