
    Gaussian approximation of Gaussian scale mixture

    For a given positive random variable $V>0$ and a given $Z\sim N(0,1)$ independent of $V$, we compute the scalar $t_0$ such that the distance between $Z\sqrt{V}$ and $Z\sqrt{t_0}$, in the $L^2(\mathbb{R})$ sense, is minimal. We also consider the same problem in several dimensions, when $V$ is a random positive definite matrix.

    Comment: 13 pages
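
    As a rough numerical illustration of the problem stated above, the sketch below computes $t_0$ for a hypothetical discrete scale variable $V$ by minimizing the squared $L^2$ distance between the density of $Z\sqrt{V}$ (a finite Gaussian scale mixture) and that of $Z\sqrt{t_0}$. The choice of $V$ and all names are assumptions made for illustration, not the paper's construction or closed-form answer.

        import numpy as np
        from scipy.optimize import minimize_scalar
        from scipy.stats import norm

        # Hypothetical discrete scale variable V: values and probabilities
        # (assumed purely for illustration).
        v_vals = np.array([0.5, 1.0, 4.0])
        v_probs = np.array([0.3, 0.5, 0.2])

        x = np.linspace(-15, 15, 4001)  # grid for numerical integration

        def mixture_density(t):
            # Density of Z*sqrt(V): the Gaussian scale mixture sum_k p_k N(0, v_k).
            return sum(p * norm.pdf(t, scale=np.sqrt(v)) for p, v in zip(v_probs, v_vals))

        def l2_distance_sq(t0):
            # Squared L^2 distance between the mixture density and the N(0, t0) density.
            diff = mixture_density(x) - norm.pdf(x, scale=np.sqrt(t0))
            return np.trapz(diff ** 2, x)

        res = minimize_scalar(l2_distance_sq, bounds=(1e-3, 20.0), method="bounded")
        print(f"optimal t0 ~ {res.x:.4f}  (squared L2 distance {res.fun:.6f})")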

    Deep Gaussian Mixture Models

    Deep learning is a hierarchical inference method formed by multiple successive layers of learning, able to describe complex relationships more efficiently. In this work, Deep Gaussian Mixture Models are introduced and discussed. A Deep Gaussian Mixture Model (DGMM) is a network of multiple layers of latent variables where, at each layer, the variables follow a mixture of Gaussian distributions. The deep mixture model thus consists of a set of nested mixtures of linear models, which globally provide a nonlinear model able to describe the data in a very flexible way. To avoid overparameterized solutions, dimension reduction by factor models can be applied at each layer of the architecture, resulting in deep mixtures of factor analysers.

    Comment: 19 pages, 4 figures
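
    To make the nested-mixture structure concrete, here is a minimal generative sketch of a hypothetical two-layer DGMM: at each layer a component is drawn and the latent variable is pushed through that component's linear map plus Gaussian noise, so the output marginal is a nested mixture of linear models. All dimensions, weights, and parameter values below are invented for illustration; this is not the authors' implementation.

        import numpy as np

        rng = np.random.default_rng(0)

        # Hypothetical two-layer DGMM; every parameter below is assumed.
        layers = [
            {   # top layer: 2 linear-Gaussian components on a 2-d latent
                "pi": [0.5, 0.5],
                "mu": [np.zeros(2), np.ones(2)],
                "Lam": [np.eye(2), 0.5 * np.eye(2)],
                "sig": [0.1, 0.2],
            },
            {   # bottom layer: 3 components producing 2-d observations
                "pi": [0.3, 0.4, 0.3],
                "mu": [np.array([-2.0, 0.0]), np.array([0.0, 2.0]), np.array([2.0, 0.0])],
                "Lam": [np.eye(2), np.array([[1.0, 0.5], [0.0, 1.0]]), 0.8 * np.eye(2)],
                "sig": [0.1, 0.1, 0.15],
            },
        ]

        def sample_dgmm(layers, n):
            # Propagate a standard-normal latent through each layer's mixture of
            # linear-Gaussian maps: z <- mu_k + Lam_k @ z + sig_k * noise.
            z = rng.standard_normal((n, 2))
            for layer in layers:
                k = rng.choice(len(layer["pi"]), size=n, p=layer["pi"])
                z = np.stack([
                    layer["mu"][ki] + layer["Lam"][ki] @ zi
                    + layer["sig"][ki] * rng.standard_normal(2)
                    for ki, zi in zip(k, z)
                ])
            return z

        X = sample_dgmm(layers, 1000)
        print(X.mean(axis=0), X.std(axis=0))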

    Clustering student skill set profiles in a unit hypercube using mixtures of multivariate betas

    <br>This paper presents a finite mixture of multivariate betas as a new model-based clustering method tailored to applications where the feature space is constrained to the unit hypercube. The mixture component densities are taken to be conditionally independent, univariate unimodal beta densities (from the subclass of reparameterized beta densities given by Bagnato and Punzo 2013). The EM algorithm used to fit this mixture is discussed in detail, and results from both this beta mixture model and the more standard Gaussian model-based clustering are presented for simulated skill mastery data from a common cognitive diagnosis model and for real data from the Assistment System online mathematics tutor (Feng et al 2009). The multivariate beta mixture appears to outperform the standard Gaussian model-based clustering approach, as would be expected on the constrained space. Fewer components are selected (by BIC-ICL) in the beta mixture than in the Gaussian mixture, and the resulting clusters seem more reasonable and interpretable.</br> <br>This article is in technical report form, the final publication is available at http://www.springerlink.com/openurl.asp?genre=article &id=doi:10.1007/s11634-013-0149-z</br&gt

    Classifying Exoplanets with Gaussian Mixture Model

    Recently, Odrzywolek and Rafelski (arXiv:1612.03556) found three distinct categories of exoplanets when they are classified based on density. We first carry out a similar classification of exoplanets according to their density using the Gaussian Mixture Model, followed by information-theoretic criteria (AIC and BIC) to determine the optimum number of components. This one-dimensional classification favors two components using AIC and three using BIC, but neither test is statistically decisive enough to pick the best model between two and three components. We then extend the GMM-based classification to two dimensions by using both the density and the Earth Similarity Index (arXiv:1702.03678), a measure of how similar each planet is to the Earth. For this two-dimensional classification, both AIC and BIC provide decisive evidence in favor of three components.

    Comment: 8 pages, 7 figures
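
    The selection procedure described in this abstract can be reproduced in outline with scikit-learn, as in the sketch below; the features are synthetic stand-ins for the exoplanet (density, Earth Similarity Index) pairs, so the component counts selected here are illustrative only.

        import numpy as np
        from sklearn.mixture import GaussianMixture

        rng = np.random.default_rng(2)

        # Synthetic stand-in for (log-density, ESI) features of an exoplanet catalog.
        features = np.vstack([
            rng.normal([0.0, 0.2], [0.3, 0.05], size=(120, 2)),
            rng.normal([1.5, 0.4], [0.4, 0.10], size=(150, 2)),
            rng.normal([3.0, 0.7], [0.3, 0.08], size=(80, 2)),
        ])

        # Fit GMMs with 1..6 components and score each with AIC and BIC;
        # the model minimizing each criterion is the one selected.
        for n in range(1, 7):
            gmm = GaussianMixture(n_components=n, n_init=5, random_state=0).fit(features)
            print(f"components={n}  AIC={gmm.aic(features):.1f}  BIC={gmm.bic(features):.1f}")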