
    HeMPPCAT: Mixtures of Probabilistic Principal Component Analysers for Data with Heteroscedastic Noise

    Mixtures of probabilistic principal component analysers (MPPCA) is a well-known mixture-model extension of principal component analysis (PCA). Like PCA, MPPCA assumes that the data samples in each mixture component contain homoscedastic noise. However, datasets with heterogeneous noise across samples are becoming increasingly common, as larger datasets are assembled from several sources with varying noise profiles. The performance of MPPCA is suboptimal for data with heteroscedastic noise across samples. This paper proposes a heteroscedastic mixtures of probabilistic PCA technique (HeMPPCAT) that uses a generalized expectation-maximization (GEM) algorithm to jointly estimate the unknown underlying factors, means, and noise variances in a heteroscedastic noise setting. Simulation results illustrate the improved factor estimates and clustering accuracies of HeMPPCAT compared to MPPCA.
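    The setting this abstract describes can be illustrated with a small sketch (not the paper's HeMPPCAT code; the dimensions and noise levels are illustrative): low-rank factor data where the per-sample noise variance differs by source, which a single pooled variance estimate blurs together.

```python
import numpy as np

rng = np.random.default_rng(0)
d, k, n = 10, 2, 600                      # ambient dim, latent dim, samples
F = rng.normal(size=(d, k))               # true factor matrix
Z = rng.normal(size=(n, k))               # latent factors

# Two sources with very different noise levels: heteroscedastic across samples
sigma = np.where(np.arange(n) < n // 2, 0.1, 2.0)
X = Z @ F.T + rng.normal(size=(n, d)) * sigma[:, None]

# A homoscedastic fit pools one noise variance over all samples, which
# over-weights the noisy source; HeMPPCAT instead estimates a variance
# per noise group jointly with the factors via GEM.
pooled_var = X.var(axis=0).mean()
clean_var = X[:n // 2].var(axis=0).mean()
print(pooled_var, clean_var)
```

The gap between the pooled variance and the clean-source variance is exactly the mis-weighting that motivates estimating noise variances per source.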

    Manifold Parzen Windows

    The similarity between objects is a fundamental element of many learning algorithms. Most non-parametric methods take this similarity to be fixed, but much recent work has shown the advantages of learning it, in particular to exploit the local invariances in the data or to capture the possibly non-linear manifold on which most of the data lies. We propose a new non-parametric kernel density estimation method which captures the local structure of an underlying manifold through the leading eigenvectors of regularized local covariance matrices. Experiments in density estimation show significant improvements with respect to Parzen density estimators. The density estimators can also be used within Bayes classifiers, yielding classification rates similar to SVMs and much superior to the Parzen classifier. Keywords: density estimation, non-parametric models, manifold models, probabilistic classifiers.
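    A minimal sketch of the idea (not the authors' code; the neighbourhood size and regularizer are illustrative): each training point contributes a Gaussian kernel whose covariance is a regularized local covariance estimated from its nearest neighbours, so the kernels stretch along the manifold's local directions.

```python
import numpy as np

def manifold_parzen_density(X_train, x, k=10, reg=1e-2):
    """Mixture of n Gaussians, one per training point, each with a
    regularized local covariance (full-covariance variant for clarity)."""
    n, d = X_train.shape
    logps = []
    for i in range(n):
        diffs = X_train - X_train[i]
        idx = np.argsort((diffs ** 2).sum(axis=1))[1:k + 1]  # k nearest neighbours
        local = X_train[idx] - X_train[i]
        C = local.T @ local / k + reg * np.eye(d)            # regularized local covariance
        dev = x - X_train[i]
        _, logdet = np.linalg.slogdet(C)
        maha = dev @ np.linalg.solve(C, dev)
        logps.append(-0.5 * (maha + logdet + d * np.log(2 * np.pi)))
    return np.exp(logps).mean()

# Data lying near a 1-D manifold (a noisy circle) embedded in 2-D
rng = np.random.default_rng(1)
t = rng.uniform(0, 2 * np.pi, 200)
X = np.c_[np.cos(t), np.sin(t)] + 0.02 * rng.normal(size=(200, 2))

on_manifold = manifold_parzen_density(X, np.array([1.0, 0.0]))
off_manifold = manifold_parzen_density(X, np.array([0.0, 0.0]))
print(on_manifold, off_manifold)
```

Because the kernels align with the local tangent directions, the estimated density is concentrated on the circle and drops sharply at its centre, unlike a spherical Parzen estimator with a large bandwidth.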

    Enhanced load profiling for residential network customers

    Anticipating load characteristics on low-voltage (LV) circuits is an area of increasing concern for Distribution Network Operators (DNOs), with uncertainty stemming primarily from the validity of domestic load profiles. Identifying the customer behavior make-up on an LV feeder ascertains the thermal and voltage constraints imposed on the network infrastructure; modeling this highly dynamic behavior requires a means of accommodating noise incurred through variations in lifestyle and meteorological conditions. Increased penetration of distributed generation may further worsen this situation, with the risk of reversed power flows on a network with no transformer automation. The Smart Meter roll-out is opening up the previously obscured view of domestic electricity use by providing high-resolution advance data; while in most cases this is provided historically rather than in real time, it permits a level of detail that could not previously have been achieved. Generating a data-driven profile of domestic energy use would add to the accuracy of the monitoring and configuration activities undertaken by DNOs at LV level and above, affording greater realism than the static load profiles in existing use. In this paper, a linear Gaussian load profile is developed that allows stratification to a finer level of detail while preserving a deterministic representation.
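    The contrast between a static profile and a data-driven Gaussian one can be sketched as follows (illustrative only: the synthetic demand shape and all parameters are assumptions, not the paper's model): estimating a per-half-hour mean and spread from smart-meter histories captures the evening peak that a single flat average misses.

```python
import numpy as np

rng = np.random.default_rng(2)
n_homes, n_periods = 50, 48                       # 48 half-hour periods per day

# Synthetic demand: a baseline plus an evening peak, with household-level noise
base = 0.3 + 0.4 * np.exp(-0.5 * ((np.arange(n_periods) - 36) / 4) ** 2)
readings = base + 0.1 * rng.normal(size=(n_homes, n_periods))

profile_mean = readings.mean(axis=0)              # per-period mean load
profile_std = readings.std(axis=0)                # per-period spread
static_profile = readings.mean()                  # one flat average for the day

print(profile_mean[36], static_profile)
```

The per-period mean tracks the peak around period 36 (evening), while the static average smears it over the whole day; the per-period spread is what a Gaussian profile adds on top of a deterministic representation.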

    Simultaneous clustering with mixtures of factor analysers

    This work details the method of Simultaneous Model-Based Clustering. It also presents an extension to this method by reformulating it as a model with a mixture of factor analysers. This allows the technique, known as Simultaneous Model-Based Clustering with a Mixture of Factor Analysers, to cluster high-dimensional gene-expression data. A new table of allowable and non-allowable models is formulated, along with a parameter estimation scheme for one such allowable model. Several numerical procedures are tested and various datasets, both real and generated, are clustered. Clustering the Iris data finds a 3-component VEV model to have the lowest misclassification rate, with BIC values comparable to the best-scoring model. The clustering of the genetic data was less successful: the 2-component model could successfully uncover the healthy tissue, but partitioned the cancerous tissue in half.
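    The BIC-based comparison of component counts mentioned above can be illustrated with a toy one-dimensional version (a plain NumPy EM sketch, not the paper's mixture-of-factor-analysers estimation scheme; data and settings are illustrative): fit Gaussian mixtures for several values of k and keep the one with the lowest BIC.

```python
import numpy as np

def gmm_bic_1d(x, k, n_iter=200):
    """EM for a 1-D Gaussian mixture, returning the BIC (lower is better)."""
    n = len(x)
    mu = np.quantile(x, (np.arange(k) + 0.5) / k)        # spread initial means
    var = np.full(k, x.var())
    w = np.full(k, 1.0 / k)
    for _ in range(n_iter):
        # E-step: responsibilities under the current parameters
        logp = (np.log(w) - 0.5 * np.log(2 * np.pi * var)
                - 0.5 * (x[:, None] - mu) ** 2 / var)
        r = np.exp(logp - logp.max(axis=1, keepdims=True))
        r /= r.sum(axis=1, keepdims=True)
        # M-step: weighted parameter updates
        nk = r.sum(axis=0) + 1e-12
        w, mu = nk / n, (r * x[:, None]).sum(axis=0) / nk
        var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / nk + 1e-6
    logp = (np.log(w) - 0.5 * np.log(2 * np.pi * var)
            - 0.5 * (x[:, None] - mu) ** 2 / var)
    m = logp.max(axis=1, keepdims=True)
    loglik = (m.ravel() + np.log(np.exp(logp - m).sum(axis=1))).sum()
    return (3 * k - 1) * np.log(n) - 2 * loglik          # means, variances, free weights

rng = np.random.default_rng(3)
x = np.r_[rng.normal(-3, 1, 150), rng.normal(3, 1, 150)]  # two true clusters
bics = {k: gmm_bic_1d(x, k) for k in (1, 2, 3)}
best_k = min(bics, key=bics.get)
print(bics, best_k)
```

BIC rewards fit through the log-likelihood and penalises parameter count by log n, which is why a 3-component VEV model can win on misclassification while a simpler model scores a comparable BIC.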

    Parsimonious Shifted Asymmetric Laplace Mixtures

    A family of parsimonious shifted asymmetric Laplace mixture models is introduced. We extend the mixture of factor analyzers model to the shifted asymmetric Laplace distribution. Imposing constraints on the constituent parts of the resulting decomposed component scale matrices leads to a family of parsimonious models. An explicit two-stage parameter estimation procedure is described, and the Bayesian information criterion and the integrated completed likelihood are compared for model selection. This novel family of models is applied to real data, where it is compared to its Gaussian analogue within clustering and classification paradigms.
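    The shifted asymmetric Laplace (SAL) distribution underlying these mixtures has a convenient normal variance-mean mixture representation, X = mu + W*alpha + sqrt(W) * L @ Z with W ~ Exp(1), which the sketch below uses to draw samples (the parameter values are illustrative, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(4)
n, d = 20000, 2
mu = np.array([0.0, 0.0])        # shift
alpha = np.array([2.0, 0.0])     # skewness direction
L = np.linalg.cholesky(np.array([[1.0, 0.3], [0.3, 1.0]]))  # scale-matrix factor

W = rng.exponential(1.0, size=n)                    # exponential mixing weight
Z = rng.normal(size=(n, d))
X = mu + W[:, None] * alpha + np.sqrt(W)[:, None] * (Z @ L.T)

# Under this representation E[X] = mu + alpha; the positive skew along
# the first coordinate is what lets SAL components fit asymmetric clusters
# that a Gaussian analogue would need extra components to cover.
print(X.mean(axis=0))
```

Parsimony then comes from constraining the decomposition of each component's scale matrix (here the role played by L), analogous to the constrained-covariance families used with Gaussian mixtures.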

    Robust, fuzzy, and parsimonious clustering based on mixtures of Factor Analyzers

    A clustering algorithm that combines the advantages of fuzzy clustering and robust statistical estimators is presented. It is based on mixtures of Factor Analyzers, endowed with the joint use of trimming and the constrained estimation of scatter matrices in a modified maximum likelihood approach. The algorithm generates a set of membership values that are used to fuzzily partition the data set and to contribute to the robust estimates of the mixture parameters. The adoption of clusters modeled by Gaussian Factor Analysis allows for dimension reduction and for discovering local linear structures in the data. The new methodology has been shown to be resistant to different types of contamination by applying it to artificial data. A brief discussion of the tuning parameters, such as the trimming level, the fuzzifier parameter, the number of clusters, and the value of the scatter matrix constraint, is developed, together with some heuristic tools for their choice. Finally, a real data set is analyzed to show how intermediate membership values are estimated for observations lying at cluster overlap, while cluster cores are composed of observations assigned to a cluster in a crisp way.
    Funding: Ministerio de Economía y Competitividad grant MTM2017-86061-C2-1-P, and Consejería de Educación de la Junta de Castilla y León and FEDER grants VA005P17 and VA002G1.
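    The trimming idea at the heart of this robustification can be sketched in isolation (a minimal illustration, not the authors' algorithm; a single cluster and a simple mean estimate stand in for the full mixture): repeatedly discard the alpha fraction of points farthest from the current fit before re-estimating, so contamination cannot distort the estimate.

```python
import numpy as np

rng = np.random.default_rng(5)
cluster = rng.normal(0.0, 1.0, size=(95, 2))
outliers = rng.uniform(8.0, 12.0, size=(5, 2))        # contamination
X = np.vstack([cluster, outliers])

alpha = 0.05                                          # trimming level
centre = X.mean(axis=0)                               # initial, contaminated estimate
for _ in range(10):                                   # concentration steps
    d2 = ((X - centre) ** 2).sum(axis=1)
    keep = d2 <= np.quantile(d2, 1 - alpha)           # trim the farthest 5%
    centre = X[keep].mean(axis=0)

print(centre, X.mean(axis=0))
```

The trimmed estimate settles near the true cluster centre while the untrimmed mean is pulled toward the outliers; in the full method the same mechanism protects the factor-analyzer parameters, with the scatter-matrix constraint preventing degenerate solutions.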