
    Assessment of maximum likelihood PCA missing data imputation

    Maximum likelihood principal component analysis (MLPCA) was originally proposed to incorporate measurement error variance information in principal component analysis (PCA) models. MLPCA can be used to fit PCA models in the presence of missing data, simply by assigning very large variances to the non-measured values. This paper assesses maximum likelihood missing data imputation, analysing the MLPCA algorithm and adapting several methods for PCA model building with missing data to their maximum likelihood versions. In this way, known data regression (KDR), KDR with principal component regression (PCR), KDR with partial least squares regression (PLS) and trimmed scores regression (TSR) are implemented within MLPCA as different imputation steps. Six data sets are analysed at several percentages of missing data, comparing the performance of the original algorithm and its adapted regression-based methods with other state-of-the-art methods.

    Research in this study was partially supported by the Spanish Ministry of Science and Innovation and FEDER funds from the European Union through grants DPI2011-28112-C04-02 and DPI2014-55276-C5-1R, and the Spanish Ministry of Economy and Competitiveness through grant ECO2013-43353-R.

    Folch Fortuny, A.; Arteaga Moreno, F. J.; Ferrer, A. (2016). Assessment of maximum likelihood PCA missing data imputation. Journal of Chemometrics, 30(7), 386-393. https://doi.org/10.1002/cem.2804
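    The core trick in the abstract, fitting a PCA model while giving non-measured cells a very large error variance so they carry essentially zero weight, can be illustrated with a short alternating weighted least-squares sketch. This is a minimal illustration assuming uncorrelated per-element errors, not the paper's implementation; the function name `mlpca_impute` and all parameter choices are hypothetical.

```python
import numpy as np

def mlpca_impute(X, n_components=2, n_iter=200, missing_var=1e10, tol=1e-9):
    """Impute missing cells of X (marked as NaN) via a weighted low-rank fit.

    Missing entries get a huge variance, i.e. near-zero weight, so the
    PCA model is driven by the observed values only and the missing
    cells are read off the low-rank reconstruction.
    """
    X = np.asarray(X, dtype=float)
    mask = np.isnan(X)
    var = np.ones_like(X)
    var[mask] = missing_var           # "very large variance" for non-measured cells
    W = 1.0 / var                     # element-wise weights
    X0 = np.where(mask, 0.0, X)       # placeholder values where data are missing

    mu = np.average(X0, axis=0, weights=W)
    Xc = X0 - mu
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    P = Vt[:n_components].T.copy()    # initial loadings (variables x components)
    prev_obj = np.inf
    for _ in range(n_iter):
        # scores T: row-wise weighted least squares given loadings P
        T = np.empty((X.shape[0], n_components))
        for i in range(X.shape[0]):
            PW = P * W[i][:, None]
            T[i] = np.linalg.solve(PW.T @ P, PW.T @ Xc[i])
        # loadings P: column-wise weighted least squares given scores T
        for j in range(X.shape[1]):
            TW = T * W[:, j][:, None]
            P[j] = np.linalg.solve(TW.T @ T, TW.T @ Xc[:, j])
        Xhat = T @ P.T
        obj = np.sum(W * (Xc - Xhat) ** 2)   # weighted squared residuals
        if abs(prev_obj - obj) <= tol * max(obj, 1.0):
            break
        prev_obj = obj

    out = X.copy()
    out[mask] = (Xhat + mu)[mask]
    return out
```

    On a data matrix with scattered NaNs, `mlpca_impute(X, n_components=3)` returns a copy of X with the NaNs replaced by the weighted rank-3 reconstruction, which is the sense in which large variances turn model fitting into imputation.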

    Advanced Probabilistic Models for Clustering and Projection

    Probabilistic modeling for data mining and machine learning problems is a fundamental research area. The general approach is to assume a generative model underlying the observed data and to estimate the model parameters via likelihood maximization. It has probability theory as its mathematical background and draws on a large body of methods from statistical learning, sampling theory and Bayesian statistics. In this thesis we study several advanced probabilistic models for data clustering and feature projection, two important unsupervised learning problems.

    The goal of clustering is to group similar data points together to uncover the data clusters. While numerous methods exist for various clustering tasks, one important question remains: how to automatically determine the number of clusters. The first part of the thesis answers this question from a mixture modeling perspective. A finite mixture model is first introduced for clustering, in which each mixture component is assumed, for generality, to be an exponential family distribution. The model is then extended to an infinite mixture model, and its strong connection to the Dirichlet process (DP), a non-parametric Bayesian framework, is uncovered. A variational Bayesian algorithm called VBDMA is derived from this new insight to learn the number of clusters automatically, and empirical studies on some 2D data sets and an image data set verify the effectiveness of this algorithm.

    In feature projection, we are interested in dimensionality reduction and aim to find a low-dimensional feature representation of the data. We first review the well-known principal component analysis (PCA) and its probabilistic interpretation (PPCA), and then generalize PPCA to a novel probabilistic model that can handle the non-linear projection known as kernel PCA. An expectation-maximization (EM) algorithm is derived for kernel PCA so that it is fast and applicable to large data sets. We then propose a novel supervised projection method called MORP, which takes the output information into account in a supervised learning context. Empirical studies on various data sets show much better results compared to unsupervised projection and other supervised projection methods. Finally, we generalize MORP probabilistically to propose SPPCA for supervised projection, and naturally extend the model to S2PPCA, a semi-supervised projection method. This allows us to incorporate both the label information and the unlabeled data into the projection process.

    In the third part of the thesis, we introduce a unified probabilistic model which can handle data clustering and feature projection jointly. The model can be viewed as a clustering model with projected features, or as a projection model with structured documents. A variational Bayesian learning algorithm is derived, and it turns out to iterate clustering operations and projection operations until convergence. Superior performance is obtained for both clustering and projection.
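    The abstract's claim that a variational Bayesian treatment of an infinite (DP) mixture can determine the number of clusters automatically can be demonstrated with off-the-shelf tooling. The sketch below uses scikit-learn's `BayesianGaussianMixture` with a Dirichlet-process prior as a stand-in for VBDMA (it is not the thesis algorithm): superfluous components are driven to near-zero weight, and counting the remaining ones recovers the cluster count. The 2D toy data are invented for illustration.

```python
import numpy as np
from sklearn.mixture import BayesianGaussianMixture

# Three well-separated 2D Gaussian blobs, 100 points each.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(loc=c, scale=0.3, size=(100, 2))
               for c in ((0, 0), (3, 3), (0, 4))])

# Truncated DP mixture fitted by variational Bayes: n_components is only
# an upper bound, not the answer.
dpgmm = BayesianGaussianMixture(
    n_components=10,
    weight_concentration_prior_type="dirichlet_process",
    random_state=0,
).fit(X)

# Components that keep non-negligible weight give the inferred cluster count.
print("effective clusters:", np.sum(dpgmm.weights_ > 1e-2))
```

    On this toy data the posterior weight should concentrate on three components, matching the three generating clusters, while the remaining components are pruned automatically.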

    PCA Reduced Gaussian Mixture Models with Applications in Superresolution

    Despite the rapid development of computational hardware, the treatment of large and high-dimensional data sets is still a challenging problem. This paper provides a twofold contribution to the topic. First, we propose a Gaussian mixture model in conjunction with a reduction of the dimensionality of the data in each component of the model by principal component analysis, called PCA-GMM. To learn the (low-dimensional) parameters of the mixture model we propose an EM algorithm whose M-step requires the solution of constrained optimization problems. Fortunately, these constrained problems do not depend on the usually large number of samples and can be solved efficiently by an (inertial) proximal alternating linearized minimization algorithm. Second, we apply our PCA-GMM to the superresolution of 2D and 3D material images based on the approach of Sandeep and Jacob. Numerical results confirm the moderate influence of the dimensionality reduction on the overall superresolution result. (Alternate title, translated from French: Multiscale image superresolution in materials science with geometric attributes.)
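    The paper's constrained M-step (solved with inertial PALM) is beyond a short sketch, but the underlying idea, compressing each Gaussian component with its own per-component PCA, can be illustrated. The snippet below fits an ordinary GMM and then builds, for each component, a PPCA-style low-rank-plus-isotropic approximation of its covariance; this is a hedged illustration of the idea, not the PCA-GMM algorithm, and all data are synthetic.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Two synthetic 3D components, each elongated along a different axis.
rng = np.random.default_rng(1)
X = np.vstack([rng.multivariate_normal(m, C, size=200)
               for m, C in [((0, 0, 0), np.diag((2.0, 0.1, 0.1))),
                            ((5, 5, 5), np.diag((0.1, 2.0, 0.1)))]])

gmm = GaussianMixture(n_components=2, covariance_type="full",
                      random_state=0).fit(X)

d = 1  # reduced dimension per component
for k, C in enumerate(gmm.covariances_):
    vals, vecs = np.linalg.eigh(C)        # ascending eigenvalues of component cov
    U = vecs[:, -d:]                      # top-d principal directions
    sigma2 = vals[:-d].mean()             # isotropic residual variance (PPCA-style)
    C_lowrank = U @ np.diag(vals[-d:] - sigma2) @ U.T + sigma2 * np.eye(C.shape[0])
    rel_err = np.linalg.norm(C - C_lowrank) / np.linalg.norm(C)
    print(f"component {k}: relative covariance approximation error {rel_err:.3f}")
```

    Storing only the d leading directions, their eigenvalues and one residual variance per component needs O(pd) numbers instead of O(p^2), which is the kind of parameter saving a per-component dimensionality reduction exploits.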

    Nonparametric causal discovery with applications to cancer bioinformatics

    Many natural phenomena are intrinsically causal. Discovering the cause-effect relationships implicit in these processes can help us to understand and describe them more effectively, which boils down to causal discovery over the data and the variables that describe them. However, causal discovery is not an easy task. Current methods are extremely complex and costly, and their usefulness is strongly compromised in contexts with large amounts of data or where the nature of the variables involved is unknown. As an alternative, this work presents an original methodology for causal discovery, built on essential aspects of the main theories of causality, in particular probabilistic causality, with many points of contact with the inferential approach of regularity theories, among others. Based on this methodology, a non-parametric algorithm is developed for discovering causal relationships between binary variables associated with data sets, and for modeling as graphs the causal networks they describe. This algorithm is applied to gene expression data sets in normal and cancerous prostate tissues, with the aim of discovering cause-effect relationships between the gene dysregulations leading to carcinogenesis. The gene characterizations constructed from the discovered causal relationships are compared with another study based on principal component analysis (PCA) on the same data, with satisfactory results.

    Comment: Diploma Thesis in Computer Science. In Spanish. Supervised by Drs Gabriel Gil and Augusto Gonzalez. 74 pages, 11 figures, 12 tables
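    A flavour of the probabilistic-causality criterion underlying such an algorithm can be given in a few lines: for binary variables, C is a prima facie cause of E when observing C raises the probability of E. The sketch below implements only this textbook screen on synthetic data; it is not the thesis's non-parametric algorithm, and `prima_facie_cause` is a hypothetical name.

```python
import numpy as np

def prima_facie_cause(c, e, alpha=0.0):
    """Screen for probabilistic causation between binary variables:
    return True when P(E | C) exceeds P(E | not C) by more than alpha."""
    c = np.asarray(c, dtype=bool)
    e = np.asarray(e, dtype=bool)
    p_e_given_c = e[c].mean() if c.any() else 0.0
    p_e_given_not_c = e[~c].mean() if (~c).any() else 0.0
    return (p_e_given_c - p_e_given_not_c) > alpha

# Toy example: a dysregulation C that raises the probability of effect E.
rng = np.random.default_rng(2)
c = rng.random(1000) < 0.3
e = np.where(c, rng.random(1000) < 0.7, rng.random(1000) < 0.2)
print(prima_facie_cause(c, e))   # expected: True
```

    Real discovery procedures must additionally screen off common causes, for example by requiring the inequality to hold conditionally on other variables; this sketch deliberately omits that step.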