Robust EM algorithm for model-based curve clustering
Model-based clustering approaches belong to the paradigm of exploratory data
analysis, relying on finite mixture models to automatically uncover the latent
structure governing observed data. They are one of the most popular and
successful approaches in cluster analysis. The mixture density estimation is
generally performed by maximizing the observed-data log-likelihood by using the
expectation-maximization (EM) algorithm. However, it is well-known that the EM
algorithm initialization is crucial. In addition, the standard EM algorithm
requires the number of clusters to be known a priori. Some solutions have been
provided in [31, 12] for model-based clustering with Gaussian mixture models
for multivariate data. In this paper we focus on model-based curve clustering
approaches, when the data are curves rather than vectorial data, based on
regression mixtures. We propose a new robust EM algorithm for clustering
curves. We extend the model-based clustering approach presented in [31] for
Gaussian mixture models, to the case of curve clustering by regression
mixtures, including polynomial regression mixtures as well as spline or
B-spline regression mixtures. Our approach handles both the problem of
initialization and the choice of the optimal number of clusters as the EM
learning proceeds, rather than in a two-stage scheme. This is achieved by
optimizing a penalized log-likelihood criterion. A simulation study confirms
the potential benefit of the proposed algorithm in terms of robustness
regarding initialization and finding the actual number of clusters.
Comment: In Proceedings of the 2013 International Joint Conference on Neural Networks (IJCNN), Dallas, TX, USA, 2013
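To make the regression-mixture setting concrete, here is a minimal EM sketch for a two-component polynomial regression mixture on made-up curves, with K fixed and an ad hoc initialization; the paper's contribution is precisely to avoid this kind of fixed-K, initialization-sensitive EM via a penalized criterion, so treat this as background, not the proposed algorithm:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy data: two clusters of noisy quadratic curves on a shared grid.
t = np.linspace(0, 1, 50)
X = np.vander(t, 3, increasing=True)                  # polynomial design matrix
curves = np.vstack([
    1 + 2 * t - 3 * t**2 + 0.1 * rng.standard_normal((20, t.size)),
    -1 + 0.5 * t + 2 * t**2 + 0.1 * rng.standard_normal((20, t.size)),
])

K = 2
mix = np.full(K, 1.0 / K)                             # mixing proportions
# Crude initialization: fit each component to one observed curve.
beta = np.array([np.linalg.lstsq(X, curves[i], rcond=None)[0] for i in (0, -1)])
sigma2 = np.ones(K)

for _ in range(50):
    # E-step: posterior cluster probabilities (responsibilities) per curve.
    logp = np.stack([
        -0.5 * np.sum((curves - X @ beta[k])**2, axis=1) / sigma2[k]
        - 0.5 * t.size * np.log(2 * np.pi * sigma2[k])
        for k in range(K)
    ], axis=1) + np.log(mix)
    logp -= logp.max(axis=1, keepdims=True)
    resp = np.exp(logp)
    resp /= resp.sum(axis=1, keepdims=True)

    # M-step: weighted polynomial regression per component.
    for k in range(K):
        w = np.repeat(resp[:, k], t.size)             # one weight per sample point
        Xs, ys = np.tile(X, (curves.shape[0], 1)), curves.ravel()
        beta[k] = np.linalg.lstsq(Xs * w[:, None]**0.5, ys * w**0.5, rcond=None)[0]
        resid = np.sum((curves - X @ beta[k])**2, axis=1)
        sigma2[k] = np.sum(resp[:, k] * resid) / (resp[:, k].sum() * t.size)
    mix = resp.mean(axis=0)

labels = resp.argmax(axis=1)
```

Note that each curve carries one responsibility vector: the curves, not the pooled sample points, are the clustering units, which is what distinguishes curve clustering from clustering vectorial data.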
Bayesian dimensionality reduction with PCA using penalized semi-integrated likelihood
We discuss the problem of estimating the number of principal components in
Principal Components Analysis (PCA). Despite the importance of the problem
and the multitude of solutions proposed in the literature, it comes as a
surprise that there does not exist a coherent asymptotic framework which would
justify different approaches depending on the actual size of the data set. In
this paper we address this issue by presenting an approximate Bayesian approach
based on Laplace approximation and introducing a general method for building
the model selection criteria, called PEnalized SEmi-integrated Likelihood
(PESEL). Our general framework encompasses a variety of existing approaches
based on probabilistic models, like e.g. Bayesian Information Criterion for the
Probabilistic PCA (PPCA), and allows for construction of new criteria,
depending on the size of the data set at hand. Specifically, we define PESEL
when the number of variables substantially exceeds the number of observations.
We also report results of extensive simulation studies and real data analysis,
which illustrate good properties of our proposed criteria as compared to the
state-of-the-art methods and very recent proposals. Specifically, these
simulations show that PESEL-based criteria can be quite robust against
deviations from the probabilistic model assumptions. Selected PESEL-based
criteria for the estimation of the number of principal components are
implemented in the R package varclust, which is available on GitHub
(https://github.com/psobczyk/varclust).
Comment: 31 pages, 7 figures
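As a rough illustration of the family of criteria PESEL generalizes, the sketch below scores candidate ranks with a BIC-type criterion built from the profile log-likelihood of probabilistic PCA on simulated data. The constants and penalties of the actual PESEL criteria differ, and the parameter count used here is an approximation:

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated data: n observations, d variables, k0 true components (made up).
n, d, k0 = 200, 10, 3
W = 2.0 * rng.standard_normal((d, k0))                 # strong loadings
data = rng.standard_normal((n, k0)) @ W.T + 0.5 * rng.standard_normal((n, d))

lam = np.linalg.eigvalsh(np.cov(data, rowvar=False))[::-1]  # eigenvalues, descending

def ppca_bic(lam, n, d, k):
    """BIC-type score from the profile log-likelihood of probabilistic PCA
    with k components (approximate parameter count)."""
    noise = lam[k:].mean()                             # ML isotropic noise variance
    loglik = -0.5 * n * (np.log(lam[:k]).sum() + (d - k) * np.log(noise)
                         + d * (1.0 + np.log(2 * np.pi)))
    nparams = d * k - k * (k - 1) / 2 + k + 1          # loadings (rotations removed) + variances
    return loglik - 0.5 * nparams * np.log(n)

scores = [ppca_bic(lam, n, d, k) for k in range(1, d)]
k_hat = 1 + int(np.argmax(scores))                     # estimated number of components
```

The point of PESEL is that the right form of such a score depends on whether observations or variables dominate; this sketch corresponds to the classical n > d regime.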
Discriminative variable selection for clustering with the sparse Fisher-EM algorithm
The interest in variable selection for clustering has increased recently due
to the growing need in clustering high-dimensional data. Variable selection
in particular eases both the clustering and the interpretation of the
results. Existing approaches have demonstrated the efficiency of variable
selection for clustering but turn out to be either very time consuming or not
sparse enough in high-dimensional spaces. This work proposes to perform a
selection of the discriminative variables by introducing sparsity in the
loading matrix of the Fisher-EM algorithm. This clustering method has been
recently proposed for the simultaneous visualization and clustering of
high-dimensional data. It is based on a latent mixture model which fits the
data into a low-dimensional discriminative subspace. Three different approaches
are proposed in this work to introduce sparsity in the orientation matrix of
the discriminative subspace through ℓ1-type penalizations. Experimental
comparisons with existing approaches on simulated and real-world data sets
demonstrate the interest of the proposed methodology. An application to the
segmentation of hyperspectral images of the planet Mars is also presented.
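The paper penalizes the orientation matrix inside the Fisher-EM iterations; as a standalone stand-in for the same ℓ1 idea, here is soft-thresholded power iteration recovering a sparse leading direction of a covariance matrix (toy data; not the authors' algorithm):

```python
import numpy as np

def soft_threshold(v, lam):
    return np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)

def sparse_leading_direction(S, lam, iters=200):
    """Soft-thresholded power iteration: a sparse estimate of the top
    eigenvector of covariance S (an l1-penalized stand-in for a sparse
    loading-matrix update, not the Fisher-EM procedure itself)."""
    u = np.ones(S.shape[0]) / np.sqrt(S.shape[0])
    for _ in range(iters):
        v = soft_threshold(S @ u, lam)
        nrm = np.linalg.norm(v)
        if nrm == 0.0:
            return v                                   # lam too large: all zeroed
        u = v / nrm
    return u

rng = np.random.default_rng(2)
# Toy setup: 5 informative variables sharing a common factor + 45 noise variables.
z = rng.standard_normal(500)
data = 0.1 * rng.standard_normal((500, 50))
data[:, :5] += z[:, None]
S = np.cov(data, rowvar=False)
u = sparse_leading_direction(S, lam=0.5)               # zero outside the informative block
```

The thresholding zeroes the loadings of the noise variables, which is exactly the kind of discriminative variable selection the abstract describes.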
Markov models for fMRI correlation structure: is brain functional connectivity small world, or decomposable into networks?
Correlations in the signal observed via functional Magnetic Resonance Imaging
(fMRI) are expected to reveal the interactions in the underlying neural
populations through the hemodynamic response. In particular, they highlight
distributed sets of mutually correlated regions that correspond to brain
networks related to different cognitive functions. Yet graph-theoretical
studies of neural connections give a different picture: that of a highly
integrated system with small-world properties: local clustering but with short
pathways across the complete structure. We examine the conditional independence
properties of the fMRI signal, i.e. its Markov structure, to find realistic
assumptions on the connectivity structure that are required to explain the
observed functional connectivity. In particular we seek a decomposition of the
Markov structure into segregated functional networks using decomposable graphs:
a set of strongly-connected and partially overlapping cliques. We introduce a
new method to efficiently extract such cliques on a large, strongly-connected
graph. We compare methods learning different graph structures from functional
connectivity by testing the goodness of fit of the model they learn on new
data. We find that summarizing the structure as strongly-connected networks can
give a good description only for very large and overlapping networks. These
results highlight that Markov models are good tools to identify the structure
of brain connectivity from fMRI signals, but for this purpose they must reflect
the small-world properties of the underlying neural systems.
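The Markov (conditional independence) structure of a Gaussian signal is encoded by zeros in its precision matrix, which is the object such graph-learning methods estimate. A minimal numpy illustration on a three-node chain:

```python
import numpy as np

# Gaussian Markov chain x1 - x2 - x3 (AR(1)-style covariance, rho = 0.6):
# x1 and x3 are conditionally independent given x2, which appears as an
# exact zero at position (1, 3) of the precision (inverse covariance) matrix.
rho = 0.6
Sigma = np.array([[1.0,    rho,  rho**2],
                  [rho,    1.0,  rho   ],
                  [rho**2, rho,  1.0   ]])
Theta = np.linalg.inv(Sigma)      # tridiagonal: the chain's Markov structure
```

Note that x1 and x3 are marginally correlated (Sigma[0, 2] = rho² ≠ 0) even though they are conditionally independent; distinguishing the two is why the authors work with Markov structure rather than raw correlations.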
Brain covariance selection: better individual functional connectivity models using population prior
Spontaneous brain activity, as observed in functional neuroimaging, has been
shown to display reproducible structure that expresses brain architecture and
carries markers of brain pathologies. An important view of modern neuroscience
is that such large-scale structure of coherent activity reflects modularity
properties of brain connectivity graphs. However, to date, there has been no
demonstration that the limited and noisy data available in spontaneous activity
observations could be used to learn full-brain probabilistic models that
generalize to new data. Learning such models entails two main challenges: i)
modeling full brain connectivity is a difficult estimation problem that faces
the curse of dimensionality and ii) variability between subjects, coupled with
the variability of functional signals between experimental runs, makes the use
of multiple datasets challenging. We describe subject-level brain functional
connectivity structure as a multivariate Gaussian process and introduce a new
strategy to estimate it from group data, by imposing a common structure on the
graphical model in the population. We show that individual models learned from
functional Magnetic Resonance Imaging (fMRI) data using this population prior
generalize better to unseen data than models based on alternative
regularization schemes. To our knowledge, this is the first report of a
cross-validated model of spontaneous brain activity. Finally, we use the
estimated graphical model to explore the large-scale characteristics of
functional architecture and show for the first time that known cognitive
networks appear as the integrated communities of the functional connectivity graph.
Comment: In Advances in Neural Information Processing Systems, Vancouver, Canada (2010)
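A drastically simplified numpy sketch of the population-prior idea: shrink each subject's noisy covariance toward the group covariance before inversion, then check that the resulting precision matrix scores better on held-out data. The paper imposes a common structure on the graphical model rather than this ridge-like shrinkage, and the data and weight α below are made up:

```python
import numpy as np

rng = np.random.default_rng(3)
d = 20
A = rng.standard_normal((d, d)) / np.sqrt(d)
Sigma_true = A @ A.T + np.eye(d)                 # shared population covariance (toy)
L = np.linalg.cholesky(Sigma_true)

def sample(n):
    return rng.standard_normal((n, d)) @ L.T

def gauss_score(Theta, S):
    """Gaussian log-likelihood of precision Theta on scatter S (up to constants)."""
    return np.linalg.slogdet(Theta)[1] - np.trace(S @ Theta)

subjects = [sample(30) for _ in range(10)]       # few noisy samples per subject
S_group = sum(np.cov(Y, rowvar=False) for Y in subjects) / len(subjects)

S1 = np.cov(subjects[0], rowvar=False)           # one subject's empirical covariance
alpha = 0.5                                      # shrinkage weight (arbitrary here)
Theta_raw = np.linalg.inv(S1)                    # subject-only estimate (overfits)
Theta_prior = np.linalg.inv((1 - alpha) * S1 + alpha * S_group)

S_test = np.cov(sample(1000), rowvar=False)      # held-out data from the same model
score_raw = gauss_score(Theta_raw, S_test)
score_prior = gauss_score(Theta_prior, S_test)   # population-informed estimate wins
```

The comparison mirrors the paper's cross-validation logic: goodness of fit on unseen data, not in-sample fit, decides between regularization schemes.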
Sparse integrative clustering of multiple omics data sets
High resolution microarrays and second-generation sequencing platforms are
powerful tools to investigate genome-wide alterations in DNA copy number,
methylation and gene expression associated with a disease. An integrated
genomic profiling approach measures multiple omics data types simultaneously in
the same set of biological samples. Such an approach yields an integrated data
resolution that would not be available with any single data type. In this
study, we use penalized latent variable regression methods for joint modeling
of multiple omics data types to identify common latent variables that can be
used to cluster patient samples into biologically and clinically relevant
disease subtypes. We consider lasso [J. Roy. Statist. Soc. Ser. B 58 (1996)
267-288], elastic net [J. R. Stat. Soc. Ser. B Stat. Methodol. 67 (2005)
301-320] and fused lasso [J. R. Stat. Soc. Ser. B Stat. Methodol. 67 (2005)
91-108] methods to induce sparsity in the coefficient vectors, revealing
important genomic features that have significant contributions to the latent
variables. An iterative ridge regression is used to compute the sparse
coefficient vectors. In model selection, a uniform design [Monographs on
Statistics and Applied Probability (1994) Chapman & Hall] is used to seek
"experimental" points that are scattered uniformly across the search domain for
efficient sampling of tuning parameter combinations. We compared our method to
sparse singular value decomposition (SVD) and penalized Gaussian mixture model
(GMM) using both real and simulated data sets. The proposed method is applied
to integrate genomic, epigenomic and transcriptomic data for subtype analysis
in breast and lung cancer data sets.
Comment: Published in the Annals of Applied Statistics (http://www.imstat.org/aoas/) at http://dx.doi.org/10.1214/12-AOAS578 by the Institute of Mathematical Statistics (http://www.imstat.org)
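The iterative ridge step can be illustrated on a plain lasso regression: replace the ℓ1 penalty by a quadratic weighted by 1/|β_j| from the previous iterate and repeatedly solve a ridge system (a local quadratic approximation). This is a standalone toy on made-up data, not the paper's latent-variable fitting procedure:

```python
import numpy as np

rng = np.random.default_rng(4)
n, p = 100, 10
X = rng.standard_normal((n, p))
beta_true = np.zeros(p)
beta_true[:3] = [3.0, -2.0, 1.5]                    # 3 informative features (made up)
y = X @ beta_true + 0.1 * rng.standard_normal(n)

lam, eps = 5.0, 1e-6
beta = np.linalg.lstsq(X, y, rcond=None)[0]         # start from ordinary least squares
for _ in range(100):
    # Local quadratic approximation of the l1 penalty: each coefficient gets
    # ridge weight lam / |beta_j|, so small coefficients are driven to zero.
    D = np.diag(lam / np.maximum(np.abs(beta), eps))
    beta = np.linalg.solve(X.T @ X + D, X.T @ y)
beta[np.abs(beta) < 1e-4] = 0.0                     # clip numerically-dead coefficients
```

Each pass is an ordinary ridge solve, which is why the approach scales to the high-dimensional omics setting where direct ℓ1 optimization inside the latent-variable model would be awkward.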
Skewed Factor Models Using Selection Mechanisms
Traditional factor models explicitly or implicitly assume that the factors follow a multivariate normal distribution; that is, only moments up to order two are involved. However, it may happen in real data problems that the first two moments cannot explain the factors. Motivated by this, we devise three new skewed factor models, the skew-normal, the skew-t, and the generalized skew-normal factor models, each depending on a selection mechanism on the factors. ECME algorithms are adopted to estimate the related parameters for statistical inference. Monte Carlo simulations validate our new models, and we demonstrate the need for skewed factor models using the classic open/closed book exam scores dataset.
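The selection mechanism itself is easy to demonstrate outside any factor model: conditioning one component of a correlated Gaussian pair on the other being positive yields a skew-normal variate. A numpy sketch with toy parameters:

```python
import numpy as np

rng = np.random.default_rng(5)
n, delta = 400000, 0.9                     # toy sample size and latent correlation

# Latent Gaussian pair (U, V) with correlation delta; selecting V on the
# event U > 0 yields a skew-normal sample, shape alpha = delta / sqrt(1 - delta^2).
U = rng.standard_normal(n)
V = delta * U + np.sqrt(1 - delta**2) * rng.standard_normal(n)
skewed = V[U > 0]                          # positively skewed, mean delta * sqrt(2/pi)
```

Applying such a selection to the latent factors, rather than to a single variable, is what turns an ordinary factor model into the skewed factor models of the abstract.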
Statistical M-Estimation and Consistency in Large Deformable Models for Image Warping
The problem of defining appropriate distances between shapes or images and modeling the variability of natural images by group transformations is at the heart of modern image analysis. A current trend is the study of probabilistic and statistical aspects of deformation models, and the development of consistent statistical procedures for the estimation of template images. In this paper, we consider a set of images randomly warped from a mean template which has to be recovered. For this, we define an appropriate statistical parametric model to generate random diffeomorphic deformations in two dimensions. Then, we focus on the problem of estimating the mean pattern when the images are observed with noise. This problem is challenging both from a theoretical and a practical point of view. M-estimation theory enables us to build an estimator defined as a minimizer of a well-tailored empirical criterion. We prove the convergence of this estimator and propose a gradient descent algorithm to compute this M-estimator in practice. Simulations of template extraction and an application to image clustering and classification are also provided.
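As a heavily simplified 1-D analogue of template estimation under deformation (circular shifts of a signal standing in for the paper's 2-D diffeomorphic warps, and alternating alignment/averaging standing in for gradient descent on the M-estimation criterion):

```python
import numpy as np

rng = np.random.default_rng(6)
t = np.linspace(0, 1, 100, endpoint=False)
template = np.exp(-((t - 0.5) ** 2) / 0.01)         # true mean pattern (toy)

# Observations: circularly shifted, noisy copies of the template.
shifts = rng.integers(-10, 11, size=30)
Y = np.array([np.roll(template, s) + 0.05 * rng.standard_normal(t.size)
              for s in shifts])

est = Y.mean(axis=0)                                # blurred initial estimate
for _ in range(5):
    # Align each observation to the current template by the best circular
    # shift (minimizing squared error over shifts via FFT cross-correlation),
    # then re-average: each step decreases the empirical criterion.
    aligned = []
    for y in Y:
        corr = np.real(np.fft.ifft(np.fft.fft(est) * np.conj(np.fft.fft(y))))
        aligned.append(np.roll(y, int(np.argmax(corr))))
    est = np.mean(aligned, axis=0)
```

The recovered template matches the truth up to a global circular shift, the 1-D counterpart of the identifiability-up-to-deformation issue the paper's theory has to handle.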