89,854 research outputs found

    Optimal Clustering under Uncertainty

    Classical clustering algorithms typically either lack an underlying probability framework to make them predictive or focus on parameter estimation rather than defining and minimizing a notion of error. Recent work addresses these issues by developing a probabilistic framework based on the theory of random labeled point processes and characterizing a Bayes clusterer that minimizes the number of misclustered points. The Bayes clusterer is analogous to the Bayes classifier: whereas determining a Bayes classifier requires full knowledge of the feature-label distribution, deriving a Bayes clusterer requires full knowledge of the point process. When uncertain of the point process, one would like to find a robust clusterer that is optimal over the uncertainty, just as one may find optimal robust classifiers with uncertain feature-label distributions. Herein, we derive an optimal robust clusterer by first finding an effective random point process that incorporates all randomness within its own probabilistic structure; a Bayes clusterer derived from this effective process is then optimal relative to the uncertainty. This is analogous to the use of effective class-conditional distributions in robust classification. After evaluating the performance of robust clusterers on synthetic Gaussian mixture models, we apply the framework to granular imaging, where we use the asymptotic granulometric moment theory for granular images to relate robust clustering theory to the application. Comment: 19 pages, 5 eps figures, 1 table
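
    As a rough illustration of the effective-distribution idea (not the paper's random labeled point process machinery), the sketch below clusters points from a two-component Gaussian mixture whose component means are uncertain: each component density is averaged over samples from an assumed prior on its mean, and each point is assigned to the component with the larger effective posterior. The prior, the shared variance, and the pointwise assignment rule are all illustrative assumptions; the actual Bayes clusterer also accounts for label switching and minimizes the expected number of misclustered points over whole partitions.

        # Minimal sketch of clustering with "effective" component densities under
        # parameter uncertainty (illustrative assumptions, not the paper's method).
        import numpy as np
        from scipy.stats import norm

        rng = np.random.default_rng(0)
        sigma = 1.0                      # assumed known component std deviation
        weights = np.array([0.5, 0.5])   # assumed known mixing weights

        def effective_posteriors(x, n_samples=2000):
            """Average each component density over sampled means; the prior on the
            means is an assumption: N(-2, 0.5^2) and N(+2, 0.5^2)."""
            prior_means = (-2.0, 2.0)
            dens = np.zeros((len(x), 2))
            for c, m0 in enumerate(prior_means):
                mus = rng.normal(m0, 0.5, size=n_samples)        # sampled means
                dens[:, c] = norm.pdf(x[:, None], loc=mus, scale=sigma).mean(axis=1)
            post = weights * dens
            return post / post.sum(axis=1, keepdims=True)

        # toy data drawn from one realization of the uncertain model
        x = np.concatenate([rng.normal(-2.2, sigma, 100), rng.normal(1.8, sigma, 100)])
        labels = effective_posteriors(x).argmax(axis=1)          # pointwise assignment
        print(np.bincount(labels))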

    Semantic distillation: a method for clustering objects by their contextual specificity

    Techniques for data mining, latent semantic analysis, contextual database search, and related tasks were developed long ago by computer scientists working on information retrieval (IR). Experimental scientists from all disciplines, having to analyse large collections of raw experimental data (astronomical, physical, biological, etc.), have developed powerful methods for statistical analysis and for clustering, categorising, and classifying objects. Finally, physicists have developed a theory of quantum measurement, unifying the logical, algebraic, and probabilistic aspects of queries into a single formalism. The purpose of this paper is twofold: first, to show that when formulated at an abstract level, problems from IR, from statistical data analysis, and from physical measurement theories are very similar and hence can profitably be cross-fertilised; and second, to propose a novel method of fuzzy hierarchical clustering, termed semantic distillation, strongly inspired by the theory of quantum measurement, which we developed to analyse raw data coming from various types of experiments on DNA arrays. We illustrate the method by analysing DNA array experiments and clustering the genes of the array according to their specificity. Comment: Accepted for publication in Studies in Computational Intelligence, Springer-Verlag

    Noisy Subspace Clustering via Thresholding

    We consider the problem of clustering noisy high-dimensional data points into a union of low-dimensional subspaces and a set of outliers. The number of subspaces, their dimensions, and their orientations are unknown. A probabilistic performance analysis of the thresholding-based subspace clustering (TSC) algorithm introduced recently in [1] shows that TSC succeeds in the noisy case, even when the subspaces intersect. Our results reveal an explicit tradeoff between the allowed noise level and the affinity of the subspaces. We furthermore find that the simple outlier detection scheme introduced in [1] provably succeeds in the noisy case. Comment: Presented at the IEEE Int. Symp. Inf. Theory (ISIT) 2013, Istanbul, Turkey. The version posted here corrects a minor error in the published version; specifically, the exponent -c n_l in the success probability of Theorem 1 and in the corresponding proof outline has been corrected to -c(n_l - 1).
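
    The abstract does not restate the TSC procedure itself; as a rough sketch of a thresholding-based affinity construction in the same spirit, the code below normalizes the points, keeps for each point its q strongest absolute correlations, symmetrizes the resulting graph, and runs spectral clustering, with a crude flag for points whose strongest correlation is unusually low. The edge weights, the choice of q, and the outlier rule are illustrative assumptions, not the exact algorithm and constants analysed in [1].

        # Rough TSC-style sketch: threshold correlations, build an affinity graph,
        # cluster spectrally.  Parameters and weighting are illustrative assumptions.
        import numpy as np
        from sklearn.cluster import SpectralClustering

        def tsc_sketch(X, n_clusters, q=5, outlier_quantile=0.05):
            """X: (n_points, dim) array.  Returns (labels, outlier_mask)."""
            Xn = X / np.linalg.norm(X, axis=1, keepdims=True)   # unit-norm points
            C = np.abs(Xn @ Xn.T)                               # absolute correlations
            np.fill_diagonal(C, 0.0)

            n = C.shape[0]
            A = np.zeros_like(C)
            for i in range(n):                                  # keep q strongest links
                nbrs = np.argsort(C[i])[-q:]
                A[i, nbrs] = C[i, nbrs]
            A = A + A.T                                         # symmetric affinity

            # crude outlier flag: weakly correlated points
            strongest = C.max(axis=1)
            outliers = strongest < np.quantile(strongest, outlier_quantile)

            labels = SpectralClustering(n_clusters=n_clusters,
                                        affinity="precomputed",
                                        random_state=0).fit_predict(A)
            return labels, outliers

        # toy example: two random 2-D subspaces of R^20 plus noise
        rng = np.random.default_rng(1)
        U1, U2 = rng.standard_normal((20, 2)), rng.standard_normal((20, 2))
        X = np.vstack([rng.standard_normal((50, 2)) @ U1.T,
                       rng.standard_normal((50, 2)) @ U2.T])
        X += 0.05 * rng.standard_normal((100, 20))
        labels, outliers = tsc_sketch(X, n_clusters=2, q=8)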

    Self-organized model of cascade spreading

    We study simultaneous price drops of real stocks and show that for high drop thresholds they follow a power-law distribution. To reproduce these collective downturns, we propose a minimal self-organized model of cascade spreading based on a probabilistic response of the system elements to stress conditions. This model is solvable using the theory of branching processes and the mean-field approximation. For a wide range of parameters, the system is in a critical state and displays a power-law cascade-size distribution similar to the empirically observed one. We further generalize the model to reproduce volatility clustering and other observed properties of real stocks. Comment: 8 pages, 6 figures
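
    A minimal simulation in the spirit of such a branching-process description is sketched below, assuming a Poisson offspring law with branching ratio m (both assumptions for illustration, not the paper's calibrated model): at the critical point m = 1 the simulated cascade-size distribution develops the heavy, power-law-like tail the abstract refers to.

        # Minimal critical-branching cascade simulation (illustrative assumptions).
        import numpy as np

        def cascade_size(m, rng, cap=10_000):
            """Simulate one cascade; return its total size (capped to avoid blow-up)."""
            size, active = 1, 1
            while active > 0 and size < cap:
                offspring = rng.poisson(m, size=active).sum()   # Poisson offspring law
                size += offspring
                active = offspring
            return size

        rng = np.random.default_rng(0)
        sizes = np.array([cascade_size(m=1.0, rng=rng) for _ in range(20_000)])

        # crude check of the heavy tail: survival probability at a few thresholds
        for s in (1, 10, 100, 1000):
            print(s, (sizes >= s).mean())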

    Joint ranking and clustering based on Markov Chain transition probabilities learned from data

    The focus of this thesis is to develop a Markov chain based framework for joint ranking and clustering of a dataset without the need for critical user-defined hyper-parameters. Joint ranking and clustering may be useful in several respects and may give the data analyst additional insight, as opposed to the traditional separate ranking and clustering procedures. By coupling Markov chain theory with recent advances in kernel methods using the so-called probabilistic cluster kernel, we are able to learn the transition probabilities from the inherent structures in the data in a nearly parameter-free manner. The theory developed in this thesis is applied to several real-world datasets of different types, with promising results.
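
    As a hedged sketch of the joint ranking-and-clustering idea, the code below builds a row-stochastic transition matrix from a similarity kernel, ranks points by the stationary distribution of the resulting Markov chain, and clusters them with spectral clustering on the kernel. An ordinary RBF kernel stands in for the probabilistic cluster kernel used in the thesis, and the ranking and clustering steps are a generic approximation, not the thesis's exact procedure.

        # Generic joint ranking + clustering from a data-driven Markov chain
        # (RBF kernel as a stand-in for the probabilistic cluster kernel).
        import numpy as np
        from sklearn.cluster import SpectralClustering
        from sklearn.metrics.pairwise import rbf_kernel

        def rank_and_cluster(X, n_clusters, gamma=0.5):
            K = rbf_kernel(X, gamma=gamma)            # stand-in similarity kernel
            np.fill_diagonal(K, 0.0)
            P = K / K.sum(axis=1, keepdims=True)      # row-stochastic transition matrix

            # stationary distribution via power iteration -> ranking scores
            pi = np.full(len(X), 1.0 / len(X))
            for _ in range(500):
                pi = pi @ P
            ranking = np.argsort(-pi)                 # most "central" points first

            labels = SpectralClustering(n_clusters=n_clusters,
                                        affinity="precomputed",
                                        random_state=0).fit_predict(K)
            return ranking, pi, labels

        # toy usage on two Gaussian blobs
        rng = np.random.default_rng(2)
        X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(5, 1, (50, 2))])
        ranking, scores, labels = rank_and_cluster(X, n_clusters=2)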