
    Nonsmooth optimization models and algorithms for data clustering and visualization

    Cluster analysis deals with the problem of organizing a collection of patterns into clusters based on a similarity measure. Various distance functions can be used to define this measure. Clustering problems with the similarity measure defined by the squared Euclidean distance have been studied extensively over the last five decades. However, problems with other Minkowski norms have attracted significantly less attention. The use of different similarity measures may help to identify different cluster structures in a data set, which in turn may significantly improve the decision-making process. High-dimensional data visualization is another important task in the field of data mining and pattern recognition. To date, principal component analysis and self-organizing maps have been used to solve such problems. In this thesis we develop algorithms for solving clustering problems in large data sets using various similarity measures. Such similarity measures are based on the squared L…
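
    To make the role of the similarity measure concrete, the sketch below shows a Lloyd-style clustering loop in Python in which the Minkowski exponent p is a parameter. It is only an illustration of how swapping the norm changes cluster assignments; it is not the nonsmooth optimization algorithm developed in the thesis, and the centroid update for p other than 1 or 2 is a heuristic.

    import numpy as np

    def minkowski_distances(X, centers, p):
        # pairwise Minkowski (L_p) distances between points and cluster centers
        diff = np.abs(X[:, None, :] - centers[None, :, :])
        return (diff ** p).sum(axis=2) ** (1.0 / p)

    def lp_kmeans(X, k, p=2, n_iter=50, seed=0):
        rng = np.random.default_rng(seed)
        centers = X[rng.choice(len(X), size=k, replace=False)].astype(float)
        for _ in range(n_iter):
            labels = minkowski_distances(X, centers, p).argmin(axis=1)
            for j in range(k):
                members = X[labels == j]
                if len(members) == 0:
                    continue
                # p=1: the coordinate-wise median minimizes the L1 objective;
                # p=2: the mean; for other p the mean is used only as a heuristic.
                centers[j] = np.median(members, axis=0) if p == 1 else members.mean(axis=0)
        return labels, centers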

    Clustering by soft-constraint affinity propagation: Applications to gene-expression data

    Motivation: Similarity-measure based clustering is a crucial problem appearing throughout scientific data analysis. Recently, a powerful new algorithm called Affinity Propagation (AP), based on message-passing techniques, was proposed by Frey and Dueck [Frey07]. In AP, each cluster is identified by a common exemplar to which all other data points of the same cluster refer, and exemplars have to refer to themselves. Despite its proven power, AP in its present form suffers from a number of drawbacks. The hard constraint of having exactly one exemplar per cluster restricts AP to classes of regularly shaped clusters and leads to suboptimal performance, e.g., in analyzing gene expression data. Results: This limitation can be overcome by relaxing the AP hard constraints. A new parameter controls the importance of the constraints relative to the aim of maximizing the overall similarity, and makes it possible to interpolate between the simple case where each data point selects its closest neighbor as an exemplar and the original AP. The resulting soft-constraint affinity propagation (SCAP) is more informative and accurate and leads to more stable clustering. Even though a new a priori free parameter is introduced, the overall dependence of the algorithm on external tuning is reduced, as robustness is increased and an optimal strategy for parameter selection emerges more naturally. SCAP is tested on biological benchmark data, including in particular microarray data related to various cancer types. We show that the algorithm efficiently unveils the hierarchical cluster structure present in the data sets. Furthermore, it allows the extraction of sparse gene expression signatures for each cluster. Comment: 11 pages, supplementary material: http://isiosf.isi.it/~weigt/scap_supplement.pd
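
    For orientation, the following numpy sketch implements the original hard-constraint affinity propagation updates of Frey and Dueck (responsibilities and availabilities with damping). The soft-constraint relaxation described above replaces the self-exemplar constraint with a finite penalty and is not reproduced here; the similarity-matrix layout and parameter values are assumptions chosen for illustration.

    import numpy as np

    def affinity_propagation(S, damping=0.9, n_iter=200):
        # S[i, k]: similarity of point i to candidate exemplar k;
        # the diagonal S[k, k] holds the exemplar preferences.
        n = S.shape[0]
        R = np.zeros((n, n))   # responsibilities
        A = np.zeros((n, n))   # availabilities
        rows = np.arange(n)
        for _ in range(n_iter):
            # r(i,k) = s(i,k) - max_{k' != k} [ a(i,k') + s(i,k') ]
            AS = A + S
            best = AS.argmax(axis=1)
            first = AS[rows, best]
            AS[rows, best] = -np.inf
            second = AS.max(axis=1)
            R_new = S - first[:, None]
            R_new[rows, best] = S[rows, best] - second
            R = damping * R + (1 - damping) * R_new
            # a(i,k) = min(0, r(k,k) + sum_{i' not in {i,k}} max(0, r(i',k)))
            Rp = np.maximum(R, 0)
            np.fill_diagonal(Rp, np.diag(R))
            col = Rp.sum(axis=0)
            A_new = np.minimum(0, col[None, :] - Rp)
            np.fill_diagonal(A_new, col - np.diag(R))
            A = damping * A + (1 - damping) * A_new
        exemplars = np.flatnonzero(np.diag(A + R) > 0)
        if len(exemplars):
            labels = exemplars[S[:, exemplars].argmax(axis=1)]
            labels[exemplars] = exemplars   # exemplars label themselves
        else:
            labels = None
        return exemplars, labels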

    A Physiologically Based System Theory of Consciousness

    A system which uses large numbers of devices to perform a complex functionality is forced to adopt a simple functional architecture by the needs to construct copies of, repair, and modify the system. A simple functional architecture means that functionality is partitioned into relatively equal sized components on many levels of detail down to device level, a mapping exists between the different levels, and exchange of information between components is minimized. In the instruction architecture functionality is partitioned on every level into instructions, which exchange unambiguous system information and therefore output system commands. The von Neumann architecture is a special case of the instruction architecture in which instructions are coded as unambiguous system information. In the recommendation (or pattern extraction) architecture functionality is partitioned on every level into repetition elements, which can freely exchange ambiguous information and therefore output only system action recommendations, which must compete for control of system behavior. Partitioning is optimized to the best tradeoff between even partitioning and minimum cost of distributing data. Natural pressures deriving from the need to construct copies under DNA control, recover from errors, failures and damage, and add new functionality derived from random mutations have resulted in biological brains being constrained to adopt the recommendation architecture. The resultant hierarchy of functional separations can be the basis for understanding psychological phenomena in terms of physiology. A theory of consciousness is described based on the recommendation architecture model for biological brains. Consciousness is defined at a high level in terms of sensory-independent image sequences, including self-images, whose role is to extend the search of records of individual experience for behavioral guidance in complex social situations. Functional components of this definition of consciousness are developed, and it is demonstrated that these components can be translated through subcomponents to descriptions in terms of known and postulated physiological mechanisms.

    Soft clustering analysis of galaxy morphologies: A worked example with SDSS

    Context: The huge and still rapidly growing number of galaxies in modern sky surveys raises the need for an automated and objective classification method. Unsupervised learning algorithms are of particular interest, since they discover classes automatically. Aims: We briefly discuss the pitfalls of oversimplified classification methods and outline an alternative approach called "clustering analysis". Methods: We categorise different classification methods according to their capabilities. Based on this categorisation, we present a probabilistic classification algorithm that automatically detects the optimal classes preferred by the data. We explore the reliability of this algorithm in systematic tests. Using a small sample of bright galaxies from the SDSS, we demonstrate the performance of this algorithm in practice. We are able to disentangle the problems of classification and parametrisation of galaxy morphologies in this case. Results: We give physical arguments that a probabilistic classification scheme is necessary. The algorithm we present produces reasonable morphological classes and object-to-class assignments without any prior assumptions. Conclusions: There are sophisticated automated classification algorithms that meet all necessary requirements, but a lot of work is still needed on the interpretation of the results. Comment: 18 pages, 19 figures, 2 tables, submitted to A
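
    A common concrete realization of such probabilistic ("soft") clustering with automatic selection of the number of classes is a Gaussian mixture model scored by an information criterion. The scikit-learn sketch below is only meant to illustrate that idea; the algorithm used in the paper may differ in both the mixture model and the model-selection rule.

    import numpy as np
    from sklearn.mixture import GaussianMixture

    def soft_cluster(X, max_components=10, seed=0):
        # fit mixtures with 1..max_components classes and keep the BIC-best one
        best_model, best_bic = None, np.inf
        for k in range(1, max_components + 1):
            gmm = GaussianMixture(n_components=k, covariance_type="full",
                                  random_state=seed).fit(X)
            bic = gmm.bic(X)
            if bic < best_bic:
                best_model, best_bic = gmm, bic
        # soft memberships: one probability per object and class
        return best_model, best_model.predict_proba(X)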

    Comparison of chemical clustering methods using graph- and fingerprint-based similarity measures

    This paper compares several published methods for clustering chemical structures, using both graph- and fingerprint-based similarity measures. The clusterings from each method were compared to determine the degree of cluster overlap. Each method was also evaluated on how well it grouped structures into clusters possessing a non-trivial substructural commonality. The methods which employ adjustable parameters were tested to determine the stability of each parameter for datasets of varying size and composition. Our experiments suggest that both graph- and fingerprint-based similarity measures can be used effectively for generating chemical clusterings; they also suggest that the CAST and Yin–Chen methods, recently proposed for the clustering of gene expression patterns, may prove effective for the clustering of 2D chemical structures.
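
    As a minimal illustration of fingerprint-based similarity, the sketch below computes the Tanimoto coefficient on binary fingerprints and applies a simple leader-style clustering with a similarity threshold. The threshold value and the leader algorithm are stand-ins chosen for brevity; they are not the CAST, Yin–Chen, or other methods compared in the paper.

    import numpy as np

    def tanimoto(a, b):
        # Tanimoto coefficient between two binary fingerprint vectors
        common = np.logical_and(a, b).sum()
        union = np.logical_or(a, b).sum()
        return common / union if union else 0.0

    def leader_cluster(fingerprints, threshold=0.7):
        # assign each fingerprint to the first cluster whose leader is at least
        # `threshold` similar to it; otherwise start a new cluster
        leaders, labels = [], []
        for fp in fingerprints:
            for ci, leader in enumerate(leaders):
                if tanimoto(fp, leader) >= threshold:
                    labels.append(ci)
                    break
            else:
                leaders.append(fp)
                labels.append(len(leaders) - 1)
        return np.array(labels)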

    Exploiting citation networks for large-scale author name disambiguation

    We present a novel algorithm and validation method for disambiguating author names in very large bibliographic data sets and apply it to the full Web of Science (WoS) citation index. Our algorithm relies only upon the author and citation graphs available for the whole period covered by the WoS. A pair-wise publication similarity metric, based on common co-authors, self-citations, shared references and citations, is established to perform a two-step agglomerative clustering that first connects individual papers and then merges similar clusters. This parameterized model is optimized using an h-index based recall measure, favoring the correct assignment of well-cited publications, and a name-initials-based precision using WoS metadata and cross-referenced Google Scholar profiles. Despite the use of limited metadata, we reach a recall of 87% and a precision of 88%, with a preference for researchers with high h-index values. The 47 million articles of the WoS can be disambiguated on a single machine in less than a day. We also develop an h-index distribution model and confirm that its predictions are in excellent agreement with the empirical data, yielding insight into the utility of the h-index in real academic ranking scenarios. Comment: 14 pages, 5 figure
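
    The two-step procedure can be pictured with a toy sketch: a pairwise similarity built from shared co-authors and shared references, followed by agglomerative clustering of all papers carrying the same name into author identities. The field names, weights, and cut threshold below are illustrative assumptions, and the full WoS metric described above also uses self-citations and citation links.

    import numpy as np
    from scipy.cluster.hierarchy import linkage, fcluster
    from scipy.spatial.distance import squareform

    def paper_similarity(p, q, w_coauthors=1.0, w_refs=0.5):
        # toy similarity from shared co-authors and shared references
        # (each paper is assumed to be a dict of sets: 'coauthors', 'references')
        return (w_coauthors * len(p["coauthors"] & q["coauthors"])
                + w_refs * len(p["references"] & q["references"]))

    def disambiguate(papers, cut=0.5):
        # agglomerative clustering of one author-name block into identities
        n = len(papers)
        D = np.ones((n, n))
        for i in range(n):
            for j in range(i + 1, n):
                s = paper_similarity(papers[i], papers[j])
                D[i, j] = D[j, i] = 1.0 / (1.0 + s)   # similarity -> distance
        np.fill_diagonal(D, 0.0)
        Z = linkage(squareform(D), method="average")
        return fcluster(Z, t=cut, criterion="distance")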

    A Functional Architecture Approach to Neural Systems

    The technology for the design of systems that perform extremely complex combinations of real-time functionality has developed over a long period. This technology is based on the use of a hardware architecture with a physical separation into memory and processing, and a software architecture which divides functionality into a disciplined hierarchy of software components that exchange unambiguous information. This technology experiences difficulty in the design of systems that perform parallel processing, and extreme difficulty in the design of systems which can heuristically change their own functionality. These limitations derive from the approach to information exchange between functional components. A design approach in which functional components can exchange ambiguous information leads to systems with the recommendation architecture, which are less subject to these limitations. Biological brains have been constrained by natural pressures to adopt functional architectures with this different information exchange approach. Neural networks have not made a complete shift to the use of ambiguous information, and do not adequately manage context for ambiguous information exchange between modules. As a result such networks cannot be scaled to complex functionality. Simulations of systems with the recommendation architecture demonstrate the capability to organize heuristically to perform complex functionality.

    Effects of Galaxy Formation on Thermodynamics of the Intracluster Medium

    We present detailed comparisons of the intracluster medium (ICM) in cosmological Eulerian cluster simulations with deep Chandra observations of nearby relaxed clusters. To assess the impact of galaxy formation, we compare two sets of simulations, one performed in the non-radiative regime and another with radiative cooling and several physical processes critical to various aspects of galaxy formation: star formation, metal enrichment and stellar feedback. We show that the observed ICM properties outside cluster cores are well reproduced in the simulations that include cooling and star formation, while the non-radiative simulations predict an overall shape of the ICM profiles inconsistent with observations. In particular, we find that the ICM entropy in our runs with cooling is enhanced to the observed levels at radii as large as half of the virial radius. We also find that outside cluster cores the entropy scaling with the mean ICM temperature in both simulations and Chandra observations is consistent with being self-similar within current error bars. We find that the pressure profiles of simulated clusters are also close to self-similar and exhibit little cluster-to-cluster scatter. The X-ray observable-total mass relations for our simulated sample agree with the Chandra measurements to ~10%-20% in normalization. We show that this systematic difference could be caused by subsonic gas motions, which are unaccounted for in X-ray hydrostatic mass estimates. The much improved agreement of simulations and observations in the ICM profiles and scaling relations is encouraging, and the existence of tight relations between X-ray observables, such as Yx, and total cluster mass, together with the simple redshift evolution of these relations, holds promise for the use of clusters as cosmological probes. Comment: 14 pages, 6 figures. Matches version accepted to Ap
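
    The hydrostatic mass estimate referred to above assumes that thermal pressure alone balances gravity, M(<r) = -k_B T r / (G mu m_p) * (dln rho_g/dln r + dln T/dln r), which is why unaccounted subsonic gas motions bias it low. The numerical sketch below evaluates this standard estimate from tabulated profiles; the unit choices and mean molecular weight are assumptions, and it is not the analysis pipeline of the paper.

    import numpy as np

    # cgs constants and an assumed fully ionized ICM mean molecular weight
    G = 6.674e-8        # cm^3 g^-1 s^-2
    K_B = 1.381e-16     # erg K^-1
    M_P = 1.673e-24     # g
    MU = 0.59

    def hydrostatic_mass(r, T, rho_gas):
        # M(<r) from radial temperature and gas-density profiles
        # (r in cm, T in K, rho_gas in g cm^-3); returns mass in grams.
        # Neglects non-thermal (e.g. turbulent) pressure support.
        dln_rho = np.gradient(np.log(rho_gas), np.log(r))
        dln_T = np.gradient(np.log(T), np.log(r))
        return -K_B * T * r / (G * MU * M_P) * (dln_rho + dln_T)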