Soft clustering analysis of galaxy morphologies: A worked example with SDSS
Context: The huge and still rapidly growing number of galaxies in modern sky
surveys raises the need for an automated and objective classification method.
Unsupervised learning algorithms are of particular interest, since they
discover classes automatically. Aims: We briefly discuss the pitfalls of
oversimplified classification methods and outline an alternative approach
called "clustering analysis". Methods: We categorise different classification
methods according to their capabilities. Based on this categorisation, we
present a probabilistic classification algorithm that automatically detects the
optimal classes preferred by the data. We explore the reliability of this
algorithm in systematic tests. Using a small sample of bright galaxies from the
SDSS, we demonstrate the performance of this algorithm in practice. We are able
to disentangle the problems of classification and parametrisation of galaxy
morphologies in this case. Results: We give physical arguments that a
probabilistic classification scheme is necessary. The algorithm we present
produces reasonable morphological classes and object-to-class assignments
without any prior assumptions. Conclusions: There are sophisticated automated
classification algorithms that meet all necessary requirements, but a lot of
work is still needed on the interpretation of the results. Comment: 18 pages, 19 figures, 2 tables, submitted to A
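The abstract above argues for probabilistic (soft) classification, in which each object receives class-membership probabilities rather than a hard label. The paper's own algorithm is not specified here, so the following is only a generic sketch of the idea using expectation-maximisation for a two-component Gaussian mixture on a single made-up "morphology" feature; the data, feature, and component count are all illustrative assumptions, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy one-dimensional "morphology" feature drawn from two latent classes
# (purely illustrative -- not SDSS data).
x = np.concatenate([rng.normal(-2.0, 0.7, 200), rng.normal(2.0, 0.7, 200)])

# EM for a two-component 1-D Gaussian mixture.
w = np.array([0.5, 0.5])      # mixing weights
mu = np.array([-1.0, 1.0])    # component means
var = np.array([1.0, 1.0])    # component variances

for _ in range(50):
    # E-step: soft responsibilities p(class k | x_i) for every object.
    dens = w * np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
    resp = dens / dens.sum(axis=1, keepdims=True)
    # M-step: re-estimate parameters from the soft assignments.
    nk = resp.sum(axis=0)
    w = nk / len(x)
    mu = (resp * x[:, None]).sum(axis=0) / nk
    var = (resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk

print(sorted(mu.round(2)))  # recovered means sit near the true centres -2 and 2
```

The `resp` array is the probabilistic object-to-class assignment the abstract refers to: each row sums to one, so ambiguous objects split their membership between classes instead of being forced into one.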
Laplacian Mixture Modeling for Network Analysis and Unsupervised Learning on Graphs
Laplacian mixture models identify overlapping regions of influence in
unlabeled graph and network data in a scalable and computationally efficient
way, yielding useful low-dimensional representations. By combining Laplacian
eigenspace and finite mixture modeling methods, they provide probabilistic or
fuzzy dimensionality reductions or domain decompositions for a variety of input
data types, including mixture distributions, feature vectors, and graphs or
networks. Optimal recovery by the algorithm is proven analytically for a
nontrivial class of cluster graphs. Heuristic approximations for scalable
high-performance implementations are described and empirically tested.
Connections to PageRank and community detection in network analysis demonstrate
the wide applicability of this approach. The origins of fuzzy spectral methods,
beginning with generalized heat or diffusion equations in physics, are reviewed
and summarized. Comparisons to other dimensionality reduction and clustering
methods for challenging unsupervised machine learning problems are also
discussed. Comment: 13 figures, 35 references
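The core combination described above, Laplacian eigenspace plus a soft assignment step, can be sketched on a tiny cluster graph. This is not the paper's Laplacian mixture model: the graph, the single Fiedler eigenvector, and the logistic squash standing in for the mixture step are all simplifying assumptions chosen to keep the example short.

```python
import numpy as np

# Two 4-node cliques joined by one weak edge: a minimal "cluster graph"
# with two overlapping regions of influence.
A = np.zeros((8, 8))
for block in (range(0, 4), range(4, 8)):
    for i in block:
        for j in block:
            if i != j:
                A[i, j] = 1.0
A[3, 4] = A[4, 3] = 0.1  # weak bridge between the two regions

L = np.diag(A.sum(axis=1)) - A           # combinatorial graph Laplacian
vals, vecs = np.linalg.eigh(L)
fiedler = vecs[:, 1]                     # eigenvector of 2nd-smallest eigenvalue

# Fuzzy memberships from the spectral coordinate: a logistic squash here
# stands in for the finite-mixture step the abstract describes.
member = 1.0 / (1.0 + np.exp(-20.0 * fiedler))
print(member.round(2))  # near-constant within each clique, opposite across them
```

The `member` vector is a one-dimensional fuzzy domain decomposition: nodes deep inside a clique get memberships near 0 or 1, while nodes on the bridge would drift toward 0.5 as the coupling strengthens.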
First-Order Decomposition Trees
Lifting attempts to speed up probabilistic inference by exploiting symmetries
in the model. Exact lifted inference methods, like their propositional
counterparts, work by recursively decomposing the model and the problem. In the
propositional case, there exist formal structures, such as decomposition trees
(dtrees), that represent such a decomposition and allow us to determine the
complexity of inference a priori. However, there is currently no equivalent
structure nor analogous complexity results for lifted inference. In this paper,
we introduce FO-dtrees, which upgrade propositional dtrees to the first-order
level. We show how these trees can characterize a lifted inference solution for
a probabilistic logical model (in terms of a sequence of lifted operations),
and provide a theoretical analysis of the complexity of lifted inference in terms
of the novel notion of lifted width for the tree.
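The propositional dtrees that the abstract builds on admit a short worked example. The sketch below computes a simplified dtree width (max cluster size minus one) using cutset/context definitions in the style of Darwiche's dtrees; it covers only the propositional case, and the exact definitions (in particular the leaf cluster) are simplified assumptions, with FO-dtrees and lifted width well beyond its scope.

```python
# A dtree is either a leaf (a frozenset of one factor's variables) or a
# pair (left, right) of sub-dtrees.

def variables(t):
    """All variables mentioned in the subtree t."""
    if isinstance(t, frozenset):
        return t
    left, right = t
    return variables(left) | variables(right)

def width(t, acutset=frozenset()):
    """Simplified dtree width: max cluster size minus one, where
    cutset(T)  = vars(left) & vars(right), minus ancestor cutsets,
    context(T) = vars(T) & (union of ancestor cutsets),
    cluster(T) = cutset | context (a leaf's cluster is its own variables)."""
    vs = variables(t)
    if isinstance(t, frozenset):
        return len(vs) - 1
    left, right = t
    cutset = (variables(left) & variables(right)) - acutset
    context = vs & acutset
    below = acutset | cutset
    return max(len(cutset | context) - 1,
               width(left, below), width(right, below))

# Factors of a chain model A - B - C - D, decomposed down the middle.
f = lambda *xs: frozenset(xs)
dtree = ((f("A"), f("A", "B")), (f("B", "C"), f("C", "D")))
print(width(dtree))  # a chain decomposes with width 1
```

This is the "determine the complexity of inference a priori" property: the width is read off the tree before any inference is run, and the FO-dtrees of the paper lift the same idea to first-order models.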
The Parallel Complexity of Growth Models
This paper investigates the parallel complexity of several non-equilibrium
growth models. Invasion percolation, Eden growth, ballistic deposition and
solid-on-solid growth are all seemingly highly sequential processes that yield
self-similar or self-affine random clusters. Nonetheless, we present fast
parallel randomized algorithms for generating these clusters. The running times
of the algorithms scale as O(log^2 N), where N is the system size, and the
number of processors required scales as a polynomial in N. The algorithms are
based on fast parallel procedures for finding minimum weight paths; they
illuminate the close connection between growth models and self-avoiding paths
in random environments. In addition to their potential practical value, our
algorithms serve to classify these growth models as less complex than other
growth models, such as diffusion-limited aggregation, for which fast parallel
algorithms probably do not exist. Comment: 20 pages, latex, submitted to J. Stat. Phys., UNH-TR94-0
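The abstract's connection between growth models and minimum-weight paths in random environments can be illustrated with the underlying primitive. The sketch below is a plain sequential Dijkstra search for the minimum-weight left-to-right path across a grid of i.i.d. random weights; the grid size, weight distribution, and boundary choice are illustrative assumptions, and the paper's contribution, computing such paths with fast parallel procedures, is not reproduced here.

```python
import heapq
import random

random.seed(1)
N = 20
# i.i.d. random site weights: the "random environment" of the growth models.
w = [[random.random() for _ in range(N)] for _ in range(N)]

def min_weight_path(w):
    """Sequential Dijkstra for the minimum-weight path from the left edge
    to the right edge of the grid, with 4-neighbour moves."""
    n = len(w)
    dist = [[float("inf")] * n for _ in range(n)]
    heap = [(w[r][0], r, 0) for r in range(n)]  # may start anywhere on column 0
    for d, r, c in heap:
        dist[r][c] = d
    heapq.heapify(heap)
    while heap:
        d, r, c = heapq.heappop(heap)
        if d > dist[r][c]:
            continue  # stale queue entry
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < n and 0 <= nc < n and d + w[nr][nc] < dist[nr][nc]:
                dist[nr][nc] = d + w[nr][nc]
                heapq.heappush(heap, (dist[nr][nc], nr, nc))
    return min(dist[r][n - 1] for r in range(n))

print(round(min_weight_path(w), 3))
```

In invasion percolation, for instance, the cluster invades sites in order of increasing weight, so the set of invaded sites is governed by exactly these minimal paths, which is why a fast path-finding procedure yields a fast cluster-generation algorithm.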