Anytime Hierarchical Clustering
We propose a new anytime hierarchical clustering method that iteratively
transforms an arbitrary initial hierarchy on the configuration of measurements
along a sequence of trees which, we prove, must for a fixed data set terminate
in a chain of nested partitions satisfying a natural homogeneity requirement.
Each recursive step re-edits the tree so as to improve a local measure of
cluster homogeneity that is compatible with a number of commonly used (e.g.,
single, average, complete) linkage functions. As an alternative to the standard
batch algorithms, we present numerical evidence to suggest that appropriate
adaptations of this method can yield decentralized, scalable algorithms
suitable for distributed/parallel computation of clustering hierarchies and
online tracking of clustering trees applicable to large, dynamically changing
databases and anomaly detection. Comment: 13 pages, 6 figures, 5 tables, in preparation for submission to a conference
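The local re-editing idea above can be sketched in code. This is a toy reconstruction, not the authors' algorithm: the specific edit moves, the complete-linkage diameter cost, and all function names are illustrative assumptions. A binary hierarchy is stored as nested tuples, and each pass applies the best local edit that reduces a sum-of-diameters homogeneity cost; the tree is a valid hierarchy after every pass, which is what makes the loop anytime.

```python
from itertools import combinations

def leaves(t):
    return [t] if not isinstance(t, tuple) else leaves(t[0]) + leaves(t[1])

def cost(t, d):
    # Sum of cluster diameters over all internal nodes: a complete-linkage
    # style homogeneity measure (smaller = more homogeneous clusters).
    if not isinstance(t, tuple):
        return 0.0
    pts = leaves(t)
    diam = max(d(a, b) for a, b in combinations(pts, 2))
    return diam + cost(t[0], d) + cost(t[1], d)

def replace_subtree(t, old, new):
    # Rebuild t with the subtree `old` (matched by identity) swapped for `new`.
    if t is old:
        return new
    if not isinstance(t, tuple):
        return t
    return (replace_subtree(t[0], old, new), replace_subtree(t[1], old, new))

def candidates(node):
    # Local re-edits at one internal node: move a grandchild across the
    # node, or exchange grandchildren between its two subtrees.
    a, b = node
    out = []
    if isinstance(a, tuple):
        out += [((a[0], b), a[1]), ((a[1], b), a[0])]
    if isinstance(b, tuple):
        out += [((a, b[0]), b[1]), ((a, b[1]), b[0])]
    if isinstance(a, tuple) and isinstance(b, tuple):
        out += [((a[0], b[0]), (a[1], b[1])), ((a[0], b[1]), (a[1], b[0]))]
    return out

def internal_nodes(t):
    if not isinstance(t, tuple):
        return []
    return [t] + internal_nodes(t[0]) + internal_nodes(t[1])

def improve(t, d, budget=100):
    """Anytime loop: each pass applies the best cost-reducing local edit.
    The current tree is always a valid hierarchy, so we may stop anytime."""
    for _ in range(budget):
        cur = cost(t, d)
        best, best_c = t, cur
        for node in internal_nodes(t):
            for cand in candidates(node):
                t2 = replace_subtree(t, node, cand)
                c2 = cost(t2, d)
                if c2 < best_c:
                    best, best_c = t2, c2
        if best_c >= cur:
            break  # local optimum: no local edit improves homogeneity
        t = best
    return t
```

Starting from the deliberately bad hierarchy `((1, 10), (2, 11))` over 1-D points, one grandchild-exchange pass already reaches `((1, 2), (10, 11))`, which groups the close points together.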
Belief Hierarchical Clustering
In the data mining field many clustering methods have been proposed, yet
standard versions do not take into account uncertain databases. This paper
deals with a new approach to cluster uncertain data by using a hierarchical
clustering defined within the belief function framework. The main objective of
the belief hierarchical clustering is to allow an object to belong to one or
several clusters. To each belonging, a degree of belief is associated, and
clusters are combined based on the pignistic properties. Experiments with real
uncertain data show that our proposed method can be considered a promising tool.
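The pignistic step mentioned above converts a belief mass function over sets of clusters into ordinary probabilities on single clusters. A minimal sketch of that standard transform (the mass values below are made up for illustration; this is not the paper's full clustering procedure) is BetP(x) = sum over focal sets A containing x of m(A)/|A|:

```python
def pignistic(mass):
    """Pignistic transform: spread each focal set's mass m(A) evenly over
    its elements. Assumes no mass on the empty set (no conflict)."""
    betp = {}
    for focal_set, m in mass.items():
        for x in focal_set:
            betp[x] = betp.get(x, 0.0) + m / len(focal_set)
    return betp
```

For example, an object with mass 0.5 committed to cluster a alone and 0.5 to the set {a, b} ends up with pignistic probabilities 0.75 for a and 0.25 for b, so it belongs to both clusters with different degrees.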
Isotropic Dynamic Hierarchical Clustering
We face a need of discovering a pattern in locations of a great number of
points in a high-dimensional space. Goal is to group the close points together.
We are interested in a hierarchical structure, like a B-tree. B-Trees are
hierarchical, balanced, and they can be constructed dynamically. B-Tree
approach allows to determine the structure without any supervised learning or a
priori knowlwdge. The space is Euclidean and isotropic. Unfortunately, there
are no B-Tree implementations processing indices in a symmetrical and
isotropical way. Some implementations are based on constructing compound
asymmetrical indices from point coordinates; and the others split the nodes
along the coordinate hyper-planes. We need to process tens of millions of
points in a thousand-dimensional space. The application has to be scalable.
Ideally, a cluster should be an ellipsoid, but it would require to store O(n2)
ellipse axes. So, we are using multi-dimensional balls defined by the centers
and radii. Calculation of statistical values like the mean and the average
deviation, can be done in an incremental way. While adding a point to a tree,
the statistical values for nodes recalculated in O(1) time. We support both,
brute force O(2n) and greedy O(n2) split algorithms. Statistical and aggregated
node information also allows to manipulate (to search, to delete) aggregated
sets of closely located points. Hierarchical information retrieval. When
searching, the user is provided with the highest appropriate nodes in the tree
hierarchy, with the most important clusters emerging in the hierarchy
automatically. Then, if interested, the user may navigate down the tree to more
specific points. The system is implemented as a library of Java classes
representing Points, Sets of points with aggregated statistical information,
B-tree, and Nodes with a support of serialization and storage in a MySQL
database.Comment: 6 pages with 3 example
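The incremental statistics claim above is the key to the O(1)-per-insert updates. A minimal sketch of one way to do it (the `BallNode` class and its radius definition are illustrative assumptions, not the paper's Java implementation) uses Welford's online update for the centroid and the sum of squared deviations:

```python
class BallNode:
    """A cluster 'ball' with incrementally maintained statistics: adding a
    point updates the centroid and spread without revisiting old points."""
    def __init__(self, dim):
        self.n = 0
        self.mean = [0.0] * dim
        self.m2 = 0.0  # total squared distance to the running mean (Welford)

    def add(self, p):
        self.n += 1
        delta_old = [x - m for x, m in zip(p, self.mean)]
        self.mean = [m + d / self.n for m, d in zip(self.mean, delta_old)]
        delta_new = [x - m for x, m in zip(p, self.mean)]
        # Multivariate Welford step: accumulate (x - old_mean).(x - new_mean).
        self.m2 += sum(a * b for a, b in zip(delta_old, delta_new))

    def radius(self):
        # RMS distance of the points to the centroid: one choice of radius.
        return (self.m2 / self.n) ** 0.5 if self.n else 0.0
```

Adding the 2-D points (0, 0) and (2, 0) yields centroid (1, 0) and radius 1, and each further insert costs O(d) arithmetic regardless of how many points the ball already holds.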
Bias and Hierarchical Clustering
It is now well established that galaxies are biased tracers of the
distribution of matter, although it is still not known what form this bias
takes. In local bias models the propensity for a galaxy to form at a point
depends only on the overall density of matter at that point. Hierarchical
scaling arguments allow one to build a fully-specified model of the underlying
distribution of matter and to explore the effects of local bias in the regime
of strong clustering. Using a generating-function method developed by
Bernardeau & Schaeffer (1992), we show that hierarchical models lead one
directly to the conclusion that a local bias does not alter the shape of the
galaxy correlation function relative to the matter correlation function on
large scales. This provides an elegant extension of a result first obtained by
Coles (1993) for Gaussian underlying fields and confirms the conclusions of
Scherrer & Weinberg (1998) obtained using a different approach. We also argue
that particularly dense regions in a hierarchical density field display a form
of bias that is different from that obtained by selecting such peaks in
Gaussian fields: they are themselves hierarchically distributed with scaling
parameters. This kind of bias is also factorizable, thus in
principle furnishing a simple test of this class of models. Comment: LaTeX, accepted for publication in ApJL; moderate revision
Methods of Hierarchical Clustering
We survey agglomerative hierarchical clustering algorithms and discuss
efficient implementations that are available in R and other software
environments. We look at hierarchical self-organizing maps, and mixture models.
We review grid-based clustering, focusing on hierarchical density-based
approaches. Finally we describe a recently developed very efficient (linear
time) hierarchical clustering algorithm, which can also be viewed as a
hierarchical grid-based algorithm. Comment: 21 pages, 2 figures, 1 table, 69 references
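The single, average, and complete linkage variants this survey covers differ only in how inter-cluster distance is defined. A toy O(n^3) agglomerative sketch over 1-D points (illustrative only; production implementations in R or SciPy are far more efficient) makes the pluggable-linkage idea concrete:

```python
def agglomerate(points, linkage="single"):
    """Naive agglomerative clustering of 1-D points; returns the merge
    history as (sorted merged cluster, merge distance) pairs."""
    link = {
        "single":   min,                            # closest pair of members
        "complete": max,                            # farthest pair of members
        "average":  lambda ds: sum(ds) / len(ds),   # mean pairwise distance
    }[linkage]
    clusters = [[p] for p in points]
    merges = []
    while len(clusters) > 1:
        best = None
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                ds = [abs(a - b) for a in clusters[i] for b in clusters[j]]
                d = link(ds)
                if best is None or d < best[0]:
                    best = (d, i, j)
        d, i, j = best
        merged = clusters[i] + clusters[j]
        merges.append((sorted(merged), d))
        clusters = [c for k, c in enumerate(clusters) if k not in (i, j)]
        clusters.append(merged)
    return merges
```

On the points 0, 1, 5 with single linkage, 0 and 1 merge first at distance 1, then 5 joins at distance 4 (its gap to the nearest member, 1); complete linkage would instead record that last merge at distance 5.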
Hierarchical growing cell structures: TreeGCS
We propose a hierarchical clustering algorithm (TreeGCS) based upon the Growing Cell Structure (GCS) neural network of Fritzke. Our algorithm refines and builds upon the GCS base, overcoming an inconsistency in the original GCS algorithm, where the network topology is susceptible to the ordering of the input vectors. Our algorithm is unsupervised, flexible, and dynamic, and we impose no additional parameters on the underlying GCS algorithm. Our ultimate aim is a hierarchical clustering neural network that is both consistent and stable and that identifies the innate hierarchical structure present in vector-based data. We demonstrate improved stability of the GCS foundation and evaluate our algorithm against the hierarchy generated by an ascendant hierarchical clustering dendrogram. Our approach emulates the hierarchical clustering of the dendrogram. It demonstrates the importance of the parameter settings for GCS and how they affect the stability of the clustering.
Hierarchical Clustering Using Level Sets
Over the past several decades, clustering algorithms have earned their place as a go-to solution for database mining. This paper introduces a new concept used to develop a new recursive version of DBSCAN, called Level-Set Clustering (LSC), that can successfully perform hierarchical clustering. A level set is the subset of points of a data set whose densities are greater than some threshold t. By graphing the size of each level set against its respective t, indents are produced in the line graph which correspond to clusters in the data set, as the points in a cluster have very similar densities. This new algorithm produces its clustering result with the same O(n log n) time complexity as DBSCAN and OPTICS, while catching clusters the others missed.
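The size-versus-threshold curve described above is easy to illustrate. In this toy sketch (the neighborhood-count density estimate, the `eps` radius, and 1-D points are illustrative assumptions, not the paper's method), a flat run in the curve signals a group of points sharing similar density, i.e. a cluster:

```python
def level_set_sizes(points, eps):
    """For each integer threshold t, report |{p : density(p) >= t}|, where
    density(p) is the number of points within eps of p (p included)."""
    dens = [sum(1 for q in points if abs(p - q) <= eps) for p in points]
    return [(t, sum(1 for d in dens if d >= t))
            for t in range(1, max(dens) + 1)]
```

For the points 0, 0.1, 0.2 and an outlier at 10 with eps = 0.5, the three clustered points each have density 3 and the outlier has density 1, so the level-set size drops from 4 to 3 at t = 2 and then stays flat, exposing the dense cluster.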