A new hierarchical clustering algorithm to identify non-overlapping like-minded communities
A network has a non-overlapping community structure if the nodes of the
network can be partitioned into disjoint sets such that each node in a set is
densely connected to other nodes inside the set and sparsely connected to the
nodes outside it. There are many metrics to validate the efficacy of such a
structure, such as clustering coefficient, betweenness centrality, modularity,
and like-mindedness. Many methods have been proposed to optimize some of these
metrics, but none of these works well on the recently introduced metric
like-mindedness. To solve this problem, we propose a behavioral-property-based
algorithm to identify communities that optimize the like-mindedness
metric and compare its performance on this metric with other behavioral data
based methodologies as well as community detection methods that rely only on
structural data. We execute these algorithms on real-life datasets of
Filmtipset and Twitter and show that our algorithm performs better than the
existing algorithms with respect to the like-mindedness metric.
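The abstract's definition of a non-overlapping community structure (each node densely connected inside its set, sparsely connected outside) can be checked directly. Below is a minimal sketch, not the paper's algorithm: the graph, the partition, and the helper name `internal_fraction` are all made up for illustration.

```python
def internal_fraction(adj, community_of):
    """For each node, the fraction of its edges that stay inside its own community.

    A value near 1.0 means the node is densely connected inside its set and
    sparsely connected outside it, matching the definition in the abstract.
    """
    fractions = {}
    for node, neighbors in adj.items():
        if not neighbors:
            continue
        internal = sum(1 for v in neighbors if community_of[v] == community_of[node])
        fractions[node] = internal / len(neighbors)
    return fractions

# Toy graph: two triangles joined by a single bridge edge (2-3).
adj = {
    0: [1, 2], 1: [0, 2], 2: [0, 1, 3],
    3: [2, 4, 5], 4: [3, 5], 5: [3, 4],
}
part = {0: "A", 1: "A", 2: "A", 3: "B", 4: "B", 5: "B"}
fracs = internal_fraction(adj, part)
```

Here every node except the two bridge endpoints keeps all of its edges internal; nodes 2 and 3 keep 2/3 of theirs, which is the signature of a good disjoint partition.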
Fast Approximate k-Means via Cluster Closures
k-means, a simple and effective clustering algorithm, is one of the most
widely used algorithms in the multimedia and computer vision communities.
Traditional k-means is an iterative algorithm---in each iteration new cluster centers are
computed and each data point is re-assigned to its nearest center. The cluster
re-assignment step becomes prohibitively expensive when the numbers of data
points and cluster centers are large.
In this paper, we propose a novel approximate k-means algorithm to greatly
reduce the computational complexity in the assignment step. Our approach is
motivated by the observation that most active points changing their cluster
assignments at each iteration are located on or near cluster boundaries. The
idea is to efficiently identify those active points by pre-assembling the data
into groups of neighboring points using multiple random spatial partition
trees, and to use the neighborhood information to construct a closure for each
cluster, in such a way that only a small number of cluster candidates need to be
considered when assigning a data point to its nearest cluster. Using complexity
analysis, image data clustering, and applications to image retrieval, we show
that our approach outperforms state-of-the-art approximate k-means
algorithms in terms of clustering quality and efficiency.
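For context, a plain Lloyd's-style k-means sketch shows the assignment step the paper accelerates: every point is compared against every center in every iteration, an O(n·k) cost. This is not the paper's closure-based method; the toy data, the naive first-k initialization, and the function names are assumptions for illustration. The paper's contribution is to restrict each point to the few candidate clusters whose closures contain it, instead of scanning all k centers.

```python
def dist2(p, q):
    # Squared Euclidean distance between two points (tuples of floats).
    return sum((a - b) ** 2 for a, b in zip(p, q))

def kmeans(points, k, iters=10):
    # Naive initialization (first k points); real implementations seed better.
    centers = list(points[:k])
    assign = [0] * len(points)
    for _ in range(iters):
        # Assignment step: O(n * k) distance computations -- the bottleneck
        # that closure-based candidate pruning is designed to reduce.
        assign = [min(range(k), key=lambda c: dist2(p, centers[c])) for p in points]
        # Update step: each center becomes the mean of its assigned points.
        for c in range(k):
            members = [p for p, a in zip(points, assign) if a == c]
            if members:
                centers[c] = tuple(sum(x) / len(members) for x in zip(*members))
    return centers, assign

# Two well-separated toy clusters.
points = [(0, 0), (0, 1), (1, 0), (10, 10), (10, 11), (11, 10)]
centers, assign = kmeans(points, 2)
```

The sketch also makes the paper's observation concrete: after the first couple of iterations, only points near a cluster boundary can change assignment, so most of the O(n·k) scan is wasted on points whose label is already stable.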