Compressive Spectral Clustering
Spectral clustering has become a popular technique due to its high
performance in many contexts. It comprises three main steps: create a
similarity graph between N objects to cluster, compute the first k eigenvectors
of its Laplacian matrix to define a feature vector for each object, and run
k-means on these features to separate objects into k classes. Each of these
three steps becomes computationally intensive for large N and/or k. We propose
to speed up the last two steps based on recent results in the emerging field of
graph signal processing: graph filtering of random signals, and random sampling
of bandlimited graph signals. We prove that our method, with a gain in
computation time that can reach several orders of magnitude, is in fact an
approximation of spectral clustering, for which we are able to control the
error. We test the performance of our method on artificial and real-world
network data.
Comment: 12 pages, 2 figures
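The three-step pipeline this abstract describes (similarity graph, Laplacian eigenvectors, k-means) can be sketched in Python. This is a minimal illustration on assumed toy data with an assumed kernel bandwidth, not the paper's implementation:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import pairwise_distances

# Hypothetical toy data (not from the paper): two well-separated blobs.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 0.3, (50, 2)),
               rng.normal(3.0, 0.3, (50, 2))])

# Step 1: similarity graph via a Gaussian kernel (bandwidth 0.5 assumed).
W = np.exp(-pairwise_distances(X) ** 2 / (2 * 0.5 ** 2))
np.fill_diagonal(W, 0.0)

# Step 2: first k eigenvectors of the unnormalized Laplacian L = D - W,
# i.e. those attached to the k smallest eigenvalues; each object gets a
# k-dimensional feature row.
k = 2
L = np.diag(W.sum(axis=1)) - W
_, vecs = np.linalg.eigh(L)   # eigh sorts eigenvalues in ascending order
U = vecs[:, :k]

# Step 3: k-means on the spectral features separates the k classes.
labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(U)
```

Each of these steps scales poorly in N and/or k, which is exactly what motivates the compressive variants discussed here.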
Accelerated Spectral Clustering Using Graph Filtering Of Random Signals
We build upon recent advances in graph signal processing to propose a faster
spectral clustering algorithm. Indeed, classical spectral clustering is based
on the computation of the first k eigenvectors of the similarity matrix's
Laplacian, whose computation cost, even for sparse matrices, becomes
prohibitive for large datasets. We show that we can estimate the spectral
clustering distance matrix without computing these eigenvectors: by graph
filtering random signals. Also, we take advantage of the stochasticity of these
random vectors to estimate the number of clusters k. We compare our method to
classical spectral clustering on synthetic data, and show that it reaches equal
performance while being faster by a factor of at least two for large datasets.
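The graph-filtering idea can be sketched compactly: approximate the ideal low-pass filter that keeps the first k eigenvalues by a Chebyshev polynomial, and apply it to random signals using only matrix-vector products with the Laplacian. The graph, sizes, and filter degree below are illustrative assumptions; the cutoff is taken from a direct eigendecomposition here for brevity, whereas the method estimates it without one:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical two-community graph (planted partition, assumed parameters).
n, k, d = 80, 2, 60                       # nodes, clusters, random signals
blocks = np.repeat([0, 1], n // 2)
P = np.where(blocks[:, None] == blocks[None, :], 0.5, 0.05)
A = (rng.random((n, n)) < P).astype(float)
A = np.triu(A, 1); A = A + A.T            # symmetric, no self-loops

deg = A.sum(1)
Dhalf = np.diag(1.0 / np.sqrt(deg))
L = np.eye(n) - Dhalf @ A @ Dhalf         # normalized Laplacian
lmax = 2.0                                # its spectrum lies in [0, 2]

# The cutoff should sit between lambda_k and lambda_{k+1}. For this sketch
# we read it off a full eigendecomposition (the method avoids this).
lam = np.linalg.eigvalsh(L)
lam_c = (lam[k - 1] + lam[k]) / 2

# Chebyshev fit (degree 60, assumed) of the ideal low-pass filter on [0, lmax].
xs = np.linspace(0.0, lmax, 400)
cheb = np.polynomial.chebyshev.Chebyshev.fit(
    xs, (xs <= lam_c).astype(float), deg=60, domain=[0.0, lmax])
c = cheb.coef

# Apply h(L) to d random Gaussian signals via the Chebyshev recurrence:
# only products with L are needed, never the eigenvectors themselves.
R = rng.standard_normal((n, d)) / np.sqrt(d)
M = (2.0 / lmax) * L - np.eye(n)
T_prev, T_cur = R, M @ R
HR = c[0] * T_prev + c[1] * T_cur
for cj in c[2:]:
    T_prev, T_cur = T_cur, 2 * M @ T_cur - T_prev
    HR += cj * T_cur

# Rows of HR are features whose pairwise distances approximate the spectral
# clustering distances ||U_k^T (delta_i - delta_j)||, up to JL-type error.
```

Running k-means on the rows of HR then stands in for step three of classical spectral clustering.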
Consistency of spectral clustering in stochastic block models
We analyze the performance of spectral clustering for community extraction in
stochastic block models. We show that, under mild conditions, spectral
clustering applied to the adjacency matrix of the network can consistently
recover hidden communities even when the order of the maximum expected degree
is as small as log n, with n the number of nodes. This result applies to
some popular polynomial time spectral clustering algorithms and is further
extended to degree corrected stochastic block models using a spherical
k-median spectral clustering method. A key component of our analysis is a
combinatorial bound on the spectrum of binary random matrices, which is sharper
than the conventional matrix Bernstein inequality and may be of independent
interest.
Comment: Published at http://dx.doi.org/10.1214/14-AOS1274 in the Annals of Statistics (http://www.imstat.org/aos/) by the Institute of Mathematical Statistics (http://www.imstat.org)
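The adjacency-based pipeline this analysis covers can be sketched as follows. The degree-corrected SBM parameters are hypothetical, and k-means on the sphere-projected eigenvector rows stands in for the paper's spherical k-median step:

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(3)

# Hypothetical degree-corrected SBM: node weights theta_i scale the
# block-structured edge probabilities (parameters assumed for illustration).
n, k = 100, 2
z = np.repeat([0, 1], n // 2)                  # hidden communities
theta = rng.uniform(0.5, 1.5, n)
B = np.array([[0.5, 0.05], [0.05, 0.5]])
P = np.clip(theta[:, None] * theta[None, :] * B[z][:, z], 0.0, 1.0)
A = (rng.random((n, n)) < P).astype(float)
A = np.triu(A, 1); A = A + A.T                 # symmetric, no self-loops

# Spectral step: k leading eigenvectors (by magnitude) of the adjacency matrix.
vals, vecs = np.linalg.eigh(A)
U = vecs[:, np.argsort(np.abs(vals))[-k:]]

# Spherical step: normalizing the rows removes the degree heterogeneity
# induced by theta before clustering.
Uhat = U / np.maximum(np.linalg.norm(U, axis=1, keepdims=True), 1e-12)
labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(Uhat)
```

Without the row normalization, high- and low-degree nodes of the same community land at different radii in the eigenvector embedding, which is why the degree-corrected setting needs the spherical variant.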