Consistency of Lloyd's Algorithm Under Perturbations
In the context of unsupervised learning, Lloyd's algorithm is one of the most
widely used clustering algorithms. It has inspired a plethora of work
investigating the correctness of the algorithm under various settings with
ground truth clusters. In particular, in 2016, Lu and Zhou showed that the
mis-clustering rate of Lloyd's algorithm on independent samples from a
sub-Gaussian mixture is exponentially bounded after O(log n) iterations,
assuming proper initialization of the algorithm. However, in many applications,
the true samples are unobserved and need to be learned from the data via
pre-processing pipelines such as spectral methods on appropriate data matrices.
We show that the mis-clustering rate of Lloyd's algorithm on perturbed samples
from a sub-Gaussian mixture is also exponentially bounded after O(log n)
iterations, under the assumptions of proper initialization and that the
perturbation is small relative to the sub-Gaussian noise. In canonical settings
with ground truth clusters, we derive bounds for algorithms such as
k-means++ to find good initializations, thus leading to correct
clustering via the main result. We show the implications of the results for
pipelines, such as SigClust, that measure the statistical significance of
clusters derived from data. We use these general results to provide
theoretical guarantees on the mis-clustering rate for Lloyd's
algorithm in a host of applications, including high-dimensional time series,
multi-dimensional scaling, and community detection for sparse networks via
spectral clustering.
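The algorithm analyzed above can be sketched in a few lines. This is a minimal, generic Lloyd's iteration on a toy two-component Gaussian mixture, not the paper's analyzed pipeline; the function name `lloyd` and all parameter values are illustrative assumptions.

```python
import numpy as np

def lloyd(X, centers, n_iter=20):
    """Plain Lloyd's algorithm: alternate nearest-center assignment and
    centroid updates (a generic sketch, not the paper's exact variant)."""
    for _ in range(n_iter):
        # Assignment step: each sample joins its nearest current center.
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # Update step: each center moves to the mean of its assigned samples.
        for k in range(centers.shape[0]):
            if np.any(labels == k):
                centers[k] = X[labels == k].mean(axis=0)
    return labels, centers

# Toy sub-Gaussian (here Gaussian) mixture with well-separated components.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 0.3, size=(50, 2)),
               rng.normal(3.0, 0.3, size=(50, 2))])
init = np.array([[0.5, 0.5], [2.5, 2.5]])  # a "proper" initialization
labels, centers = lloyd(X, init.copy())
```

With this separation and initialization the two ground-truth groups are recovered exactly, illustrating the regime in which the exponential mis-clustering bounds apply.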
Detecting Communities under Differential Privacy
Complex networks usually exhibit community structure: groups of nodes
share many links with other nodes in the same group and relatively few
with the rest of the network. This feature captures valuable information about
the organization and even the evolution of the network. Over the last decade, a
great number of community detection algorithms have been proposed to deal
with increasingly complex networks. However, the problem of doing so in a
private manner is rarely considered. In this paper, we address this problem under
differential privacy, a prominent privacy concept for releasing private data.
We analyze the major challenges behind the problem and propose several schemes
to tackle them from two perspectives: input perturbation and algorithm
perturbation. We choose the Louvain method as the back-end community detector
for the input perturbation schemes and propose LouvainDP, which runs the Louvain
algorithm on a noisy super-graph. For algorithm perturbation, we design
ModDivisive using the exponential mechanism with modularity as the score. We
have thoroughly evaluated our techniques on real graphs of different sizes and
verified that they outperform the state-of-the-art …
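The two perturbation perspectives named in the abstract can be illustrated with standard differential-privacy building blocks. These are hedged, generic sketches, not the paper's LouvainDP or ModDivisive; the function names, the sensitivity values, and the toy inputs are assumptions for illustration.

```python
import numpy as np

def perturb_adjacency(A, epsilon, rng):
    """Input perturbation (generic sketch): add Laplace(1/epsilon) noise to
    each upper-triangular entry of a symmetric adjacency matrix. Changing one
    edge alters one such entry, so the sensitivity is assumed to be 1. Any
    non-private community detector can then run on the noisy graph."""
    n = A.shape[0]
    noise = np.triu(rng.laplace(scale=1.0 / epsilon, size=(n, n)), k=1)
    noise = noise + noise.T  # mirror so the noisy matrix stays symmetric
    return A + noise

def exponential_mechanism(candidates, score, epsilon, sensitivity, rng):
    """Algorithm perturbation via the exponential mechanism: sample a
    candidate with probability proportional to
    exp(epsilon * score / (2 * sensitivity))."""
    s = np.array([score(c) for c in candidates], dtype=float)
    logits = epsilon * s / (2.0 * sensitivity)
    logits -= logits.max()  # numerical stability
    p = np.exp(logits)
    p /= p.sum()
    return candidates[rng.choice(len(candidates), p=p)]

rng = np.random.default_rng(1)
A = np.zeros((4, 4))
A[0, 1] = A[1, 0] = 1.0
A_noisy = perturb_adjacency(A, epsilon=1.0, rng=rng)

# With a very large epsilon the mechanism almost always returns the
# top-scoring candidate; in a real scheme the score would be modularity.
best = exponential_mechanism(["a", "b", "c"], {"a": 0, "b": 5, "c": 1}.get,
                             epsilon=100.0, sensitivity=1.0, rng=rng)
```

The trade-off the abstract describes falls out directly: input perturbation pays the privacy cost once on the data and reuses existing detectors, while the exponential mechanism spends the privacy budget on each randomized choice the algorithm makes.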