Data-Driven Clustering via Parameterized Lloyd's Families
Clustering points in metric spaces is a long-studied area of
research. Clustering has seen a multitude of work both theoretically, in
understanding the approximation guarantees possible for many objective
functions such as k-median and k-means clustering, and experimentally, in
finding the fastest algorithms and seeding procedures for Lloyd's algorithm.
The performance of a given clustering algorithm depends on the specific
application at hand, and this may not be known up front. For example, a
"typical instance" may vary depending on the application, and different
clustering heuristics perform differently depending on the instance.
In this paper, we define an infinite family of algorithms generalizing
Lloyd's algorithm, with one parameter controlling the initialization procedure,
and another parameter controlling the local search procedure. This family of
algorithms includes the celebrated k-means++ algorithm, as well as the classic
farthest-first traversal algorithm. We design efficient learning algorithms
which receive samples from an application-specific distribution over clustering
instances and learn a near-optimal clustering algorithm from the class. We show
the best parameters vary significantly across datasets such as MNIST, CIFAR,
and mixtures of Gaussians. Our learned algorithms never perform worse than
k-means++, and on some datasets we see significant improvements.
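One way the initialization parameter described above can interpolate between the named algorithms is via power-weighted seeding: each new center is sampled with probability proportional to its distance to the nearest chosen center raised to a power alpha. This is a minimal sketch under that assumption (the function name `alpha_seed` and its exact signature are illustrative, not from the paper); alpha = 0 gives uniform random seeding, alpha = 2 recovers k-means++ (D^2 sampling), and alpha -> infinity recovers farthest-first traversal.

```python
import math
import random

def alpha_seed(points, k, alpha, rng=None):
    """Pick k seed centers from `points` (tuples of floats).

    Each new center is drawn with probability proportional to
    (distance to nearest already-chosen center) ** alpha.
      alpha = 0        -> uniform random seeding
      alpha = 2        -> k-means++ (D^2 sampling)
      alpha = math.inf -> farthest-first traversal
    """
    rng = rng or random.Random(0)
    centers = [rng.choice(points)]
    while len(centers) < k:
        # Distance from each point to its nearest chosen center.
        d = [min(math.dist(p, c) for c in centers) for p in points]
        if math.isinf(alpha):
            # Farthest-first: deterministically take the farthest point.
            centers.append(points[max(range(len(points)), key=d.__getitem__)])
        else:
            weights = [di ** alpha for di in d]
            total = sum(weights)
            if total == 0:  # all points coincide with a center
                centers.append(rng.choice(points))
            else:
                centers.append(rng.choices(points, weights=weights)[0])
    return centers
```

After seeding, the centers would be handed to a Lloyd's-style local search; the paper's second parameter, which controls that local search step, is not modeled in this sketch.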