
    Quantized Compressive K-Means

    The recent framework of compressive statistical learning aims at designing tractable learning algorithms that use only a heavily compressed representation, or sketch, of massive datasets. Compressive K-Means (CKM) is such a method: it estimates the centroids of data clusters from pooled, non-linear, random signatures of the learning examples. While this approach significantly reduces computational time on very large datasets, its digital implementation wastes acquisition resources because the learning examples are compressed only after the sensing stage. The present work generalizes the sketching procedure initially defined in Compressive K-Means to a large class of periodic nonlinearities, including hardware-friendly implementations that compressively acquire entire datasets. This idea is exemplified in a Quantized Compressive K-Means procedure, a variant of CKM that leverages 1-bit universal quantization (i.e., retaining the least significant bit of a standard uniform quantizer) as the periodic sketch nonlinearity. Trading for this resource-efficient signature (standard in most acquisition schemes) has almost no impact on the clustering performance, as illustrated by numerical experiments.
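
    To make the sketching step concrete, here is a minimal NumPy sketch, illustrative only and not the authors' reference implementation: the function names, the frequency scale, and the quantization step Delta are assumptions. It contrasts the complex-exponential nonlinearity used in CKM with a 1-bit universal-quantization variant that keeps only the least significant bit of a uniform quantizer.

    import numpy as np

    def sketch(X, W, xi, nonlinearity):
        # Pool a periodic nonlinearity of dithered random projections over
        # the whole dataset: X is (n, d), W is (d, m), xi is (m,).
        Z = X @ W + xi                       # (n, m) random projections
        return nonlinearity(Z).mean(axis=0)  # m-dimensional dataset sketch

    # CKM-style nonlinearity: complex exponential (random Fourier features).
    ckm_nl = lambda Z: np.exp(1j * Z)

    # 1-bit universal quantization: the least significant bit of a uniform
    # quantizer with step Delta, i.e. a +/-1 square wave of period 2*Delta.
    Delta = 1.0
    qckm_nl = lambda Z: np.where(np.floor(Z / Delta) % 2 == 0, 1.0, -1.0)

    rng = np.random.default_rng(0)
    n, d, m = 10_000, 2, 256
    X = np.vstack([rng.normal(c, 0.1, size=(n // 2, d)) for c in (-1.0, 1.0)])
    W = rng.normal(scale=3.0, size=(d, m))  # random frequencies (scale is a tuning choice)
    xi = rng.uniform(0, 2 * Delta, size=m)  # random dither, one value per measurement

    z_ckm = sketch(X, W, xi, ckm_nl)   # complex-valued CKM sketch
    z_q   = sketch(X, W, xi, qckm_nl)  # binary-nonlinearity (quantized) sketch
    print(z_ckm.shape, z_q.shape)      # both are m-dimensional summaries of the dataset

    Only the periodic nonlinearity applied at acquisition changes between the two sketches, which is what the abstract points to as enabling compression during the sensing stage rather than after it.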

    On Learning Mixtures of Well-Separated Gaussians

    We consider the problem of efficiently learning mixtures of a large number of spherical Gaussians, when the components of the mixture are well separated. In the most basic form of this problem, we are given samples from a uniform mixture of $k$ standard spherical Gaussians, and the goal is to estimate the means up to accuracy $\delta$ using $\mathrm{poly}(k, d, 1/\delta)$ samples. In this work, we study the following question: what is the minimum separation needed between the means for solving this task? The best known algorithm due to Vempala and Wang [JCSS 2004] requires a separation of roughly $\min\{k, d\}^{1/4}$. On the other hand, Moitra and Valiant [FOCS 2010] showed that with separation $o(1)$, exponentially many samples are required. We address the significant gap between these two bounds by showing the following results.
    1. We show that with separation $o(\sqrt{\log k})$, super-polynomially many samples are required. In fact, this holds even when the $k$ means of the Gaussians are picked at random in $d = O(\log k)$ dimensions.
    2. We show that with separation $\Omega(\sqrt{\log k})$, $\mathrm{poly}(k, d, 1/\delta)$ samples suffice. Note that the bound on the separation is independent of $\delta$. This result is based on a new and efficient "accuracy boosting" algorithm that takes as input coarse estimates of the true means and, in time $\mathrm{poly}(k, d, 1/\delta)$, outputs estimates of the means up to arbitrary accuracy $\delta$, assuming the separation between the means is $\Omega(\min\{\sqrt{\log k}, \sqrt{d}\})$ (independently of $\delta$).
    We also present a computationally efficient algorithm in $d = O(1)$ dimensions with only $\Omega(\sqrt{d})$ separation. These results together essentially characterize the optimal order of separation between components that is needed to learn a mixture of $k$ spherical Gaussians with polynomial samples.
    Comment: Appeared in FOCS 2017. 55 pages, 1 figure.
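
    As a concrete illustration of the regime studied above (not the paper's algorithm; the constants, the Lloyd-style refinement, and all variable names are assumptions made for this sketch), the snippet below draws samples from a uniform mixture of $k$ standard spherical Gaussians with means roughly $\sqrt{\log k}$ apart and refines coarse mean estimates by nearest-mean averaging, the same flavor of local refinement as the "accuracy boosting" step described in the abstract.

    import numpy as np

    rng = np.random.default_rng(1)
    k, d, n = 8, 32, 20_000

    # Means at distance ~ C * sqrt(log k) from the origin; random directions in
    # moderate dimension keep pairwise separations of the same order.
    sep = 2.0 * np.sqrt(np.log(k))
    means = rng.normal(size=(k, d))
    means = sep * means / np.linalg.norm(means, axis=1, keepdims=True)

    # Uniform mixture of k standard (identity-covariance) spherical Gaussians.
    labels = rng.integers(k, size=n)
    X = means[labels] + rng.normal(size=(n, d))

    # Coarse estimates (true means plus noise) refined by Lloyd-style steps,
    # a simple stand-in for the accuracy-boosting idea.
    est = means + 0.5 * rng.normal(size=(k, d))
    for _ in range(10):
        # Assign each sample to its nearest current estimate ...
        assign = np.argmin(((X[:, None, :] - est[None]) ** 2).sum(-1), axis=1)
        # ... then re-estimate each mean as the average of its assigned samples.
        for j in range(k):
            pts = X[assign == j]
            if len(pts):
                est[j] = pts.mean(axis=0)

    err = np.linalg.norm(est - means, axis=1).max()
    print(f"max mean-estimation error after refinement: {err:.3f}")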