
    Coresets for Gaussian Mixture Models of Any Shape

    An $\varepsilon$-coreset for a given set $D$ of $n$ points is usually a small weighted set, such that querying the coreset \emph{provably} yields a $(1+\varepsilon)$-factor approximation to the original (full) dataset, for a given family of queries. Using existing techniques, coresets can be maintained for streaming, dynamic (insertions/deletions), and distributed data in parallel, e.g. on a network, GPU or cloud. We suggest the first coresets that approximate the negative log-likelihood for $k$-Gaussian Mixture Models (GMM) of arbitrary shapes (ratio between the eigenvalues of their covariance matrices). For example, for any input set $D$ whose coordinates are integers in $[-n^{100},n^{100}]$ and any fixed $k,d\geq 1$, the coreset size is $(\log n)^{O(1)}/\varepsilon^2$, and it can be computed in time near-linear in $n$, with high probability. The optimal GMM may then be approximated quickly by learning on the small coreset. Previous results [NIPS'11, JMLR'18] suggested such small coresets for the case of semi-spherical unit Gaussians, i.e., where their corresponding eigenvalues are constants between $\frac{1}{2\pi}$ and $2\pi$. Our main technique is a reduction between coresets for $k$-GMMs and projective clustering problems. We implemented our algorithms, provide open code, and report experimental results. Since our coresets are generic, with no special dependency on GMMs, we hope that they will be useful for many other functions.
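
    As a minimal sketch of the guarantee described above (the notation $L_D$, $C$, $w$, $\theta$ is illustrative and not fixed by the abstract), a weighted coreset $(C,w)$ for the GMM negative log-likelihood can be stated as follows:

    % Sketch only: one common way to write the relative-error coreset guarantee
    % for the negative log-likelihood of a k-GMM; symbols here are assumptions.
    \[
      L_D(\theta) = -\sum_{x \in D} \log \Pr[x \mid \theta],
      \qquad
      L_C(\theta) = -\sum_{x \in C} w(x) \log \Pr[x \mid \theta],
    \]
    \[
      \text{and for every $k$-GMM parameter vector } \theta:\quad
      \bigl| L_D(\theta) - L_C(\theta) \bigr| \;\le\; \varepsilon \, L_D(\theta).
    \]

    Under such a guarantee, optimizing $\theta$ over the small set $C$ yields a $(1+\varepsilon)$-factor approximation of the optimum over the full set $D$, which is why learning on the coreset suffices.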