Learning mixtures of separated nonspherical Gaussians
Mixtures of Gaussian (or normal) distributions arise in a variety of
application areas. Many heuristics have been proposed for the task of finding
the component Gaussians given samples from the mixture, such as the EM
algorithm, a local-search heuristic from Dempster, Laird and Rubin [J. Roy.
Statist. Soc. Ser. B 39 (1977) 1-38]. These do not provably run in polynomial
time. We present the first algorithm that provably learns the component
Gaussians in time that is polynomial in the dimension. The Gaussians may have
arbitrary shape, but they must satisfy a "separation condition" which places
a lower bound on the distance between the centers of any two component
Gaussians. The mathematical results at the heart of our proof are "distance
concentration" results, proved using isoperimetric inequalities, which
establish bounds on the probability distribution of the distance between a pair
of points generated according to the mixture. We also formalize the more
general problem of max-likelihood fit of a Gaussian mixture to unstructured
data.
Comment: Published at http://dx.doi.org/10.1214/105051604000000512 in the
Annals of Applied Probability (http://www.imstat.org/aap/) by the Institute
of Mathematical Statistics (http://www.imstat.org).
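As a concrete illustration of the distance-concentration phenomenon, the following is a minimal numerical sketch, not the paper's algorithm: the dimension, separation, and sample size below are arbitrary illustrative choices. In high dimension, pairwise distances between samples from a well-separated mixture concentrate around component-dependent values.

```python
import numpy as np

# Hedged sketch: observe distance concentration empirically for a
# two-component, well-separated Gaussian mixture in high dimension.
# All constants (d, n, separation) are illustrative assumptions.
rng = np.random.default_rng(0)
d, n = 200, 400
centers = np.zeros((2, d))
centers[1, 0] = 10.0                               # separate the two centers
labels = rng.integers(0, 2, size=n)
X = centers[labels] + rng.normal(size=(n, d))      # unit-covariance components

# Pairwise distances via ||x - y||^2 = ||x||^2 + ||y||^2 - 2<x, y>
sq = (X ** 2).sum(axis=1)
dist = np.sqrt(np.maximum(sq[:, None] + sq[None, :] - 2.0 * X @ X.T, 0.0))

same = labels[:, None] == labels[None, :]
off_diag = ~np.eye(n, dtype=bool)
print("mean within-component distance:", dist[same & off_diag].mean())
print("mean across-component distance:", dist[~same].mean())
```

Both distance populations concentrate tightly around their means (roughly sqrt(2d) within a component and sqrt(2d + separation^2) across components), which is the kind of gap such separation-based algorithms exploit.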
A Probabilistic Analysis of EM for Mixtures of Separated, Spherical Gaussians
We show that, given data from a mixture of k well-separated spherical Gaussians in R^d, a simple two-round variant of EM will, with high probability, learn the parameters of the Gaussians to near-optimal precision, if the dimension is high (d >> ln k). We relate this to previous theoretical and empirical work on the EM algorithm.
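A two-round EM scheme admits a compact implementation. The sketch below is a hedged illustration, not the paper's exact procedure: it assumes unit-variance spherical components and omits the pruning of an initial overestimate of k that such variants typically perform.

```python
import numpy as np

def two_round_spherical_em(X, k, rng):
    """Run exactly two E/M rounds for k spherical, unit-variance Gaussians.
    Hedged sketch: initialization and pruning details of the analyzed
    variant are simplified away."""
    n, d = X.shape
    mu = X[rng.choice(n, size=k, replace=False)]    # init centers from data
    w = np.full(k, 1.0 / k)                         # uniform mixing weights
    for _ in range(2):                              # exactly two rounds
        # E-step: responsibilities under unit-covariance Gaussians
        logp = -0.5 * ((X[:, None, :] - mu[None, :, :]) ** 2).sum(-1)
        logp += np.log(w)
        logp -= logp.max(axis=1, keepdims=True)     # stabilize softmax
        r = np.exp(logp)
        r /= r.sum(axis=1, keepdims=True)
        # M-step: re-estimate mixing weights and centers
        w = r.mean(axis=0)
        mu = (r.T @ X) / r.sum(axis=0)[:, None]
    return w, mu
```

In the regime the abstract describes (d >> ln k, well-separated centers), the intuition is that the first round already places each estimated center near a true mean, and the second round refines it to near-optimal precision.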
Training Gaussian Mixture Models at Scale via Coresets
How can we train a statistical mixture model on a massive data set? In this
work we show how to construct coresets for mixtures of Gaussians. A coreset is
a weighted subset of the data, which guarantees that models fitting the coreset
also provide a good fit for the original data set. We show that, perhaps
surprisingly, Gaussian mixtures admit coresets of size polynomial in dimension
and the number of mixture components, while being independent of the data set
size. Hence, one can harness computationally intensive algorithms to compute a
good approximation on a significantly smaller data set. More importantly, such
coresets can be efficiently constructed both in distributed and streaming
settings and do not impose restrictions on the data generating process. Our
results rely on a novel reduction of statistical estimation to problems in
computational geometry and new combinatorial complexity results for mixtures of
Gaussians. Empirical evaluation on several real-world datasets suggests that
our coreset-based approach enables a significant reduction in training time with
negligible approximation error.
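The following is a hedged sketch of the importance-sampling flavor of coreset construction. The paper's construction relies on sensitivity bounds tailored to Gaussian mixtures; here a crude distance-based proxy stands in for the sensitivities, and the function name and constants are invented for illustration.

```python
import numpy as np

def coreset(X, k, m, rng):
    """Sample m weighted points so that weighted sums approximate
    sums over all of X. Hedged sketch: the sampling probabilities
    below are a simple proxy, not the paper's sensitivity bounds."""
    n = X.shape[0]
    centers = X[rng.choice(n, size=k, replace=False)]   # crude center set
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1).min(axis=1)
    s = d2 + d2.mean()                  # proxy sensitivity, strictly positive
    p = s / s.sum()                     # sampling distribution over points
    idx = rng.choice(n, size=m, replace=True, p=p)
    weights = 1.0 / (m * p[idx])        # reweight so estimates stay unbiased
    return X[idx], weights
```

A computationally intensive estimator (e.g., a weighted variant of EM) can then be run on the m weighted points instead of the full data set, in batch, distributed, or streaming settings.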
Learning Mixtures of Distributions over Large Discrete Domains
We discuss recent results giving algorithms for learning mixtures of unstructured distributions.
Learning Arbitrary Statistical Mixtures of Discrete Distributions
We study the problem of learning from unlabeled samples very general
statistical mixture models on large finite sets. Specifically, the model to be
learned, ϑ, is a probability distribution over probability
distributions p, where each such p is a probability distribution over
[n] = {1, 2, ..., n}. When we sample from ϑ, we do not observe p
directly, but only indirectly and in very noisy fashion, by sampling from [n]
repeatedly, independently K times from the distribution p. The problem is
to infer ϑ to high accuracy in transportation (earthmover) distance.
We give the first efficient algorithms for learning this mixture model
without making any restricting assumptions on the structure of the distribution
ϑ. We bound the quality of the solution as a function of the size of
the samples K and the number of samples used. Our model and results have
applications to a variety of unsupervised learning scenarios, including
learning topic models and collaborative filtering.
Comment: 23 pages. Preliminary version in the Proceedings of the 47th ACM
Symposium on the Theory of Computing (STOC 2015).
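To make the sampling model concrete, here is a hedged sketch of the generative process described above. The names ϑ, K, and n follow the abstract's notation; the Dirichlet-distributed components and mixing weights are arbitrary illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50                                   # size of the finite domain [n]
# Illustrative mixture ϑ supported on three distributions over [n]
components = [rng.dirichlet(np.ones(n)) for _ in range(3)]
mix_weights = np.array([0.5, 0.3, 0.2])

def observe(K):
    """One observation: draw a latent distribution p from ϑ, then
    return K i.i.d. symbols from p. The learner never sees p itself."""
    p = components[rng.choice(len(components), p=mix_weights)]
    return rng.choice(n, size=K, p=p)

sample = observe(K=10)   # e.g., one "document" of 10 noisy symbols
print(sample)
```

The learning task is to recover ϑ, in earthmover distance, from many such K-symbol observations; in the topic-model reading, each observation is a short document and each p is a topic-induced word distribution.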