24 research outputs found
Sample-Efficient Learning of Mixtures
We consider PAC learning of probability distributions (a.k.a. density
estimation), where we are given an i.i.d. sample generated from an unknown
target distribution, and want to output a distribution that is close to the
target in total variation distance. Let $\mathcal{F}$ be an arbitrary class of
probability distributions, and let $\mathcal{F}^k$ denote the class of
$k$-mixtures of elements of $\mathcal{F}$. Assuming the existence of a method
for learning $\mathcal{F}$ with sample complexity $m_{\mathcal{F}}(\epsilon)$,
we provide a method for learning $\mathcal{F}^k$ with sample complexity
$\widetilde{O}\!\left(k \cdot m_{\mathcal{F}}(\epsilon)/\epsilon^{2}\right)$,
where $\widetilde{O}$ hides polylogarithmic factors. Our mixture learning
algorithm has the property that, if the $\mathcal{F}$-learner is
proper/agnostic, then the $\mathcal{F}^k$-learner is proper/agnostic as well.
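As a concrete, hedged illustration of how this composition is applied below:
assuming the standard fact (not stated in this abstract) that a single Gaussian
in $\mathbb{R}^d$ is agnostically learnable with
$m_{\mathcal{F}}(\epsilon) = \widetilde{O}(d^2/\epsilon^2)$ samples, the
reduction yields
\[
m_{\mathcal{F}^k}(\epsilon)
  = \widetilde{O}\!\left(\frac{k \, m_{\mathcal{F}}(\epsilon)}{\epsilon^{2}}\right)
  = \widetilde{O}\!\left(\frac{k d^{2}}{\epsilon^{4}}\right),
\]
matching the bound for mixtures of $k$ Gaussians stated next.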
This general result enables us to improve the best known sample complexity
upper bounds for a variety of important mixture classes. First, we show that
the class of mixtures of $k$ axis-aligned Gaussians in $\mathbb{R}^d$ is
PAC-learnable in the agnostic setting with $\widetilde{O}(kd/\epsilon^4)$
samples, which is tight in $k$ and $d$ up to logarithmic factors. Second, we
show that the class of mixtures of $k$ Gaussians in $\mathbb{R}^d$ is
PAC-learnable in the agnostic setting with sample complexity
$\widetilde{O}(kd^2/\epsilon^4)$, which improves the dependence on $k$ and $d$
of the previously known bounds. Finally,
we show that the class of mixtures of $k$ log-concave distributions over
$\mathbb{R}^d$ is PAC-learnable using
$\widetilde{O}_d\!\left(k\,\epsilon^{-(d+9)/2}\right)$
samples, where $\widetilde{O}_d$ hides logarithmic factors and factors
depending only on $d$.
Comment: A bug from the previous version, which appeared in the AAAI 2018
proceedings, is fixed. 18 pages.
Hashing-Based-Estimators for Kernel Density in High Dimensions
Given a set of points $P \subseteq \mathbb{R}^{d}$ and a kernel function $k$,
the Kernel Density Estimate at a point $x \in \mathbb{R}^{d}$ is defined as
$\mathrm{KDE}_P(x) = \frac{1}{|P|} \sum_{y \in P} k(x, y)$. We study the problem
of designing a data structure that given a data set and a kernel function,
returns *approximations to the kernel density* of a query point in *sublinear
time*. We introduce a class of unbiased estimators for kernel density
implemented through locality-sensitive hashing, and give general theorems
bounding the variance of such estimators. These estimators give rise to
efficient data structures for estimating the kernel density in high dimensions
for a variety of commonly used kernels. Our work is the first to provide
data structures with theoretical guarantees that improve upon simple random
sampling in high dimensions.
Comment: A preliminary version of this paper appeared in FOCS 2017.
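A minimal, hypothetical Python sketch of an unbiased hashing-based estimator in
the spirit described above; the LSH family (SimHash via random projections),
the Gaussian kernel, and all names are illustrative assumptions rather than the
paper's construction, and this toy kernel/hash pairing does not carry the
paper's variance guarantees:

import numpy as np

def kde(P, x, kernel):
    # Exact kernel density: KDE_P(x) = (1/|P|) * sum over y in P of k(x, y).
    return float(np.mean([kernel(x, y) for y in P]))

def simhash(v, R):
    # t-bit SimHash: signs of t random projections (the rows of R).
    return tuple(bool(b) for b in (R @ v) > 0)

def simhash_collision_prob(x, y, t):
    # Exact collision probability of t-bit SimHash: (1 - angle(x, y)/pi)^t.
    c = np.clip(x @ y / (np.linalg.norm(x) * np.linalg.norm(y)), -1.0, 1.0)
    return (1.0 - np.arccos(c) / np.pi) ** t

def hbe_estimate(P, x, kernel, t=4, trials=200, seed=0):
    # Averages independent single-sample estimates
    #   Z = kernel(x, y) * |bucket(x)| / (n * p(x, y)),
    # with y drawn uniformly from the query's hash bucket (Z = 0 if empty).
    # E[Z] = KDE_P(x) whenever p(x, y) > 0 for all y with kernel(x, y) > 0.
    rng = np.random.default_rng(seed)
    n, d = len(P), len(P[0])
    total = 0.0
    for _ in range(trials):
        R = rng.standard_normal((t, d))            # fresh random hash function
        buckets = {}
        for y in P:
            buckets.setdefault(simhash(y, R), []).append(y)
        bucket = buckets.get(simhash(x, R), [])
        if bucket:
            y = bucket[rng.integers(len(bucket))]
            p = simhash_collision_prob(x, y, t)
            total += kernel(x, y) * len(bucket) / (n * p)
    return total / trials

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    P = list(rng.standard_normal((500, 10)))
    x = rng.standard_normal(10)
    gauss = lambda a, b: float(np.exp(-np.linalg.norm(a - b) ** 2))
    print("exact KDE :", kde(P, x, gauss))
    print("HBE sketch:", hbe_estimate(P, x, gauss))

Note that this brute-force sketch rebuilds the hash tables on every trial for
clarity; the point of the data structures described above is to preprocess the
hash tables once so that each query inspects only a few buckets, which is what
makes sublinear query time possible.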