Efficient Density Estimation via Piecewise Polynomial Approximation
We give a highly efficient "semi-agnostic" algorithm for learning univariate
probability distributions that are well approximated by piecewise polynomial
density functions. Let $p$ be an arbitrary distribution over an interval $I$ which is $\tau$-close (in total variation distance) to an unknown probability distribution $q$ that is defined by an unknown partition of $I$ into $t$ intervals and $t$ unknown degree-$d$ polynomials specifying $q$ over each of the intervals. We give an algorithm that draws $\tilde{O}(t(d+1)/\epsilon^2)$ samples from $p$, runs in time $\mathrm{poly}(t, d, 1/\epsilon)$, and with high probability outputs a piecewise polynomial hypothesis distribution $h$ that is $(O(\tau)+\epsilon)$-close (in total variation distance) to $p$. This sample complexity is essentially optimal; we show that even for $\tau = 0$, any algorithm that learns an unknown $t$-piecewise degree-$d$ probability distribution over $I$ to accuracy $\epsilon$ must use $\Omega\left(\frac{t(d+1)}{\mathrm{poly}(1 + \log(d+1))} \cdot \frac{1}{\epsilon^2}\right)$ samples from the distribution, regardless of its running time. Our algorithm combines tools from approximation theory, uniform convergence, linear programming, and dynamic programming.
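To make the dynamic-programming ingredient concrete, here is a minimal Python sketch of the degree-0 special case: segmenting a fine empirical histogram into $t$ constant pieces. The function name, bin count, and squared-error criterion are illustrative choices, not the paper's algorithm (which fits degree-$d$ polynomials via linear programming under a stronger distance measure):

```python
import numpy as np

def fit_piecewise_constant_density(samples, t, n_bins=64):
    """Fit a t-piece piecewise-constant density to 1-D samples.

    Illustrative sketch only: segments a fine empirical histogram
    into t contiguous pieces by dynamic programming, minimizing the
    width-weighted squared error of a constant fit on each piece.
    """
    counts, edges = np.histogram(samples, bins=n_bins)
    widths = np.diff(edges)
    dens = counts / (len(samples) * widths)      # empirical density per bin

    # Prefix sums give O(1) cost for fitting one constant to bins i..j-1.
    s_w = np.concatenate([[0.0], np.cumsum(widths)])
    s_d = np.concatenate([[0.0], np.cumsum(widths * dens)])
    s_d2 = np.concatenate([[0.0], np.cumsum(widths * dens ** 2)])

    def seg_cost(i, j):
        w, s, s2 = s_w[j] - s_w[i], s_d[j] - s_d[i], s_d2[j] - s_d2[i]
        return s2 - s * s / w                    # SSE of the best constant

    INF = float("inf")
    cost = np.full((t + 1, n_bins + 1), INF)
    back = np.zeros((t + 1, n_bins + 1), dtype=int)
    cost[0, 0] = 0.0
    for p in range(1, t + 1):                    # number of pieces used
        for j in range(p, n_bins + 1):           # bins covered so far
            for i in range(p - 1, j):            # start of the last piece
                c = cost[p - 1, i] + seg_cost(i, j)
                if c < cost[p, j]:
                    cost[p, j], back[p, j] = c, i

    # Backtrack to recover breakpoints, then the constant on each piece.
    cuts, j = [n_bins], n_bins
    for p in range(t, 0, -1):
        j = back[p, j]
        cuts.append(j)
    cuts.reverse()
    return [(edges[a], edges[b], (s_d[b] - s_d[a]) / (s_w[b] - s_w[a]))
            for a, b in zip(cuts[:-1], cuts[1:])]

# Example: a rough 5-piece approximation to a standard normal.
# pieces = fit_piecewise_constant_density(np.random.randn(5000), t=5)
```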
We apply this general algorithm to obtain a wide range of results for many
natural problems in density estimation over both continuous and discrete
domains. These include state-of-the-art results for learning mixtures of
log-concave distributions; mixtures of $t$-modal distributions; mixtures of Monotone Hazard Rate distributions; mixtures of Poisson Binomial Distributions; mixtures of Gaussians; and mixtures of $k$-monotone densities. Our general
technique yields computationally efficient algorithms for all these problems,
in many cases with provably optimal sample complexities (up to logarithmic
factors) in all parameters.
Sample-Efficient Learning of Mixtures
We consider PAC learning of probability distributions (a.k.a. density
estimation), where we are given an i.i.d. sample generated from an unknown
target distribution, and want to output a distribution that is close to the
target in total variation distance. Let $\mathcal{F}$ be an arbitrary class of probability distributions, and let $\mathcal{F}^k$ denote the class of $k$-mixtures of elements of $\mathcal{F}$. Assuming the existence of a method for learning $\mathcal{F}$ with sample complexity $m_{\mathcal{F}}(\epsilon)$, we provide a method for learning $\mathcal{F}^k$ with sample complexity $O(k \log k \cdot m_{\mathcal{F}}(\epsilon) / \epsilon^2)$. Our mixture learning algorithm has the property that, if the $\mathcal{F}$-learner is proper/agnostic, then the $\mathcal{F}^k$-learner is proper/agnostic as well.
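A standard ingredient in reductions of this kind is selecting a near-best hypothesis from a finite candidate list via a Scheffé (minimum-distance) tournament. The sketch below is an illustrative 1-D, grid-discretized version assuming uniform grid spacing, not the paper's exact procedure:

```python
import numpy as np

def scheffe_select(pdfs, samples, grid):
    """Pick the best candidate density from a finite list.

    Standard Scheffe-tournament sketch for 1-D densities, evaluated
    numerically on a fine uniform grid. Candidate i beats j when
    p_i's mass on the Scheffe set A = {x : p_i(x) > p_j(x)} is
    closer to A's empirical mass under the sample.
    """
    dx = grid[1] - grid[0]
    vals = np.array([p(grid) for p in pdfs])            # densities on the grid
    cell = np.clip(np.searchsorted(grid, samples), 0, len(grid) - 1)
    wins = np.zeros(len(pdfs))
    for i in range(len(pdfs)):
        for j in range(len(pdfs)):
            if i == j:
                continue
            A = vals[i] > vals[j]                       # Scheffe set of (i, j)
            emp = A[cell].mean()                        # empirical mass of A
            if abs(vals[i][A].sum() * dx - emp) <= abs(vals[j][A].sum() * dx - emp):
                wins[i] += 1
    return int(np.argmax(wins))

# Example: choose among three unit-variance Gaussian candidates.
# from scipy.stats import norm
# grid = np.linspace(-6, 6, 2001)
# cands = [lambda x, m=m: norm.pdf(x, loc=m) for m in (-1.0, 0.0, 1.0)]
# best = scheffe_select(cands, np.random.randn(3000), grid)
```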
This general result enables us to improve the best known sample complexity
upper bounds for a variety of important mixture classes. First, we show that
the class of mixtures of $k$ axis-aligned Gaussians in $\mathbb{R}^d$ is PAC-learnable in the agnostic setting with $\tilde{O}(kd/\epsilon^4)$ samples, which is tight in $k$ and $d$ up to logarithmic factors. Second, we show that the class of mixtures of $k$ Gaussians in $\mathbb{R}^d$ is PAC-learnable in the agnostic setting with sample complexity $\tilde{O}(kd^2/\epsilon^4)$, which improves the previously known bounds of $\tilde{O}(k^3 d^2/\epsilon^4)$ and $\tilde{O}(k^4 d^4/\epsilon^2)$ in its dependence on $k$ and $d$. Finally, we show that the class of mixtures of $k$ log-concave distributions over $\mathbb{R}^d$ is PAC-learnable using $\tilde{O}(d^{(d+5)/2} \epsilon^{-(d+9)/2} k)$ samples.

Comment: A bug from the previous version, which appeared in the AAAI 2018 proceedings, is fixed. 18 pages.
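For a sense of how the Gaussian bounds compose from the general theorem, here is the arithmetic, assuming the standard single-distribution sample complexities $m_{\mathcal{F}}(\epsilon) = \tilde{O}(d/\epsilon^2)$ for one axis-aligned Gaussian and $m_{\mathcal{F}}(\epsilon) = \tilde{O}(d^2/\epsilon^2)$ for one general Gaussian in $\mathbb{R}^d$ (facts not stated in the abstract itself):

\[
m_{\mathcal{F}^k}(\epsilon) \;=\; O\!\left(\frac{k \log k \cdot m_{\mathcal{F}}(\epsilon)}{\epsilon^2}\right)
\;=\;
\begin{cases}
\tilde{O}(kd/\epsilon^4) & \text{(axis-aligned Gaussians)},\\
\tilde{O}(kd^2/\epsilon^4) & \text{(general Gaussians)}.
\end{cases}
\]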