Sample-Efficient Learning of Mixtures
We consider PAC learning of probability distributions (a.k.a. density
estimation), where we are given an i.i.d. sample generated from an unknown
target distribution, and want to output a distribution that is close to the
target in total variation distance. Let $\mathcal{F}$ be an arbitrary class of
probability distributions, and let $\mathcal{F}^k$ denote the class of
$k$-mixtures of elements of $\mathcal{F}$. Assuming the existence of a method
for learning $\mathcal{F}$ with sample complexity $m_{\mathcal{F}}(\epsilon)$,
we provide a method for learning $\mathcal{F}^k$ with sample complexity
$O(k \log^2 k \cdot m_{\mathcal{F}}(\epsilon) / \epsilon^2)$. Our mixture
learning algorithm has the property that, if the $\mathcal{F}$-learner is
proper/agnostic, then the $\mathcal{F}^k$-learner would be proper/agnostic as
well.
This general result enables us to improve the best known sample complexity
upper bounds for a variety of important mixture classes. First, we show that
the class of mixtures of $k$ axis-aligned Gaussians in $\mathbb{R}^d$ is
PAC-learnable in the agnostic setting with $\widetilde{O}(kd/\epsilon^4)$
samples, which is tight in $k$ and $d$ up to logarithmic factors. Second, we
show that the class of mixtures of $k$ Gaussians in $\mathbb{R}^d$ is
PAC-learnable in the agnostic setting with sample complexity
$\widetilde{O}(kd^2/\epsilon^4)$, which improves the previously known
bounds of $\widetilde{O}(k^3d^2/\epsilon^4)$ and
$\widetilde{O}(k^4d^4/\epsilon^2)$ in its dependence on $k$ and $d$. Finally,
we show that the class of mixtures of $k$ log-concave distributions over
$\mathbb{R}^d$ is PAC-learnable using
$\widetilde{O}(k \, d^{(d+5)/2} \epsilon^{-(d+9)/2})$ samples.
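
As a concrete illustration of the selection step such a reduction needs, here is a minimal Python sketch of a Scheffé-style tournament for choosing among candidate densities using only samples from the target. This is a standard tool from density estimation shown under our own simplifying assumptions (a one-dimensional setting, grid-based numerical integration, and our function names), not the paper's exact algorithm.

```python
import numpy as np

def scheffe_winner(p, q, sample, grid):
    """One Scheffé pairwise test between candidate densities p and q.

    p, q: vectorized density functions on the real line.
    sample: i.i.d. draws from the unknown target distribution.
    grid: fine grid used to numerically integrate p and q over the
          Scheffé set A = {x : p(x) > q(x)}.
    Returns whichever density better matches the empirical mass of A.
    """
    in_A = p(grid) > q(grid)                    # indicator of A on the grid
    dx = grid[1] - grid[0]
    p_mass = np.sum(p(grid)[in_A]) * dx         # p(A), numerical integral
    q_mass = np.sum(q(grid)[in_A]) * dx         # q(A)
    emp_mass = np.mean(p(sample) > q(sample))   # empirical mass of A
    return p if abs(p_mass - emp_mass) <= abs(q_mass - emp_mass) else q

def tournament(candidates, sample, grid):
    """Select a candidate density via sequential Scheffé comparisons."""
    best = candidates[0]
    for cand in candidates[1:]:
        best = scheffe_winner(best, cand, sample, grid)
    return best

# Toy usage: pick between two Gaussian candidates for N(0, 1) data.
rng = np.random.default_rng(0)
data = rng.normal(0.0, 1.0, size=2000)
grid = np.linspace(-10.0, 10.0, 4001)
gauss = lambda m, s: (lambda x: np.exp(-(x - m) ** 2 / (2 * s ** 2))
                      / (s * np.sqrt(2 * np.pi)))
cands = [gauss(0.0, 1.0), gauss(2.0, 1.0)]
print(tournament(cands, data, grid) is cands[0])  # True: N(0, 1) wins
```

A full tournament compares every pair of candidates and returns the one with the most wins; the sequential elimination above is a simplification that keeps the sketch short.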
Learning Geometric Concepts with Nasty Noise
We study the efficient learnability of geometric concept classes -
specifically, low-degree polynomial threshold functions (PTFs) and
intersections of halfspaces - when a fraction of the data is adversarially
corrupted. We give the first polynomial-time PAC learning algorithms for these
concept classes with dimension-independent error guarantees in the presence of
nasty noise under the Gaussian distribution. In the nasty noise model, an
omniscient adversary can arbitrarily corrupt a small fraction of both the
unlabeled data points and their labels. This model generalizes well-studied
noise models, including the malicious noise model and the agnostic (adversarial
label noise) model. Prior to our work, the only concept class for which
efficient malicious learning algorithms were known was the class of
origin-centered halfspaces.
Specifically, our robust learning algorithm for low-degree PTFs succeeds
under a number of tame distributions -- including the Gaussian distribution
and, more generally, any log-concave distribution with (approximately) known
low-degree moments. For LTFs under the Gaussian distribution, we give a
polynomial-time algorithm that achieves error $O(\epsilon)$, where $\epsilon$
is the noise rate. At the core of our PAC learning results is an efficient
algorithm to approximate the low-degree Chow-parameters of any bounded function
in the presence of nasty noise. To achieve this, we employ an iterative
spectral method for outlier detection and removal, inspired by recent work in
robust unsupervised learning. Our aforementioned algorithm succeeds for a range
of distributions satisfying mild concentration bounds and moment assumptions.
The correctness of our robust learning algorithm for intersections of
halfspaces makes essential use of a novel robust inverse independence lemma
that may be of broader interest.
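
The iterative spectral method for outlier removal mentioned above can be sketched in a few lines of Python. This is an illustrative filter under a strong simplifying assumption (inliers drawn from a distribution with identity covariance); the thresholds and function names are our choices, not the paper's.

```python
import numpy as np

def spectral_filter(X, threshold=1.5, drop_frac=0.05, max_iter=50):
    """Iteratively remove likely outliers by spectral filtering.

    X: (n, d) array of points, mostly i.i.d. from a distribution with
       identity covariance (e.g., standard Gaussian), with a small
       adversarially corrupted fraction.
    While the top eigenvalue of the empirical covariance exceeds
    `threshold`, drop the `drop_frac` fraction of points with the
    largest squared projection onto the top eigenvector.
    """
    X = np.asarray(X, dtype=float)
    for _ in range(max_iter):
        centered = X - X.mean(axis=0)
        cov = centered.T @ centered / len(X)
        eigvals, eigvecs = np.linalg.eigh(cov)   # ascending eigenvalues
        if eigvals[-1] <= threshold:             # spectrum looks clean: stop
            break
        scores = (centered @ eigvecs[:, -1]) ** 2   # projection scores
        keep = scores <= np.quantile(scores, 1.0 - drop_frac)
        X = X[keep]
    return X

# Toy usage: 95% standard Gaussian inliers, 5% coordinated outliers.
rng = np.random.default_rng(0)
inliers = rng.normal(size=(950, 10))
outliers = np.full((50, 10), 5.0)
cleaned = spectral_filter(np.vstack([inliers, outliers]))
print(len(cleaned))   # close to 950: most outliers removed
```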
Theory and Applications of Proper Scoring Rules
We give an overview of some uses of proper scoring rules in statistical
inference, including frequentist estimation theory and Bayesian model selection
with improper priors.
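
As background, a scoring rule $S(p, y)$ is proper if reporting the true event probability optimizes the expected score. Here is a minimal Python check of this property for the Brier (quadratic) score, which is strictly proper; the grid search and function names are purely illustrative.

```python
import numpy as np

def brier_score(p, y):
    """Brier (quadratic) score for forecast probability p of the event y = 1.
    Lower is better; this score is strictly proper."""
    return (y - p) ** 2

def expected_score(p_forecast, p_true):
    """Expected Brier score when the event occurs with probability p_true."""
    return (p_true * brier_score(p_forecast, 1)
            + (1 - p_true) * brier_score(p_forecast, 0))

# Propriety check: with true probability 0.3, the expected score
# is minimized at the honest forecast p = 0.3.
forecasts = np.linspace(0.0, 1.0, 101)
scores = [expected_score(p, 0.3) for p in forecasts]
print(forecasts[int(np.argmin(scores))])   # -> 0.3
```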