Characterizing the Sample Complexity of Private Learners
In 2008, Kasiviswanathan et al. defined private learning as a combination of
PAC learning and differential privacy. Informally, a private learner is applied
to a collection of labeled individual information and outputs a hypothesis
while preserving the privacy of each individual. Kasiviswanathan et al. gave a
generic construction of private learners for (finite) concept classes, with
sample complexity logarithmic in the size of the concept class. This sample
complexity is higher than what is needed for non-private learners, hence
leaving open the possibility that the sample complexity of private learning may
sometimes be significantly higher than that of non-private learning.
We give a combinatorial characterization of the sample size sufficient and
necessary to privately learn a class of concepts. This characterization is
analogous to the well known characterization of the sample complexity of
non-private learning in terms of the VC dimension of the concept class. We
introduce the notion of probabilistic representation of a concept class, and
our new complexity measure RepDim corresponds to the size of the smallest
probabilistic representation of the concept class.
We show that any private learning algorithm for a concept class C with sample
complexity m implies RepDim(C)=O(m), and that there exists a private learning
algorithm with sample complexity m=O(RepDim(C)). We further demonstrate that a
similar characterization holds for the database size needed for privately
computing a large class of optimization problems and also for the well-studied
problem of private data release.
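The generic construction referred to above is essentially an exponential-mechanism selection over the finite concept class. Below is a minimal, hedged sketch of that idea, assuming a labelled sample and a list of candidate hypotheses; the function name, data format, and scoring constant are our choices for illustration, not taken from the paper.

import numpy as np

def private_learn(sample, concept_class, epsilon):
    # Exponential-mechanism-style private learner (illustrative sketch).
    # sample: list of (x, y) pairs; concept_class: list of callables h(x) -> y.
    n = len(sample)
    # Empirical error of each candidate hypothesis on the sample.
    errors = np.array([sum(h(x) != y for x, y in sample) / n
                       for h in concept_class])
    # Select h with probability proportional to exp(-epsilon * n * err(h) / 2):
    # low-error hypotheses are favoured, while any single record has only a
    # bounded influence on the output distribution.
    scores = -epsilon * n * errors / 2.0
    probs = np.exp(scores - scores.max())
    probs /= probs.sum()
    return concept_class[np.random.choice(len(concept_class), p=probs)]

The utility analysis of this kind of selection step is what yields a sample complexity logarithmic in the size of the concept class.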
A Survey of Quantum Learning Theory
This paper surveys quantum learning theory: the theoretical aspects of
machine learning using quantum computers. We describe the main results known
for three models of learning: exact learning from membership queries, and
Probably Approximately Correct (PAC) and agnostic learning from classical or
quantum examples.
Comment: 26 pages, LaTeX. v2: many small changes to improve the presentation. This version will appear as the Complexity Theory Column in SIGACT News in June 2017. v3: fixed a small ambiguity in the definition of gamma(C) and updated a reference.
Predictive PAC Learning and Process Decompositions
We informally call a stochastic process learnable if it admits a
generalization error approaching zero in probability for any concept class with
finite VC-dimension (IID processes are the simplest example). A mixture of
learnable processes need not be learnable itself, and certainly its
generalization error need not decay at the same rate. In this paper, we argue
that it is natural in predictive PAC to condition not on the past observations
but on the mixture component of the sample path. This definition not only
matches what a realistic learner might demand, but also allows us to sidestep
several otherwise grave problems in learning from dependent data. In
particular, we give a novel PAC generalization bound for mixtures of learnable
processes with a generalization error that is not worse than that of each
mixture component. We also provide a characterization of mixtures of absolutely
regular (β-mixing) processes, of independent probability-theoretic interest.
Comment: 9 pages, accepted in NIPS 201
Quantum machine learning: a classical perspective
Recently, increased computational power and data availability, as well as
algorithmic advances, have led machine learning techniques to impressive
results in regression, classification, data-generation and reinforcement
learning tasks. Despite these successes, the proximity to the physical limits
of chip fabrication, alongside the increasing size of datasets, is motivating a
growing number of researchers to explore the possibility of harnessing the
power of quantum computation to speed up classical machine learning algorithms.
Here we review the literature in quantum machine learning and discuss
perspectives for a mixed readership of classical machine learning and quantum
computation experts. Particular emphasis will be placed on clarifying the
limitations of quantum algorithms, how they compare with their best classical
counterparts and why quantum resources are expected to provide advantages for
learning problems. Learning in the presence of noise and certain
computationally hard problems in machine learning are identified as promising
directions for the field. Practical questions, like how to upload classical
data into quantum form, will also be addressed.
Comment: v3, 33 pages; typos corrected and references added.
Sample-Efficient Learning of Mixtures
We consider PAC learning of probability distributions (a.k.a. density
estimation), where we are given an i.i.d. sample generated from an unknown
target distribution, and want to output a distribution that is close to the
target in total variation distance. Let F be an arbitrary class of probability
distributions, and let F^k denote the class of k-mixtures of elements of F.
Assuming the existence of a method for learning F with sample complexity
m_F(ε), we provide a method for learning F^k with sample complexity
O(k log k · m_F(ε) / ε^2). Our mixture learning algorithm has the property
that, if the F-learner is proper/agnostic, then the F^k-learner would be
proper/agnostic as well.
This general result enables us to improve the best known sample complexity
upper bounds for a variety of important mixture classes. First, we show that
the class of mixtures of k axis-aligned Gaussians in R^d is PAC-learnable in
the agnostic setting with Õ(kd/ε^4) samples, which is tight in k and d up to
logarithmic factors. Second, we show that the class of mixtures of k Gaussians
in R^d is PAC-learnable in the agnostic setting with sample complexity
Õ(kd^2/ε^4), which improves the previously known bounds of Õ(k^3 d^2/ε^4) and
Õ(k^4 d^4/ε^2) in its dependence on k and d. Finally, we show that the class of
mixtures of k log-concave distributions over R^d is PAC-learnable using
Õ(d^{(d+5)/2} ε^{-(d+9)/2} k) samples.
Comment: A bug from the previous version, which appeared in the AAAI 2018 proceedings, is fixed. 18 pages.
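To make the general reduction concrete, here is a hedged worked instantiation; the single-component bound below is our assumption for illustration, not a claim from the abstract. Assuming a single axis-aligned Gaussian in R^d can be learned with m_F(ε) = Õ(d/ε^2) samples, plugging into the reduction gives

    m_{F^k}(ε) = O(k log k · m_F(ε) / ε^2) = Õ(kd / ε^4),

which matches, up to logarithmic factors, the bound stated above for mixtures of k axis-aligned Gaussians.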
Empirical Risk Minimization with Approximations of Probabilistic Grammars
Probabilistic grammars are generative statistical models that are useful for compositional and sequential structures. We present a framework, reminiscent of structural risk minimization, for empirical risk minimization of the parameters of a fixed probabilistic grammar using the log-loss. We derive sample complexity bounds in this framework that apply both to the supervised setting and the unsupervised setting.
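As an illustration of the supervised case, the following hedged sketch estimates the rule probabilities of a fixed grammar by minimizing the empirical log-loss over observed derivations, which for multinomial rule parameters reduces to normalized rule counts; the data format and function names are our choices, not the paper's.

from collections import Counter
import math

def fit_rule_probs(derivations):
    # ERM under the log-loss for a fixed grammar with multinomial rule
    # parameters: the minimizer is the relative frequency of each rule.
    # derivations: list of derivations, each a list of (lhs, rhs) rule tokens.
    rule_counts, lhs_counts = Counter(), Counter()
    for derivation in derivations:
        for lhs, rhs in derivation:
            rule_counts[(lhs, rhs)] += 1
            lhs_counts[lhs] += 1
    return {rule: c / lhs_counts[rule[0]] for rule, c in rule_counts.items()}

def empirical_log_loss(derivations, probs):
    # Empirical risk: average negative log-probability of the observed
    # derivations under the estimated rule probabilities.
    total = 0.0
    for derivation in derivations:
        for rule in derivation:
            total -= math.log(probs.get(rule, 1e-12))  # floor for unseen rules
    return total / max(len(derivations), 1)

The paper's bounds concern how many samples are needed for this empirical risk to track the true log-loss risk; the sketch only shows the estimator itself.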