Domain Adaptation of Majority Votes via Perturbed Variation-based Label Transfer
We tackle the PAC-Bayesian Domain Adaptation (DA) problem. This problem arises
when one desires to learn, from a source distribution, a good weighted majority
vote (over a set of classifiers) on a different target distribution. In this
context, the disagreement between classifiers is known to be crucial to
control. In the non-DA supervised setting, a theoretical bound - the C-bound -
involves this disagreement and leads to a majority vote learning algorithm:
MinCq. In this work, we extend MinCq to DA by taking advantage of an elegant
divergence between distributions called the Perturbed Variation (PV). Firstly,
justified by a new formulation of the C-bound, we provide MinCq with a target
sample labeled via a PV-based self-labeling focused on the regions where the
source and target marginal distributions are closer. Secondly, we propose an
original process for tuning the hyperparameters. Our framework shows very
promising results on a toy problem.
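For reference, the C-bound in question can be stated as follows (a standard
formulation from the literature; the notation is ours, not necessarily the
paper's): for a Q-weighted majority vote B_Q over voters h : X -> [-1, 1],

```latex
% C-bound (Lacasse et al., 2007), in one common formulation: if the first
% moment of the margin M_Q is positive, the risk of the Q-weighted majority
% vote B_Q under the data distribution D is bounded by
\[
R(B_Q) \;\le\; 1 - \frac{\bigl(\mathbb{E}_{(x,y)\sim D}[M_Q(x,y)]\bigr)^2}
                        {\mathbb{E}_{(x,y)\sim D}[M_Q(x,y)^2]},
\qquad
M_Q(x,y) = \mathbb{E}_{h\sim Q}[\, y\, h(x) \,].
\]
```

The second moment of the margin in the denominator is what encodes the
disagreement between voters, which is why controlling it matters.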
PAC-Bayesian Analysis of the Exploration-Exploitation Trade-off
We develop a coherent framework for integrative simultaneous analysis of the
exploration-exploitation and model order selection trade-offs. We improve over
our preceding results on the same subject (Seldin et al., 2011) by combining
PAC-Bayesian analysis with a Bernstein-type inequality for martingales. Such a
combination is also of independent interest for studies of multiple
simultaneously evolving martingales.
Comment: On-line Trading of Exploration and Exploitation 2 - ICML-2011
workshop. http://explo.cs.ucl.ac.uk/workshop
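The Bernstein-type martingale inequality alluded to here is, in one standard
form (Freedman's inequality; stated for orientation only, the paper's exact
variant may differ):

```latex
% Freedman-style Bernstein inequality for martingales: if M_n = \sum_i X_i is
% a martingale with increments X_i <= b and predictable quadratic variation
% V_n = \sum_{i=1}^n \mathbb{E}[X_i^2 \mid \mathcal{F}_{i-1}], then for all t, v > 0,
\[
\Pr\bigl( M_n \ge t \ \text{and}\ V_n \le v \bigr)
\;\le\;
\exp\!\left( -\frac{t^2}{2\,(v + b\,t/3)} \right).
\]
```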
On Measure Concentration of Random Maximum A-Posteriori Perturbations
The maximum a-posteriori (MAP) perturbation framework has emerged as a useful
approach for inference and learning in high dimensional complex models. By
maximizing a randomly perturbed potential function, MAP perturbations generate
unbiased samples from the Gibbs distribution. Unfortunately, the computational
cost of generating so many high-dimensional random variables can be
prohibitive. More efficient algorithms use sequential sampling strategies based
on the expected value of low dimensional MAP perturbations. This paper develops
new measure concentration inequalities that bound the number of samples needed
to estimate such expected values. Applying the general result to MAP
perturbations can yield a more efficient algorithm to approximate sampling from
the Gibbs distribution. The measure concentration result is of general interest
and may be applicable to other areas involving the estimation of expectations.
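To make the MAP-perturbation identity concrete, here is a minimal sketch (our
illustration, not the paper's code) of exact Gibbs sampling via full,
per-configuration Gumbel perturbations on a toy model; the low-dimensional
perturbations analyzed in the paper trade this exactness for efficiency:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy discrete model: one potential theta(x) per configuration x.
theta = np.array([1.0, 0.5, -0.2, 2.0])
gibbs = np.exp(theta) / np.exp(theta).sum()   # target Gibbs distribution

def perturb_and_map(theta, rng):
    # Maximizing the randomly perturbed potential theta(x) + gamma(x), with
    # i.i.d. Gumbel(0, 1) noise per configuration, yields an exact sample
    # from the Gibbs distribution (the Gumbel-max trick).
    gamma = rng.gumbel(loc=0.0, scale=1.0, size=theta.shape)
    return np.argmax(theta + gamma)

samples = np.array([perturb_and_map(theta, rng) for _ in range(100_000)])
empirical = np.bincount(samples, minlength=theta.size) / samples.size

print("Gibbs:    ", np.round(gibbs, 3))
print("Empirical:", np.round(empirical, 3))   # should closely agree
```

The catch the abstract points to: the full trick needs one Gumbel variable per
configuration, which is exponential in dimension, hence the interest in
low-dimensional perturbations and in concentration bounds for their expected
values.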
PAC-Bayesian Analysis of Martingales and Multiarmed Bandits
We present two alternative ways to apply PAC-Bayesian analysis to sequences
of dependent random variables. The first is based on a new lemma that makes it
possible to bound expectations of convex functions of certain dependent random
variables by expectations of the same functions of independent Bernoulli random
variables. This lemma provides an alternative to the Hoeffding-Azuma inequality
for bounding the concentration of martingale values. Our second approach is
based on integrating the Hoeffding-Azuma inequality with PAC-Bayesian analysis.
We also introduce a way to apply PAC-Bayesian analysis in situations of limited
feedback. We combine the new tools to derive PAC-Bayesian generalization and
regret bounds for the multiarmed bandit problem. Although our regret bound is
not yet as tight as state-of-the-art regret bounds based on other
well-established techniques, our results significantly expand the range of
potential applications of PAC-Bayesian analysis and introduce a new analysis
tool to reinforcement learning and many other fields, where martingales and
limited feedback are encountered.
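For reference, the Hoeffding-Azuma inequality that both approaches revolve
around, in its usual two-sided form:

```latex
% Hoeffding-Azuma: if M_n = \sum_{i=1}^n X_i is a martingale whose increments
% satisfy |X_i| <= c_i almost surely, then for every t > 0,
\[
\Pr\bigl( |M_n| \ge t \bigr)
\;\le\;
2\exp\!\left( -\frac{t^2}{2\sum_{i=1}^n c_i^2} \right).
\]
```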
Domain adaptation of weighted majority votes via perturbed variation-based self-labeling
In machine learning, the domain adaptation problem arises when the test
(target) and the train (source) data are generated from different
distributions. A key applied issue is thus the design of algorithms able to
generalize on a new distribution, for which we have no label information. We
focus on learning classification models defined as a weighted majority vote
over a set of real-valued functions. In this context, Germain et al. (2013)
have shown that a measure of disagreement between these functions is crucial to
control. The core of this measure is a theoretical bound--the C-bound (Lacasse
et al., 2007)--which involves the disagreement and leads to a well performing
majority vote learning algorithm in the usual non-adaptive supervised setting:
MinCq. In this work, we propose a framework to extend MinCq to a domain
adaptation scenario. This procedure takes advantage of the recent perturbed
variation divergence between distributions proposed by Harel and Mannor (2012).
Justified by a theoretical bound on the target risk of the vote, we provide
MinCq with a target sample labeled via a perturbed variation-based
self-labeling focused on the regions where the source and target marginals
appear similar. We also study the influence of our self-labeling, from which we
deduce an original process for tuning the hyperparameters. Finally, our
framework called PV-MinCq shows very promising results on a rotation and
translation synthetic problem.
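As a rough illustration of what perturbed variation-based self-labeling could
look like, here is a hypothetical sketch: the eps-ball matching follows the
empirical perturbed variation of Harel and Mannor (2012), but the one-to-one
label-transfer rule and all names below are our assumptions, not the paper's
actual PV-MinCq procedure:

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import maximum_bipartite_matching
from scipy.spatial.distance import cdist

def pv_self_label(X_src, y_src, X_tgt, eps):
    """Hypothetical PV-style self-labeling: match each target point to at
    most one source point within distance eps (as in the empirical perturbed
    variation), then transfer the matched source label. Unmatched target
    points, i.e. those in regions where the marginals disagree, get -1."""
    graph = csr_matrix(cdist(X_src, X_tgt) <= eps)        # eps-ball edges
    match = maximum_bipartite_matching(graph, perm_type='column')
    y_tgt = np.full(len(X_tgt), -1)
    for i, j in enumerate(match):       # j: target index matched to source i
        if j >= 0:
            y_tgt[j] = y_src[i]
    return y_tgt
```

Only the matched, and hence labeled, target points would then be handed to the
majority vote learner.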
A PAC-Bayesian bound for Lifelong Learning
Transfer learning has received a lot of attention in the machine learning
community in recent years, and several effective algorithms have been
developed. However, relatively little is known about their theoretical
properties, especially in the setting of lifelong learning, where the goal is
to transfer information to tasks for which no data have been observed so far.
In this work we study lifelong learning from a theoretical perspective. Our
main result is a PAC-Bayesian generalization bound that offers a unified view
on existing paradigms for transfer learning, such as the transfer of parameters
or the transfer of low-dimensional representations. We also use the bound to
derive two principled lifelong learning algorithms, and we show that these
yield results comparable with existing methods.
Comment: to appear at ICML 2014.
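For orientation, the classical single-task PAC-Bayesian bound that such
lifelong learning analyses build on reads, in one common McAllester-style form
(exact constants vary across statements):

```latex
% Single-task PAC-Bayesian bound (McAllester-style; exact constants vary):
% with probability at least 1 - \delta over an i.i.d. sample of size n,
% simultaneously for all posteriors Q over hypotheses,
\[
R(Q) \;\le\; \widehat{R}(Q)
  + \sqrt{\frac{\mathrm{KL}(Q \,\|\, P) + \ln\frac{2\sqrt{n}}{\delta}}{2n}},
\]
```

where P is a data-independent prior and R(Q), \widehat{R}(Q) are the expected
true and empirical risks of the Gibbs classifier. Roughly speaking, a lifelong
learning bound of the kind described works at a second level on top of this
single-task picture, over the environment of tasks.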
Non-Vacuous Generalization Bounds at the ImageNet Scale: A PAC-Bayesian Compression Approach
Modern neural networks are highly overparameterized, with capacity to
substantially overfit to training data. Nevertheless, these networks often
generalize well in practice. It has also been observed that trained networks
can often be "compressed" to much smaller representations. The purpose of this
paper is to connect these two empirical observations. Our main technical result
is a generalization bound for compressed networks based on the compressed size.
Combined with off-the-shelf compression algorithms, the bound leads to
state-of-the-art generalization guarantees; in particular, we provide the first
non-vacuous generalization guarantees for realistic architectures applied to
the ImageNet classification problem. As additional evidence connecting
compression and generalization, we show that compressibility of models that
tend to overfit is limited: We establish an absolute limit on expected
compressibility as a function of expected generalization error, where the
expectations are over the random choice of training examples. The bounds are
complemented by empirical results that show an increase in overfitting implies
an increase in the number of bits required to describe a trained network.
Comment: 16 pages, 1 figure. Accepted at ICLR 2019.
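To make "a generalization bound based on the compressed size" concrete, here
is a minimal sketch of the basic Occam-style counting bound that such results
refine (the paper's actual PAC-Bayesian bound is tighter; the numbers below
are hypothetical, not the paper's):

```python
import math

def occam_bound(train_error, n_train, code_bits, delta=0.05):
    """Occam-style bound: if a hypothesis is described by a prefix-free code
    of `code_bits` bits, then with probability >= 1 - delta over the sample,
        test_error <= train_error
                      + sqrt((code_bits * ln 2 + ln(1/delta)) / (2 * n_train)).
    A basic counting bound, not the paper's refined version."""
    slack = math.sqrt(
        (code_bits * math.log(2) + math.log(1 / delta)) / (2 * n_train)
    )
    return train_error + slack

# Hypothetical numbers: a network compressed to ~100 KB, trained on the
# ~1.28M ImageNet training images.
print(occam_bound(train_error=0.10, n_train=1_281_167, code_bits=8e5))
```

The slack grows with the code length, which is the quantitative sense in which
compressibility buys generalization; the paper's converse result caps how
compressible an overfitting model can be.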