Chaining Meets Chain Rule: Multilevel Entropic Regularization and Training of Neural Nets
We derive generalization and excess risk bounds for neural nets using a
family of complexity measures based on a multilevel relative entropy. The
bounds are obtained by introducing the notion of generated hierarchical
coverings of neural nets and by using the technique of chaining mutual
information introduced by Asadi et al. (NeurIPS 2018). The resulting bounds are
algorithm-dependent and exploit the multilevel structure of neural nets. This,
in turn, leads to an empirical risk minimization problem with a multilevel
entropic regularization. We solve the minimization problem by introducing a
multi-scale generalization of the celebrated Gibbs posterior distribution and
prove that it is the unique minimizer. This leads
to a new training procedure for neural nets with performance guarantees, which
exploits the chain rule of relative entropy rather than the chain rule of
derivatives (as in backpropagation). To obtain an efficient implementation of
the latter, we further develop a multilevel Metropolis algorithm simulating the
multi-scale Gibbs distribution, with an experiment for a two-layer neural net
on the MNIST data set.
Comment: 30 pages, 3 figures
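As a much-simplified illustration of the Gibbs-sampling idea (single-level only, not the paper's multilevel Metropolis algorithm), the sketch below uses plain Metropolis to sample the parameters of a toy two-parameter model from the Gibbs distribution p(w) ∝ exp(−β·L(w)), where L is the empirical risk. The data, model, and hyperparameters are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: noisy linear targets. The "model" is a hypothetical
# two-parameter predictor w[0] * x + w[1], not the paper's neural net.
X = rng.normal(size=20)
y = 2.0 * X + 0.5 + 0.1 * rng.normal(size=20)

def loss(w):
    """Empirical risk (mean squared error) of the two-parameter model."""
    return float(np.mean((w[0] * X + w[1] - y) ** 2))

def metropolis_gibbs(beta=50.0, steps=5000, step_size=0.1):
    """Plain Metropolis sampler targeting p(w) proportional to exp(-beta * loss(w))."""
    w = np.zeros(2)
    for _ in range(steps):
        prop = w + step_size * rng.normal(size=2)
        delta = loss(prop) - loss(w)
        # Accept with probability min(1, exp(-beta * delta)).
        if delta <= 0 or rng.random() < np.exp(-beta * delta):
            w = prop
    return w

w = metropolis_gibbs()
```

Raising the inverse temperature β concentrates the sampled weights near empirical risk minimizers, which is the single-scale version of the behavior the multi-scale construction refines.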
Optimal No-regret Learning in Repeated First-price Auctions
We study online learning in repeated first-price auctions with censored
feedback, where a bidder, only observing the winning bid at the end of each
auction, learns to adaptively bid in order to maximize her cumulative payoff.
To achieve this goal, the bidder faces a challenging dilemma: if she wins the
auction (the only way to achieve positive payoffs), she cannot observe the
highest bid of the other bidders, which we assume is drawn i.i.d. from an
unknown distribution. This dilemma, while reminiscent of the
exploration-exploitation trade-off in contextual bandits, cannot be addressed
directly by the existing UCB or Thompson sampling algorithms in that
literature, mainly because, contrary to the standard bandit setting, obtaining
a positive reward here reveals nothing about the environment.
In this paper, by exploiting the structural properties of first-price
auctions, we develop the first learning algorithm that achieves an
$\widetilde{O}(\sqrt{T})$ regret bound when the bidder's private values are
stochastically generated. We do so by providing an algorithm for a general class
of problems, which we call monotone group contextual bandits, where the same
regret bound is established under stochastically generated contexts. Further,
by a novel lower bound argument, we characterize an $\Omega(T^{2/3})$ lower
bound for the case where the contexts are adversarially generated, thus
highlighting the impact of the contexts generation mechanism on the fundamental
learning limit. Despite this, we further exploit the structure of first-price
auctions and develop a learning algorithm that operates sample-efficiently (and
computationally efficiently) in the presence of adversarially generated private
values. We establish an $\widetilde{O}(T^{2/3})$ regret bound for this
algorithm, hence providing a complete characterization of the optimal learning
guarantees for this problem.
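To make the censored-feedback structure concrete, here is a toy explore-then-commit baseline (emphatically not the paper's algorithm): by bidding zero the bidder always loses and therefore observes the highest competing bid exactly, then commits to the empirically best bid. The uniform competing-bid distribution, the fixed private value, and the horizon are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Invented setup: the highest competing bid m_t is iid Uniform[0, 1],
# the bidder's private value v is fixed, and the horizon is T rounds.
T, T_explore = 10_000, 1_000
v = 0.8
m = rng.random(T)

# Exploration: bid 0, always lose, and hence observe m_t exactly.
samples = m[:T_explore]

# Commit to the bid maximizing (v - b) * F_hat(b), where F_hat is the
# empirical CDF of the competing bid built from the exploration rounds.
grid = np.linspace(0.0, v, 81)
F_hat = np.array([(samples <= b).mean() for b in grid])
b_star = grid[np.argmax((v - grid) * F_hat)]

# Exploitation: in a first-price auction the winner pays her own bid,
# so each win yields payoff v - b_star.
payoff = sum(v - b_star for t in range(T_explore, T) if b_star > m[t])
```

This crude phase split forfeits all payoff during exploration; avoiding that waste by exploiting the monotone structure of first-price auctions is exactly what the algorithms above are designed for.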
Online learning with kernel losses
We present a generalization of the adversarial linear bandits framework,
where the underlying losses are kernel functions (with an associated
reproducing kernel Hilbert space) rather than linear functions. We study a
version of the exponential weights algorithm and bound its regret in this
setting. Under conditions on the eigendecay of the kernel we provide a sharp
characterization of the regret of this algorithm: when the kernel eigenvalues
decay polynomially, the regret is bounded by a corresponding polynomial in the
time horizon, while under the assumption of exponential eigendecay we obtain an
even tighter regret bound. We also study the full information setting
when the underlying losses are kernel functions and present an adapted
exponential weights algorithm and a conditional gradient descent algorithm.
Comment: 40 pages, 4 figures
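For readers unfamiliar with the exponential weights scheme mentioned above, a minimal sketch of the classical finite-expert case (Hedge) follows; the kernel-loss adaptation studied in the paper is beyond this illustration, and the losses, dimensions, and learning rate here are invented.

```python
import numpy as np

rng = np.random.default_rng(2)

def hedge(loss_matrix, eta):
    """Exponential weights over K experts; loss_matrix has shape (T, K).

    Returns the learner's cumulative expected loss."""
    T, K = loss_matrix.shape
    log_w = np.zeros(K)
    total = 0.0
    for t in range(T):
        p = np.exp(log_w - log_w.max())   # normalize in log space for stability
        p /= p.sum()
        total += float(p @ loss_matrix[t])
        log_w -= eta * loss_matrix[t]     # multiplicative-weights update
    return total

# Invented losses in [0, 1]; expert 3 is better on average.
T, K = 2000, 10
losses = rng.random((T, K))
losses[:, 3] *= 0.5

eta = np.sqrt(2.0 * np.log(K) / T)        # standard tuning for a known horizon
learner = hedge(losses, eta)
best = losses.sum(axis=0).min()           # loss of the best fixed expert
regret = learner - best                   # should be O(sqrt(T log K))
```

With this tuning the classical analysis bounds the regret by roughly sqrt(2 T log K), about 96 here; the kernel setting replaces the finite expert set with an RKHS ball, which is where the eigendecay conditions enter.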