Mirror Descent Meets Fixed Share (and feels no regret)
Mirror descent with an entropic regularizer is known to achieve shifting
regret bounds that are logarithmic in the dimension. This is done with either
a carefully designed projection or a weight-sharing technique. Via a novel
unified analysis, we show that these two approaches deliver essentially
equivalent bounds on a notion of regret generalizing shifting, adaptive,
discounted, and other related regrets. Our analysis also captures and extends
the generalized weight sharing technique of Bousquet and Warmuth, and can be
refined in several ways, including improvements for small losses and adaptive
tuning of parameters.
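To make the two approaches concrete, here is a minimal sketch of the second one: exponential weights (entropic mirror descent) followed by a Fixed Share mixing step. The function name and default parameter values are illustrative assumptions, not the tuned choices from the paper.

```python
import numpy as np

def fixed_share(losses, eta=0.5, alpha=0.01):
    """Exponential weights (entropic mirror descent) with a Fixed Share
    mixing step; `eta` is the learning rate, `alpha` the sharing rate.

    losses: (T, N) array of per-expert losses, assumed to lie in [0, 1].
    Returns the (T, N) sequence of weight vectors used at each round.
    """
    T, N = losses.shape
    w = np.full(N, 1.0 / N)               # start from uniform weights
    history = np.empty((T, N))
    for t in range(T):
        history[t] = w
        # Multiplicative update: mirror descent with entropic regularizer.
        v = w * np.exp(-eta * losses[t])
        v /= v.sum()
        # Weight sharing: mix a bit of the uniform distribution back in,
        # which keeps every expert "alive" after environment shifts.
        w = (1.0 - alpha) * v + alpha / N
    return history
```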
Shifting Regret, Mirror Descent, and Matrices
We consider the problem of online prediction in changing environments. In this framework the performance of a predictor is evaluated as the loss relative to an arbitrarily changing predictor, whose individual components come from a base class of predictors. Typical results in the literature consider different base classes (experts, linear predictors on the simplex, etc.) separately. Introducing an arbitrary mapping inside the mirror descent algorithm, we provide a framework that unifies and extends existing results. As an example, we prove new shifting regret bounds for matrix prediction problems.
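As one concrete member of this family, here is a sketch of the standard matrix exponentiated-gradient step (mirror descent with the von Neumann entropy as regularizer) over density matrices; it illustrates the matrix-prediction setting only and is not the paper's unified mapping:

```python
import numpy as np

def matrix_eg_step(W, G, eta=0.1):
    """One matrix exponentiated-gradient step: mirror descent with the
    von Neumann entropy over density matrices (symmetric PSD, trace one).

    W: current iterate, symmetric positive definite with trace 1.
    G: symmetric (sub)gradient of the loss at W.
    """
    # Matrix logarithm via eigendecomposition (valid since W is PD).
    vals, vecs = np.linalg.eigh(W)
    log_W = (vecs * np.log(vals)) @ vecs.T
    # Gradient step in the mirror (log) domain.
    A = log_W - eta * G
    # Matrix exponential, then renormalize back to trace one.
    vals_A, vecs_A = np.linalg.eigh(A)
    M = (vecs_A * np.exp(vals_A)) @ vecs_A.T
    return M / np.trace(M)
```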
A Stochastic Interpretation of Stochastic Mirror Descent: Risk-Sensitive Optimality
Stochastic mirror descent (SMD) is a fairly new family of algorithms that has
recently found a wide range of applications in optimization, machine learning,
and control. It can be considered a generalization of the classical stochastic
gradient algorithm (SGD), where instead of updating the weight vector along the
negative direction of the stochastic gradient, the update is performed in a
"mirror domain" defined by the gradient of a (strictly convex) potential
function. This potential function, and the mirror domain it yields, provides
considerable flexibility in the algorithm compared to SGD. While many
properties of SMD have already been obtained in the literature, in this paper
we exhibit a new interpretation of SMD, namely that it is a risk-sensitive
optimal estimator when the unknown weight vector and additive noise are
non-Gaussian and belong to the exponential family of distributions. The
analysis also suggests a modified version of SMD, which we refer to as
symmetric SMD (SSMD). The proofs rely on some simple properties of Bregman
divergence, which allow us to extend results from quadratics and Gaussians to
certain convex functions and exponential families in a rather seamless way.
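A minimal sketch of the SMD update described above, with the potential supplied as a (gradient, inverse-gradient) pair; the function names are illustrative. Instantiating the potential recovers SGD and the exponentiated-gradient update as special cases:

```python
import numpy as np

def smd_step(w, g, eta, grad_psi, grad_psi_inv):
    """One stochastic mirror descent step: map the iterate into the
    "mirror domain" via the gradient of the potential psi, take a
    gradient step there, and map back:

        w_next = (grad psi)^{-1}( grad psi(w) - eta * g )
    """
    return grad_psi_inv(grad_psi(w) - eta * g)

# Quadratic potential psi(w) = ||w||^2 / 2: grad psi is the identity,
# so SMD reduces to plain SGD.
sgd_step = lambda w, g, eta: smd_step(w, g, eta, lambda x: x, lambda x: x)

# Negative-entropy potential (grad psi(w) = log w + 1): the step becomes
# the multiplicative, exponentiated-gradient update w * exp(-eta * g).
eg_step = lambda w, g, eta: smd_step(
    w, g, eta, lambda x: np.log(x) + 1.0, lambda x: np.exp(x - 1.0))
```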
Online Learning for Changing Environments using Coin Betting
A key challenge in online learning is that classical algorithms can be slow
to adapt to changing environments. Recent studies have proposed "meta"
algorithms that convert any online learning algorithm to one that is adaptive
to changing environments, where the adaptivity is analyzed in a quantity called
the strongly-adaptive regret. This paper describes a new meta algorithm that
has a strongly-adaptive regret bound that is a factor of $\sqrt{\log T}$
better than other algorithms with the same time complexity, where $T$ is the
time horizon. We also extend our algorithm to achieve a first-order (i.e.,
dependent on the observed losses) strongly-adaptive regret bound for the first
time, to our knowledge. At its heart is a new parameter-free algorithm for the
learning with expert advice (LEA) problem in which experts sometimes do not
output advice for consecutive time steps (i.e., \emph{sleeping} experts). This
algorithm is derived by a reduction from optimal algorithms for the so-called
coin betting problem. Empirical results show that our algorithm outperforms
state-of-the-art methods in both learning with expert advice and metric
learning scenarios.
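For illustration, here is a simplified sketch of the generic reduction from coin betting to learning with expert advice, with one Krichevsky-Trofimov (KT) style bettor per expert. It is a toy version under stated assumptions (losses in [0, 1], a common truncation rule), not the paper's sleeping-experts algorithm or its first-order refinement:

```python
import numpy as np

def coin_betting_lea(losses):
    """Learning with expert advice via a reduction to coin betting,
    using a KT-style bettor per expert.

    losses: (T, N) matrix of expert losses, assumed to lie in [0, 1].
    Returns the (T, N) sequence of probability vectors played.
    """
    T, N = losses.shape
    sum_g = np.zeros(N)      # running sums of past coin outcomes
    wealth = np.ones(N)      # each expert's bettor starts with wealth 1
    plays = np.empty((T, N))
    for t in range(1, T + 1):
        # KT bettor: bet a (sum_g / t) fraction of the current wealth.
        bets = (sum_g / t) * wealth
        p = np.maximum(bets, 0.0)
        p = p / p.sum() if p.sum() > 0 else np.full(N, 1.0 / N)
        plays[t - 1] = p
        # Coin outcome: instantaneous regret of the play vs. each expert.
        g = p @ losses[t - 1] - losses[t - 1]
        # Truncation used in such reductions: a non-positive bet must
        # not lose wealth on a negative outcome.
        g = np.where((bets <= 0) & (g < 0), 0.0, g)
        wealth += g * bets
        sum_g += g
    return plays
```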