Generalized Mixability via Entropic Duality
Mixability is a property of a loss which characterizes when fast convergence
is possible in the game of prediction with expert advice. We show that a key
property of mixability generalizes, and the exp and log operations present in
the usual theory are not as special as one might have thought. In doing this we
introduce a more general notion of $\Phi$-mixability where $\Phi$ is a general
entropy (i.e., any convex function on probabilities). We show how a property
shared by the convex dual of any such entropy yields a natural algorithm (the
minimizer of a regret bound) which, analogous to the classical aggregating
algorithm, is guaranteed a constant regret when used with $\Phi$-mixable
losses. We characterize precisely which $\Phi$ have $\Phi$-mixable losses and
put forward a number of conjectures about the optimality and relationships
between different choices of entropy.
Comment: 20 pages, 1 figure. Supersedes the work in arXiv:1403.2433 [cs.LG].
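To ground the terminology, here is a sketch of the duality the title alludes to, in standard notation (my paraphrase, not text from the paper). A loss $\ell$ is $\eta$-mixable when every weighted mixture of expert predictions $a_1, \dots, a_n$ under weights $p \in \Delta_n$ can be matched by a single prediction $a^*$ whose loss is at most the "mix loss":

$$
\ell(a^*, y) \;\le\; -\frac{1}{\eta} \log \sum_{i=1}^{n} p_i\, e^{-\eta\, \ell(a_i, y)} \quad \text{for all } y,
$$

and by Fenchel duality the right-hand side equals $\inf_{q \in \Delta_n} \big( \mathbb{E}_{i \sim q}\, \ell(a_i, y) + \tfrac{1}{\eta} \mathrm{KL}(q \,\|\, p) \big)$. The scaled KL term is the Bregman divergence of the scaled negative Shannon entropy; replacing it with the divergence of a general convex $\Phi$ is what yields $\Phi$-mixability.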
Generalised Mixability, Constant Regret, and Bayesian Updating
Mixability of a loss is known to characterise when constant regret bounds are
achievable in games of prediction with expert advice through the use of Vovk's
aggregating algorithm. We provide a new interpretation of mixability via convex
analysis that highlights the role of the Kullback-Leibler divergence in its
definition. This naturally generalises to what we call $\Phi$-mixability where
the Bregman divergence $D_\Phi$ replaces the KL divergence. We prove that
losses that are $\Phi$-mixable also enjoy constant regret bounds via a
generalised aggregating algorithm that is similar to mirror descent.
Comment: 12 pages.
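To make the Bayesian-updating connection concrete, the following is a minimal sketch (my illustration; the function name and toy interface are not from the paper) of Vovk's aggregating algorithm specialised to binary log loss with learning rate $\eta = 1$. For this loss the exponential-weights update is exactly Bayes' rule and the loss-minimising prediction is the posterior mixture; the paper's generalised algorithm replaces this exp/log machinery with the gradient maps of a general entropy, much as mirror descent generalises exponentiated gradient.

```python
import numpy as np

def aggregating_algorithm_log_loss(expert_preds, outcomes):
    """Vovk's Aggregating Algorithm specialised to binary log loss (eta = 1),
    where the weight update coincides with Bayesian updating.

    expert_preds: array of shape (T, n) -- each expert's predicted probability
                  of outcome 1 at each round (hypothetical toy interface).
    outcomes:     array of shape (T,) with values in {0, 1}.
    Returns the algorithm's cumulative log loss.
    """
    T, n = expert_preds.shape
    log_w = np.zeros(n)                # uniform prior over experts
    total_loss = 0.0
    for t in range(T):
        w = np.exp(log_w - log_w.max())
        w /= w.sum()                   # posterior weights over experts
        p = w @ expert_preds[t]        # mixture prediction (the AA substitution step for log loss)
        y = outcomes[t]
        total_loss += -np.log(p if y == 1 else 1.0 - p)
        # exp(-eta * loss) weight update = multiplying by the likelihood (Bayes' rule)
        lik = np.where(y == 1, expert_preds[t], 1.0 - expert_preds[t])
        log_w += np.log(lik + 1e-12)   # small floor for numerical safety
    return total_loss
```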
Fast rates in statistical and online learning
The speed with which a learning algorithm converges as it is presented with
more data is a central problem in machine learning --- a fast rate of
convergence means less data is needed for the same level of performance. The
pursuit of fast rates in online and statistical learning has led to the
discovery of many conditions in learning theory under which fast learning is
possible. We show that most of these conditions are special cases of a single,
unifying condition that comes in two forms: the central condition for 'proper'
learning algorithms that always output a hypothesis in the given model, and
stochastic mixability for online algorithms that may make predictions outside
of the model. We show that under surprisingly weak assumptions both conditions
are, in a certain sense, equivalent. The central condition has a
re-interpretation in terms of convexity of a set of pseudoprobabilities,
linking it to density estimation under misspecification. For bounded losses, we
show how the central condition enables a direct proof of fast rates and we
prove its equivalence to the Bernstein condition, itself a generalization of
the Tsybakov margin condition, both of which have played a central role in
obtaining fast rates in statistical learning. Yet, while the Bernstein
condition is two-sided, the central condition is one-sided, making it more
suitable to deal with unbounded losses. In its stochastic mixability form, our
condition generalizes both a stochastic exp-concavity condition identified by
Juditsky, Rigollet and Tsybakov and Vovk's notion of mixability. Our unifying
conditions thus provide a substantial step towards a characterization of fast
rates in statistical learning, similar to how classical mixability
characterizes constant regret in the sequential prediction with expert advice
setting.
Comment: 69 pages, 3 figures.
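For reference, the two key conditions the abstract relates can be stated as follows (standard formulations, paraphrased rather than quoted; $f^*$ denotes the risk minimiser in the model $\mathcal{F}$ and $Z$ a draw from the data distribution):

$$
\text{($\eta$-central condition):}\quad \mathbb{E}_{Z}\!\left[ e^{\eta\,(\ell(f^*, Z) - \ell(f, Z))} \right] \le 1 \quad \text{for all } f \in \mathcal{F},
$$

$$
\text{(Bernstein condition):}\quad \mathbb{E}_{Z}\!\left[ \big(\ell(f, Z) - \ell(f^*, Z)\big)^2 \right] \le B \,\Big( \mathbb{E}_{Z}\!\left[ \ell(f, Z) - \ell(f^*, Z) \right] \Big)^{\beta}.
$$

The exponential-moment inequality only penalises $f$ for beating $f^*$ too often (one tail of the excess loss), whereas the squared term in the Bernstein condition constrains deviations in both directions, which is the sense in which the central condition is one-sided and hence better suited to unbounded losses.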
Generalized Mixability via Entropic Duality
Mixability is a property of a loss which characterizes when
constant regret is possible in the game of prediction with expert
advice. We show that a key property of mixability generalizes, and
the exp and log operations present in the usual theory are not as
special as one might have thought.
In doing so we introduce a
more general notion of $\Phi$-mixability where $\Phi$ is a general
entropy (i.e., any convex function on probabilities). We show how a property
shared by the convex dual of any such entropy yields a natural
algorithm (the minimizer of a regret bound) which, analogous to the
classical Aggregating Algorithm, is guaranteed a constant regret
when used with $\Phi$-mixable losses.
We characterize which $\Phi$ have non-trivial $\Phi$-mixable losses and
relate $\Phi$-mixability and its associated Aggregating
Algorithm to potential-based methods, a Blackwell-like
condition, mirror descent, and risk measures from finance.
We also define a notion of "dominance" between different
entropies in terms of the regret bounds they guarantee and
conjecture that classical mixability gives optimal bounds, for which we
provide some supporting empirical evidence.
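To illustrate the mirror-descent connection mentioned above, here is a sketch (my own illustration under standard assumptions, not the paper's code) of an entropy-parameterised weight update: the mixture over experts is moved in the dual space of the entropy $\Phi$, and choosing the scaled Shannon entropy recovers the exponential-weights update of the classical Aggregating Algorithm.

```python
import numpy as np

def shannon_grad(q, eta=1.0):
    """grad Phi for Phi(q) = (1/eta) * sum(q log q) (up to an additive constant)."""
    return (np.log(q) + 1.0) / eta

def shannon_dual_grad(theta, eta=1.0):
    """grad Phi*: maps a dual vector back to the simplex (softmax for Shannon)."""
    z = eta * theta
    z -= z.max()                 # numerical stabilisation
    w = np.exp(z)
    return w / w.sum()

def gaa_update(q, losses, grad=shannon_grad, dual_grad=shannon_dual_grad):
    """One mirror-descent-style step: the dual point is shifted by the
    observed vector of expert losses, then mapped back to the simplex."""
    return dual_grad(grad(q) - losses)

# sanity check: for the Shannon entropy this reduces to exponential weights
q = np.full(3, 1.0 / 3.0)
losses = np.array([0.5, 0.1, 0.9])
print(gaa_update(q, losses))     # proportional to q * exp(-losses)
```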
Composite multiclass losses
We consider loss functions for multiclass prediction problems. We show when a multiclass loss can be expressed as a “proper composite loss”, which is the composition of a proper loss and a link function. We extend existing results for binary losses to multiclass losses. We subsume results on “classification calibration” by relating it to properness. We determine the stationarity condition, Bregman representation, order-sensitivity, and quasi-convexity of multiclass proper losses. We then characterise the existence and uniqueness of the composite representation for multiclass losses. We show how the composite representation is related to other core properties of a loss: mixability, admissibility and (strong) convexity of multiclass losses, which we characterise in terms of the Hessian of the Bayes risk. We show that the simple integral representation for binary proper losses cannot be extended to multiclass losses, but offer concrete guidance regarding how to design different loss functions. The conclusion drawn from these results is that the proper composite representation is a natural and convenient tool for the design of multiclass loss functions.
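As a concrete instance of the representation (my illustration; the paper treats the general case): the familiar multiclass logistic loss is a proper composite loss, namely the composition of the log loss, which is proper, with the inverse of the softmax link taking real-valued scores to the probability simplex.

```python
import numpy as np

def softmax(v):
    """Inverse link psi^{-1}: real-valued scores -> probability simplex."""
    z = v - v.max()              # numerical stabilisation
    w = np.exp(z)
    return w / w.sum()

def log_loss(y, p):
    """Proper loss lambda(y, p): negative log-likelihood of the true class y."""
    return -np.log(p[y])

def composite_loss(y, v):
    """Proper composite loss: lambda composed with the inverse link."""
    return log_loss(y, softmax(v))

# the familiar multiclass cross-entropy on logits falls out of the composition
scores = np.array([2.0, 0.5, -1.0])
print(composite_loss(0, scores))
```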
Adaptivity in Online and Statistical Learning
Many modern machine learning algorithms, though successful, are still based on heuristics. In a typical application, such heuristics may manifest in the choice of a specific neural network architecture, its number of parameters, or the learning rate during training. Relying on these heuristics is not ideal from a computational perspective (often involving multiple runs of the algorithm), and can also lead to over-fitting in some cases. This motivates the following question: for which machine learning tasks/settings do there exist efficient algorithms that automatically adapt to the best parameters? Characterizing the settings where this is the case and designing corresponding (parameter-free) algorithms within the online learning framework constitutes one of this thesis' primary goals. Towards this end, we develop algorithms for constrained and unconstrained online convex optimization that can automatically adapt to various parameters of interest, such as the Lipschitz constant, the curvature of the sequence of losses, and the norm of the comparator. We also derive new performance lower bounds characterizing the limits of adaptivity for algorithms in these settings. Part of systematizing the choice of machine learning methods also involves having "certificates" for the performance of algorithms. In the statistical learning setting, this translates to having (tight) generalization bounds. Adaptivity can manifest here through data-dependent bounds that become small whenever the problem is "easy". In this thesis, we provide such data-dependent bounds for the expected loss (the standard risk measure) and other risk measures. We also explore how such bounds can be used in the context of risk-monotonicity.
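As a small illustration of the kind of parameter adaptivity at stake (an AdaGrad-style step size; a generic textbook example, not a result from the thesis): online gradient descent over a bounded domain can set its step size from the observed gradient magnitudes, removing the need to know a Lipschitz constant in advance.

```python
import numpy as np

def adagrad_norm_ogd(grads, radius=1.0):
    """Online projected gradient descent on the Euclidean ball of the given
    radius, with the AdaGrad-norm step size eta_t = radius / sqrt(sum ||g||^2).
    The step size adapts to the observed gradients, so no Lipschitz constant
    needs to be known in advance (illustrative sketch).
    """
    d = len(grads[0])
    x = np.zeros(d)
    iterates, sq_sum = [], 0.0
    for g in grads:
        iterates.append(x.copy())
        sq_sum += float(g @ g)
        eta = radius / np.sqrt(sq_sum) if sq_sum > 0 else 0.0
        x = x - eta * g
        norm = np.linalg.norm(x)
        if norm > radius:        # project back onto the ball
            x *= radius / norm
    return iterates
```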