26 research outputs found
Generalized Mixability via Entropic Duality
Mixability is a property of a loss which characterizes when fast convergence
is possible in the game of prediction with expert advice. We show that a key
property of mixability generalizes, and the exp and log operations present in
the usual theory are not as special as one might have thought. In doing this we
introduce a more general notion of Φ-mixability, where Φ is a general
entropy (i.e., any convex function on probabilities). We show how a property
shared by the convex dual of any such entropy yields a natural algorithm (the
minimizer of a regret bound) which, analogous to the classical aggregating
algorithm, is guaranteed a constant regret when used with Φ-mixable
losses. We characterize precisely which Φ have Φ-mixable losses and
put forward a number of conjectures about the optimality and relationships
between different choices of entropy.
Comment: 20 pages, 1 figure. Supersedes the work in arXiv:1403.2433 [cs.LG]
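For orientation, the two notions of mixability discussed in this abstract can be written side by side. The notation below (η for the learning rate, ℓ for the loss, v_k for expert predictions, D_Φ for the Bregman divergence of Φ) is a reconstruction for illustration and may differ in detail from the paper's:

```latex
% Classical eta-mixability: for every weight vector q over the K experts
% there is a prediction p whose loss is bounded by the exp/log "mix loss":
\ell(p, y) \;\le\; -\tfrac{1}{\eta} \ln \sum_{k=1}^{K} q_k \, e^{-\eta\, \ell(v_k, y)}
  \qquad \forall y.

% Phi-mixability replaces the exp/log aggregation by its entropic-dual form,
% with D_\Phi the Bregman divergence generated by the entropy \Phi:
\ell(p, y) \;\le\; \inf_{q' \in \Delta^K}
  \Big( \textstyle\sum_k q'_k \, \ell(v_k, y) + D_\Phi(q', q) \Big)
  \qquad \forall y.

% Taking \Phi(q) = \tfrac{1}{\eta} \sum_k q_k \ln q_k makes D_\Phi equal to
% \tfrac{1}{\eta}\,\mathrm{KL}(q' \,\|\, q), recovering the classical case.
```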
Generalised Mixability, Constant Regret, and Bayesian Updating
Mixability of a loss is known to characterise when constant regret bounds are
achievable in games of prediction with expert advice through the use of Vovk's
aggregating algorithm. We provide a new interpretation of mixability via convex
analysis that highlights the role of the Kullback-Leibler divergence in its
definition. This naturally generalises to what we call Φ-mixability, where
the Bregman divergence D_Φ replaces the KL divergence. We prove that
losses that are Φ-mixable also enjoy constant regret bounds via a
generalised aggregating algorithm that is similar to mirror descent.
Comment: 12 pages
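A minimal sketch of the weight update behind such a generalised aggregating algorithm, specialised to the (negative) Shannon entropy, where the mirror-descent-style step reduces to the familiar exponential-weights / Bayesian update. The function name and signature are illustrative, not taken from the paper:

```python
import numpy as np

def mirror_update(q, losses, eta=1.0):
    """One mirror-descent-style step of an aggregating algorithm.

    For the Shannon entropy, the abstract step
        q' = grad Phi*( grad Phi(q) - eta * losses )
    is exactly the multiplicative / Bayesian update below.
    (Illustrative sketch, not the paper's general algorithm.)
    """
    w = q * np.exp(-eta * losses)
    return w / w.sum()

# Three experts; the second consistently suffers the smallest loss,
# so the weight vector concentrates on it.
q = np.ones(3) / 3
for _ in range(50):
    q = mirror_update(q, np.array([0.9, 0.1, 0.5]))
print(np.argmax(q))  # mass concentrates on the best expert
```

For a general entropy Φ, the exp/softmax pair would be replaced by the gradient maps of Φ and its convex dual.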
Generalized Mixability via Entropic Duality
Mixability is a property of a loss which characterizes when
constant regret is possible in the game of prediction with expert
advice. We show that a key property of mixability generalizes, and
the exp and log operations present in the usual theory are not as
special as one might have thought.
In doing so we introduce a
more general notion of Φ-mixability, where Φ is a general
entropy (i.e., any convex function on probabilities). We show how a property
shared by the convex dual of any such entropy yields a natural
algorithm (the minimizer of a regret bound) which, analogous to the
classical Aggregating Algorithm, is guaranteed a constant regret
when used with Φ-mixable losses.
We characterize which Φ have non-trivial Φ-mixable losses and
relate Φ-mixability and its associated Aggregating
Algorithm to potential-based methods, a Blackwell-like
condition, mirror descent, and risk measures from finance.
We also define a notion of "dominance" between different
entropies in terms of the bounds they guarantee, and
conjecture that classical mixability gives optimal bounds, for which we
provide some supporting empirical evidence.
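The classical Aggregating Algorithm that these abstracts generalise can be demonstrated concretely for log loss, where the constant-regret guarantee is the familiar ln K bound of a Bayes mixture. The code below is a standard textbook instantiation (with learning rate η = 1), not code from the paper:

```python
import numpy as np

def log_loss(p, y):
    """Log loss of forecast p (probability of outcome 1) on outcome y."""
    return -np.log(p if y == 1 else 1.0 - p)

def aggregating_algorithm(expert_probs, outcomes):
    """Vovk's Aggregating Algorithm for log loss with eta = 1.

    For log loss the substitution function is simply the Bayes mixture
    of the experts' forecasts.  Returns the algorithm's cumulative loss
    and the best single expert's cumulative loss.
    """
    T, K = expert_probs.shape
    q = np.ones(K) / K                 # uniform prior over experts
    alg_loss = 0.0
    expert_loss = np.zeros(K)
    for t in range(T):
        p = q @ expert_probs[t]        # Bayes-mixture prediction
        y = outcomes[t]
        alg_loss += log_loss(p, y)
        step = np.array([log_loss(v, y) for v in expert_probs[t]])
        expert_loss += step
        q = q * np.exp(-step)          # Bayesian / exp-weights update
        q /= q.sum()
    return alg_loss, expert_loss.min()

rng = np.random.default_rng(0)
T, K = 200, 4
probs = rng.uniform(0.05, 0.95, size=(T, K))
ys = rng.integers(0, 2, size=T)
L_alg, L_best = aggregating_algorithm(probs, ys)
# Constant regret: the bound L_alg <= L_best + ln K holds for any sequence.
assert L_alg <= L_best + np.log(K) + 1e-9
```

The final inequality is deterministic (it is the Bayes-mixture bound), which is what "constant regret" means here: the gap never grows with T.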
Fast rates in statistical and online learning
The speed with which a learning algorithm converges as it is presented with
more data is a central problem in machine learning: a fast rate of
convergence means that less data is needed for the same level of performance. The
pursuit of fast rates in online and statistical learning has led to the
discovery of many conditions in learning theory under which fast learning is
possible. We show that most of these conditions are special cases of a single,
unifying condition, that comes in two forms: the central condition for 'proper'
learning algorithms that always output a hypothesis in the given model, and
stochastic mixability for online algorithms that may make predictions outside
of the model. We show that under surprisingly weak assumptions both conditions
are, in a certain sense, equivalent. The central condition has a
re-interpretation in terms of convexity of a set of pseudoprobabilities,
linking it to density estimation under misspecification. For bounded losses, we
show how the central condition enables a direct proof of fast rates and we
prove its equivalence to the Bernstein condition, itself a generalization of
the Tsybakov margin condition, both of which have played a central role in
obtaining fast rates in statistical learning. Yet, while the Bernstein
condition is two-sided, the central condition is one-sided, making it more
suitable to deal with unbounded losses. In its stochastic mixability form, our
condition generalizes both a stochastic exp-concavity condition identified by
Juditsky, Rigollet and Tsybakov and Vovk's notion of mixability. Our unifying
conditions thus provide a substantial step towards a characterization of fast
rates in statistical learning, similar to how classical mixability
characterizes constant regret in the sequential prediction with expert advice
setting.
Comment: 69 pages, 3 figures
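The "central condition" referred to above is usually stated as an exponential-moment inequality. The following is a reconstruction of its standard form (the symbols F, f*, η, ℓ are my notation; consult the paper for the precise statement and the weak/strong variants):

```latex
% The eta-central condition: some f* in the model F satisfies, for all f in F,
\exists f^\ast \in \mathcal{F} \;\; \forall f \in \mathcal{F}: \qquad
\mathbb{E}_{Z}\!\left[ e^{\,\eta\,\left( \ell(f^\ast, Z) - \ell(f, Z) \right)} \right] \;\le\; 1.

% By Jensen's inequality this implies E[ell(f*, Z)] <= E[ell(f, Z)], so f* is
% risk-optimal; but the condition asks for more than optimality of the mean:
% it is one-sided, constraining only how often f can beat f* by a lot, which
% is why it remains usable for unbounded losses.
```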
Transitions, Losses, and Re-parameterizations: Elements of Prediction Games
This thesis presents some geometric insights into three different
types of two-player prediction games – namely general learning
task, prediction with expert advice, and online convex
optimization. These games differ in the nature of the opponent
(stochastic, adversarial, or intermediate), the order of the
players' move, and the utility function. The insights shed some
light on the understanding of the intrinsic barriers of the
prediction problems and the design of computationally efficient
learning algorithms with strong theoretical guarantees (such as
generalizability, statistical consistency, and constant regret
etc.). The main contributions of the thesis are:
• Leveraging concepts from statistical decision theory, we
develop the toolkit needed to formalize the prediction games
mentioned above and to quantify their objectives.
• We investigate the cost-sensitive classification problem,
which is an instantiation of the general learning task, and
demonstrate its hardness by deriving lower
bounds on its minimax risk.
Then we analyse the impact of imposing constraints (such as
a corruption level or privacy requirements) on the general
learning task. This naturally leads us to a further investigation
of strong data processing inequalities, a fundamental
concept in information theory.
Furthermore, by extending the hypothesis testing interpretation
of standard privacy definitions, we propose an asymmetric
(prioritized) privacy definition.
• We study efficient merging schemes for the prediction with expert
advice problem and the geometric properties (mixability and
exp-concavity) of the loss functions that guarantee constant
regret bounds. As a result of our study, we construct two types
of link functions (one via a calculus approach and the other via a
geometric approach) that can re-parameterize any binary mixable
loss into an exp-concave loss.
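To make the exp-concavity property in this bullet concrete: a loss ℓ is α-exp-concave if p ↦ exp(-α ℓ(p, y)) is concave in the prediction p. Log loss is the canonical example (already 1-exp-concave, so no re-parameterization is needed), since exp(-ℓ) is then just p or 1-p, which is linear and hence concave. The numerical midpoint check below is an illustration, not the thesis's link-function construction:

```python
import numpy as np

def log_loss(p, y):
    """Log loss of forecast p (probability of outcome 1) on outcome y."""
    return -np.log(p if y == 1 else 1.0 - p)

# alpha-exp-concavity: p -> exp(-alpha * loss(p, y)) must be concave.
# For log loss with alpha = 1 this map is p (if y = 1) or 1 - p (if y = 0),
# i.e. linear, so midpoint concavity holds with equality on any grid.
alpha = 1.0
grid = np.linspace(0.05, 0.95, 19)
for y in (0, 1):
    f = np.exp(-alpha * np.array([log_loss(p, y) for p in grid]))
    mid = np.exp(-alpha * np.array([log_loss((a + b) / 2, y)
                                    for a, b in zip(grid[:-1], grid[1:])]))
    # concavity: value at the midpoint dominates the average of endpoints
    assert np.all(mid >= (f[:-1] + f[1:]) / 2 - 1e-12)
```

A link function, in this context, is exactly a change of prediction variable that turns a merely mixable loss into one passing a check like this for some α > 0.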
• We focus on some recent algorithms for online convex
optimization, which exploit the easy nature of the data (such as
sparsity, predictable sequences, and curved losses) in order to
achieve better regret bounds while still ensuring protection against
the worst-case scenario. We unify some of these existing
techniques to obtain new update rules for the cases when these
easy instances occur together, and analyse their regret
bounds.
Adaptivity in Online and Statistical Learning
Many modern machine learning algorithms, though successful, are still based on heuristics. In a typical application, such heuristics may manifest in the choice of a specific neural network structure, its number of parameters, or the learning rate during training. Relying on these heuristics is not ideal from a computational perspective (often involving multiple runs of the algorithm) and can also lead to over-fitting in some cases. This motivates the following question: for which machine learning tasks/settings do there exist efficient algorithms that automatically adapt to the best parameters? Characterizing the settings where this is the case and designing corresponding (parameter-free) algorithms within the online learning framework constitutes one of this thesis' primary goals. Towards this end, we develop algorithms for constrained and unconstrained online convex optimization that can automatically adapt to various parameters of interest, such as the Lipschitz constant, the curvature of the sequence of losses, and the norm of the comparator. We also derive new performance lower bounds characterizing the limits of adaptivity for algorithms in these settings. Part of systematizing the choice of machine learning methods also involves having "certificates" for the performance of algorithms. In the statistical learning setting, this translates to having (tight) generalization bounds. Adaptivity can manifest here through data-dependent bounds that become small whenever the problem is "easy". In this thesis, we provide such data-dependent bounds for the expected loss (the standard risk measure) and for other risk measures. We also explore how such bounds can be used in the context of risk-monotonicity.
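The simplest illustration of the kind of adaptivity described here is an AdaGrad-style step size, which adapts to the observed gradient magnitudes instead of requiring a hand-tuned learning rate that depends on the (unknown) Lipschitz constant. This is a generic textbook sketch, not one of the thesis's algorithms; the function name and the diameter parameter `D` are illustrative:

```python
import numpy as np

def adagrad_scalar(grad_fn, x0, steps=500, D=1.0):
    """Gradient descent with an AdaGrad-style adaptive step size.

    The step size eta_t = D / sqrt(sum of squared gradients so far)
    adapts to the gradient scale, so no Lipschitz constant needs to be
    known in advance.  (Illustrative sketch, not from the thesis.)
    """
    x, g2 = x0, 0.0
    for _ in range(steps):
        g = grad_fn(x)
        g2 += g * g
        if g2 > 0:
            x -= D / np.sqrt(g2) * g
    return x

# Minimise f(x) = (x - 3)^2 without hand-tuning a learning rate:
# the gradient is 2(x - 3), and the iterate converges to 3.
x_star = adagrad_scalar(lambda x: 2.0 * (x - 3.0), x0=0.0)
```

Parameter-free methods of the kind studied in the thesis push this idea further, adapting also to quantities such as the comparator norm rather than only the gradient scale.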