Efficient Transductive Online Learning via Randomized Rounding
Most traditional online learning algorithms are based on variants of mirror
descent or follow-the-leader. In this paper, we present an online algorithm
based on a completely different approach, tailored for transductive settings,
which combines "random playout" and randomized rounding of loss subgradients.
As an application of our approach, we present the first computationally
efficient online algorithm for collaborative filtering with trace-norm
constrained matrices. As a second application, we solve an open question
linking batch learning and transductive online learning.
Comment: To appear in a Festschrift in honor of V.N. Vapnik. Preliminary version presented at NIPS 2011.
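Below is a minimal, purely illustrative sketch of the "random playout" idea in a transductive binary-prediction setting: all instances are known in advance, the still-unrevealed labels are completed with random coin flips, a batch ERM oracle (here a hypothetical least-squares stand-in) is queried on the completed sequence, and its real-valued score is randomly rounded to a +/-1 prediction. The paper's actual algorithm applies randomized rounding to loss subgradients and carries formal regret guarantees; none of the specifics below are taken from it.

```python
import numpy as np

rng = np.random.default_rng(0)

def erm_oracle(X, y):
    """Toy batch ERM oracle: a least-squares linear predictor (a hypothetical
    stand-in for whatever offline learner the transductive setting provides)."""
    w, *_ = np.linalg.lstsq(X, y, rcond=None)
    return w

def random_playout_predict(X, observed_labels, t):
    """Predict the label of instance t via random playout: complete the
    not-yet-revealed labels with random +/-1 flips, fit the ERM oracle on the
    completed sequence, and randomly round its score on instance t."""
    T = X.shape[0]
    playout = rng.choice([-1.0, 1.0], size=T - t)             # random future labels
    y_completed = np.concatenate([observed_labels, playout])  # observed prefix + playout
    w = erm_oracle(X, y_completed)
    score = X[t] @ w
    p_plus = np.clip((score + 1.0) / 2.0, 0.0, 1.0)           # rounding probability
    return 1.0 if rng.random() < p_plus else -1.0

# Toy transductive run: 30 instances known up front, labels revealed one at a time.
T, d = 30, 5
X = rng.standard_normal((T, d))
y_true = np.sign(X @ rng.standard_normal(d))

mistakes, observed = 0, np.empty(0)
for t in range(T):
    y_hat = random_playout_predict(X, observed, t)
    mistakes += int(y_hat != y_true[t])
    observed = np.append(observed, y_true[t])   # true label revealed after predicting
print("mistake rate:", mistakes / T)
```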
Unconstrained Online Linear Learning in Hilbert Spaces: Minimax Algorithms and Normal Approximations
We study algorithms for online linear optimization in Hilbert spaces,
focusing on the case where the player is unconstrained. We develop a novel
characterization of a large class of minimax algorithms, recovering, and even
improving, several previous results as immediate corollaries. Moreover, using
our tools, we develop an algorithm that provides a regret bound of
$O\!\left(U \sqrt{T \log\!\left(U \sqrt{T} \log^2 T + 1\right)}\right)$, where $U$ is
the norm of an arbitrary comparator and both $T$ and $U$ are unknown to
the player. This bound is optimal up to $\sqrt{\log \log T}$ terms. When $T$ is
known, we derive an algorithm with an optimal regret bound (up to constant
factors). For both the known and unknown $T$ case, a Normal approximation to
the conditional value of the game proves to be the key analysis tool.
Comment: Proceedings of the 27th Annual Conference on Learning Theory (COLT 2014).
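To make the unconstrained setting concrete, here is a small potential-based online linear learner whose play is the gradient of the potential eps * exp(||theta||^2 / (2t)), with theta the sum of negative loss gradients; the bet grows with the accumulated evidence, and no comparator norm U or horizon T needs to be known. The potential and the constant eps are assumptions chosen for illustration only; this is in the spirit of the algorithms the abstract discusses, not its exact minimax or Normal-approximation construction.

```python
import numpy as np

def unconstrained_linear_learner(gradients, eps=1.0):
    """Potential-based unconstrained online linear learner (illustrative sketch).
    The play at round t is the gradient of eps * exp(||theta||^2 / (2 t)),
    where theta is the accumulated negative gradient, so the learner can move
    arbitrarily far without knowing a comparator bound U in advance."""
    theta = np.zeros(gradients.shape[1])
    total_loss = 0.0
    for t, g in enumerate(gradients, start=1):
        w = theta * (eps / t) * np.exp(theta @ theta / (2.0 * t))  # current play
        total_loss += float(w @ g)                                 # linear loss <w_t, g_t>
        theta -= g
    return total_loss

# Toy run: mildly biased gradients; neither the horizon nor ||u|| is given to the learner.
rng = np.random.default_rng(1)
T, d = 500, 3
G = rng.standard_normal((T, d)) * 0.1 + np.array([-0.1, 0.0, 0.05])
u = np.array([2.0, 0.0, -1.0])                 # an arbitrary fixed comparator
print("learner loss:   ", round(unconstrained_linear_learner(G), 2))
print("comparator loss:", round(float((G @ u).sum()), 2))
```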
Towards Minimax Online Learning with Unknown Time Horizon
We consider online learning when the time horizon is unknown. We apply a
minimax analysis, beginning with the fixed horizon case, and then moving on to
two unknown-horizon settings, one that assumes the horizon is chosen randomly
according to some known distribution, and the other which allows the adversary
full control over the horizon. For the random horizon setting with restricted
losses, we derive a fully optimal minimax algorithm. For the adversarial
horizon setting, we prove a nontrivial lower bound showing that the
adversary obtains strictly more power than when the horizon is fixed and known.
Based on the minimax solution of the random horizon setting, we then propose a
new adaptive algorithm which "pretends" that the horizon is drawn from a
distribution from a special family, but no matter how the actual horizon is
chosen, the worst-case regret is of the optimal rate. Furthermore, our
algorithm can be combined and applied in many ways, for instance, to online
convex optimization, follow-the-perturbed-leader, the exponential weights
algorithm, and first-order bounds. Experiments show that our algorithm
outperforms many other existing algorithms in an online linear optimization setting.
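As a concrete horizon-free baseline of the kind the abstract's algorithm is compared and combined with, the sketch below runs exponential weights (Hedge) with the time-varying learning rate eta_t = sqrt(8 ln N / t), which never needs to know the horizon. This is a standard anytime algorithm shown only to make the unknown-horizon setting concrete; it is not the adaptive "pretend the horizon is random" algorithm proposed in the paper.

```python
import numpy as np

def anytime_hedge(loss_matrix):
    """Exponential weights / Hedge with horizon-free learning rate
    eta_t = sqrt(8 ln N / t). Returns the learner's expected loss and the
    loss of the best single expert in hindsight."""
    T, N = loss_matrix.shape
    cum_loss = np.zeros(N)                 # cumulative loss of each expert
    learner_loss = 0.0
    for t in range(1, T + 1):
        eta = np.sqrt(8.0 * np.log(N) / t)
        w = np.exp(-eta * (cum_loss - cum_loss.min()))   # shifted for stability
        p = w / w.sum()
        losses = loss_matrix[t - 1]
        learner_loss += float(p @ losses)  # expected loss of the randomized play
        cum_loss += losses
    return learner_loss, float(cum_loss.min())

# Toy run with losses in [0, 1]; the horizon T is never passed to the update rule.
rng = np.random.default_rng(2)
T, N = 1000, 10
L = rng.random((T, N))
L[:, 3] *= 0.8                             # expert 3 is slightly better on average
ours, best = anytime_hedge(L)
print(f"learner: {ours:.1f}   best expert: {best:.1f}   regret: {ours - best:.1f}")
```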