Temporal variability in implicit online learning
In the setting of online learning, implicit algorithms turn out to be highly successful from a practical standpoint. However, the tightest regret analyses only show marginal improvements over Online Mirror Descent. In this work, we shed light on this behavior by carrying out a careful regret analysis. We prove a novel static regret bound that depends on the temporal variability of the sequence of loss functions, a quantity which is often encountered when considering dynamic competitors. We show, for example, that the regret can be constant if the temporal variability is constant and the learning rate is tuned appropriately, without the need for smooth losses. Moreover, we present an adaptive algorithm that achieves this regret bound without prior knowledge of the temporal variability, and we prove a matching lower bound. Finally, we validate our theoretical findings on classification and regression datasets.
https://proceedings.neurips.cc/paper/2020/file/9239be5f9dc4058ec647f14fd04b1290-Paper.pdf
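As a concrete illustration (a sketch, not the paper's pseudocode), here is a minimal implicit online update for squared loss under the Euclidean regularizer, where the proximal step admits a closed form; the paper's adaptive tuning of the learning rate is omitted, and the function name and interface are hypothetical.

```python
import numpy as np

def implicit_update_regression(stream, dim, eta=1.0):
    """Implicit online update for squared loss, Euclidean geometry.

    Each round solves
        w_{t+1} = argmin_w  eta/2 * (w.x_t - y_t)^2 + 1/2 * ||w - w_t||^2
    exactly, which for this loss has a closed form.
    """
    w = np.zeros(dim)
    for x, y in stream:
        residual = w @ x - y
        # closed-form minimizer of the implicit (proximal) step
        w = w - (eta * residual / (1.0 + eta * (x @ x))) * x
    return w
```

For contrast, the explicit Online Mirror Descent step would be w ← w − eta * residual * x; the implicit version dampens the step by the factor 1 + eta * ||x||^2, which is what keeps it stable even for large learning rates.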
Online Learning with Multiple Operator-valued Kernels
We consider the problem of learning a vector-valued function f in an online
learning setting. The function f is assumed to lie in a reproducing kernel
Hilbert space of operator-valued kernels. We describe two online algorithms for
learning f while taking into account the output structure. A first contribution
is an algorithm, ONORMA, that extends the standard kernel-based online learning
algorithm NORMA from the scalar-valued to the operator-valued setting. We report a
cumulative error bound that holds both for classification and regression. We
then define a second algorithm, MONORMA, which addresses the limitation of
pre-defining the output structure in ONORMA by sequentially learning a linear
combination of operator-valued kernels. Our experiments show that the proposed
algorithms achieve good performance at low computational cost.
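To make the update concrete, the following is a rough sketch of a NORMA-style online step lifted to vector-valued outputs, restricted for simplicity to a separable operator-valued kernel K(x, x') = k(x, x') * I; this simplification and all names are ours, not the paper's.

```python
import numpy as np

def onorma_sketch(X, Y, k, eta=0.1, lam=0.01):
    """ONORMA-style online learning of a vector-valued function.

    Maintains f_t(x) = sum_s k(x_s, x) * c_s with vector coefficients c_s,
    updated by regularized stochastic gradient on the squared loss.
    Assumes the separable kernel K(x, x') = k(x, x') * I.
    """
    anchors, coefs = [], []
    for x, y in zip(X, Y):
        # current prediction f_t(x); zero before any data is seen
        pred = sum(k(xs, x) * c for xs, c in zip(anchors, coefs)) if coefs \
            else np.zeros_like(y, dtype=float)
        # shrink past coefficients: gradient of the RKHS norm regularizer
        coefs = [(1.0 - eta * lam) * c for c in coefs]
        # new expansion coefficient from the loss gradient at (x, y)
        anchors.append(x)
        coefs.append(-eta * (pred - y))
    return anchors, coefs
```

MONORMA would additionally maintain one such expansion per candidate kernel and learn the combination weights online.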
Fast MLE Computation for the Dirichlet Multinomial
Given a collection of categorical data, we want to find the parameters of a
Dirichlet distribution which maximizes the likelihood of that data. Newton's
method is typically used for this purpose, but current implementations require
reading through the entire dataset on each iteration. In this paper, we propose
a modification which requires only a single pass through the dataset and
substantially decreases running time. Furthermore, we analyze both theoretically
and empirically the performance of the proposed algorithm, and provide an open
source implementation.
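The paper's Newton-based algorithm is not reproduced here, but the single-pass idea can be sketched with the simpler fixed-point iteration of Minka, using the identity psi(n + a) - psi(a) = sum_{j=0}^{n-1} 1/(a + j): one pass over the data builds tail-count statistics, after which each iteration costs time independent of the number of documents. Function and variable names are illustrative.

```python
import numpy as np

def dirichlet_multinomial_mle(counts, n_iters=200, tol=1e-8):
    """Fixed-point MLE for Dirichlet-multinomial concentration parameters.

    counts: (num_docs, K) integer array of per-document category counts;
            assumes every category appears at least once.
    One (vectorized) pass builds tail counts; iterations never revisit the data.
    """
    counts = np.asarray(counts)
    totals = counts.sum(axis=1)
    K, max_c, max_n = counts.shape[1], counts.max(), totals.max()

    # Single pass over the data:
    # D[k, j] = #documents with counts[i, k] > j
    # T[j]    = #documents with totals[i] > j
    D = (counts[:, :, None] > np.arange(max_c)).sum(axis=0)
    T = (totals[:, None] > np.arange(max_n)).sum(axis=0)

    alpha = np.ones(K)
    for _ in range(n_iters):
        a0 = alpha.sum()
        # psi(n + a) - psi(a) = sum_{j<n} 1/(a + j), aggregated via D and T
        num = (D / (alpha[:, None] + np.arange(max_c))).sum(axis=1)
        den = (T / (a0 + np.arange(max_n))).sum()
        new_alpha = alpha * num / den
        if np.abs(new_alpha - alpha).max() < tol:
            return new_alpha
        alpha = new_alpha
    return alpha
```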
Online Local Learning via Semidefinite Programming
In many online learning problems we are interested in predicting local
information about some universe of items. For example, we may want to know
whether two items are in the same cluster rather than computing an assignment
of items to clusters; we may want to know which of two teams will win a game
rather than computing a ranking of teams. Although finding the optimal
clustering or ranking is typically intractable, it may be possible to predict
the relationships between items as well as if you could solve the global
optimization problem exactly.
Formally, we consider an online learning problem in which a learner
repeatedly guesses a pair of labels (l(x), l(y)) and receives an adversarial
payoff depending on those labels. The learner's goal is to receive a payoff
nearly as good as the best fixed labeling of the items. We show that a simple
algorithm based on semidefinite programming can obtain asymptotically optimal
regret in the case where the number of possible labels is O(1), resolving an
open problem posed by Hazan, Kale, and Shalev-Shwartz. Our main technical
contribution is a novel use and analysis of the log determinant regularizer,
exploiting the observation that log det(A + I) upper bounds the entropy of any
distribution with covariance matrix A.
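As a schematic of the approach (a sketch, not the paper's exact algorithm), one can write the follow-the-regularized-leader step over PSD moment matrices with the log-determinant regularizer using cvxpy; the constraint set and all names below are illustrative assumptions, and the rounding of the matrix into pairwise label predictions is omitted.

```python
import numpy as np
import cvxpy as cp

def ftrl_logdet_step(payoff_sum, eta):
    """One FTRL step with a log-determinant regularizer.

    payoff_sum: symmetric matrix accumulating the linear payoffs
                <P_t, X> observed so far.
    Returns the PSD "moment" matrix to play next round.
    """
    n = payoff_sum.shape[0]
    X = cp.Variable((n, n), PSD=True)
    objective = cp.Maximize(
        cp.trace(payoff_sum @ X)
        + (1.0 / eta) * cp.log_det(X + np.eye(n))  # the regularizer from the abstract
    )
    # illustrative normalization: bounded diagonal, as in moment relaxations
    problem = cp.Problem(objective, [cp.diag(X) <= 1])
    problem.solve()
    return X.value
```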