Label optimal regret bounds for online local learning
We resolve an open question from (Christiano, 2014b) posed in COLT'14
regarding the optimal dependency of the regret achievable for online local
learning on the size of the label set. In this framework the algorithm is shown
a pair of items at each step, chosen from a set of n items. The learner then
predicts a label for each item, from a label set of size L, and receives a
real-valued payoff. This is a natural framework which captures many interesting
scenarios such as collaborative filtering, online gambling, and online max cut,
among others. (Christiano, 2014a) designed an efficient online learning
algorithm for this problem achieving a regret of O(√(nL^3 T)), where T
is the number of rounds. Information theoretically, one can achieve a regret of
O(√(nLT)). One of the main open questions left in this framework
concerns closing the above gap.
In this work, we provide a complete answer to the question above via two main
results. We show, via a tighter analysis, that the semi-definite programming
based algorithm of (Christiano, 2014a) in fact achieves a regret of
O(√(nLT)). Second, we show a matching computational lower bound. Namely,
we show that a polynomial time algorithm for online local learning with lower
regret would imply a polynomial time algorithm for the planted clique problem
which is widely believed to be hard. We prove a similar hardness result under a
related conjecture concerning planted dense subgraphs that we put forth. Unlike
planted clique, the planted dense subgraph problem does not have any known
quasi-polynomial time algorithms.
Computational lower bounds for online learning are relatively rare, and we
hope that the ideas developed in this work will lead to lower bounds for other
online learning scenarios as well.
Comment: 13 pages; Changes from previous version: small changes to proofs of Theorems 1 & 2, as well as a small rewrite of the introduction (this version is the same as the camera-ready copy in COLT '15).
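The framework described in this abstract (pairs of items shown each round, labels drawn from a set of size L, payoffs judged against the best fixed labeling in hindsight) can be illustrated with a toy simulation. This is only an illustrative sketch, not the semi-definite programming algorithm from the paper: the learner is a naive uniform-random baseline, the payoff function and hidden clustering are hypothetical, and the best fixed labeling is found by brute force, which is feasible only for tiny n and L.

```python
import itertools
import random

def payoff(x, y, lx, ly, same_cluster):
    # Hypothetical payoff: 1 if the pair is labeled identically exactly when
    # the hidden ground truth puts the items in the same cluster, else 0.
    return 1.0 if (lx == ly) == same_cluster[(x, y)] else 0.0

def run(n=6, L=2, T=500, seed=0):
    rng = random.Random(seed)
    # Hidden ground-truth clustering that drives the payoffs.
    truth = [rng.randrange(L) for _ in range(n)]
    pairs = list(itertools.combinations(range(n), 2))
    same_cluster = {(x, y): truth[x] == truth[y] for (x, y) in pairs}

    # Naive learner: predicts uniformly random labels each round.
    total = 0.0
    history = []
    for _ in range(T):
        x, y = rng.choice(pairs)
        lx, ly = rng.randrange(L), rng.randrange(L)
        total += payoff(x, y, lx, ly, same_cluster)
        history.append((x, y))

    # Best fixed labeling in hindsight, by brute force over all L**n labelings.
    best = max(
        sum(payoff(x, y, lab[x], lab[y], same_cluster) for (x, y) in history)
        for lab in itertools.product(range(L), repeat=n)
    )
    return total, best
```

The gap `best - run(...)[0]` is the regret of the baseline; the paper's point is that an efficient algorithm can drive this gap to O(√(nLT)) without ever computing a global labeling explicitly.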
Distributed Online Big Data Classification Using Context Information
Distributed, online data mining systems have emerged as a result of
applications requiring analysis of large amounts of correlated and
high-dimensional data produced by multiple distributed data sources. We propose
a distributed online data classification framework where data is gathered by
distributed data sources and processed by a heterogeneous set of distributed
learners which learn online, at run-time, how to classify the different data
streams either by using their locally available classification functions or by
helping each other by classifying each other's data. Importantly, since the
data is gathered at different locations, sending data to another learner for
processing incurs additional costs, such as delays; hence this is only
beneficial if the improvement in classification accuracy outweighs the
costs. We model the problem of joint classification by the distributed and
heterogeneous learners from multiple data sources as a distributed contextual
bandit problem in which each data instance is characterized by a specific context. We
develop a distributed online learning algorithm for which we can prove
sublinear regret. Compared to prior work in distributed online data mining, our
work is the first to provide analytic regret results characterizing the
performance of the proposed algorithm.
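The classify-locally-versus-forward tradeoff described above can be sketched as a contextual bandit in which each arm is either the local classifier or a peer learner, and forwarding carries a fixed cost. This is a minimal sketch, not the paper's algorithm: the UCB1 index, the constant forwarding cost, and the discrete contexts are all simplifying assumptions made for illustration.

```python
import math

class Learner:
    """One distributed learner (hypothetical sketch).

    Arm 0 classifies locally; arms 1.. forward the instance to a peer
    learner at a fixed communication cost. A UCB1 index is kept per
    (context, arm) pair, so the choice can differ across data streams.
    """
    def __init__(self, arms, forward_cost=0.2):
        self.arms = arms
        self.cost = forward_cost
        self.counts = {}    # (context, arm) -> number of pulls
        self.rewards = {}   # (context, arm) -> summed net reward

    def choose(self, context, t):
        # Try each arm once per context, then follow the UCB1 index.
        for a in self.arms:
            if self.counts.get((context, a), 0) == 0:
                return a
        def ucb(a):
            n = self.counts[(context, a)]
            mean = self.rewards[(context, a)] / n
            return mean + math.sqrt(2.0 * math.log(t + 1) / n)
        return max(self.arms, key=ucb)

    def update(self, context, arm, accuracy):
        # Net reward: classification accuracy minus the forwarding cost.
        reward = accuracy - (self.cost if arm != 0 else 0.0)
        self.counts[(context, arm)] = self.counts.get((context, arm), 0) + 1
        self.rewards[(context, arm)] = self.rewards.get((context, arm), 0.0) + reward
```

With a weak local classifier (say accuracy 0.5) and a strong peer (accuracy 0.9), the learner pays the forwarding cost because the net gain is positive, and the per-context bookkeeping is what makes the bandit "contextual".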
Online Local Learning via Semidefinite Programming
In many online learning problems we are interested in predicting local
information about some universe of items. For example, we may want to know
whether two items are in the same cluster rather than computing an assignment
of items to clusters; we may want to know which of two teams will win a game
rather than computing a ranking of teams. Although finding the optimal
clustering or ranking is typically intractable, it may be possible to predict
the relationships between items as well as if you could solve the global
optimization problem exactly.
Formally, we consider an online learning problem in which a learner
repeatedly guesses a pair of labels (l(x), l(y)) and receives an adversarial
payoff depending on those labels. The learner's goal is to receive a payoff
nearly as good as the best fixed labeling of the items. We show that a simple
algorithm based on semidefinite programming can obtain asymptotically optimal
regret in the case where the number of possible labels is O(1), resolving an
open problem posed by Hazan, Kale, and Shalev-Shwartz. Our main technical
contribution is a novel use and analysis of the log determinant regularizer,
exploiting the observation that log det(A + I) upper bounds the entropy of any
distribution with covariance matrix A.
Comment: 10 pages.