Nonparametric Stochastic Contextual Bandits
We analyze the multi-armed bandit problem where the reward for each arm is a
noisy realization based on an observed context, under mild nonparametric
assumptions. We attain tight results for top-arm identification and a
sublinear regret bound, stated in terms of the context dimension, for a
modified UCB algorithm that is simple to implement (NN-UCB). We then give
regret bounds that depend on the global intrinsic dimension of the context
space but not on the ambient dimension. We also discuss recovering topological
structures within the context space based on expected bandit performance and
provide an extension to infinite-armed contextual bandits. Finally, we
experimentally show the improvement of our algorithm over existing multi-armed
bandit approaches for both simulated tasks and MNIST image classification.

Comment: AAAI 201
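The NN-UCB idea above can be illustrated with a minimal sketch. Everything concrete here is an assumption for illustration, not the paper's exact algorithm: the neighbourhood size `k`, the bonus form `sqrt(alpha * log(t+1) / n)`, and the `pull` callback are hypothetical choices. Each arm's reward at the current context is estimated from the k nearest past contexts at which that arm was pulled, plus a UCB exploration bonus.

```python
import numpy as np

def knn_ucb(contexts, pull, n_arms, k=10, alpha=2.0):
    """Sketch of a nearest-neighbour contextual UCB bandit.

    contexts : (T, d) array of contexts, revealed one per round.
    pull(t, a) : environment callback returning the (noisy) reward
                 of arm a in round t.
    """
    T = len(contexts)
    hist = [[] for _ in range(n_arms)]   # per-arm list of (context, reward)
    rewards = np.zeros(T)
    for t, x in enumerate(contexts):
        ucb = np.full(n_arms, np.inf)    # unexplored arms get priority
        for a in range(n_arms):
            if hist[a]:
                xs = np.array([c for c, _ in hist[a]])
                rs = np.array([r for _, r in hist[a]])
                # mean reward over the k nearest past contexts of arm a
                idx = np.argsort(np.linalg.norm(xs - x, axis=1))[:k]
                mean = rs[idx].mean()
                bonus = np.sqrt(alpha * np.log(t + 1) / len(idx))
                ucb[a] = mean + bonus
        a = int(np.argmax(ucb))
        r = pull(t, a)
        hist[a].append((x, r))
        rewards[t] = r
    return rewards
```

The per-round cost is dominated by the nearest-neighbour search; a KD-tree would make this practical at larger scale.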
PAC-Bayesian Analysis of the Exploration-Exploitation Trade-off
We develop a coherent framework for the simultaneous analysis of the
exploration-exploitation and model-order-selection trade-offs. We improve on
our preceding results on the same subject (Seldin et al., 2011) by combining
PAC-Bayesian analysis with a Bernstein-type inequality for martingales. Such a
combination is also of independent interest for studies of multiple
simultaneously evolving martingales.

Comment: On-line Trading of Exploration and Exploitation 2 - ICML-2011
workshop. http://explo.cs.ucl.ac.uk/workshop
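As a concrete anchor for the Bernstein-type martingale inequality mentioned above, here is a minimal sketch of the standard Freedman/Bernstein deviation bound. The constants below are the textbook ones; the paper's exact form may differ.

```python
import math

def bernstein_bound(variance_sum, b, delta):
    """Bernstein-type deviation bound for a martingale M_n whose differences
    are bounded by b and whose predictable quadratic variation is V_n:

        with probability >= 1 - delta,
        |M_n| <= sqrt(2 * V_n * ln(2/delta)) + (2/3) * b * ln(2/delta)

    (standard Freedman/Bernstein form, used here only for illustration).
    """
    log_term = math.log(2.0 / delta)
    return math.sqrt(2.0 * variance_sum * log_term) + (2.0 / 3.0) * b * log_term
```

The first term dominates when the variance is large; the second, range-dependent term dominates in the low-variance regime, which is what makes Bernstein-type bounds tighter than Hoeffding-type ones there.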
Contextual Bandits with Cross-learning
In the classical contextual bandits problem, in each round $t$ a learner
observes some context $c_t$, chooses some action $a_t$ to perform, and receives
some reward $r_t(a_t, c_t)$. We consider the variant of this problem where, in
addition to receiving the reward $r_t(a_t, c_t)$, the learner also learns the
values of $r_t(a_t, c')$ for all other contexts $c'$; i.e., the rewards that
would have been achieved by performing that action under different contexts.
This variant arises in several strategic settings, such as learning how to bid
in non-truthful repeated auctions (in this setting the context is the decision
maker's private valuation for each auction). We call this problem the
contextual bandits problem with cross-learning. The best algorithms for the
classical contextual bandits problem achieve regret $\tilde{O}(\sqrt{CKT})$
against all stationary policies, where $C$ is the number of contexts, $K$ the
number of actions, and $T$ the number of rounds. We demonstrate algorithms for
the contextual bandits problem with cross-learning that remove the dependence
on $C$ in three settings: when contexts are stochastic with known
distribution, when contexts are stochastic with unknown distribution, and when
contexts are adversarial but rewards are stochastic.

Comment: 48 pages, 5 figures
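The mechanism by which cross-learning removes the dependence on the number of contexts can be sketched with a toy UCB variant. This is an assumed illustration, not the paper's algorithm: each pull of arm $a$ reveals a reward sample for that arm under every context, so a single shared pull count shrinks the confidence width for all contexts at once. The `reward_fn` callback and the bonus form are hypothetical.

```python
import numpy as np

def cross_learning_ucb(n_contexts, n_arms, T, reward_fn, context_seq, alpha=2.0):
    """Toy UCB with cross-learning feedback.

    reward_fn(t, a) : length-n_contexts vector, one reward per context --
                      the cross-learned feedback for pulling arm a in round t.
    context_seq     : the context shown in each round.
    """
    means = np.zeros((n_arms, n_contexts))
    pulls = np.zeros(n_arms)   # shared count: one pull informs every context
    total = 0.0
    for t in range(T):
        c = context_seq[t]
        ucb = np.where(
            pulls > 0,
            means[:, c] + np.sqrt(alpha * np.log(t + 1) / np.maximum(pulls, 1)),
            np.inf,            # unexplored arms get priority
        )
        a = int(np.argmax(ucb))
        feedback = reward_fn(t, a)              # rewards under every context
        pulls[a] += 1
        means[a] += (feedback - means[a]) / pulls[a]
        total += feedback[c]                    # only the shown context pays out
    return total
```

Because `pulls[a]` grows once per pull regardless of which context was shown, the confidence width for arm $a$ shrinks at the same rate in every context, which is the intuition behind dropping the $\sqrt{C}$ factor.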