Information Directed Sampling for Stochastic Bandits with Graph Feedback
We consider stochastic multi-armed bandit problems with graph feedback, where
the decision maker is allowed to observe the neighboring actions of the chosen
action. We allow the graph structure to vary with time and consider both
deterministic and Erd\H{o}s-R\'enyi random graph models. For such a graph
feedback model, we first present a novel analysis of Thompson sampling that
leads to a tighter performance bound than existing work. Next, we propose new
Information Directed Sampling based policies that are graph-aware in their
decision making. Under the deterministic graph case, we establish a Bayesian
regret bound for the proposed policies that scales with the clique cover number
of the graph instead of the number of actions. Under the random graph case, we
provide a Bayesian regret bound for the proposed policies that scales with the
ratio of the number of actions over the expected number of observations per
iteration. To the best of our knowledge, this is the first analytical result
for stochastic bandits with random graph feedback. Finally, using numerical
evaluations, we demonstrate that our proposed IDS policies outperform existing
approaches, including adaptations of the upper confidence bound, $\epsilon$-greedy,
and Exp3 algorithms.
Comment: Accepted by AAAI 201
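The graph-feedback setting above can be sketched in a few lines: playing an arm also reveals the rewards of its neighbors, so Thompson sampling can update several posteriors per round. This is a minimal illustrative sketch for Bernoulli rewards with Beta posteriors, not the paper's exact algorithm or analysis; the function name, the 4-cycle example graph, and the arm means are all invented for the example.

```python
import random

def thompson_graph_feedback(means, neighbors, horizon, seed=0):
    """Thompson sampling for Bernoulli bandits with graph feedback:
    pulling arm i also reveals the rewards of its neighboring arms."""
    rng = random.Random(seed)
    n = len(means)
    alpha = [1] * n  # Beta(1, 1) prior for each arm
    beta = [1] * n
    total = 0.0
    for _ in range(horizon):
        # Sample a mean estimate from each arm's posterior; play the argmax.
        samples = [rng.betavariate(alpha[i], beta[i]) for i in range(n)]
        chosen = max(range(n), key=samples.__getitem__)
        total += means[chosen]
        # Graph feedback: observe the chosen arm AND all of its neighbors,
        # updating every observed arm's posterior.
        for j in {chosen} | set(neighbors[chosen]):
            r = 1 if rng.random() < means[j] else 0
            alpha[j] += r
            beta[j] += 1 - r
    return total

# Example: 4 arms on a cycle graph, so each pull yields 3 observations.
means = [0.2, 0.5, 0.8, 0.4]
neighbors = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}
reward = thompson_graph_feedback(means, neighbors, horizon=2000, seed=1)
```

The extra observations are what let regret scale with a graph quantity (such as the clique cover number in the deterministic case) rather than the raw number of arms.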
Multi-scale exploration of convex functions and bandit convex optimization
We construct a new map from a convex function to a distribution on its
domain, with the property that this distribution is a multi-scale exploration
of the function. We use this map to solve a decade-old open problem in
adversarial bandit convex optimization by showing that the minimax regret for
this problem is $\tilde{O}(\mathrm{poly}(n)\sqrt{T})$, where $n$ is the
dimension and $T$ the number of rounds. This bound is obtained by studying the
dual Bayesian maximin regret via the information ratio analysis of Russo and
Van Roy, and then using the multi-scale exploration to solve the Bayesian
problem.
Comment: Preliminary version; 22 page