On Kernelized Multi-armed Bandits
We consider the stochastic bandit problem with a continuous set of arms, with
the expected reward function over the arms assumed to be fixed but unknown. We
provide two new Gaussian process-based algorithms for continuous bandit
optimization: Improved GP-UCB (IGP-UCB) and GP-Thompson Sampling (GP-TS), and
derive corresponding regret bounds. Specifically, the bounds hold when the
expected reward function belongs to the reproducing kernel Hilbert space (RKHS)
that naturally corresponds to a Gaussian process kernel used as input by the
algorithms. Along the way, we derive a new self-normalized concentration
inequality for vector-valued martingales of arbitrary, possibly infinite,
dimension. Finally, experimental evaluation and comparisons to existing
algorithms on synthetic and real-world environments are carried out that
highlight the favorable gains of the proposed strategies in many cases.
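The UCB side of this recipe can be sketched concretely: fit a GP posterior to the rewards observed so far and pull the arm maximizing mean plus a scaled posterior standard deviation. The kernel, lengthscale, and constant exploration weight `beta` below are illustrative placeholders, not the confidence width that IGP-UCB actually derives:

```python
import numpy as np

def rbf_kernel(A, B, lengthscale=0.2):
    """Squared-exponential kernel matrix between two sets of 1-D arms."""
    d2 = (A[:, None] - B[None, :]) ** 2
    return np.exp(-d2 / (2.0 * lengthscale ** 2))

def gp_ucb_select(arms, X, y, beta=2.0, noise=0.1):
    """Pick the arm maximizing the GP upper confidence bound mu + sqrt(beta)*sigma.

    arms : 1-D array of candidate arms; X, y : lists of past arms and rewards.
    A toy sketch: beta is a fixed constant here, whereas the paper's analysis
    prescribes a time-dependent confidence width.
    """
    if len(X) == 0:
        return arms[0]  # no data yet: any arm is as good as another
    Xa, ya = np.array(X), np.array(y)
    K = rbf_kernel(Xa, Xa) + noise ** 2 * np.eye(len(Xa))
    k_star = rbf_kernel(arms, Xa)                      # (n_arms, t)
    mu = k_star @ np.linalg.solve(K, ya)               # posterior mean
    v = np.linalg.solve(K, k_star.T)
    sigma2 = np.clip(1.0 - np.sum(k_star * v.T, axis=1), 0.0, None)
    ucb = mu + np.sqrt(beta * sigma2)
    return arms[int(np.argmax(ucb))]
```

With `beta = 0` the rule is purely greedy on the posterior mean; larger `beta` trades exploitation for exploration of arms with high posterior uncertainty.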
Perturbed-History Exploration in Stochastic Linear Bandits
We propose a new online algorithm for minimizing the cumulative regret in
stochastic linear bandits. The key idea is to build a perturbed history, which
mixes the history of observed rewards with a pseudo-history of randomly
generated i.i.d. pseudo-rewards. Our algorithm, perturbed-history exploration
in a linear bandit (LinPHE), estimates a linear model from its perturbed
history and pulls the arm with the highest value under that model. We prove a
gap-free Õ(d√n) bound on the expected n-round regret of LinPHE, where d is
the number of features. Our analysis relies on novel
concentration and anti-concentration bounds on the weighted sum of Bernoulli
random variables. To show the generality of our design, we extend LinPHE to a
logistic reward model. We evaluate both algorithms empirically and show that
they are practical.
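One round of the linear-bandit variant can be sketched as follows: each past observation is mixed with `a` freshly drawn Bernoulli(1/2) pseudo-rewards attached to the same feature vector, a ridge-regression estimate is fit to this perturbed history, and the greedy arm under that estimate is pulled. The function name, the regularizer `reg`, and the exact bookkeeping are illustrative assumptions, not the paper's pseudocode:

```python
import numpy as np

def linphe_select(arms, X, y, a=1, reg=1.0, rng=None):
    """One round of a perturbed-history-exploration sketch for linear bandits.

    arms : (n_arms, d) array; X : list of past feature vectors; y : rewards in [0, 1].
    Each past context contributes its real reward plus `a` i.i.d. Bernoulli(1/2)
    pseudo-rewards, and the arm maximizing x @ theta_hat is pulled.
    """
    rng = np.random.default_rng() if rng is None else rng
    d = arms.shape[1]
    G = reg * np.eye(d)          # regularized Gram matrix
    b = np.zeros(d)
    for x_s, y_s in zip(X, y):
        z = rng.binomial(1, 0.5, size=a)      # pseudo-rewards for this context
        G += (1 + a) * np.outer(x_s, x_s)     # real + pseudo observations of x_s
        b += (y_s + z.sum()) * x_s
    theta = np.linalg.solve(G, b)             # ridge estimate on perturbed history
    return arms[int(np.argmax(arms @ theta))]
```

The randomness of the pseudo-rewards is what drives exploration: arms pulled rarely have estimates that fluctuate enough to occasionally look best, without any explicit confidence bonus.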
Factored Bandits
We introduce the factored bandits model, which is a framework for learning
with limited (bandit) feedback, where actions can be decomposed into a
Cartesian product of atomic actions. Factored bandits incorporate rank-1
bandits as a special case, but significantly relax the assumptions on the form
of the reward function. We provide an anytime algorithm for stochastic factored
bandits and up to constants matching upper and lower regret bounds for the
problem. Furthermore, we show that with a slight modification the proposed
algorithm can be applied to utility based dueling bandits. We obtain an
improvement in the additive terms of the regret bound compared to state of the
art algorithms (the additive terms are dominating up to time horizons which are
exponential in the number of arms)
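To make the Cartesian-product structure concrete, here is a toy round of per-factor UCB1. Note a strong simplifying assumption that the paper does not make: the reward is taken to decompose additively across factors, so each factor's atomic actions can be scored independently and the effective action space shrinks from the product to the sum of the per-factor sizes. This is an illustration of the action decomposition only, not the paper's algorithm:

```python
import numpy as np

def factored_ucb_round(counts, sums, t):
    """Choose one atomic action per factor via UCB1 on per-factor statistics.

    counts[k][i] : pulls of atomic action i in factor k; sums[k][i] : its reward sum.
    Toy sketch under an assumed additive reward across factors.
    """
    action = []
    for n_k, s_k in zip(counts, sums):
        n_safe = np.maximum(n_k, 1)
        ucb = s_k / n_safe + np.sqrt(2.0 * np.log(max(t, 2)) / n_safe)
        ucb[n_k == 0] = np.inf   # try every atomic action at least once
        action.append(int(np.argmax(ucb)))
    return tuple(action)         # one atomic action per factor
```

With K atomic actions in each of L factors, this tracks L*K statistics rather than one per combined action (K**L), which is the kind of saving the factored model is designed to expose.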
- …