Prior-free and prior-dependent regret bounds for Thompson Sampling
We consider the stochastic multi-armed bandit problem with a prior
distribution on the reward distributions. We are interested in studying
prior-free and prior-dependent regret bounds, very much in the same spirit as
the usual distribution-free and distribution-dependent bounds for the
non-Bayesian stochastic bandit. Building on the techniques of Audibert and
Bubeck [2009] and Russo and Roy [2013] we first show that Thompson Sampling
attains an optimal prior-free bound in the sense that for any prior
distribution its Bayesian regret is bounded from above by $14 \sqrt{n K}$. This
result is unimprovable in the sense that there exists a prior distribution such
that any algorithm has a Bayesian regret bounded from below by
$\frac{1}{20} \sqrt{n K}$. We also study the case of priors for the setting of Bubeck et al.
[2013] (where the optimal mean is known as well as a lower bound on the
smallest gap) and we show that in this case the regret of Thompson Sampling is
in fact uniformly bounded over time, thus showing that Thompson Sampling can
greatly take advantage of the nice properties of these priors.

Comment: A previous version appeared under the title 'A note on the Bayesian
regret of Thompson Sampling with an arbitrary prior'.
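The policy analyzed above can be sketched compactly in the Beta-Bernoulli case (a minimal simulation; the arm means, horizon, and uniform Beta(1,1) prior below are illustrative choices, not taken from the paper):

```python
import random

def thompson_sampling(true_means, horizon, seed=0):
    """Beta-Bernoulli Thompson Sampling: draw one posterior sample
    per arm, play the argmax, update the played arm's posterior."""
    rng = random.Random(seed)
    k = len(true_means)
    alpha = [1] * k  # Beta(1, 1) uniform prior for each arm
    beta = [1] * k
    total_reward = 0
    for _ in range(horizon):
        samples = [rng.betavariate(alpha[i], beta[i]) for i in range(k)]
        arm = max(range(k), key=lambda i: samples[i])
        reward = 1 if rng.random() < true_means[arm] else 0
        alpha[arm] += reward          # posterior update: successes
        beta[arm] += 1 - reward       # posterior update: failures
        total_reward += reward
    return total_reward

# Regret relative to always playing the best arm (mean 0.7).
reward = thompson_sampling([0.3, 0.5, 0.7], horizon=2000)
regret = 0.7 * 2000 - reward
```

Over a horizon of 2000 pulls the realized regret stays far below the worst case, consistent with the $\sqrt{n K}$-type scaling discussed above.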
Optimality of Thompson Sampling for Gaussian Bandits Depends on Priors
In stochastic bandit problems, a Bayesian policy called Thompson sampling
(TS) has recently attracted much attention for its excellent empirical
performance. However, the theoretical analysis of this policy is difficult and
its asymptotic optimality is only proved for one-parameter models. In this
paper we discuss the optimality of TS for the model of normal distributions
with unknown means and variances, one of the most fundamental examples of
multiparameter models. First we prove that the expected regret of TS with the
uniform prior achieves the theoretical bound, which is the first result to show
that the asymptotic bound is achievable for the normal distribution model. Next
we prove that TS with the Jeffreys prior and the reference prior cannot achieve
the theoretical bound. Therefore the choice of prior is important for TS, and
non-informative priors are sometimes risky for multiparameter models.
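The posterior sampling step for the normal model with both parameters unknown can be sketched as follows (a minimal illustration assuming the standard flat-prior posterior, sigma^2 | data ~ SS / chi^2_{n-1} and mu | sigma^2 ~ N(xbar, sigma^2/n); the exact prior exponents and initialization analyzed in the paper may differ):

```python
import math
import random

def sample_posterior_mean(obs, rng):
    """One posterior draw of mu for N(mu, sigma^2), both unknown,
    under a flat prior: first draw sigma^2, then mu given sigma^2."""
    n = len(obs)
    xbar = sum(obs) / n
    ss = sum((x - xbar) ** 2 for x in obs)
    chi2 = rng.gammavariate((n - 1) / 2.0, 2.0)  # chi-square, n-1 dof
    sigma2 = ss / chi2
    return rng.gauss(xbar, math.sqrt(sigma2 / n))

def gaussian_ts(true_params, horizon, seed=1):
    """TS for Gaussian arms given as (mean, sd) pairs; returns pull counts."""
    rng = random.Random(seed)
    k = len(true_params)
    history = [[] for _ in range(k)]
    # Pull each arm a few times so every posterior is proper (n >= 2).
    for arm, (mu, sd) in enumerate(true_params):
        for _ in range(3):
            history[arm].append(rng.gauss(mu, sd))
    for _ in range(horizon):
        samples = [sample_posterior_mean(history[i], rng) for i in range(k)]
        arm = max(range(k), key=lambda i: samples[i])
        mu, sd = true_params[arm]
        history[arm].append(rng.gauss(mu, sd))
    return [len(h) for h in history]

pulls = gaussian_ts([(0.0, 1.0), (1.0, 1.0)], horizon=1000)
```

Drawing the variance from its posterior (rather than plugging in an estimate) is exactly where the choice of prior enters, which is why the paper's distinction between the uniform, Jeffreys, and reference priors matters.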
An Information-Theoretic Analysis of Thompson Sampling
We provide an information-theoretic analysis of Thompson sampling that
applies across a broad range of online optimization problems in which a
decision-maker must learn from partial feedback. This analysis inherits the
simplicity and elegance of information theory and leads to regret bounds that
scale with the entropy of the optimal-action distribution. This strengthens
preexisting results and yields new insight into how information improves
performance.
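The entropy-scaled bound can be illustrated numerically (a sketch assuming a bound of the form sqrt(Gamma * H(A*) * T), with the K-armed information ratio Gamma taken as K/2; the distributions and horizon are illustrative, not from the paper):

```python
import math

def entropy(p):
    """Shannon entropy of a distribution, in nats."""
    return -sum(q * math.log(q) for q in p if q > 0)

def ts_regret_bound(p_opt, horizon, k):
    """Entropy-scaled Bayesian regret bound: sqrt(Gamma * H(A*) * T),
    instantiated with the bandit information ratio Gamma <= K/2."""
    return math.sqrt((k / 2.0) * entropy(p_opt) * horizon)

# A prior concentrated on one arm has low optimal-action entropy,
# so the bound tightens relative to the worst case H(A*) = log K.
uniform = [0.25] * 4
skewed = [0.97, 0.01, 0.01, 0.01]
b_uniform = ts_regret_bound(uniform, horizon=10000, k=4)
b_skewed = ts_regret_bound(skewed, horizon=10000, k=4)
```

This makes the abstract's point concrete: the regret guarantee scales with the entropy of the optimal-action distribution, so prior information that shrinks that entropy directly improves the bound.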