An Information-Theoretic Analysis of Thompson Sampling
We provide an information-theoretic analysis of Thompson sampling that
applies across a broad range of online optimization problems in which a
decision-maker must learn from partial feedback. This analysis inherits the
simplicity and elegance of information theory and leads to regret bounds that
scale with the entropy of the optimal-action distribution. This strengthens
preexisting results and yields new insight into how information improves
performance.
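The algorithm analyzed above can be made concrete with a minimal sketch of Thompson sampling for a Bernoulli multi-armed bandit; the Beta priors, arm means, and horizon below are illustrative assumptions, not details from the paper.

```python
import random

def thompson_sampling_bernoulli(true_means, horizon, seed=0):
    """Thompson sampling for a Bernoulli multi-armed bandit.

    Maintains a Beta(alpha, beta) posterior per arm, samples a mean
    from each posterior, and plays the arm whose sample is largest
    (i.e., each arm is played with its posterior probability of being
    optimal).  Returns the cumulative (pseudo-)regret against the
    best arm.
    """
    rng = random.Random(seed)
    k = len(true_means)
    alpha = [1] * k  # Beta prior: observed successes + 1
    beta = [1] * k   # Beta prior: observed failures + 1
    best = max(true_means)
    regret = 0.0
    for _ in range(horizon):
        # Sample a plausible mean for each arm from its posterior.
        samples = [rng.betavariate(alpha[i], beta[i]) for i in range(k)]
        arm = max(range(k), key=lambda i: samples[i])
        # Observe a Bernoulli reward (partial feedback: chosen arm only).
        reward = 1 if rng.random() < true_means[arm] else 0
        alpha[arm] += reward
        beta[arm] += 1 - reward
        regret += best - true_means[arm]
    return regret
```

A run such as `thompson_sampling_bernoulli([0.2, 0.8], 2000)` yields regret far below the linear worst case, consistent with the sublinear bounds discussed above; the paper's information-theoretic bounds further refine this by the entropy of the optimal-action distribution.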
First-Order Regret Analysis of Thompson Sampling
We address online combinatorial optimization when the player has a prior over
the adversary's sequence of losses. In this framework, Russo and Van Roy
proposed an information-theoretic analysis of Thompson Sampling based on the
{\em information ratio}, resulting in optimal worst-case regret bounds. In this
paper we introduce three novel ideas to this line of work. First we propose a
new quantity, the scale-sensitive information ratio, which allows us to obtain
more refined first-order regret bounds (i.e., bounds of the form $\sqrt{L^*}$,
where $L^*$ is the loss of the best combinatorial action). Second we replace
the entropy over combinatorial actions by a coordinate entropy, which allows us
to obtain the first optimal worst-case bound for Thompson Sampling in the
combinatorial setting. Finally, we introduce a novel link between Bayesian
agents and frequentist confidence intervals. Combining these ideas we show that
the classical multi-armed bandit first-order regret bound
$\widetilde{O}(\sqrt{L^*})$ still holds true in the more challenging and more
general semi-bandit scenario. This latter result improves the previous
state-of-the-art bound by Lykouris, Sridharan and Tardos. (Comment: 27 pages.)
Lower Bounds on Regret for Noisy Gaussian Process Bandit Optimization
In this paper, we consider the problem of sequentially optimizing a black-box
function $f$ based on noisy samples and bandit feedback. We assume that $f$ is
smooth in the sense of having a bounded norm in some reproducing kernel Hilbert
space (RKHS), yielding a commonly-considered non-Bayesian form of Gaussian
process bandit optimization. We provide algorithm-independent lower bounds on
the simple regret, measuring the suboptimality of a single point reported after
$T$ rounds, and on the cumulative regret, measuring the sum of regrets over the
$T$ chosen points. For the isotropic squared-exponential kernel in $d$
dimensions, we find that an average simple regret of $\epsilon$ requires
$T = \Omega\big(\frac{1}{\epsilon^2}(\log\frac{1}{\epsilon})^{d/2}\big)$, and
the average cumulative regret is at least
$\Omega\big(\sqrt{T(\log T)^{d/2}}\big)$, thus matching existing upper bounds
up to the replacement of $d/2$ by $2d+O(1)$ in both cases. For the
Mat\'ern-$\nu$ kernel, we give analogous bounds of the form
$T = \Omega\big((\frac{1}{\epsilon})^{2+d/\nu}\big)$ and
$\Omega\big(T^{\frac{\nu+d}{2\nu+d}}\big)$, and discuss the resulting
gaps to the existing upper bounds. (Comment: Appearing in COLT 2017. This
version corrects a few minor mistakes in Table I, which summarizes the new and
existing regret bounds.)
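The two regret notions compared in this abstract can be illustrated with a toy sketch; the objective function, candidate grid, noise level, and naive round-robin query rule below are illustrative assumptions, not the paper's kernelized setting or any particular algorithm it analyzes.

```python
import random

def noisy_bandit_run(f, candidates, horizon, noise=0.1, seed=0):
    """Illustrates simple vs. cumulative regret in noisy black-box
    optimization with bandit feedback.

    Cumulative regret sums the suboptimality gaps of all queried
    points; simple regret is the gap of the single point reported at
    the end.  The query rule here is naive round-robin, purely to
    make the two definitions concrete.
    """
    rng = random.Random(seed)
    fmax = max(f(x) for x in candidates)
    sums = {x: 0.0 for x in candidates}
    counts = {x: 0 for x in candidates}
    cumulative = 0.0
    for t in range(horizon):
        x = candidates[t % len(candidates)]   # round-robin queries
        y = f(x) + rng.gauss(0.0, noise)      # noisy bandit feedback
        sums[x] += y
        counts[x] += 1
        cumulative += fmax - f(x)             # instantaneous regret
    # Report the candidate with the best empirical mean; its gap is
    # the simple regret.
    report = max(candidates, key=lambda x: sums[x] / counts[x])
    simple = fmax - f(report)
    return simple, cumulative
```

With round-robin queries the cumulative regret grows linearly in the horizon while the simple regret shrinks as noise averages out; the lower bounds above quantify how much better any algorithm can possibly do under the RKHS smoothness assumption.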
- …