
    An Information-Theoretic Analysis of Thompson Sampling

    We provide an information-theoretic analysis of Thompson sampling that applies across a broad range of online optimization problems in which a decision-maker must learn from partial feedback. This analysis inherits the simplicity and elegance of information theory and leads to regret bounds that scale with the entropy of the optimal-action distribution. This strengthens preexisting results and yields new insight into how information improves performance.
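
    To make the setting concrete, below is a minimal sketch of Thompson sampling on a Bernoulli multi-armed bandit, one of the simplest partial-feedback problems the analysis covers. The arm means, horizon, and Beta(1, 1) priors are illustrative assumptions, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative Bernoulli bandit; the arm means below are assumptions.
true_means = np.array([0.3, 0.5, 0.7])
K, T = len(true_means), 2000
alpha, beta = np.ones(K), np.ones(K)   # Beta(1, 1) prior on each arm's mean

regret = 0.0
for t in range(T):
    theta = rng.beta(alpha, beta)      # sample one model from the posterior
    arm = int(np.argmax(theta))        # act greedily w.r.t. the sampled model
    reward = float(rng.random() < true_means[arm])
    alpha[arm] += reward               # conjugate Beta-Bernoulli update
    beta[arm] += 1.0 - reward
    regret += true_means.max() - true_means[arm]

print(f"expected cumulative regret after {T} rounds: {regret:.1f}")
```

    Roughly speaking, the paper's bounds say that the regret of this kind of procedure scales with the entropy of the prior distribution of the optimal action, rather than with the number of actions alone.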

    First-Order Regret Analysis of Thompson Sampling

    We address online combinatorial optimization when the player has a prior over the adversary's sequence of losses. In this framework, Russo and Van Roy proposed an information-theoretic analysis of Thompson Sampling based on the {\em information ratio}, resulting in optimal worst-case regret bounds. In this paper we introduce three novel ideas to this line of work. First, we propose a new quantity, the scale-sensitive information ratio, which allows us to obtain more refined first-order regret bounds (i.e., bounds of the form $\sqrt{L^*}$ where $L^*$ is the loss of the best combinatorial action). Second, we replace the entropy over combinatorial actions by a coordinate entropy, which allows us to obtain the first optimal worst-case bound for Thompson Sampling in the combinatorial setting. Finally, we introduce a novel link between Bayesian agents and frequentist confidence intervals. Combining these ideas, we show that the classical multi-armed bandit first-order regret bound $\tilde{O}(\sqrt{d L^*})$ still holds true in the more challenging and more general semi-bandit scenario. This latter result improves the previous state-of-the-art bound $\tilde{O}(\sqrt{(d+m^3)L^*})$ by Lykouris, Sridharan and Tardos. Comment: 27 pages
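
    As a concrete instance of the semi-bandit scenario, here is a hedged sketch of Thompson sampling over $m$-subsets of $d$ coordinates with Bernoulli losses, where only the losses of the chosen coordinates are observed. The dimensions and loss means are illustrative assumptions, not the paper's construction.

```python
import numpy as np

rng = np.random.default_rng(1)

# Semi-bandit sketch: the action is a set of m coordinates out of d, and
# the player observes the loss of each chosen coordinate. All constants
# below are illustrative assumptions.
d, m, T = 10, 3, 3000
true_loss = rng.uniform(0.1, 0.9, size=d)   # Bernoulli loss mean per coordinate
a, b = np.ones(d), np.ones(d)               # Beta posterior per coordinate

best = np.sort(true_loss)[:m].sum()         # expected loss of the best m-set
regret = 0.0
for t in range(T):
    theta = rng.beta(a, b)                  # posterior sample per coordinate
    chosen = np.argsort(theta)[:m]          # pick the m smallest sampled losses
    losses = (rng.random(m) < true_loss[chosen]).astype(float)
    a[chosen] += losses                     # update only observed coordinates
    b[chosen] += 1.0 - losses
    regret += true_loss[chosen].sum() - best

print(f"expected regret over {T} rounds: {regret:.1f}")
```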

    Lower Bounds on Regret for Noisy Gaussian Process Bandit Optimization

    In this paper, we consider the problem of sequentially optimizing a black-box function $f$ based on noisy samples and bandit feedback. We assume that $f$ is smooth in the sense of having a bounded norm in some reproducing kernel Hilbert space (RKHS), yielding a commonly-considered non-Bayesian form of Gaussian process bandit optimization. We provide algorithm-independent lower bounds on the simple regret, measuring the suboptimality of a single point reported after $T$ rounds, and on the cumulative regret, measuring the sum of regrets over the $T$ chosen points. For the isotropic squared-exponential kernel in $d$ dimensions, we find that an average simple regret of $\epsilon$ requires $T = \Omega\big(\frac{1}{\epsilon^2} (\log\frac{1}{\epsilon})^{d/2}\big)$, and the average cumulative regret is at least $\Omega\big(\sqrt{T(\log T)^{d/2}}\big)$, thus matching existing upper bounds up to the replacement of $d/2$ by $2d+O(1)$ in both cases. For the Matérn-$\nu$ kernel, we give analogous bounds of the form $\Omega\big((\frac{1}{\epsilon})^{2+d/\nu}\big)$ and $\Omega\big(T^{\frac{\nu+d}{2\nu+d}}\big)$, and discuss the resulting gaps to the existing upper bounds. Comment: Appearing in COLT 2017. This version corrects a few minor mistakes in Table I, which summarizes the new and existing regret bounds.
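
    The lower bounds apply to any algorithm in this RKHS setting; as a point of reference, below is a hedged sketch of the standard GP-UCB acquisition rule on a 1-D grid with the squared-exponential kernel. The target function, length-scale, noise level, and confidence width are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

def f(x):                      # unknown smooth target (an assumption)
    return np.sin(6 * x) + 0.5 * np.cos(9 * x)

def sqexp(x, y, ell=0.2):      # isotropic squared-exponential kernel, k(x, x) = 1
    return np.exp(-(x[:, None] - y[None, :]) ** 2 / (2 * ell ** 2))

grid = np.linspace(0.0, 1.0, 200)
noise, width = 0.1, 2.0        # observation noise std, UCB confidence width

X, y = [], []
for t in range(30):
    if X:
        Xa, ya = np.array(X), np.array(y)
        Kxx = sqexp(Xa, Xa) + noise**2 * np.eye(len(Xa))
        Kxs = sqexp(Xa, grid)
        w = np.linalg.solve(Kxx, Kxs)    # (K + sigma^2 I)^{-1} k_x
        mu = w.T @ ya                    # posterior mean over the grid
        var = np.maximum(1.0 - np.sum(Kxs * w, axis=0), 0.0)  # posterior variance
    else:
        mu, var = np.zeros_like(grid), np.ones_like(grid)
    x_t = grid[np.argmax(mu + width * np.sqrt(var))]  # optimistic query point
    X.append(x_t)
    y.append(f(x_t) + noise * rng.standard_normal())

# Report the best queried point; its suboptimality is the simple regret.
simple_regret = f(grid).max() - f(np.array(X)).max()
print(f"simple regret after {len(X)} queries: {simple_regret:.3f}")
```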