
    Bounded regret in stochastic multi-armed bandits

    We study the stochastic multi-armed bandit problem when one knows the value $\mu^{(\star)}$ of an optimal arm, as well as a positive lower bound on the smallest positive gap $\Delta$. We propose a new randomized policy that attains a regret uniformly bounded over time in this setting. We also prove several lower bounds, which show in particular that bounded regret is not possible if one only knows $\Delta$, and that bounded regret of order $1/\Delta$ is not possible if one only knows $\mu^{(\star)}$.
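
To make the setting concrete, here is a minimal Python sketch of how side information of this kind could be exploited; it uses a simple elimination rule driven by the known optimal mean and gap bound, and is not the randomized policy proposed in the paper. The function and parameter names (`play_with_known_mu_star`, `mu_star`, `delta`) are illustrative assumptions.

```python
import math
import random

def play_with_known_mu_star(arms, mu_star, delta, horizon, seed=0):
    """Illustrative sketch only, not the paper's policy: exploit a known
    optimal mean `mu_star` and a known lower bound `delta` on the smallest
    positive gap by permanently dropping any arm whose upper confidence
    bound falls below mu_star - delta / 2.  `arms` is a list of callables,
    each returning a reward in [0, 1]."""
    rng = random.Random(seed)
    n = len(arms)
    counts, sums = [0] * n, [0.0] * n
    active = list(range(n))          # arms not yet eliminated
    total_reward = 0.0
    for t in range(1, horizon + 1):
        i = rng.choice(active)       # explore uniformly over surviving arms
        reward = arms[i]()
        counts[i] += 1
        sums[i] += reward
        total_reward += reward
        if len(active) > 1:
            survivors = []
            for j in active:
                if counts[j] == 0:
                    survivors.append(j)
                    continue
                ucb = sums[j] / counts[j] + math.sqrt(2.0 * math.log(t + 1) / counts[j])
                if ucb >= mu_star - delta / 2:   # still plausibly near-optimal
                    survivors.append(j)
            active = survivors or active         # never eliminate every arm
    return total_reward
```

The intuition is that every suboptimal arm (whose mean is at most mu_star - delta) is eliminated after a number of pulls that does not grow with the horizon, after which only optimal arms are sampled; this is why regret can stay bounded rather than grow logarithmically, although the paper's actual policy and analysis differ from this sketch.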

    Unimodal Bandits: Regret Lower Bounds and Optimal Algorithms

    We consider stochastic multi-armed bandits where the expected reward is a unimodal function over partially ordered arms. This important class of problems has been recently investigated in (Cope 2009, Yu 2011). The set of arms is either discrete, in which case arms correspond to the vertices of a finite graph whose structure represents similarity in rewards, or continuous, in which case arms belong to a bounded interval. For discrete unimodal bandits, we derive asymptotic lower bounds for the regret achieved under any algorithm, and propose OSUB, an algorithm whose regret matches this lower bound. Our algorithm optimally exploits the unimodal structure of the problem, and surprisingly, its asymptotic regret does not depend on the number of arms. We also provide a regret upper bound for OSUB in non-stationary environments where the expected rewards smoothly evolve over time. The analytical results are supported by numerical experiments showing that OSUB performs significantly better than the state-of-the-art algorithms. For continuous sets of arms, we provide a brief discussion. We show that combining an appropriate discretization of the set of arms with the UCB algorithm yields an order-optimal regret, and in practice, outperforms recently proposed algorithms designed to exploit the unimodal structure. Comment: ICML 2014 (technical report). arXiv admin note: text overlap with arXiv:1307.730
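
For intuition, the following Python sketch shows one way the unimodal structure can be exploited on a line graph: each round, only the current empirical leader and its immediate neighbours are candidates, scored with a plain UCB index. This is in the spirit of OSUB but is not the algorithm from the paper, which uses a KL-divergence-based index and a different leader definition; the names here (`unimodal_ucb_step`, `run_unimodal_bandit`) are hypothetical.

```python
import math
import random

def unimodal_ucb_step(reward_sums, counts, t, rng):
    """Pick the next arm on a line graph of arms 0..K-1 whose means are
    assumed unimodal in the index: restrict a plain UCB comparison to the
    current empirical leader and its immediate neighbours."""
    k = len(counts)
    emp = [reward_sums[i] / counts[i] if counts[i] else float("inf") for i in range(k)]
    leader = max(range(k), key=lambda i: (emp[i], rng.random()))   # random tie-break
    candidates = [j for j in (leader - 1, leader, leader + 1) if 0 <= j < k]

    def ucb(j):
        if counts[j] == 0:
            return float("inf")
        return reward_sums[j] / counts[j] + math.sqrt(2.0 * math.log(t + 1) / counts[j])

    return max(candidates, key=ucb)

def run_unimodal_bandit(arms, horizon, seed=0):
    """Run the step rule on a list of reward callables ordered so that
    their means are unimodal along the arm index."""
    rng = random.Random(seed)
    k = len(arms)
    counts, sums = [0] * k, [0.0] * k
    total = 0.0
    for t in range(1, horizon + 1):
        i = unimodal_ucb_step(sums, counts, t, rng)
        r = arms[i]()
        counts[i] += 1
        sums[i] += r
        total += r
    return total
```

Because only the leader's neighbourhood is ever compared, the per-round exploration cost does not scale with the total number of arms, which mirrors the abstract's observation that the asymptotic regret of OSUB is independent of the number of arms.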