
    A Practical Algorithm for Multiplayer Bandits when Arm Means Vary Among Players

    We study a multiplayer stochastic multi-armed bandit problem in which players cannot communicate, and if two or more players pull the same arm, a collision occurs and the involved players receive zero reward. We consider the challenging heterogeneous setting, in which different arms may have different means for different players, and propose a new and efficient algorithm that combines the idea of leveraging forced collisions for implicit communication with that of performing matching eliminations. We present a finite-time analysis of our algorithm, giving the first sublinear minimax regret bound for this problem, and prove that if the optimal assignment of players to arms is unique, our algorithm attains the optimal $O(\ln T)$ regret, solving an open question raised at NeurIPS 2018.
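    To make the collision model concrete, here is a minimal simulation sketch of the setting described in this abstract, not the paper's algorithm: Bernoulli rewards are assumed, and all names (M, K, mu, play_round) are illustrative. A lone player on an arm draws a reward from its own mean for that arm; colliding players receive zero. The scipy call at the end computes the benchmark that regret is measured against, namely the best one-to-one assignment of players to arms.

        import numpy as np
        from scipy.optimize import linear_sum_assignment

        rng = np.random.default_rng(0)

        # Hypothetical instance: M players, K arms, heterogeneous means mu[m, k].
        M, K = 3, 5
        mu = rng.uniform(0.1, 0.9, size=(M, K))

        def play_round(choices):
            # choices[m] is the arm pulled by player m. Colliding players get
            # zero reward; a lone player on arm k draws Bernoulli(mu[m, k]).
            counts = np.bincount(choices, minlength=K)
            rewards = np.zeros(M)
            for m, k in enumerate(choices):
                if counts[k] == 1:  # no collision on this arm
                    rewards[m] = float(rng.random() < mu[m, k])
            return rewards

        # Benchmark: the optimal assignment of players to arms, i.e. the
        # max-weight matching over the heterogeneous mean matrix.
        row_ind, col_ind = linear_sum_assignment(-mu)  # maximize total mean
        print(play_round(col_ind))  # one collision-free round under that matching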

    Optimal Cooperative Multiplayer Learning Bandits with Noisy Rewards and No Communication

    We consider a cooperative multiplayer bandit learning problem where the players are only allowed to agree on a strategy beforehand, but cannot communicate during the learning process. In this problem, each player simultaneously selects an action. Based on the actions selected by all players, the team of players receives a reward. The actions of all the players are commonly observed. However, each player receives a noisy version of the reward, which cannot be shared with the other players. Since players receive potentially different rewards, there is an asymmetry in the information they use to select their actions. In this paper, we provide an algorithm based on upper and lower confidence bounds that the players can use to select their optimal actions despite this asymmetry in the reward information. We show that this algorithm achieves logarithmic $O(\frac{\log T}{\Delta_{\bm{a}}})$ (gap-dependent) regret as well as $O(\sqrt{T\log T})$ (gap-independent) regret, which is asymptotically optimal in $T$. We also show that it empirically outperforms the current state-of-the-art algorithm for this environment.
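    As a concrete illustration of the information asymmetry, here is a minimal sketch of the setting with a naive per-player UCB baseline; it is not the paper's algorithm, and all names (Player, means, the Gaussian noise level) are assumptions for illustration. Every player observes the joint action, but updates its confidence bounds with its own private noisy reward sample, so the players' indices can drift apart and miscoordinate; handling that drift is what the paper's upper/lower confidence bound construction addresses.

        import numpy as np

        rng = np.random.default_rng(1)

        # Hypothetical instance: 2 players, each with A actions; the team's
        # expected reward depends on the joint action (a1, a2).
        A = 3
        means = rng.uniform(0, 1, size=(A, A))  # means[a1, a2] = team mean reward
        T = 5_000

        class Player:
            # Naive per-player UCB over joint actions. The joint action is
            # commonly observed, but each player updates with its OWN private
            # noisy reward sample -- the asymmetry described in the abstract.
            def __init__(self, idx):
                self.idx = idx
                self.sums = np.zeros((A, A))
                self.counts = np.zeros((A, A))

            def choose(self, t):
                n = np.maximum(self.counts, 1)
                ucb = self.sums / n + np.sqrt(2 * np.log(t + 1) / n)
                joint = np.unravel_index(np.argmax(ucb), ucb.shape)
                return joint[self.idx]  # play own component of the argmax

            def update(self, joint, noisy_reward):
                self.sums[joint] += noisy_reward
                self.counts[joint] += 1

        players = [Player(0), Player(1)]
        for t in range(T):
            joint = tuple(p.choose(t) for p in players)
            for p in players:  # each sees a private noisy copy of the reward
                p.update(joint, means[joint] + rng.normal(0, 0.1))

        # Ideally both players end up agreeing on the same empirical argmax.
        for p in players:
            est = p.sums / np.maximum(p.counts, 1)
            print(np.unravel_index(np.argmax(est), (A, A)))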

    A survey on multi-player bandits

    Works released after June 2022 are not considered in this survey. Due mostly to its application to cognitive radio networks, the multiplayer bandits problem has gained a lot of interest in the last decade, and considerable progress has been made on its theoretical side. However, the current algorithms are far from being applicable in practice, and many obstacles remain between these theoretical results and a possible implementation of multiplayer bandit algorithms in real cognitive radio networks. This survey contextualizes and organizes the rich multiplayer bandits literature. In light of the existing works, some clear directions for future research appear. We believe that a further study of these directions might lead to theoretical algorithms adapted to real-world situations.