Optimal Cooperative Multiplayer Learning Bandits with Noisy Rewards and No Communication

Abstract

We consider a cooperative multiplayer bandit learning problem in which the players may agree on a strategy beforehand but cannot communicate during the learning process. In each round, every player simultaneously selects an action. Based on the actions selected by all players, the team receives a reward. The actions of all players are commonly observed; however, each player receives a noisy version of the reward, which cannot be shared with the other players. Since the players receive potentially different rewards, there is an asymmetry in the information they use to select their actions. In this paper, we provide an algorithm based on upper and lower confidence bounds that the players can use to select their optimal actions despite this asymmetry in the reward information. We show that this algorithm achieves logarithmic $O(\frac{\log T}{\Delta_{\bm{a}}})$ (gap-dependent) regret as well as $O(\sqrt{T \log T})$ (gap-independent) regret, which is asymptotically optimal in $T$. We also show that it empirically outperforms the current state-of-the-art algorithm for this environment.
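The abstract does not specify the players' coordination rule, so the paper's algorithm cannot be reproduced from this text alone. As a point of reference, the following is a minimal sketch of the standard single-player UCB index strategy that confidence-bound algorithms of this kind build on; the arm means, noise level, and exploration constant `c` are illustrative assumptions, not values from the paper.

```python
import math
import random

def ucb_index(mean, count, t, c=2.0):
    """Upper confidence bound for one arm: empirical mean plus an
    exploration bonus that shrinks as the arm is pulled more often."""
    if count == 0:
        return float("inf")  # force each arm to be tried at least once
    return mean + math.sqrt(c * math.log(t) / count)

def run_ucb(true_means, horizon, noise_sd=0.1, seed=0):
    """Run UCB for `horizon` rounds on Gaussian-noise rewards and
    return the cumulative (pseudo-)regret and per-arm pull counts."""
    rng = random.Random(seed)
    k = len(true_means)
    counts = [0] * k
    means = [0.0] * k
    best = max(true_means)
    regret = 0.0
    for t in range(1, horizon + 1):
        # Pick the arm with the largest optimistic index.
        arm = max(range(k), key=lambda a: ucb_index(means[a], counts[a], t))
        reward = true_means[arm] + rng.gauss(0.0, noise_sd)  # noisy reward
        counts[arm] += 1
        means[arm] += (reward - means[arm]) / counts[arm]  # running average
        regret += best - true_means[arm]
    return regret, counts
```

Because the exploration bonus decays like $\sqrt{\log t / n_a}$, suboptimal arms are pulled only $O(\log T / \Delta_a^2)$ times, which is the source of the logarithmic gap-dependent regret rates quoted above; the paper's contribution is obtaining such rates when multiple players see different noisy copies of the reward.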
