We consider a cooperative multiplayer bandit learning problem where the
players are only allowed to agree on a strategy beforehand, but cannot
communicate during the learning process. In this problem, each player
simultaneously selects an action. Based on the actions selected by all players,
the team of players receives a reward. The actions of all the players are
commonly observed. However, each player receives a noisy version of the reward
which cannot be shared with other players. Since players receive potentially
different rewards, there is an asymmetry in the information used to select
their actions. In this paper, we provide an algorithm based on upper and lower
confidence bounds that the players can use to select their optimal actions
despite the asymmetry in the reward information. We show that this algorithm
can achieve logarithmic O((log T)/Δ_a) (gap-dependent)
regret as well as O(√(T log T)) (gap-independent) regret. This is
asymptotically optimal in T. We also show that it empirically outperforms
the current state-of-the-art algorithm for this environment.
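As a rough illustration of the confidence-bound indices the abstract refers to, the sketch below runs a standard single-agent UCB rule on a toy instance; it is not the paper's multiplayer algorithm, and the arm means, noise level, and exploration constant `c` are illustrative assumptions.

```python
import math
import random

def ucb_select(counts, means, t, c=2.0):
    """Pick the arm with the largest upper confidence bound index."""
    best_idx, best_val = -1, None
    for a in range(len(counts)):
        if counts[a] == 0:
            return a  # play every arm once before using the index
        # UCB index: empirical mean plus a confidence radius that
        # shrinks as the arm is pulled more often
        idx = means[a] + math.sqrt(c * math.log(t) / counts[a])
        if best_val is None or idx > best_val:
            best_idx, best_val = a, idx
    return best_idx

# Toy instance: three arms with (assumed) true mean rewards
random.seed(0)
true_means = [0.2, 0.5, 0.8]
counts = [0] * 3
means = [0.0] * 3
for t in range(1, 2001):
    a = ucb_select(counts, means, t)
    r = true_means[a] + random.gauss(0, 0.1)  # noisy reward observation
    counts[a] += 1
    means[a] += (r - means[a]) / counts[a]  # running-mean update

print(counts.index(max(counts)))  # index of the most-played arm
```

In the multiplayer setting of the paper, each player would see a different noisy reward, so their indices (and hence their preferred actions) can disagree; the abstract's point is that the proposed upper- and lower-confidence-bound scheme keeps the team coordinated despite that asymmetry.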