A Practical Algorithm for Multiplayer Bandits when Arm Means Vary Among Players
We study a multiplayer stochastic multi-armed bandit problem in which players
cannot communicate, and if two or more players pull the same arm, a collision
occurs and the involved players receive zero reward. We consider the
challenging heterogeneous setting, in which different arms may have different
means for different players, and propose a new and efficient algorithm that
combines the idea of leveraging forced collisions for implicit communication
and that of performing matching eliminations. We present a finite-time analysis
of our algorithm, giving the first sublinear minimax regret bound for this
problem, and prove that if the optimal assignment of players to arms is unique,
our algorithm attains the optimal regret, solving an open question
raised at NeurIPS 2018. Comment: AISTATS202
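The "forced collisions for implicit communication" idea above can be illustrated with a toy signaling round. This is a minimal sketch, not the paper's actual protocol: it assumes deterministic positive rewards on distinct arms and zero reward on collision, and the arm indices and function name are hypothetical. A sender transmits one bit per round by choosing whether to collide with a receiver who sits on a fixed arm.

```python
def send_bits(bits, receiver_arm=0, other_arm=1):
    """Toy collision-signaling protocol (illustrative only).

    The receiver keeps pulling `receiver_arm`. To transmit bit 1, the
    sender pulls that same arm, forcing a collision (zero reward for
    both players); to transmit bit 0, it pulls `other_arm`. The
    receiver decodes each round from whether its reward was zeroed.
    Assumes the receiver's arm has a strictly positive reward, so a
    zero can only come from a collision.
    """
    decoded = []
    for b in bits:
        sender_arm = receiver_arm if b == 1 else other_arm
        collision = (sender_arm == receiver_arm)
        receiver_reward = 0.0 if collision else 1.0  # collision wipes the reward
        decoded.append(1 if receiver_reward == 0.0 else 0)
    return decoded
```

With stochastic rewards, real protocols repeat each bit over several rounds and decode by majority, but the principle is the same: the collision itself is the message.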
Optimal Cooperative Multiplayer Learning Bandits with Noisy Rewards and No Communication
We consider a cooperative multiplayer bandit learning problem where the
players are only allowed to agree on a strategy beforehand, but cannot
communicate during the learning process. In this problem, each player
simultaneously selects an action. Based on the actions selected by all players,
the team of players receives a reward. The actions of all the players are
commonly observed. However, each player receives a noisy version of the reward
which cannot be shared with other players. Since players receive potentially
different rewards, there is an asymmetry in the information used to select
their actions. In this paper, we provide an algorithm based on upper and lower
confidence bounds that the players can use to select their optimal actions
despite the asymmetry in the reward information. We show that this
algorithm achieves logarithmic (gap-dependent) regret as well as
sublinear (gap-independent) regret, which is asymptotically optimal in
the time horizon. We also show that it empirically outperforms the
current state-of-the-art algorithm for this setting.
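As a rough illustration of the confidence-bound idea (a generic UCB1 sketch, not the paper's coordinated upper/lower-bound algorithm), the snippet below has each player run UCB1 independently on its own noisy reward observations. The arm means, noise level, and function name are hypothetical; the point is that despite observing different noisy rewards, players whose estimates concentrate around the same means converge to the same action.

```python
import math
import random

def ucb_play(means, horizon, noise=0.1, seed=0):
    """UCB1 run by one player on its own noisy observations (sketch).

    `means` holds hypothetical true mean rewards per arm; each player
    observes the common reward plus independent Gaussian noise.
    Returns the sequence of arms the player chose.
    """
    rng = random.Random(seed)
    n = len(means)
    counts = [0] * n
    sums = [0.0] * n
    choices = []
    for t in range(1, horizon + 1):
        if t <= n:
            arm = t - 1  # initialization: pull each arm once
        else:
            # pick the arm with the highest upper confidence bound
            arm = max(range(n), key=lambda i: sums[i] / counts[i]
                      + math.sqrt(2.0 * math.log(t) / counts[i]))
        reward = means[arm] + rng.gauss(0.0, noise)  # private noisy reward
        counts[arm] += 1
        sums[arm] += reward
        choices.append(arm)
    return choices
```

Running two "players" with different random seeds on the same arm means shows both settling on the best arm, which is the informal reason confidence-bound methods can tolerate the reward asymmetry described above.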
A survey on multi-player bandits
Note: works released after June 2022 are not considered in this survey.

Due mostly to its application to cognitive radio networks, the multiplayer bandits problem has gained a lot of interest over the last decade, and considerable progress has been made on its theoretical side. However, the current algorithms are far from deployable: many obstacles remain between these theoretical results and a practical implementation of multiplayer bandit algorithms in real cognitive radio networks. This survey contextualizes and organizes the rich multiplayer bandits literature. In light of the existing works, some clear directions for future research emerge. We believe that further study of these directions may lead to theoretical algorithms adapted to real-world situations.