32 research outputs found
A survey on multi-player bandits
Works released after June 2022 are not considered in this survey. Due mostly to its application to cognitive radio networks, multi-player bandits have gained a lot of interest in the last decade, and considerable progress has been made on the theoretical side. However, the current algorithms are far from applicable in practice, and many obstacles remain between these theoretical results and a possible implementation of multi-player bandit algorithms in real cognitive radio networks. This survey contextualizes and organizes the rich multi-player bandits literature. In light of the existing works, some clear directions for future research emerge. We believe that further study of these directions might lead to theoretical algorithms adapted to real-world situations.
Competing for Shareable Arms in Multi-Player Multi-Armed Bandits
Competitions for shareable and limited resources have long been studied with
strategic agents. In reality, agents often have to learn and maximize the
rewards of the resources at the same time. To design an individualized
competing policy, we model the competition between agents in a novel
multi-player multi-armed bandit (MPMAB) setting where players are selfish and
aim to maximize their own rewards. In addition, when several players pull the
same arm, we assume that these players share the arm's reward equally in
expectation. Under this setting, we first analyze the Nash equilibrium when
arms' rewards are known. Subsequently, we propose a novel SelfishMPMAB with
Averaging Allocation (SMAA) approach based on the equilibrium. We theoretically
demonstrate that SMAA could achieve a good regret guarantee for each player
when all players follow the algorithm. Additionally, we establish that no
single selfish player can significantly increase their rewards through
deviation, nor can they detrimentally affect other players' rewards without
incurring substantial losses for themselves. Finally, we validate the
effectiveness of the method in extensive synthetic experiments.
Comment: ICML 202
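To make the reward-sharing rule in the MPMAB setting above concrete, here is a minimal sketch of a single round: each player picks an arm, and players who collide on the same arm split that arm's realized reward evenly, so each receives the arm's mean reward divided by the number of colliders in expectation. The function name and Bernoulli reward model are illustrative assumptions, not the paper's implementation.

```python
import random

def play_round(arm_means, choices, rng):
    # One round of the sharing rule sketched above (an assumption-level model):
    # each entry of `choices` is the arm index pulled by one player; players on
    # the same arm split that arm's realized Bernoulli reward evenly, so each
    # collider's expected reward is arm_means[a] / (number of colliders on a).
    counts = {}
    for a in choices:
        counts[a] = counts.get(a, 0) + 1
    rewards = []
    for a in choices:
        realized = 1.0 if rng.random() < arm_means[a] else 0.0
        rewards.append(realized / counts[a])  # even split among colliders
    return rewards

rng = random.Random(0)
arm_means = [0.9, 0.5, 0.2]
# Two players collide on arm 0; a third player pulls arm 1 alone.
rewards = play_round(arm_means, [0, 0, 1], rng)
```

Under this rule, a selfish player weighing a deviation compares the mean of the target arm divided by its new number of occupants against their current share, which is the tension the SMAA equilibrium analysis addresses.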