The study of collaborative multi-agent bandits has attracted significant
attention recently. Motivated by this, we initiate the study of a new
collaborative setting consisting of N agents, each of which learns one of M
stochastic multi-armed bandit instances, with the goal of minimizing the
group cumulative regret. We develop decentralized algorithms that facilitate
collaboration between the agents under two scenarios. We characterize the
performance of these algorithms by deriving the per agent cumulative regret and
group regret upper bounds. We also prove lower bounds for the group regret in
this setting, which demonstrate the near-optimal behavior of the proposed
algorithms.

Comment: To appear in the proceedings of ICML 202
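The abstract's setting can be illustrated with a minimal simulation. The sketch below is not the paper's decentralized algorithm; it is a hypothetical baseline in which each of N agents independently runs standard UCB1 on its assigned bandit instance (one of M), and the group cumulative regret is simply the sum of the per-agent pseudo-regrets. All parameter values (M, N, T, the arm means) are made up for illustration.

```python
import math
import random

def ucb1_regret(means, horizon, rng):
    """Run UCB1 on one stochastic bandit; return cumulative pseudo-regret."""
    k = len(means)
    counts = [0] * k      # plays per arm
    sums = [0.0] * k      # cumulative reward per arm
    best = max(means)
    regret = 0.0
    for t in range(1, horizon + 1):
        if t <= k:
            arm = t - 1   # initialization: play each arm once
        else:
            # UCB1 index: empirical mean + exploration bonus
            arm = max(range(k), key=lambda i: sums[i] / counts[i]
                      + math.sqrt(2 * math.log(t) / counts[i]))
        reward = 1.0 if rng.random() < means[arm] else 0.0  # Bernoulli reward
        counts[arm] += 1
        sums[arm] += reward
        regret += best - means[arm]  # pseudo-regret of this pull
    return regret

# Hypothetical instance: N agents, each assigned one of M bandit problems.
rng = random.Random(0)
M, N, T = 2, 6, 2000
bandits = [[0.9, 0.5, 0.4], [0.8, 0.7, 0.2]]  # M bandit instances (arm means)
assignment = [a % M for a in range(N)]        # which instance each agent learns

per_agent = [ucb1_regret(bandits[assignment[a]], T, rng) for a in range(N)]
group_regret = sum(per_agent)  # group cumulative regret = sum over agents
print(f"per-agent regrets: {[round(r, 1) for r in per_agent]}")
print(f"group regret: {group_regret:.1f}")
```

In this independent baseline the group regret is just N times a single agent's regret; the point of the collaborative setting is that agents learning the same instance can share information to do better than this.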