
    Game of Thrones: Fully Distributed Learning for Multi-Player Bandits

    We consider a multi-armed bandit game in which N players compete for M arms over T turns. Each player has a different vector of expected rewards for the arms, and the instantaneous rewards are either independent and identically distributed or Markovian. When two or more players choose the same arm, they all receive zero reward. Performance is measured by the expected sum of regrets relative to the optimal assignment of arms to players. We assume that each player observes only her own actions and the reward she received in each turn; players cannot observe the actions of other players, and no communication between players is possible. We present a distributed algorithm and prove that it achieves an expected sum of regrets of near-O(log T). This is the first algorithm to achieve near order-optimal regret in this fully distributed scenario; all other works have assumed either that all players share the same vector of expected rewards or that communication between players is possible. Comment: A preliminary version was accepted to NIPS 2018. This extended paper, currently under review (submitted in September 2019), improves the regret bound to near-log(T), generalizes to unbounded and Markovian rewards, and has a much better convergence rate.
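The reward model described in the abstract (player-specific arm means, zero reward for all players on a colliding arm) can be sketched as a one-round simulation. This is an illustrative sketch of the game setup only, not the paper's algorithm; the function and variable names are my own.

```python
import random

def play_round(choices, means):
    """One round of the multiplayer bandit collision model: players who
    choose the same arm collide and all receive zero reward; a lone player
    on an arm draws a Bernoulli reward with her own mean.
    choices[p] is the arm picked by player p; means[p][a] is player p's
    expected reward on arm a (means may differ across players)."""
    counts = {}
    for arm in choices:
        counts[arm] = counts.get(arm, 0) + 1
    rewards = []
    for p, arm in enumerate(choices):
        if counts[arm] > 1:   # collision: everyone on this arm gets 0
            rewards.append(0.0)
        else:                 # Bernoulli draw with player-specific mean
            rewards.append(1.0 if random.random() < means[p][arm] else 0.0)
    return rewards
```

Note that a player sees only her own reward, so a zero can mean either a collision or an unlucky draw; this ambiguity is what makes the fully distributed setting hard.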

    A Practical Algorithm for Multiplayer Bandits when Arm Means Vary Among Players

    We study a multiplayer stochastic multi-armed bandit problem in which players cannot communicate, and if two or more players pull the same arm, a collision occurs and the involved players receive zero reward. We consider the challenging heterogeneous setting, in which different arms may have different means for different players, and propose a new and efficient algorithm that combines the idea of leveraging forced collisions for implicit communication with that of performing matching eliminations. We present a finite-time analysis of our algorithm, giving the first sublinear minimax regret bound for this problem, and prove that if the optimal assignment of players to arms is unique, our algorithm attains the optimal O(ln T) regret, solving an open question raised at NeurIPS 2018. Comment: AISTATS202
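The phrase "leveraging forced collisions for implicit communication" can be made concrete with a toy example: in pre-agreed rounds the receiver keeps pulling her own arm, and the sender signals a 1-bit by deliberately colliding on it (or a 0-bit by staying away); the receiver decodes from her empirical reward. This is a simplified sketch of the general idea under my own assumptions (the receiver's mean on her arm is known, and enough slots are used to separate a collision from unlucky draws), not the paper's actual protocol.

```python
import random

def transmit_bit(bit, receiver_mean, slots=20, seed=None):
    """Toy forced-collision signalling: over `slots` pre-agreed rounds the
    sender collides on the receiver's arm to send 1, or stays away to send 0.
    The receiver decodes from her average reward: a long run of zeros
    indicates deliberate collisions rather than bad luck.
    receiver_mean is the receiver's (assumed known) mean on her arm."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(slots):
        if bit == 1:
            total += 0.0  # collision wipes out the receiver's reward
        else:
            total += 1.0 if rng.random() < receiver_mean else 0.0
    # decode: average reward far below the known mean => collisions => bit 1
    return 1 if total / slots < receiver_mean / 2 else 0
```

Each communication slot sacrifices reward, which is why such protocols must budget collisions carefully to keep the regret logarithmic.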