Multiplayer Bandit Learning, from Competition to Cooperation
The stochastic multi-armed bandit model captures the tradeoff between
exploration and exploitation. We study the effects of competition and
cooperation on this tradeoff. Suppose there are arms and two players, Alice
and Bob. In every round, each player pulls an arm, receives the resulting
reward, and observes the choice of the other player but not their reward.
Alice's utility is (and similarly for Bob), where
is Alice's total reward and is a cooperation
parameter. At the players are competing in a zero-sum game, at
, they are fully cooperating, and at , they are
neutral: each player's utility is their own reward. The model is related to the
economics literature on strategic experimentation, where usually players
observe each other's rewards.
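For concreteness, here is a minimal sketch (not from the paper) of the interaction protocol: in each round both players pull an arm, each sees their own Bernoulli reward and only the other player's choice, and the two total rewards are combined through the cooperation parameter $\lambda$. The arm means, horizon, and the toy fixed-arm policies in the usage line are illustrative assumptions.

```python
import random

def play(arm_means, policy_a, policy_b, lam=0.0, rounds=100, seed=0):
    rng = random.Random(seed)
    hist_a, hist_b = [], []        # each entry: (own arm, own reward, other's arm)
    total_a = total_b = 0.0
    for _ in range(rounds):
        a = policy_a(hist_a)
        b = policy_b(hist_b)
        r_a = 1.0 if rng.random() < arm_means[a] else 0.0   # Bernoulli reward for Alice
        r_b = 1.0 if rng.random() < arm_means[b] else 0.0   # Bernoulli reward for Bob
        total_a += r_a
        total_b += r_b
        # Each player observes the other's choice but NOT the other's reward.
        hist_a.append((a, r_a, b))
        hist_b.append((b, r_b, a))
    # Utility = own total reward + lambda * partner's total reward.
    return total_a + lam * total_b, total_b + lam * total_a

# Toy usage: neutral players (lambda = 0) with fixed-arm policies.
u_alice, u_bob = play([0.3, 0.6], lambda h: 0, lambda h: 1, lam=0.0)
```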
With discount factor $\beta$, the Gittins index reduces the one-player
problem to the comparison between a risky arm, with a prior $\mu$, and a
predictable arm, with success probability $p$. The value of $p$ where the
player is indifferent between the arms is the Gittins index $g = g(\mu, \beta) > m$, where $m$ is the mean of the risky arm.
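The indifference point can be approximated numerically. Below is a hedged sketch, assuming a Beta prior on a Bernoulli risky arm and a finite truncation depth for the dynamic program, that binary-searches for the probability $p$ at which pulling the risky arm and retiring to the predictable arm have equal discounted value; this is not code from the paper.

```python
from functools import lru_cache

def gittins_index(alpha, beta_param, discount, depth=200, tol=1e-6):
    def value(p):
        # Discounted value of the best policy, truncating after `depth` risky pulls.
        @lru_cache(maxsize=None)
        def v(a, b, d):
            m = a / (a + b)                    # posterior mean of the risky arm
            if d == 0:                         # truncation: retire to the better arm forever
                return max(p, m) / (1 - discount)
            risky = m * (1 + discount * v(a + 1, b, d - 1)) \
                    + (1 - m) * discount * v(a, b + 1, d - 1)
            safe = p / (1 - discount)          # pull the predictable arm forever
            return max(risky, safe)
        return v(alpha, beta_param, depth)

    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        p = (lo + hi) / 2
        # If the risky arm still beats retiring at p, the index lies above p.
        if value(p) > p / (1 - discount) + 1e-12:
            lo = p
        else:
            hi = p
    return (lo + hi) / 2

# Example: uniform prior Beta(1, 1), so m = 0.5; the index exceeds the mean.
# print(gittins_index(1, 1, discount=0.9))
```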
We show that competing players explore less than a single player: there is
$p^* \in (m, g)$ so that for all $p > p^*$, the players stay at the predictable
arm. However, the players are not myopic: they still explore for some $p > m$.
On the other hand, cooperating players explore more than a single player. We
also show that neutral players learn from each other, receiving strictly higher
total rewards than they would playing alone, for all $p \in (p^*, g)$, where
$p^*$ is the threshold from the competing case.
Finally, we show that competing and neutral players eventually settle on the
same arm in every Nash equilibrium, while this can fail for cooperating
players.

Comment: 41 pages, 5 figures