Learning with Opponent-Learning Awareness
Multi-agent settings are quickly gathering importance in machine learning.
This includes a plethora of recent work on deep multi-agent reinforcement
learning, but can also be extended to hierarchical RL, generative adversarial
networks and decentralised optimisation. In all these settings the presence of
multiple learning agents renders the training problem non-stationary and often
leads to unstable training or undesired final results. We present Learning with
Opponent-Learning Awareness (LOLA), a method in which each agent shapes the
anticipated learning of the other agents in the environment. The LOLA learning
rule includes a term that accounts for the impact of one agent's policy on the
anticipated parameter update of the other agents. Results show that the
encounter of two LOLA agents leads to the emergence of tit-for-tat and
therefore cooperation in the iterated prisoner's dilemma (IPD), while independent
learning does not. In this domain, LOLA also receives higher payouts than a
naive learner and is robust against exploitation by higher-order
gradient-based methods. Applied to repeated matching pennies, LOLA agents
converge to the Nash equilibrium. In a round-robin tournament we show that LOLA
agents successfully shape the learning of a range of multi-agent learning
algorithms from the literature, resulting in the highest average returns on the
IPD. We also show that the LOLA update rule can be efficiently calculated using
an extension of the policy gradient estimator, making the method suitable for
model-free RL. The method thus scales to large parameter and input spaces and
nonlinear function approximators. We apply LOLA to a grid world task with an
embedded social dilemma using recurrent policies and opponent modelling. By
explicitly considering the learning of the other agent, LOLA agents learn to
cooperate out of self-interest. The code is at github.com/alshedivat/lola
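
The update rule lends itself to a compact sketch. The snippet below is a minimal illustration of the first-order LOLA correction on a differentiable matrix game (matching pennies); the payoff matrices, learning rates, and all function names are illustrative assumptions, not the authors' implementation (which is at the repository above).

```python
# Minimal first-order LOLA sketch on matching pennies (illustrative
# assumptions throughout; not the code from github.com/alshedivat/lola).
import jax
import jax.numpy as jnp

A = jnp.array([[1., -1.], [-1., 1.]])  # player 1 payoffs (matching pennies)
B = -A                                 # zero-sum: player 2 payoffs

def policy(theta):
    # Single-parameter stochastic policy: P(action 0) = sigmoid(theta).
    p = jax.nn.sigmoid(theta)
    return jnp.array([p, 1. - p])

def V(theta1, theta2, payoff):
    # Expected payoff under the two independent stochastic policies.
    return policy(theta1) @ payoff @ policy(theta2)

def V1(t1, t2): return V(t1, t2, A)
def V2(t1, t2): return V(t1, t2, B)

def lola_step(theta1, theta2, lr=0.1, eta=0.1):
    # Naive term: gradient of agent 1's value w.r.t. its own parameters.
    g1 = jax.grad(V1, argnums=0)(theta1, theta2)
    # Shaping term: (dV1/dtheta2) * (d^2 V2 / dtheta1 dtheta2) accounts for
    # how theta1 changes the opponent's anticipated gradient step, and how
    # that step in turn changes agent 1's value.
    dV1_dt2 = jax.grad(V1, argnums=1)(theta1, theta2)
    cross = jax.grad(jax.grad(V2, argnums=1), argnums=0)(theta1, theta2)
    return theta1 + lr * (g1 + eta * dV1_dt2 * cross)

t1, t2 = jnp.array(0.3), jnp.array(-0.2)
t1 = lola_step(t1, t2)  # agent 2 would apply the symmetric update
```

Iterating the symmetric update for both agents gives, under these assumptions, the kind of dynamics the abstract describes converging to the Nash equilibrium of matching pennies; the exact-gradient form above is what the policy gradient estimator replaces in the model-free setting.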
Evolutionary instability of Zero Determinant strategies demonstrates that winning isn't everything
Zero Determinant (ZD) strategies are a new class of probabilistic and
conditional strategies that are able to unilaterally set the expected payoff of
an opponent in iterated plays of the Prisoner's Dilemma irrespective of the
opponent's strategy, or else to set the ratio between a ZD player's and their
opponent's expected payoff. Here we show that while ZD strategies are weakly
dominant, they are not evolutionarily stable and will instead evolve into less
coercive strategies. We show that ZD strategies with an informational advantage
over other players that allows them to recognize other ZD strategies can be
evolutionarily stable (and able to exploit other players). However, such an
advantage is bound to be short-lived as opposing strategies evolve to
counteract the recognition.
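
To make the unilateral payoff-setting property concrete, here is a minimal numerical sketch of the "equalizer" subclass of ZD strategies in Press and Dyson's construction; the payoff values, parameter choices, and function names are assumptions for illustration, not taken from the paper.

```python
# Sketch: an equalizer ZD strategy pins the opponent's expected payoff in
# the iterated Prisoner's Dilemma, whatever memory-one strategy they play.
# (Illustrative construction and names; an assumption, not the paper's code.)
import jax
import jax.numpy as jnp

R, S, T, P = 3., 0., 5., 1.        # conventional PD payoffs (assumption)
S_Y = jnp.array([R, T, S, P])      # opponent's payoff in states CC, CD, DC, DD

def equalizer(beta, target):
    # Press-Dyson form: (p1-1, p2-1, p3, p4) = beta*S_Y + gamma*1 with
    # gamma = -beta*target forces the opponent's payoff to `target`.
    p_tilde = beta * S_Y + (-beta * target)
    return p_tilde + jnp.array([1., 1., 0., 0.])

def opponent_payoff(p, q):
    # q is the opponent's memory-one strategy from their own perspective,
    # so joint states CD and DC swap when indexing it.
    qc = q[jnp.array([0, 2, 1, 3])]
    # 4x4 Markov chain over joint states (CC, CD, DC, DD).
    M = jnp.stack([p * qc, p * (1 - qc), (1 - p) * qc, (1 - p) * (1 - qc)],
                  axis=1)
    v = jnp.full(4, 0.25)
    for _ in range(2000):          # power iteration to the stationary dist.
        v = v @ M
    return v @ S_Y

p = equalizer(beta=-0.25, target=2.0)                # -> [0.75, 0.25, 0.5, 0.25]
q = jax.random.uniform(jax.random.PRNGKey(0), (4,))  # arbitrary opponent
print(opponent_payoff(p, q))                         # ~2.0, independent of q
```

Re-running with any other q leaves the printed value at the target, which is the unilateral payoff-setting the abstract refers to; the evolutionary-stability question is about what happens when such strategies meet each other, and mutants, over time.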