
Empirically Evaluating Multiagent Reinforcement Learning Algorithms in Repeated Games

By Asher G. Lipson


This dissertation presents a platform for running experiments on multiagent reinforcement learning algorithms, together with an empirical evaluation conducted on that platform. The setting under consideration is game-theoretic: a single normal-form game is played repeatedly. A large body of prior work has focused on introducing new algorithms that achieve particular goals, such as guaranteeing values in a game, converging to a Nash equilibrium, or minimizing total regret. We currently understand how some of these algorithms behave in limited settings, but lack a broader understanding of which algorithms perform well against each other and how they perform across a wider variety of games. We describe our development of a platform that allows large-scale tests to be run, in which multiple algorithms are played against one another on a variety of games. The platform provides a set of built-in metrics for measuring an algorithm's performance, including convergence to a Nash equilibrium, regret, reward, and number of wins. Test results can be visualised automatically through the platform, with all interaction taking place through graphical user interfaces. We also present the results of an empirical test that, to our knowledge, includes the largest combination of algorithms and games evaluated to date.
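One of the metrics named above, regret, can be illustrated with a short sketch. This is not code from the dissertation's platform; it is a minimal, hypothetical example of computing external regret for the row player in a repeated two-player normal-form game, assuming the payoff matrix and action histories are given as plain Python lists.

```python
def external_regret(payoffs, row_history, col_history):
    """Total external regret for the row player: the payoff of the best
    fixed row action in hindsight minus the payoff actually realized.

    payoffs[r][c] is the row player's payoff when the row player plays r
    and the column player plays c.
    """
    # Payoff the row player actually accumulated over the repeated game.
    realized = sum(payoffs[r][c] for r, c in zip(row_history, col_history))
    # Best payoff achievable by committing to one fixed row action
    # against the column player's observed action sequence.
    best_fixed = max(
        sum(payoffs[a][c] for c in col_history) for a in range(len(payoffs))
    )
    return best_fixed - realized

# Example: matching pennies from the row player's view (row wins on a match).
pennies = [[1, -1], [-1, 1]]
print(external_regret(pennies, [0, 1, 0], [0, 0, 1]))  # prints 2
```

A no-regret learning algorithm is one whose average external regret (the value above divided by the number of rounds) goes to zero as the game is repeated.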

Year: 2005
OAI identifier: oai:CiteSeerX.psu:
Provided by: CiteSeerX
Full text available at the following location(s):
  • http://citeseerx.ist.psu.edu/v... (external link)
  • http://www.cs.ubc.ca/grads/res... (external link)
