
Neural networks as a learning paradigm for general normal form games

By Leonidas Spiliopoulos

Abstract

This paper addresses how neural networks learn to play one-shot normal form games through experience in an environment of randomly generated game payoffs and randomly selected opponents. This agent-based computational approach allows the modeling of learning across all strategic types of normal form games, regardless of the number of pure and mixed strategy Nash equilibria that they exhibit. This is a more realistic model of learning than the models often used in the game theory learning literature, which are usually restricted either to repeated games against the same opponent or to games with different payoffs that belong to the same strategic class. The neural network agents were found to approximate human behavior in experimental one-shot games very well: the Spearman correlation coefficients between their behavior and that of human subjects ranged from 0.49 to 0.8857 across numerous experimental studies. They also exhibited the endogenous emergence of heuristics that have been found effective in describing human behavior in one-shot games. The notion of bounded rationality is explored by varying the topologies of the neural networks, which indirectly affects their ability to act as universal approximators of any function. The neural networks' behavior was assessed across various dimensions such as convergence to Nash equilibria, equilibrium selection, and adherence to principles of iterated dominance.
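To make the reported measure concrete: a Spearman coefficient here compares the rank ordering of choice frequencies produced by the network agents with that of human subjects across a set of games. The sketch below computes it from scratch for purely illustrative frequencies (the data, function names, and game count are assumptions, not taken from the paper).

```python
# Hedged sketch: Spearman rank correlation between hypothetical
# neural-network choice frequencies and human choice frequencies
# across five one-shot games. All numbers are illustrative.

def ranks(xs):
    """Return 1-based average ranks of xs, averaging over ties."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    i = 0
    while i < len(order):
        j = i
        # group tied values and assign them their average rank
        while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman(x, y):
    """Spearman rho = Pearson correlation of the rank vectors."""
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

# Hypothetical frequency of choosing strategy 1 in each of five games:
network = [0.10, 0.35, 0.50, 0.70, 0.90]
human   = [0.15, 0.30, 0.55, 0.65, 0.85]
print(round(spearman(network, human), 4))  # both orderings agree -> 1.0
```

Because Spearman correlation depends only on ranks, it rewards agents that reproduce the relative ordering of human play across games even when the absolute frequencies differ.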

Topics: C45 - Neural Networks and Related Topics, C70 - General, C73 - Stochastic and Dynamic Games; Evolutionary Games; Repeated Games
Year: 2009
DOI identifier: 10.2139/ssrn.1447968
OAI identifier: oai:mpra.ub.uni-muenchen.de:16765
