We present a methodology for studying how neural networks learn general game-playing rules. Existing research suggests that learning to find a Nash equilibrium in a new game is too difficult a task for a neural network, but says little about what the network will do instead. We observe that a neural network trained to find Nash equilibria in a known subset of games will, when facing new games, apply rules it developed endogenously during training. These rules are close to payoff dominance and its best response. Our findings are consistent with existing experimental results, both in terms of subjects' methodology and success rates.