Cold play: Learning across bimatrix games

Abstract

We study one-shot play across the set of all bimatrix games by a large population of agents. The agents never see the same game twice, but they can learn ‘across games’ by developing solution concepts that tell them how to play new games. Each agent’s individual solution concept is represented by a computer program, and natural selection is applied to derive stochastically stable solution concepts. Our aim is to develop a theory predicting how experienced agents would play in one-shot games.
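To make the setup concrete, here is a minimal toy sketch (not the paper's actual model) of the ingredients the abstract describes: solution concepts are small programs mapping a freshly drawn bimatrix game to a choice of action, a population of agents each carries one concept, and payoff-proportional selection with occasional mutation plays the role of natural selection. The three candidate concepts (maximin, best response to a uniform opponent, payoff-chasing) are illustrative assumptions, not taken from the paper.

```python
import random

# Three illustrative "solution concepts": each maps a player's own
# payoff matrix (rows = own actions) to a pure-action index.
def maximin(A):
    # choose the row whose worst-case payoff is largest
    return max(range(len(A)), key=lambda i: min(A[i]))

def best_vs_uniform(A):
    # best respond to a uniformly mixing opponent
    return max(range(len(A)), key=lambda i: sum(A[i]))

def max_max(A):
    # chase the row containing the single largest payoff
    return max(range(len(A)), key=lambda i: max(A[i]))

CONCEPTS = [maximin, best_vs_uniform, max_max]

def evolve(pop_size=60, generations=100, mutation=0.02, seed=1):
    """Toy evolutionary dynamic over solution concepts.

    Agents never see the same game twice: every match draws a fresh
    random 2x2 bimatrix game with i.i.d. uniform payoffs.
    """
    rng = random.Random(seed)
    pop = [rng.choice(CONCEPTS) for _ in range(pop_size)]
    for _ in range(generations):
        rng.shuffle(pop)
        scores = [0.0] * pop_size
        for k in range(0, pop_size, 2):
            # fresh one-shot game: A = row player's payoffs, B = column's
            A = [[rng.random(), rng.random()], [rng.random(), rng.random()]]
            B = [[rng.random(), rng.random()], [rng.random(), rng.random()]]
            i = pop[k](A)  # row player's action
            # column player sees B transposed (rows = own actions)
            Bt = [[B[0][0], B[1][0]], [B[0][1], B[1][1]]]
            j = pop[k + 1](Bt)  # column player's action
            scores[k] += A[i][j]
            scores[k + 1] += B[i][j]
        # selection proportional to realized payoff, plus rare mutation
        pop = [rng.choice(CONCEPTS) if rng.random() < mutation
               else rng.choices(pop, weights=scores)[0]
               for _ in range(pop_size)]
    return pop
```

Running `evolve()` returns the surviving mix of concepts; tracking which concept dominates across random seeds is a crude analogue of asking which solution concepts are stochastically stable.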
