
    Growing Strategy Sets in Repeated Games

    A (pure) strategy in a repeated game is a mapping from histories, or, more generally, signals, to actions. We view the implementation of such a strategy as a computational procedure and attempt to capture in a formal model the following intuition: as the game proceeds, the amount of information (history) to be taken into account becomes large and the "computational burden" becomes increasingly heavy. The number of strategies in repeated games grows double-exponentially with the number of repetitions. This is because the number of histories grows exponentially with the number of repetitions and because we count strategies that map histories into actions in all possible ways. Any model that captures the intuition above must impose some restriction on the way the set of strategies available at each stage expands. We point out that existing measures of the complexity of a strategy, such as the number of states of an automaton that represents the strategy, need to be refined in order to capture the notion of a growing strategy space. We therefore propose a general model of repeated-game strategies implementable by automata whose number of states grows, with restrictions on the rate of growth. With such a model, we revisit some past results on repeated games played by finite automata whose number of states is bounded by a constant, e.g., Ben-Porath (1993) in the case of two-person infinitely repeated games. In addition, we study an undiscounted infinitely repeated two-person zero-sum game in which the strategy set of player 1, the maximizer, expands "slowly" while there is no restriction on player 2's strategy set. Our main result is that, if the number of strategies available to player 1 at stage n grows subexponentially with n, then player 2 has a pure optimal strategy and the value of the game is the maxmin value of the stage game, the lowest payoff that player 1 can guarantee in the one-shot game. This result is independent of whether the strategies can be implemented by automata or not. This is a strong result in that an optimal strategy in an infinitely repeated game has, by definition, the property that, for every c > 0, it holds player 1's payoff to at most the value plus c after some stage.
    Keywords: Repeated Games, Complexity, Entropy
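    The counting argument and the main result admit a compact formalization. The following is only a sketch in notation not taken from the paper, assuming a finite action set A_1 for player 1, k possible stage outcomes (signals), and stage payoff g:

```latex
% Counting sketch (notation assumed, not from the paper): with k possible
% stage outcomes, histories of length t number k^t, and a pure strategy for
% the first T stages assigns an action in A_1 to every shorter history:
\[
  \#\{\text{strategies for the first } T \text{ stages}\}
  \;=\; |A_1|^{\,1 + k + \cdots + k^{T-1}}
  \;=\; |A_1|^{\frac{k^T - 1}{k - 1}},
\]
% which is double-exponential in T. Writing \psi_1(n) for the number of
% strategies available to player 1 at stage n, the main result stated in
% the abstract (subexponential growth) can be read as
\[
  \lim_{n \to \infty} \tfrac{1}{n} \log \psi_1(n) = 0
  \quad \Longrightarrow \quad
  \text{value} \;=\; \max_{a \in A_1} \min_{b \in A_2} g(a,b),
\]
% the maxmin (lowest guaranteeable) payoff of the stage game.
```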

    Playing off-line games with bounded rationality

    We study a two-person zero-sum game where players simultaneously choose sequences of actions, and the overall payoff is the average of a one-shot payoff over the joint sequence. We consider the maxmin value of the game played in pure strategies by boundedly rational players and model bounded rationality by introducing complexity limitations. First we define the complexity of a sequence by its smallest period (a non-periodic sequence being of infinite complexity) and study the maxmin of the game where player 1 is restricted to strategies with complexity at most n and player 2 is restricted to strategies with complexity at most m. We study the asymptotics of this value and give a complete characterization in the matching pennies case. We extend the analysis of matching pennies to strategies with bounded recall.
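    To illustrate the period-based complexity restriction, here is a small brute-force sketch of the matching pennies case. It is not from the paper: the ±1 payoff convention and all function names are assumptions made for the example.

```python
from itertools import product
from math import lcm

def stage_payoff(a, b):
    """Matching pennies payoff to player 1 (the maximizer): +1 on a match, -1 otherwise."""
    return 1 if a == b else -1

def average_payoff(cycle_x, cycle_y):
    """Long-run average payoff when both players repeat their cycles forever.
    For periodic sequences this equals the average over one common period."""
    period = lcm(len(cycle_x), len(cycle_y))
    total = sum(stage_payoff(cycle_x[t % len(cycle_x)], cycle_y[t % len(cycle_y)])
                for t in range(period))
    return total / period

def periodic_sequences(max_period, actions=(0, 1)):
    """All cycles of length <= max_period; every sequence of smallest period
    <= max_period is represented by at least one such cycle."""
    for p in range(1, max_period + 1):
        yield from product(actions, repeat=p)

def maxmin(n, m):
    """Maxmin value in pure strategies when player 1 is restricted to
    complexity <= n and player 2 to complexity <= m."""
    return max(
        min(average_payoff(x, y) for y in periodic_sequences(m))
        for x in periodic_sequences(n)
    )

if __name__ == "__main__":
    for n, m in [(1, 1), (2, 1), (3, 2), (4, 4)]:
        print(f"maxmin(n={n}, m={m}) = {maxmin(n, m)}")
```

    Enumerating all cycles of length at most n covers exactly the sequences of smallest period at most n (some more than once, which does not affect the maxmin), so this brute force matches the restricted game's value for small n and m; it is only meant to make the definitions concrete, not to reproduce the paper's asymptotic characterization.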

    Growth of strategy sets, entropy, and nonstationary bounded recall

    The paper initiates the study of long-term interactions where players' bounded rationality varies over time. Time-dependent bounded rationality, for player i, is reflected in part in the number ψ_i(t) of distinct strategies available to him in the first t stages. We examine how the growth rate of ψ_i(t) affects equilibrium outcomes of repeated games. An upper bound on the individually rational payoff is derived for a class of two-player repeated games, and the derived bound is shown to be tight. As a special case we study repeated games with nonstationary bounded recall and show that a player can guarantee the minimax payoff of the stage game, even against a player with full recall, by remembering only a vanishing fraction of the past. A version of the folk theorem is provided for this class of games.
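    A hedged formalization of the two quantities mentioned in the abstract, in notation that goes beyond what is stated there:

```latex
% Growth rate of player i's strategy set (plausibly the "entropy" of the
% title; this reading is an assumption, not a quote from the paper):
\[
  \gamma_i \;=\; \limsup_{t \to \infty} \tfrac{1}{t} \log \psi_i(t).
\]
% Nonstationary bounded recall: at stage t, player i conditions only on the
% last k_i(t) stages. "Remembering a vanishing fraction of the past"
% corresponds to a recall function with
\[
  \frac{k_i(t)}{t} \;\longrightarrow\; 0 \qquad \text{as } t \to \infty,
\]
% under which, per the abstract, the stage-game minimax payoff can still be
% guaranteed even against an opponent with full recall.
```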