6 research outputs found

    Growing Strategy Sets in Repeated Games

    A (pure) strategy in a repeated game is a mapping from histories, or, more generally, signals, to actions. We view the implementation of such a strategy as a computational procedure and attempt to capture in a formal model the following intuition: as the game proceeds, the amount of information (history) to be taken into account becomes large and the "computational burden" becomes increasingly heavy. The number of strategies in repeated games grows double-exponentially with the number of repetitions. This is because the number of histories grows exponentially with the number of repetitions, and because we count strategies that map histories to actions in all possible ways. Any model that captures this intuition must impose some restriction on the way the set of strategies available at each stage expands. We point out that existing measures of the complexity of a strategy, such as the number of states of an automaton that represents it, need to be refined in order to capture the notion of a growing strategy space. We therefore propose a general model of repeated-game strategies that are implementable by automata whose number of states grows at a restricted rate. With such a model, we revisit some past results on repeated games with finite automata whose number of states is bounded by a constant, e.g., Ben-Porath (1993) in the case of two-person infinitely repeated games. In addition, we study an undiscounted infinitely repeated two-person zero-sum game in which the strategy set of player 1, the maximizer, expands "slowly," while there is no restriction on player 2's strategy set. Our main result is that, if the number of strategies available to player 1 at stage n grows subexponentially with n, then player 2 has a pure optimal strategy and the value of the game is the maxmin value of the stage game, the lowest payoff that player 1 can guarantee in the one-shot game.
    This result is independent of whether strategies can be implemented by automata or not. It is a strong result in that an optimal strategy in an infinitely repeated game has, by definition, the property that, for every c, it holds player 1's payoff to at most the value plus c after some stage.
    Keywords: Repeated Games, Complexity, Entropy
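The double-exponential growth mentioned above is easy to verify by direct counting. The following sketch (an illustration, not code from the paper) counts histories and pure strategies for one player in an n-stage repeated 2x2 game; the function names are my own.

```python
def num_histories(num_profiles: int, n: int) -> int:
    """Number of histories a strategy must cover over stages 1..n:
    the empty history plus every action-profile sequence of length < n."""
    return sum(num_profiles ** t for t in range(n))

def num_pure_strategies(num_actions: int, num_profiles: int, n: int) -> int:
    """Pure strategies for one player: one action per history,
    combined in every possible way."""
    return num_actions ** num_histories(num_profiles, n)

# 2x2 stage game: each player has 2 actions, so 4 action profiles per stage.
for n in (1, 2, 3, 4):
    print(n, num_pure_strategies(2, 4, n))
```

Since the number of histories grows like 4^(n-1), the strategy count grows like 2^(4^(n-1)): exponential in an exponential, exactly the double-exponential growth the abstract describes.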

    PERCEPTRON VERSUS AUTOMATON

    We study the finitely repeated prisoner’s dilemma in which the players are restricted to choosing strategies which are implementable by a machine with a bound on its complexity. One player must use a finite automaton while the other player must use a finite perceptron. Some examples illustrate that the sets of strategies induced by these two types of machines are different and not ordered by set inclusion. The main result establishes that cooperation in almost all stages of the game is an equilibrium outcome if the complexity of the machines the players may use is sufficiently limited. This result persists even when there are more than T states in the player’s automaton, where T is the duration of the repeated game. We further consider the finitely repeated prisoner’s dilemma in which both players are restricted to choosing strategies which are implementable by perceptrons, and we prove that the players can cooperate in most of the stages provided that the complexity of their perceptrons is sufficiently reduced.
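As a concrete illustration of a strategy implemented by a small finite automaton (my own example, not taken from the paper), tit-for-tat in the repeated prisoner's dilemma is a two-state Moore machine: the state is the action to play, and the transition simply copies the opponent's last move.

```python
# Tit-for-tat as a two-state Moore machine.
TFT = {
    "initial": "C",
    "output": {"C": "C", "D": "D"},  # action played in each state
    "transition": {                  # next state given (state, opponent's move)
        ("C", "C"): "C", ("C", "D"): "D",
        ("D", "C"): "C", ("D", "D"): "D",
    },
}

def run(machine, opponent_moves):
    """Play the machine against a fixed sequence of opponent actions."""
    state, plays = machine["initial"], []
    for move in opponent_moves:
        plays.append(machine["output"][state])
        state = machine["transition"][(state, move)]
    return plays

print(run(TFT, ["C", "D", "D", "C"]))  # ['C', 'C', 'D', 'D']
```

A perceptron-implemented strategy would instead compute a threshold function of (an encoding of) the history, which is why the two machine classes induce incomparable strategy sets.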

    PSEUDORANDOM PROCESSES: ENTROPY AND AUTOMATA

    This paper studies the implementation of cooperative payoffs in finitely repeated games when players implement their strategies by finite automata of large size. Specifically, we analyze how far we have to depart from fully rational behavior to achieve the Folk Theorem payoffs, i.e., what are the maximum bounds on automaton complexity that yield cooperative behavior in long but finite interactions. To this end we present a new approach to the implementation of the mixed-strategy equilibrium paths leading to cooperation. The novelty is a new construction of the set of pure strategies which belong to the mixed-strategy equilibrium. Thus, we consider the subset of strategies characterized by both the complexity of the finite automata and the entropy associated with the underlying coordination process. The equilibrium play consists of a communication phase followed by the play of a cycle which depends on the chosen message. The communication set is designed with tools from Information Theory, and its characterization is given by the complexity of the weaker player who implements the equilibrium play. We offer a domain of definition of the smallest automaton which includes previous domains in the literature.
    Keywords: Complexity; Cooperation; Entropy; Automata; Repeated Games
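The information-theoretic tool behind communication-phase constructions of this kind is Shannon entropy: a randomization over M equally likely messages carries log2(M) bits, so distinguishing the messages requires on the order of 2^log2(M) = M codewords or states. A minimal sketch (my own, assuming nothing about the paper's specific construction):

```python
from math import log2

def entropy(probs):
    """Shannon entropy in bits of a probability distribution."""
    return -sum(p * log2(p) for p in probs if p > 0)

# Uniform randomization over 8 equally likely coordination messages
# carries log2(8) = 3 bits of information.
uniform = [1 / 8] * 8
print(entropy(uniform))  # 3.0
```

Skewing the message distribution lowers the entropy, and with it the amount of machinery needed to carry out the coordination phase.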

    A Complete Characterization of Infinitely Repeated Two-Player Games having Computable Strategies with no Computable Best Response under Limit-of-Means Payoff

    Full text link
    It is well-known that for infinitely repeated games, there are computable strategies that have best responses, but no computable best responses. These results were originally proved either for specific games (e.g., Prisoner's Dilemma) or for classes of games satisfying certain conditions not known to be both necessary and sufficient. We derive a complete characterization in the form of simple necessary and sufficient conditions for the existence of a computable strategy without a computable best response under limit-of-means payoff. We further refine the characterization by requiring the strategy profiles to be Nash equilibria or subgame-perfect equilibria, and we show how the characterizations entail that it is efficiently decidable whether an infinitely repeated game has a computable strategy without a computable best response.

    Growth of strategy sets, entropy, and nonstationary bounded recall

    The paper initiates the study of long-term interactions where players' bounded rationality varies over time. Time-dependent bounded rationality, for player i, is reflected in part in the number ψi(t) of distinct strategies available to him in the first t stages. We examine how the growth rate of ψi(t) affects equilibrium outcomes of repeated games. An upper bound on the individually rational payoff is derived for a class of two-player repeated games, and the derived bound is shown to be tight. As a special case we study repeated games with nonstationary bounded recall and show that a player can guarantee the minimax payoff of the stage game, even against a player with full recall, by remembering a vanishing fraction of the past. A version of the folk theorem is provided for this class of games.
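The phrase "remembering a vanishing fraction of the past" can be made concrete with a recall window that grows without bound yet shrinks relative to elapsed time, e.g. logarithmically. The sketch below is my own illustration of such a nonstationary recall schedule, not the paper's construction:

```python
from math import ceil, log2

def recall_window(t: int) -> int:
    """A recall length that grows with t, yet m(t)/t -> 0:
    at stage t the player remembers only the last m(t) signals."""
    return ceil(log2(t + 1))

def remembered_history(full_history, t):
    """The portion of the history the bounded-recall player conditions on."""
    return full_history[-recall_window(t):]

for t in (10, 1000, 10**6):
    m = recall_window(t)
    print(t, m, m / t)  # the fraction m/t shrinks toward zero
```

With such a schedule the number of strategies ψi(t) grows far more slowly than in the unrestricted case, since at each stage a strategy conditions only on the short remembered window rather than the full history.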

    Two-person repeated games with finite automata

    We study two-person repeated games in which a player with a restricted set of strategies plays against an unrestricted player. An exogenously given bound on the complexity of strategies, which is measured by the size of the smallest automata that implement them, gives rise to a restriction on the strategies available to a player. We examine the asymptotic behavior of the set of equilibrium payoffs as the bound on the strategic complexity of the restricted player tends to infinity, but sufficiently slowly. Results from the study of the zero-sum case provide the individually rational payoff levels.
    Keywords: repeated games, finite automata