
    Pure Subgame-Perfect Equilibria in Free Transition Games

    We consider a class of stochastic games in which each state is identified with a player. At any moment during play, one of the players is called active. The active player can terminate the game, or announce any player, who then becomes the active player. Upon termination there is a non-negative payoff for each player, which depends only on the player who decided to terminate. We give a combinatorial proof of the existence of subgame-perfect equilibria in pure strategies for the games in our class.
    Keywords: mathematical economics
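    The paper's existence proof is combinatorial; purely as an illustration of the game class (not the paper's construction), a tiny two-player instance can be brute-forced over stationary pure profiles, checking one-shot deviations at every state. All payoffs below are made up.

```python
from itertools import product

# Toy two-player "free transition" game (illustrative only).
# State i belongs to player i. quit_payoff[i] is the payoff vector
# handed out if player i terminates; infinite play yields zero for all.
N = 2
quit_payoff = ((2, 0), (0, 2))

def play_out(profile, start):
    """Follow a stationary profile from `start` until someone quits
    (return that player's termination payoffs) or a cycle forms
    (return the all-zero payoff of never terminating)."""
    seen, state = set(), start
    while state not in seen:
        seen.add(state)
        action = profile[state]
        if action == "quit":
            return quit_payoff[state]
        state = action
    return (0,) * N

def value(profile, state, action):
    """Payoff vector when the active player at `state` takes `action`
    once and everyone follows `profile` afterwards."""
    return quit_payoff[state] if action == "quit" else play_out(profile, action)

def is_stationary_spe(profile):
    """One-shot deviation check at every state."""
    actions = ["quit"] + list(range(N))
    return all(
        value(profile, s, profile[s])[s] >= value(profile, s, a)[s]
        for s in range(N) for a in actions
    )

equilibria = [p for p in product(["quit"] + list(range(N)), repeat=N)
              if is_stationary_spe(p)]
print(equilibria)  # both players quitting immediately is an SPE here
```

Note the check is restricted to stationary profiles; the paper's result concerns pure subgame-perfect equilibria in general, which this sketch does not capture.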

    An Approximate Subgame-Perfect Equilibrium Computation Technique for Repeated Games

    This paper presents a technique for approximating, up to any precision, the set of subgame-perfect equilibria (SPE) in discounted repeated games. The process starts with a single hypercube approximation of the set of SPE. The initial hypercube is then gradually partitioned into a set of smaller adjacent hypercubes, while those hypercubes that cannot contain any point of the set of SPE are withdrawn. Whether a given hypercube can contain an equilibrium point is verified by an appropriate mathematical program. Three formulations of the algorithm for approximately computing the set of SPE payoffs and extracting players' strategies are then proposed: the first two do not assume any external coordination between players, while the third assumes a certain level of coordination during game play for convexifying the set of continuation payoffs after any repeated-game history. Particular attention is paid to the question of extracting players' strategies and their representability in the form of finite automata, an important feature for artificial agent systems. Comment: 26 pages, 13 figures, 1 table
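    The refine-and-prune idea can be sketched on a toy target set (the unit disk) instead of an SPE payoff set; the geometric pruning test below stands in for the paper's mathematical program that certifies a hypercube is empty of equilibrium points.

```python
# Branch-and-prune sketch: keep only boxes that may still intersect
# the target set, split survivors, and repeat. The union of kept boxes
# is an outer approximation that shrinks toward the set.

def may_intersect_disk(box):
    """Keep a box unless it provably contains no point of the unit disk,
    i.e. unless its closest point to the origin lies outside the disk."""
    dist_sq = sum(max(lo, min(0.0, hi)) ** 2 for lo, hi in box)
    return dist_sq <= 1.0

def split(box):
    """Split a 2-D box into four equal sub-boxes."""
    (x0, x1), (y0, y1) = box
    xm, ym = (x0 + x1) / 2, (y0 + y1) / 2
    return [((x0, xm), (y0, ym)), ((x0, xm), (ym, y1)),
            ((xm, x1), (y0, ym)), ((xm, x1), (ym, y1))]

boxes = [((-1.0, 1.0), (-1.0, 1.0))]   # initial single hypercube
for _ in range(6):                      # six refinement rounds
    boxes = [b for parent in boxes for b in split(parent)
             if may_intersect_disk(b)]

outer_area = sum((x1 - x0) * (y1 - y0) for (x0, x1), (y0, y1) in boxes)
print(round(outer_area, 3))  # outer estimate of the disk's area, pi
```

Because pruning only discards provably empty boxes, the kept boxes always cover the target set, so the estimate converges to the true area from above.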

    Folk Theorems with Bounded Recall under (Almost) Perfect Monitoring, Third Version

    We prove that the perfect-monitoring folk theorem continues to hold when attention is restricted to strategies with bounded recall and the equilibrium is essentially required to be strict. As a consequence, the perfect-monitoring folk theorem is shown to be behaviorally robust under almost-perfect, almost-public monitoring. That is, the same specification of behavior continues to be an equilibrium when the monitoring is perturbed from perfect to highly-correlated private monitoring.
    Keywords: repeated games, bounded-recall strategies, folk theorem, imperfect monitoring

    "Repeated Games, Entry in The New Palgrave Dictionary of Economics, 2nd Edition"

    This entry shows why self-interested agents manage to cooperate in a long-term relationship. When agents interact only once, they often have an incentive to deviate from cooperation. In a repeated interaction, however, any mutually beneficial outcome can be sustained in an equilibrium. This fact, known as the folk theorem, is explained under various information structures. The entry also compares repeated games with other means of achieving efficiency and briefly discusses the scope for potential applications.
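    The folk-theorem logic reduces to a one-line calculation in the standard repeated prisoner's dilemma under a grim-trigger strategy. The payoff numbers below (T=5 > R=3 > P=1 > S=0) are a conventional illustration, not taken from the entry.

```python
# Grim trigger in the repeated prisoner's dilemma: cooperate until the
# opponent defects, then defect forever. Cooperation is sustainable iff
# the discounted value of cooperating beats a one-shot defection
# followed by permanent mutual defection.
T, R, P, S = 5.0, 3.0, 1.0, 0.0   # temptation, reward, punishment, sucker

def cooperation_sustainable(delta):
    """True iff mutual cooperation is an equilibrium outcome under
    grim trigger with discount factor delta."""
    cooperate_forever = R / (1 - delta)
    defect_once = T + delta * P / (1 - delta)
    return cooperate_forever >= defect_once

# Solving R/(1-d) = T + d*P/(1-d) gives the threshold discount factor.
critical_delta = (T - R) / (T - P)
print(critical_delta)  # 0.5 for these payoffs
```

Any discount factor at or above the threshold sustains cooperation; below it, the one-shot temptation wins, which is the "agents interacting only once defect" case in the limit.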

    Jamming Games in the MIMO Wiretap Channel With an Active Eavesdropper

    This paper investigates reliable and covert transmission strategies in a multiple-input multiple-output (MIMO) wiretap channel with a transmitter, a receiver, and an adversarial wiretapper, each equipped with multiple antennas. In a departure from existing work, the wiretapper possesses a novel capability to act either as a passive eavesdropper or as an active jammer, under a half-duplex constraint. The transmitter therefore faces a choice between allocating all of its power to data, or broadcasting artificial interference along with the information signal in an attempt to jam the eavesdropper (whose instantaneous channel state is assumed unknown). To examine the resulting trade-offs for the legitimate transmitter and the adversary, we model their interactions as a two-person zero-sum game with the ergodic MIMO secrecy rate as the payoff function. We first examine conditions for the existence of pure-strategy Nash equilibria (NE) and the structure of mixed-strategy NE for the strategic form of the game. We then derive equilibrium strategies for the extensive form of the game, where players move sequentially under scenarios of perfect and imperfect information. Finally, numerical simulations are presented to examine the equilibrium outcomes of the various scenarios considered. Comment: 27 pages, 8 figures. To appear, IEEE Transactions on Signal Processing
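    The strategic-form analysis can be illustrated with a toy 2x2 zero-sum game: rows for the transmitter (all power to data vs. split power with artificial noise), columns for the wiretapper (eavesdrop vs. jam). The secrecy-rate entries below are invented purely for illustration; the paper's actual payoffs are ergodic MIMO secrecy rates.

```python
# Payoff matrix A for the row player (transmitter); the column player
# (wiretapper) receives -A. Entries are hypothetical secrecy rates.
a, b = 3.0, 1.0   # row 1 vs (eavesdrop, jam)
c, d = 2.0, 4.0   # row 2 vs (eavesdrop, jam)

def solve_2x2_zero_sum(a, b, c, d):
    """Mixed equilibrium of a 2x2 zero-sum game with no saddle point:
    each player mixes so as to make the opponent indifferent."""
    denom = a - b - c + d
    p = (d - c) / denom           # probability of row 1
    q = (d - b) / denom           # probability of column 1
    value = (a * d - b * c) / denom
    return p, q, value

p, q, value = solve_2x2_zero_sum(a, b, c, d)
print(p, q, value)  # 0.5 0.75 2.5
```

Here the maximin (2) and minimax (3) differ, so no pure-strategy NE exists and both players must randomize, mirroring the paper's distinction between conditions for pure-strategy NE and the structure of mixed-strategy NE.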

    Climate Change and Game Theory

    This survey paper examines the problem of achieving global cooperation to reduce greenhouse gas emissions. Contributions to this problem are reviewed from non-cooperative game theory, cooperative game theory, and implementation theory. Solutions are examined for games where players have a continuous choice about how much to pollute, games where players make decisions about treaty participation, and games where players make decisions about treaty ratification. The implications of linking cooperation on climate change with cooperation on other issues, such as trade, are examined. Cooperative and non-cooperative approaches to coalition formation are investigated in order to examine the behaviour of coalitions cooperating on climate change. One way to achieve cooperation is to design a game, known as a mechanism, whose equilibrium corresponds to an optimal outcome. This paper examines some mechanisms that are based on conditional commitments and could lead to substantial cooperation.
    Keywords: climate change negotiations, game theory, implementation theory, coalition formation, subgame-perfect equilibrium, Environmental Economics and Policy
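    The gap between non-cooperative play and full cooperation in continuous-emissions games can be made concrete with a stylized linear-quadratic example. The functional form and numbers below are assumptions for illustration, not taken from the survey.

```python
# Stylized emissions game: each of N countries picks emissions e_i,
# earning benefit b*e_i - e_i**2/2 but suffering damage c per unit of
# TOTAL emissions, i.e. u_i = b*e_i - e_i**2/2 - c*sum_j(e_j).
N, b, c = 5, 10.0, 1.0

# Nash equilibrium: each country ignores the damage it imposes on the
# other N-1 countries; first-order condition b - e_i - c = 0.
nash_emission = b - c

# Social optimum: a planner internalises the damage to all N countries;
# first-order condition b - e_i - N*c = 0.
optimal_emission = b - N * c

print(nash_emission, optimal_emission)  # 9.0 5.0
```

The equilibrium emission level exceeds the socially optimal one, and the gap grows with the number of countries, which is exactly the free-riding problem that the surveyed treaty and mechanism designs try to correct.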

    Leveraging repeated games for solving complex multiagent decision problems

    Making good decisions in multiagent environments is hard: the presence of several decision makers implies conflicts of interest, a lack of coordination, and a multiplicity of possible decisions. If, moreover, the same decision makers interact repeatedly over time, they must decide not only what to do in the present, but also how their present decisions may affect the behavior of the others in the future. Game theory is a mathematical tool that models such interactions as strategic games with multiple players, and multiagent decision problems are therefore often studied using game theory. In this context, restricting attention to dynamic games, complex multiagent decision problems can be approached algorithmically. The contribution of this thesis is threefold. First, it contributes an algorithmic framework for distributed planning in non-cooperative dynamic games. The multiplicity of possible plans is a source of serious complications for any planning approach; we propose a novel approach based on the concept of learning in repeated games, which overcomes these complications by means of communication between players. We then propose a learning algorithm for repeated-game self-play. Our algorithm allows players to converge, in an initially unknown repeated game, to a joint behavior that is optimal in a certain well-defined sense, without any communication between players. Finally, we propose a family of algorithms for approximately solving dynamic games and extracting equilibrium strategy profiles. In this context, we first propose a method to compute a nonempty subset of approximate subgame-perfect equilibria in repeated games. We then show how to extend this method to approximate all subgame-perfect equilibria in repeated games, and to solve more complex dynamic games.
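    The thesis's self-play algorithm is its own contribution; as a generic illustration of learning in repeated-game self-play, classical fictitious play in matching pennies shows how myopic best responses to empirical frequencies can still drive long-run behavior toward equilibrium.

```python
# Fictitious play in matching pennies (NOT the thesis's algorithm):
# each round, each player best-responds to the opponent's empirical
# action frequencies. In this zero-sum game the empirical frequencies
# are known to approach the unique mixed equilibrium (1/2, 1/2).
counts1 = {"H": 0, "T": 0}   # player 1's past plays
counts2 = {"H": 0, "T": 0}   # player 2's past plays

ROUNDS = 5000
for _ in range(ROUNDS):
    # Both actions are chosen simultaneously from past counts.
    a1 = "H" if counts2["H"] >= counts2["T"] else "T"  # P1 matches
    a2 = "T" if counts1["H"] >= counts1["T"] else "H"  # P2 mismatches
    counts1[a1] += 1
    counts2[a2] += 1

freq1 = counts1["H"] / ROUNDS
freq2 = counts2["H"] / ROUNDS
print(freq1, freq2)  # both approach the equilibrium frequency 0.5
```

Per-round play cycles rather than settling, which is why the thesis's stronger notions (convergence to a well-defined optimal joint behavior, and approximating the full SPE set) require machinery beyond this baseline.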