
    Designing Network Protocols for Good Equilibria

    Designing and deploying a network protocol determines the rules by which end users interact with each other and with the network. We consider the problem of designing a protocol to optimize the equilibrium behavior of a network with selfish users. We study network cost-sharing games, where the set of Nash equilibria depends fundamentally on the choice of an edge cost-sharing protocol. Previous research focused on the Shapley protocol, in which the cost of each edge is shared equally among its users. We systematically study the design of optimal cost-sharing protocols for undirected and directed graphs, single-sink and multicommodity networks, and different measures of the inefficiency of equilibria. Our primary technical tool is a precise characterization of the cost-sharing protocols that induce only network games with pure-strategy Nash equilibria. We use this characterization to prove, among other results, that the Shapley protocol is optimal in directed graphs and that simple priority protocols are essentially optimal in undirected graphs.
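    To make the equal-split rule concrete, here is a minimal sketch, not taken from the paper, of a network cost-sharing game under the Shapley protocol: each player picks a path, every edge's fixed cost is divided evenly among its users, and a brute-force check tests whether a strategy profile is a pure-strategy Nash equilibrium. The two-edge graph, its costs, and the player count are illustrative assumptions.

```python
# A minimal sketch (assumed toy instance, not from the paper) of a network
# cost-sharing game under the Shapley (equal-split) protocol.
from itertools import product

# Two parallel s->t edges with fixed costs; each path uses a single edge.
edges = {"cheap": 1.0, "pricey": 1.5}               # assumed edge costs
paths = {"cheap": ["cheap"], "pricey": ["pricey"]}  # assumed path set
n_players = 3

def player_cost(profile, i):
    """Equal-split (Shapley) cost of player i's path under the given profile."""
    total = 0.0
    for e in paths[profile[i]]:
        users = sum(1 for p in profile if e in paths[p])
        total += edges[e] / users
    return total

def is_pure_nash(profile):
    """True if no player can lower her cost by unilaterally switching paths."""
    for i in range(n_players):
        current = player_cost(profile, i)
        for alt in paths:
            deviated = profile[:i] + (alt,) + profile[i + 1:]
            if player_cost(deviated, i) < current - 1e-12:
                return False
    return True

equilibria = [p for p in product(paths, repeat=n_players) if is_pure_nash(p)]
print(equilibria)  # e.g. everyone on "cheap" is an equilibrium: each pays 1/3
```

    On this toy instance both the all-cheap and the all-pricey profiles pass the check, which is exactly the kind of equilibrium multiplicity that the paper's inefficiency measures quantify.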

    Complexity Theory, Game Theory, and Economics: The Barbados Lectures

    This document collects the lecture notes from my mini-course "Complexity Theory, Game Theory, and Economics," taught at the Bellairs Research Institute of McGill University, Holetown, Barbados, February 19-23, 2017, as the 29th McGill Invitational Workshop on Computational Complexity. The goal of this mini-course is twofold: (i) to explain how complexity theory has helped illuminate several barriers in economics and game theory; and (ii) to illustrate how game-theoretic questions have led to new and interesting complexity theory, including several recent breakthroughs. It consists of two five-lecture sequences: the Solar Lectures, focusing on the communication and computational complexity of computing equilibria; and the Lunar Lectures, focusing on applications of complexity theory in game theory and economics. No background in game theory is assumed.
    Comment: Revised v2 from December 2019 corrects some errors in, and adds some recent citations to, v1; revised v3 corrects a few typos in v

    Computational Models of Algorithmic Trading in Financial Markets.

    Today's trading landscape is a fragmented and complex system of interconnected electronic markets in which algorithmic traders are responsible for the majority of trading activity. Questions about the effects of algorithmic trading naturally lend themselves to a computational approach, given the nature of the algorithms involved and the electronic systems in place for processing and matching orders. To better understand the economic implications of algorithmic trading, I construct computational agent-based models of scenarios with investors interacting with various algorithmic traders. I employ the simulation-based methodology of empirical game-theoretic analysis to characterize trader behavior in equilibrium under different market conditions. I evaluate the impact of algorithmic trading and market structure within three different scenarios. First, I examine the impact of a market maker on trading gains in a variety of environments. A market maker facilitates trade and supplies liquidity by simultaneously maintaining offers to buy and sell. I find that market making strongly tends to increase total welfare and the market maker is itself profitable. Market making may or may not benefit investors, however, depending on market thickness, investor impatience, and the number of trading opportunities. Second, I investigate the interplay between market fragmentation and latency arbitrage, a type of algorithmic trading strategy in which traders exercise superior speed in order to exploit price disparities between exchanges. I show that the presence of a latency arbitrageur degrades allocative efficiency in continuous markets. Periodic clearing at regular intervals, as in a frequent call market, not only eliminates the opportunity for latency arbitrage but also significantly improves welfare. Lastly, I study whether frequent call markets could potentially coexist alongside the continuous trading mechanisms employed by virtually all modern exchanges. I examine the strategic behavior of fast and slow traders who submit orders to either a frequent call market or a continuous double auction. I model this as a game of market choice, and I find strong evidence of a predator-prey relationship between fast and slow traders: the fast traders prefer to be with slower agents regardless of market, and slow traders ultimately seek the protection of the frequent call market.
    PhD thesis, Computer Science and Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/120811/1/ewah_1.pd
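    As a concrete illustration of the mechanism in the second and third scenarios, the sketch below, an assumption rather than the dissertation's simulator, implements the batch-clearing step of a frequent call market: orders accumulate over an interval and are matched at one uniform price, so superior speed confers no advantage within a batch. The order values and the midpoint pricing rule are made up.

```python
# A minimal, illustrative sketch of uniform-price batch clearing in a frequent
# call market (assumed order values and pricing rule, not the thesis's model).

def clear_call_market(bids, asks):
    """Match one batch of accumulated unit orders at a single uniform price.

    bids, asks: lists of limit prices (one unit each).
    Returns (number of units traded, clearing price or None).
    """
    bids = sorted(bids, reverse=True)  # highest willingness to pay first
    asks = sorted(asks)                # lowest offers first
    traded = 0
    while traded < min(len(bids), len(asks)) and bids[traded] >= asks[traded]:
        traded += 1
    if traded == 0:
        return 0, None
    # One price for the whole batch: midpoint of the last crossing pair (assumed rule).
    price = (bids[traded - 1] + asks[traded - 1]) / 2
    return traded, price

# Orders gathered during one clearing interval.
print(clear_call_market(bids=[10.2, 10.0, 9.7], asks=[9.8, 9.9, 10.5]))
# -> (2, 9.95): two units trade at one price; arrival order within the batch is irrelevant.
```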

    Learning the Structure and Parameters of Large-Population Graphical Games from Behavioral Data

    We consider learning, from strictly behavioral data, the structure and parameters of linear influence games (LIGs), a class of parametric graphical games introduced by Irfan and Ortiz (2014). LIGs facilitate causal strategic inference (CSI): making inferences from causal interventions on stable behavior in strategic settings. Applications include the identification of the most influential individuals in large (social) networks. Such tasks can also support policy-making analysis. Motivated by the computational work on LIGs, we cast the learning problem as maximum-likelihood estimation (MLE) of a generative model defined by pure-strategy Nash equilibria (PSNE). Our simple formulation uncovers the fundamental interplay between goodness-of-fit and model complexity: good models capture equilibrium behavior within the data while controlling the true number of equilibria, including those unobserved. We provide a generalization bound establishing the sample complexity for MLE in our framework. We propose several algorithms including convex loss minimization (CLM) and sigmoidal approximations. We prove that the number of exact PSNE in LIGs is small, with high probability; thus, CLM is sound. We illustrate our approach on synthetic data and real-world U.S. congressional voting records. We briefly discuss our learning framework's generality and potential applicability to general graphical games.
    Comment: Journal of Machine Learning Research (accepted, pending publication). Last conference version submitted March 30, 2012 to UAI 2012; first conference version, entitled "Learning Influence Games," initially submitted on June 1, 2010 to NIPS 201
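    The generative model in question is supported on the pure-strategy Nash equilibria of a linear influence game. The sketch below is a hedged illustration using one common formulation of LIGs (after Irfan and Ortiz), not the paper's code: player i plays x_i in {-1, +1}, and a profile x is a PSNE when x_i * (sum_j W[i][j] * x_j - b[i]) >= 0 for every i. The weight matrix W and thresholds b are made-up numbers.

```python
# A minimal sketch enumerating the PSNE of a tiny linear influence game
# (assumed formulation and parameters, for illustration only).
from itertools import product

W = [[0.0, 0.8, -0.3],   # W[i][j]: influence of player j's action on player i
     [0.5, 0.0,  0.4],
     [-0.2, 0.6, 0.0]]
b = [0.1, -0.2, 0.3]     # per-player thresholds

def is_psne(x):
    """Check the best-response condition for every player."""
    for i in range(len(x)):
        influence = sum(W[i][j] * x[j] for j in range(len(x)) if j != i)
        if x[i] * (influence - b[i]) < 0:
            return False
    return True

psne = [x for x in product([-1, 1], repeat=3) if is_psne(x)]
print(psne)  # the (typically small) equilibrium set that the MLE treats as the model's support
```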

    Leveraging repeated games for solving complex multiagent decision problems

    Making good decisions in multiagent environments is hard: the presence of several decision makers implies conflicts of interest, a lack of coordination, and a multiplicity of possible decisions. If, moreover, the same decision makers interact repeatedly over time, they must decide not only what to do in the present, but also how their present decisions may affect the behavior of the others in the future. Game theory is a mathematical tool that models such interactions as strategic games among multiple players, and multiagent decision problems are therefore often studied using game theory. In this context, restricting attention to dynamic games, complex multiagent decision problems can be approached algorithmically. The contribution of this thesis is three-fold. First, it contributes an algorithmic framework for distributed planning in non-cooperative dynamic games. The multiplicity of possible plans is a source of serious complications for any planning approach; we propose a novel approach, based on the concept of learning in repeated games, that overcomes these complications by means of communication between players. Second, we propose a learning algorithm for repeated-game self-play. Our algorithm allows players to converge, in an initially unknown repeated game, to a joint behavior that is optimal in a certain well-defined sense, without any communication between players. Finally, we propose a family of algorithms for approximately solving dynamic games and for extracting equilibrium strategy profiles. In this context, we first propose a method to compute a nonempty subset of approximate subgame-perfect equilibria in repeated games. We then show how to extend this method to approximate all subgame-perfect equilibria in repeated games, and also to solve more complex dynamic games.
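    As a flavor of the equilibrium conditions such methods must verify, the sketch below, an illustrative assumption rather than the thesis's algorithm, runs the one-shot-deviation test for a grim-trigger profile in a discounted repeated prisoner's dilemma; setting eps > 0 turns the exact test into an approximate (eps-slack) one, in the spirit of approximate subgame-perfect equilibria.

```python
# A minimal sketch (assumed example, not the thesis's method): one-shot-deviation
# test for grim trigger in a discounted repeated prisoner's dilemma.
T, R, P, S = 5.0, 3.0, 1.0, 0.0   # temptation, reward, punishment, sucker (assumed stage payoffs)

def grim_trigger_is_spe(delta, eps=0.0):
    """Check both phases of grim trigger up to an eps slack (eps=0 gives the exact SPE test)."""
    # Cooperation phase: cooperating forever (R each period) must beat a one-shot
    # defection followed by permanent punishment (T now, then P forever).
    coop_value = R / (1 - delta)
    deviate_value = T + delta * P / (1 - delta)
    cooperation_ok = coop_value + eps >= deviate_value

    # Punishment phase: defecting forever must beat a one-shot return to cooperation.
    punish_value = P / (1 - delta)
    soften_value = S + delta * P / (1 - delta)
    punishment_ok = punish_value + eps >= soften_value   # holds whenever P >= S

    return cooperation_ok and punishment_ok

print(grim_trigger_is_spe(0.6))   # True: delta >= (T - R) / (T - P) = 0.5
print(grim_trigger_is_spe(0.4))   # False: too impatient to sustain cooperation
```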

    Player agency in interactive narrative: audience, actor & author

    The question motivating this review paper is: how can computer-based interactive narrative be used as a constructivist learning activity? The paper proposes that player agency can be used to link interactive narrative to learner agency in constructivist theory, and to classify approaches to interactive narrative. The traditional question driving research in interactive narrative is, ‘how can an interactive narrative deal with a high degree of player agency, while maintaining a coherent and well-formed narrative?’ This question derives from an Aristotelian approach to interactive narrative that, as the question shows, is inherently antagonistic to player agency. Within this approach, player agency must be restricted and manipulated to maintain the narrative. Two alternative approaches, based on Brecht’s Epic Theatre and Boal’s Theatre of the Oppressed, are reviewed. If a Boalian approach to interactive narrative is taken, the conflict between narrative and player agency dissolves. The question that emerges from this approach is quite different from the traditional one above, and presents a more useful way of applying interactive narrative as a constructivist learning activity.
