
    Learning the Structure and Parameters of Large-Population Graphical Games from Behavioral Data

    We consider learning, from strictly behavioral data, the structure and parameters of linear influence games (LIGs), a class of parametric graphical games introduced by Irfan and Ortiz (2014). LIGs facilitate causal strategic inference (CSI): making inferences from causal interventions on stable behavior in strategic settings. Applications include the identification of the most influential individuals in large (social) networks. Such tasks can also support policy-making analysis. Motivated by the computational work on LIGs, we cast the learning problem as maximum-likelihood estimation (MLE) of a generative model defined by pure-strategy Nash equilibria (PSNE). Our simple formulation uncovers the fundamental interplay between goodness-of-fit and model complexity: good models capture equilibrium behavior within the data while controlling the true number of equilibria, including those unobserved. We provide a generalization bound establishing the sample complexity for MLE in our framework. We propose several algorithms, including convex loss minimization (CLM) and sigmoidal approximations. We prove that the number of exact PSNE in LIGs is small with high probability; thus, CLM is sound. We illustrate our approach on synthetic data and real-world U.S. congressional voting records. We briefly discuss our learning framework's generality and potential applicability to general graphical games.
    Comment: Journal of Machine Learning Research (accepted, pending publication). Last conference version submitted March 30, 2012 to UAI 2012. First conference version, entitled "Learning Influence Games", initially submitted on June 1, 2010 to NIPS 2010.
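    To make the equilibrium-based generative model concrete, below is a minimal sketch (not the authors' code): it enumerates the pure-strategy Nash equilibria of a small linear influence game and scores data under a simple mixture likelihood. The parameter names (W, b, q) and the exact mixture form are illustrative assumptions.

```python
# Hypothetical sketch of an LIG and an equilibrium-based mixture likelihood.
# W[i, j] is player j's influence weight on player i, b[i] is i's threshold;
# actions are +/-1. The mixture form below is an illustrative assumption.
import itertools
import numpy as np

def is_psne(x, W, b):
    """x in {-1,+1}^n is a pure-strategy Nash equilibrium iff every player's
    action agrees in sign with its net influence: x_i * (W x - b)_i >= 0."""
    return np.all(x * (W @ x - b) >= 0)

def enumerate_psne(W, b):
    n = len(b)
    return [x for x in map(np.array, itertools.product([-1, 1], repeat=n))
            if is_psne(x, W, b)]

def log_likelihood(data, W, b, q=0.9):
    """With probability q an observation is drawn uniformly from the PSNE set,
    otherwise uniformly from the non-equilibrium joint actions."""
    n = len(b)
    ne = {tuple(x) for x in enumerate_psne(W, b)}
    k, total = len(ne), 2 ** n
    return sum(np.log(q / k if tuple(x) in ne else (1 - q) / (total - k))
               for x in data)

# Toy 3-player game with symmetric influences and zero thresholds.
W = np.array([[0.0, 0.5, -0.2],
              [0.5, 0.0,  0.3],
              [-0.2, 0.3, 0.0]])
b = np.zeros(3)
equilibria = enumerate_psne(W, b)
print(len(equilibria), "pure-strategy Nash equilibria")
print("log-likelihood of the equilibria themselves:",
      round(log_likelihood(equilibria, W, b), 3))
```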

    Mean Field Equilibrium in Dynamic Games with Complementarities

    We study a class of stochastic dynamic games that exhibit strategic complementarities between players; formally, in the games we consider, the payoff of a player has increasing differences between her own state and the empirical distribution of the states of other players. Such games can be used to model a diverse set of applications, including network security models, recommender systems, and dynamic search in markets. Stochastic games are generally difficult to analyze, and these difficulties are only exacerbated when the number of players is large (as might be the case in the preceding examples). We consider an approximation methodology called mean field equilibrium to study these games. In such an equilibrium, each player reacts only to the long-run average state of the other players. We find sufficient conditions for the existence of a mean field equilibrium in such games. Furthermore, as a simple consequence of this existence theorem, we obtain several natural monotonicity properties. We show that there exist a "largest" and a "smallest" equilibrium among all those where the equilibrium strategy used by a player is nondecreasing, and we also show that players converge to each of these equilibria via natural myopic learning dynamics; as we argue, these dynamics are more reasonable than the standard best-response dynamics. We also provide sensitivity results, where we quantify how the equilibria of such games move in response to changes in the parameters of the game (e.g., the introduction of incentives to players).
    Comment: 56 pages, 5 figures
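    As a rough illustration of the monotone fixed-point structure behind these results (not the paper's model), the sketch below iterates a toy "population response" map from the two extreme starting points; with strategic complementarities, iterating from below and from above reaches the smallest and the largest mean field equilibrium. All functional forms and numbers are assumptions for illustration only.

```python
# Toy illustration of computing the smallest and largest mean-field equilibria
# by monotone iteration. The logistic response and all constants are
# illustrative assumptions; in the paper, one step would involve solving a
# single-agent dynamic program against the long-run population state.
import numpy as np

def population_response(m, beta=8.0, eps=0.1):
    """Given the population's long-run mean state m in [0, 1], return the
    long-run mean induced when every agent myopically best-responds to m.
    The response is increasing in m (strategic complementarity)."""
    p_high = 1.0 / (1.0 + np.exp(-beta * (m - 0.5)))   # logistic best response
    return (1 - eps) * p_high + eps * 0.5               # small exogenous churn

def iterate_to_fixed_point(m0, tol=1e-10, max_iter=10_000):
    m = m0
    for _ in range(max_iter):
        m_next = population_response(m)
        if abs(m_next - m) < tol:
            return m_next
        m = m_next
    return m

# Because the map is monotone, starting at 0 converges upward to the smallest
# equilibrium and starting at 1 converges downward to the largest one.
print("smallest mean-field equilibrium:", round(iterate_to_fixed_point(0.0), 4))
print("largest  mean-field equilibrium:", round(iterate_to_fixed_point(1.0), 4))
```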

    Evolutionary Poisson Games for Controlling Large Population Behaviors

    Emerging applications in engineering such as crowd-sourcing and (mis)information propagation involve a large population of heterogeneous users or agents in a complex network who strategically make dynamic decisions. In this work, we establish an evolutionary Poisson game framework to capture the random, dynamic and heterogeneous interactions of agents in a holistic fashion, and design mechanisms to control their behaviors to achieve a system-wide objective. We use the antivirus protection challenge in cyber security to motivate the framework, where each user in the network can choose whether or not to adopt the software. We introduce the notion of evolutionary Poisson stable equilibrium for the game, and show its existence and uniqueness. Online algorithms are developed using the techniques of stochastic approximation coupled with the population dynamics, and they are shown to converge to the optimal solution of the controller problem. Numerical examples are used to illustrate and corroborate our results.
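    As a rough sketch of what such an online scheme can look like (illustrative only, not the paper's algorithm), the code below has a controller adjust an incentive with decreasing Robbins-Monro step sizes from noisy observations of the adoption level, while the population follows simple logit adoption dynamics. The payoffs, noise level, and target are all assumptions.

```python
# Toy stochastic-approximation controller coupled with logit population
# dynamics. Everything here (payoff form, target, noise) is an illustrative
# assumption, not the model from the paper.
import numpy as np

rng = np.random.default_rng(0)

def adoption_step(p, theta, rate=0.1, beta=2.0):
    """One step of the population dynamics: the adoption fraction p drifts
    toward a logit response whose attractiveness grows with the incentive
    theta and with current adoption p (a simple network effect)."""
    advantage = theta + p - 0.5                       # toy payoff gain from adopting
    target = 1.0 / (1.0 + np.exp(-beta * advantage))
    return p + rate * (target - p)

p_star = 0.8            # system-wide adoption level the controller aims for
p, theta = 0.1, 0.0
for k in range(1, 20_001):
    p = adoption_step(p, theta)
    p_hat = np.clip(p + 0.05 * rng.standard_normal(), 0.0, 1.0)  # noisy observation
    step = 1.0 / k ** 0.7                                        # decreasing step size
    theta += step * (p_star - p_hat)   # Robbins-Monro correction of the incentive

print(f"incentive theta = {theta:.3f}, adoption = {p:.3f} (target {p_star})")
```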

    Robust stochastic stability

    A strategy profile of a game is called robustly stochastically stable if it is stochastically stable for a given behavioral model independently of the specification of revision opportunities and tie-breaking assumptions in the dynamics. We provide a simple radius-coradius result for robust stochastic stability and examine several applications. For the logit-response dynamics, the selection of potential maximizers is robust for the subclass of supermodular symmetric binary-action games. For the mistakes model, the weaker property of strategic complementarity suffices for robustness in this class of games. We also investigate the robustness of the selection of risk-dominant strategies in coordination games under best-reply and the selection of Walrasian strategies in aggregative games under imitation.
    Keywords: learning in games, stochastic stability, radius-coradius theorems, logit-response dynamics, mutations, imitation
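    To see the logit-response selection result in action, here is a small simulation (not from the paper): asynchronous logit revisions in a symmetric binary-action coordination game, where the fraction of time spent at the potential-maximizing profile grows as the noise shrinks. The payoff numbers are illustrative.

```python
# Asynchronous logit-response dynamics in a symmetric 2x2 coordination game.
# Payoffs are illustrative. The exact potential is P(A,A)=2, P(B,B)=3,
# P(A,B)=P(B,A)=0, so (B,B) is the potential maximizer; at low noise the
# dynamics should spend most of their time there.
import numpy as np

rng = np.random.default_rng(1)

# u[own_action, other_action]; action 0 = A, action 1 = B.
u = np.array([[5.0, 0.0],
              [3.0, 3.0]])

def simulate(noise, steps=200_000):
    profile = rng.integers(0, 2, size=2)     # random initial action profile
    visits = np.zeros((2, 2))
    for _ in range(steps):
        i = rng.integers(0, 2)               # one randomly chosen player revises
        payoffs = u[:, profile[1 - i]] / noise
        probs = np.exp(payoffs - payoffs.max())
        probs /= probs.sum()                 # logit (softmax) response
        profile[i] = rng.choice(2, p=probs)
        visits[profile[0], profile[1]] += 1
    return visits / steps

for noise in (1.0, 0.3, 0.1):
    freq = simulate(noise)
    print(f"noise={noise}: time at (A,A)={freq[0, 0]:.3f}, time at (B,B)={freq[1, 1]:.3f}")
```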

    Natural Language Does Not Emerge 'Naturally' in Multi-Agent Dialog

    A number of recent works have proposed techniques for end-to-end learning of communication protocols among cooperative multi-agent populations, and have simultaneously found the emergence of grounded human-interpretable language in the protocols developed by the agents, all learned without any human supervision! In this paper, using a Task and Tell reference game between two agents as a testbed, we present a sequence of 'negative' results culminating in a 'positive' one -- showing that while most agent-invented languages are effective (i.e. achieve near-perfect task rewards), they are decidedly not interpretable or compositional. In essence, we find that natural language does not emerge 'naturally', despite the semblance of ease of natural-language-emergence that one may gather from recent literature. We discuss how it is possible to coax the invented languages to become more and more human-like and compositional by increasing restrictions on how two agents may communicate.
    Comment: 9 pages, 7 figures, 2 tables, accepted at EMNLP 2017 as a short paper

    Competitive Gradient Descent

    We introduce a new algorithm for the numerical computation of Nash equilibria of competitive two-player games. Our method is a natural generalization of gradient descent to the two-player setting where the update is given by the Nash equilibrium of a regularized bilinear local approximation of the underlying game. It avoids oscillatory and divergent behaviors seen in alternating gradient descent. Using numerical experiments and rigorous analysis, we provide a detailed comparison to methods based on optimism and consensus, and show that our method avoids making any unnecessary changes to the gradient dynamics while achieving exponential (local) convergence for (locally) convex-concave zero-sum games. Convergence and stability properties of our method are robust to strong interactions between the players, without adapting the stepsize, which is not the case with previous methods. In our numerical experiments on non-convex-concave problems, existing methods are prone to divergence and instability due to their sensitivity to interactions among the players, whereas we never observe divergence of our algorithm. The ability to choose larger stepsizes furthermore allows our algorithm to achieve faster convergence, as measured by the number of model evaluations.
    Comment: Appeared in NeurIPS 2019. This version corrects an error in Theorem 2.2. Source code used for the numerical experiments can be found under http://github.com/f-t-s/CGD. A high-level overview of this work can be found under http://f-t-s.github.io/projects/cgd
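    To make the update rule concrete, here is a minimal numpy sketch (not the released implementation) of one CGD step on the bilinear zero-sum game f(x, y) = x^T A y, compared with plain simultaneous gradient descent/ascent; the formulas below follow from solving the regularized bilinear local game described above. A practical implementation would avoid forming these matrices explicitly, whereas this toy version just solves the small linear systems directly.

```python
# Minimal sketch of a competitive gradient descent (CGD) step for the bilinear
# zero-sum game f(x, y) = x^T A y, where x minimizes f and y maximizes it.
import numpy as np

def cgd_step(x, y, A, eta):
    """Each player's step is its component of the Nash equilibrium of the
    regularized bilinear local game, which for this f reduces to
        dx = -eta (I + eta^2 A A^T)^{-1} (A y + eta A A^T x)
        dy =  eta (I + eta^2 A^T A)^{-1} (A^T x - eta A^T A y)."""
    I = np.eye(len(x))
    dx = -eta * np.linalg.solve(I + eta**2 * A @ A.T, A @ y + eta * A @ A.T @ x)
    dy = eta * np.linalg.solve(I + eta**2 * A.T @ A, A.T @ x - eta * A.T @ A @ y)
    return x + dx, y + dy

def gda_step(x, y, A, eta):
    """Plain simultaneous gradient descent/ascent, for comparison."""
    return x - eta * (A @ y), y + eta * (A.T @ x)

A = np.array([[1.0]])                 # scalar bilinear game f(x, y) = x * y
x_c = y_c = x_g = y_g = np.array([1.0])
eta = 0.2
for _ in range(200):
    x_c, y_c = cgd_step(x_c, y_c, A, eta)
    x_g, y_g = gda_step(x_g, y_g, A, eta)

# CGD spirals into the unique equilibrium (0, 0); simultaneous GDA spirals out.
print("CGD distance from equilibrium:", float(np.hypot(x_c[0], y_c[0])))
print("GDA distance from equilibrium:", float(np.hypot(x_g[0], y_g[0])))
```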