521 research outputs found

    Comparative Statics of Altruism and Spite

    The equilibrium outcome of a strategic interaction between two or more people may depend on the weight they place on each other’s payoff. A positive, negative or zero weight represents altruism, spite or complete selfishness, respectively. Paradoxically, the real, material payoff in equilibrium for a group of altruists may be lower than for selfish or spiteful groups. However, this can only be so if the equilibria involved are unstable. If they are stable, the total (equivalently, average) payoff can only increase or remain unchanged with an increasing degree of altruism.
    Keywords: altruism, spite, comparative statics, strategic games, stability of equilibrium
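    A common way to formalize these payoff weights (an illustrative convention; the paper's own notation may differ) gives each player i the objective
        u_i(s) = \pi_i(s) + \alpha_i \sum_{j \ne i} \pi_j(s),
    where \pi_i is player i's material payoff and \alpha_i > 0, \alpha_i < 0 and \alpha_i = 0 correspond to altruism, spite and complete selfishness. The comparative-statics claim above then concerns how the equilibrium total \sum_i \pi_i varies as the weights \alpha_i increase, and it holds only at stable equilibria.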

    Static Stability in Games

    Static stability of equilibrium in strategic games differs from dynamic stability in not being linked to any particular dynamical system. In other words, it does not make any assumptions about off-equilibrium behavior. Examples of static notions of stability include evolutionarily stable strategy (ESS) and continuously stable strategy (CSS), both of which are meaningful or justifiable only for particular classes of games, namely, symmetric multilinear games or symmetric games with a unidimensional strategy space, respectively. This paper presents a general notion of local static stability, of which the above two are essentially special cases. It is applicable to virtually all n-person strategic games, both symmetric and asymmetric, with non-discrete strategy spaces.
    Keywords: stability of equilibrium, static stability
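    For orientation, the textbook ESS condition for a symmetric two-player game with payoff function u (a standard definition, not specific to this paper) reads: a strategy x is an ESS if for every y \ne x either
        u(x, x) > u(y, x), \quad \text{or} \quad u(x, x) = u(y, x) \ \text{and} \ u(x, y) > u(y, y).
    The paper's notion of local static stability is meant to play the analogous role without restricting attention to symmetric multilinear games (ESS) or to unidimensional strategy spaces (CSS).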

    Quantum strategies

    We consider game theory from the perspective of quantum algorithms. Strategies in classical game theory are either pure (deterministic) or mixed (probabilistic). We introduce these basic ideas in the context of a simple example, closely related to the traditional Matching Pennies game. While not every two-person zero-sum finite game has an equilibrium in the set of pure strategies, von Neumann showed that there is always an equilibrium at which each player follows a mixed strategy. A mixed strategy deviating from the equilibrium strategy cannot increase a player's expected payoff. We show, however, that in our example a player who implements a quantum strategy can increase his expected payoff, and explain the relation to efficient quantum algorithms. We prove that in general a quantum strategy is always at least as good as a classical one, and furthermore that when both players use quantum strategies there need not be any equilibrium, but if both are allowed mixed quantum strategies there must be.
    Comment: 8 pages, plain TeX, 1 figure
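    The flavor of this example can be reproduced numerically. The sketch below is an illustrative reconstruction with numpy, not the paper's own code; the gate choices follow the standard penny-flip presentation. A quantum player applies a Hadamard operation before and after the classical player's move, and recovers heads with certainty whether or not the classical player flips the penny.

        import numpy as np

        H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard "quantum flip"
        X = np.array([[0, 1], [1, 0]])                 # classical flip (NOT gate)
        I = np.eye(2)

        for classical_move in (I, X):                  # classical player: don't flip / flip
            state = np.array([1.0, 0.0])               # penny starts as heads, |0>
            state = H @ state                          # quantum player: Hadamard
            state = classical_move @ state             # classical player's move
            state = H @ state                          # quantum player: Hadamard again
            p_heads = abs(state[0]) ** 2
            print(f"P(heads) = {p_heads:.2f}")         # 1.00 in both cases

    A classical player restricted to mixed strategies would win only half the time in expectation, which is the gap the abstract refers to.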

    Potential games in volatile environments

    This paper studies the co-evolution of networks and play in the context of finite population potential games. Action revision, link creation and link destruction are combined in a continuous-time Markov process. I derive the unique invariant distribution of this process in closed form, as well as the marginal distribution over action profiles and the conditional distribution over networks. It is shown that the equilibrium interaction topology is an inhomogeneous random graph. Furthermore, a characterization of the set of stochastically stable states is provided, generalizing existing results to models with endogenous interaction structures.
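    For context, a finite game with payoff functions u_i is an exact potential game if there is a single function \Phi on action profiles such that (standard definition, stated here for readers unfamiliar with the concept)
        u_i(a_i', a_{-i}) - u_i(a_i, a_{-i}) = \Phi(a_i', a_{-i}) - \Phi(a_i, a_{-i}) \quad \text{for all } i, a_i, a_i', a_{-i}.
    The model above embeds such a game in a continuous-time Markov process in which links are created and destroyed alongside action revisions.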

    Bayesian games with a continuum of states

    We show that every Bayesian game with purely atomic types has a measurable Bayesian equilibrium when the common knowledge relation is smooth. Conversely, for any common knowledge relation that is not smooth, there exists a type space that yields this common knowledge relation and payoffs such that the resulting Bayesian game will not have any Bayesian equilibrium. We show that our smoothness condition also rules out two paradoxes involving Bayesian games with a continuum of types: the impossibility of having a common prior on components when a common prior over the entire state space exists, and the possibility of interim betting/trade even when no such trade can be supported ex ante.

    Cooperative Control and Potential Games

    We present a view of cooperative control using the language of learning in games. We review the game-theoretic concepts of potential and weakly acyclic games, and demonstrate how several cooperative control problems, such as consensus and dynamic sensor coverage, can be formulated in these settings. Motivated by this connection, we build upon game-theoretic concepts to better accommodate a broader class of cooperative control problems. In particular, we extend existing learning algorithms to accommodate restricted action sets caused by the limitations of agent capabilities and group-based decision making. Furthermore, we also introduce a new class of games called sometimes weakly acyclic games for time-varying objective functions and action sets, and provide distributed algorithms for convergence to an equilibrium.
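    To make the consensus connection concrete, the sketch below is an illustrative toy rather than the paper's algorithm: the graph, action set and payoffs are chosen here for demonstration. Each agent's utility is its negative total disagreement with its neighbors, which makes the game an exact potential game, so asynchronous best-response updates terminate at an equilibrium.

        import random

        # Line graph on 5 agents; each agent chooses a value in {0, ..., 4}.
        neighbors = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}
        actions = {0: 0, 1: 4, 2: 1, 3: 3, 4: 2}
        values = range(5)

        def utility(i, a_i, a):
            # Negative disagreement with neighbors; the potential is the
            # negative total disagreement summed over all edges.
            return -sum(abs(a_i - a[j]) for j in neighbors[i])

        changed = True
        while changed:
            changed = False
            for i in random.sample(list(neighbors), len(neighbors)):
                best = max(values, key=lambda v: utility(i, v, actions))
                if utility(i, best, actions) > utility(i, actions[i], actions):
                    actions[i] = best      # strict improvement raises the potential
                    changed = True

        print(actions)   # a Nash equilibrium of the toy consensus game

    Every strict improvement increases the bounded potential by at least one, so the loop is guaranteed to stop; this is the basic convergence argument the abstract builds on before extending it to restricted action sets and time-varying objectives.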

    Evolutionary Game Theory Squared: Evolving Agents in Endogenously Evolving Zero-Sum Games

    The predominant paradigm in evolutionary game theory and more generally online learning in games is based on a clear distinction between a population of dynamic agents that interact given a fixed, static game. In this paper, we move away from the artificial divide between dynamic agents and static games, to introduce and analyze a large class of competitive settings where both the agents and the games they play evolve strategically over time. We focus on arguably the most archetypal game-theoretic setting -- zero-sum games (as well as network generalizations) -- and the most studied evolutionary learning dynamic -- replicator, the continuous-time analogue of multiplicative weights. Populations of agents compete against each other in a zero-sum competition that itself evolves adversarially to the current population mixture. Remarkably, despite the chaotic coevolution of agents and games, we prove that the system exhibits a number of regularities. First, the system has conservation laws of an information-theoretic flavor that couple the behavior of all agents and games. Secondly, the system is Poincaré recurrent, with effectively all possible initializations of agents and games lying on recurrent orbits that come arbitrarily close to their initial conditions infinitely often. Thirdly, the time-average agent behavior and utility converge to the Nash equilibrium values of the time-average game. Finally, we provide a polynomial time algorithm to efficiently predict this time-average behavior for any such coevolving network game.
    Comment: To appear in AAAI 202
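    The time-average convergence property is easy to observe in the simplest fixed (non-coevolving) instance: Matching Pennies under multiplicative weights, the discrete-time analogue of replicator dynamics. The sketch below is an illustrative simulation under that simplification, not the coevolving network setting the paper analyzes; step size and horizon are chosen arbitrarily here.

        import numpy as np

        A = np.array([[1.0, -1.0], [-1.0, 1.0]])   # Matching Pennies: row player's payoff matrix
        x = np.array([0.9, 0.1])                   # row player's mixed strategy
        y = np.array([0.2, 0.8])                   # column player's mixed strategy
        eta, T = 0.01, 100_000
        x_avg = np.zeros(2)

        for _ in range(T):
            gx = A @ y                             # row player's payoff vector
            gy = -(A.T @ x)                        # column player's payoff vector (zero-sum)
            x = x * np.exp(eta * gx); x /= x.sum() # multiplicative weights update
            y = y * np.exp(eta * gy); y /= y.sum()
            x_avg += x

        print(x_avg / T)   # day-to-day play cycles, but the time average approaches (0.5, 0.5)

    The day-to-day strategies keep cycling around the interior equilibrium, yet the running average converges to the Nash equilibrium, which is the fixed-game analogue of the paper's third regularity.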