10,436 research outputs found

    Stability and Equilibrium Selection in a Link Formation Game

    In this paper we use a non-cooperative equilibrium selection approach as a notion of stability in link formation games. Specifically, we follow the global games approach, first introduced by Carlsson and van Damme (1993), to study the robustness of the set of Nash equilibria for a class of link formation games in strategic form with supermodular payoff functions. Interestingly, the equilibrium selected conflicts with those predicted by the traditional cooperative refinements. Moreover, we obtain a conflict between stability and efficiency even when no such conflict exists under the cooperative refinements. We discuss some practical issues that these different theoretical approaches raise in practice. The paper also provides an extension of global game theory that can be applied beyond the network literature. Keywords: Global Games, Equilibrium Selection, Networks.
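The selection result described in this abstract can be illustrated on the textbook case: in a symmetric 2x2 coordination game, the global-games approach of Carlsson and van Damme (1993) selects the risk-dominant equilibrium. The sketch below is a minimal illustration of risk dominance, not the paper's link formation model; the payoff labels are assumptions.

```python
# Symmetric 2x2 game with row payoffs [[a, b], [c, d]]: (A,A) and (B,B)
# are strict Nash equilibria when a > c and d > b.  Global games select
# the risk-dominant one -- the equilibrium with the larger product of
# deviation losses.  (Illustrative sketch, not the paper's model.)
def risk_dominant(a, b, c, d):
    assert a > c and d > b, "need two strict symmetric equilibria"
    return "(A,A)" if (a - c) ** 2 > (d - b) ** 2 else "(B,B)"

# Stag hunt: (A,A) is payoff-dominant, but (B,B) is risk-dominant.
print(risk_dominant(4, 0, 3, 2))  # prints (B,B): loss product 1 < 4
```

Note how the selected equilibrium can be inefficient, mirroring the stability-versus-efficiency conflict the abstract reports.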

    Global Games with Strategic Substitutes

    In this paper we use a non-cooperative equilibrium selection approach as a notion of stability in link formation games. Specifically, we follow the global games approach first introduced by Carlsson and van Damme (1993), to study the robustness of the set of Nash equilibria for a class of link formation games in strategic form with supermodular payoff functions. Interestingly, the equilibrium selected is in conflict with those predicted by the traditional cooperative refinements. Moreover, we get a conflict between stability and efficiency even when no such conflict exists with the cooperative refinements. We discuss some practical issues that these different theoretical approaches raise in reality. The paper also provides an extension of the global game theory that can be applied beyond the network literature. Keywords: Games, Networks, Equilibrium Selection.

    Distributed dynamic reinforcement of efficient outcomes in multiagent coordination and network formation

    We analyze reinforcement learning under so-called “dynamic reinforcement”. In reinforcement learning, each agent repeatedly interacts with an unknown environment (i.e., other agents), receives a reward, and updates the probabilities of its next action based on its own previous actions and received rewards. Unlike standard reinforcement learning, dynamic reinforcement uses a combination of long-term rewards and recent rewards to construct myopically forward-looking action selection probabilities. We analyze the long-term stability of the learning dynamics for general games with pure-strategy Nash equilibria and specialize the results to coordination games and distributed network formation. In this class of problems, more than one stable equilibrium (i.e., coordination configuration) may exist. We demonstrate equilibrium selection under dynamic reinforcement. In particular, we show how a single agent is able to destabilize an equilibrium in favor of another by appropriately adjusting its dynamic reinforcement parameters. We contrast these conclusions with prior game-theoretic results according to which the risk-dominant equilibrium is the only robust equilibrium when agents' decisions are subject to small randomized perturbations. The analysis throughout is based on the ODE method for stochastic approximations, where a special form of perturbation in the learning dynamics allows for analyzing its behavior at the boundary points of the state space.
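The combination of long-term and recent rewards described above can be sketched, for a single agent, as two reward traces per action that jointly drive action choice. The step sizes, the 10% exploration, and the additive combination of the traces are illustrative assumptions, not the paper's exact dynamics.

```python
import random

# Hypothetical sketch of "dynamic reinforcement": each action keeps a
# slow long-run reward trace and a fast recent-reward trace; their sum
# serves as the action's choice propensity.  (Assumed parameters, not
# the paper's.)
def dynamic_reinforcement(rewards, eps_slow=0.01, eps_fast=0.3,
                          explore=0.1, steps=1000, seed=0):
    rng = random.Random(seed)
    n = len(rewards)
    slow = [0.0] * n  # long-term reward trace per action
    fast = [0.0] * n  # recent reward trace per action
    for _ in range(steps):
        if rng.random() < explore:            # occasional exploration
            a = rng.randrange(n)
        else:                                 # otherwise, greedy on the
            prop = [s + f for s, f in zip(slow, fast)]  # combined trace
            a = max(range(n), key=prop.__getitem__)
        r = rewards[a]                        # payoff of chosen action
        slow[a] += eps_slow * (r - slow[a])   # slow update
        fast[a] += eps_fast * (r - fast[a])   # fast update
    return slow

# Action 0 pays 1.0, action 1 pays 0.2: the learner settles on 0.
traces = dynamic_reinforcement([1.0, 0.2])
```

The two time scales are the point: the fast trace makes the choice myopically responsive to recent rewards, while the slow trace anchors it to long-run performance.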

    Stochastic Coalitional Better-response Dynamics and Strong Nash Equilibrium

    We consider coalition formation among players in an n-player finite strategic game over an infinite horizon. At each time, a randomly formed coalition makes a joint deviation from the current action profile such that, at the new action profile, all players in the coalition strictly benefit. Such deviations define a coalitional better-response (CBR) dynamics that is in general stochastic. The CBR dynamics either converges to a strong Nash equilibrium or gets stuck in a closed cycle. We also assume that at each time the selected coalition makes a mistake in its deviation with small probability, which adds mutations (perturbations) to the CBR dynamics. We prove that all strong Nash equilibria and closed cycles are stochastically stable, i.e., they are selected by the perturbed CBR dynamics as mutations vanish. A similar statement holds for strict strong Nash equilibria. We apply the CBR dynamics to network formation games and prove that all strongly stable networks and closed cycles are stochastically stable.
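One step of the CBR dynamics can be sketched directly from the definition above: draw a random coalition, then search for a joint deviation that strictly benefits every member. The uniform coalition draw and the payoff interface are illustrative assumptions.

```python
import itertools
import random

# One coalitional better-response (CBR) step for an n-player finite
# game, given a payoff function u(profile) -> tuple of payoffs and the
# players' finite action sets.  (Sketch; coalition sampling is assumed
# uniform, which the paper does not require.)
def cbr_step(u, profile, action_sets, rng):
    players = list(range(len(profile)))
    coalition = rng.sample(players, rng.randint(1, len(players)))
    base = u(profile)
    # Enumerate the coalition's joint deviations.
    for joint in itertools.product(*(action_sets[i] for i in coalition)):
        new = list(profile)
        for i, a in zip(coalition, joint):
            new[i] = a
        new = tuple(new)
        if all(u(new)[i] > base[i] for i in coalition):
            return new        # strictly improving joint deviation
    return profile            # no improving deviation for this coalition

# Pure coordination example: both players earn 1 iff they match.
u = lambda p: (1, 1) if p[0] == p[1] else (0, 0)
rng = random.Random(0)
prof = (0, 1)
for _ in range(5):
    prof = cbr_step(u, prof, [[0, 1], [0, 1]], rng)
```

In this example every coalition has a strictly improving deviation from the miscoordinated profile, so the dynamics reaches a matched profile (a strong Nash equilibrium) and then stays there.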

    Evolutionary game theory: Temporal and spatial effects beyond replicator dynamics

    Evolutionary game dynamics is one of the most fruitful frameworks for studying evolution in different disciplines, from Biology to Economics. Within this context, the approach of choice for many researchers is the so-called replicator equation, which describes mathematically the idea that individuals performing better have more offspring, so their frequency in the population grows. While many interesting results have been obtained with this equation in the three decades since it was first proposed, it is important to recognize the limits of its applicability. One particularly relevant issue in this respect is that of non-mean-field effects, which may arise from temporal fluctuations or from spatial correlations, both neglected in the replicator equation. This review discusses these temporal and spatial effects, focusing on the non-trivial modifications they induce when compared to the outcome of replicator dynamics. Alongside this question, the hypothesis of linearity and its relation to the choice of the rule for strategy update is also analyzed. The discussion is presented in terms of the emergence of cooperation, as one of the current key problems in Biology and other disciplines. Comment: Review, 48 pages, 26 figures
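The replicator equation the review takes as its baseline is dx_i/dt = x_i (f_i(x) - fbar(x)): strategies earning above the population average grow. A minimal discrete-time sketch (the step size and example payoff matrix are illustrative choices, not from the review):

```python
# One Euler step of the replicator equation for a symmetric game with
# payoff matrix A, where fitness is f = A x and fbar is the population
# average.  The update preserves sum(x) = 1 exactly.
def replicator_step(x, A, dt=0.01):
    f = [sum(a * xj for a, xj in zip(row, x)) for row in A]  # fitness
    fbar = sum(xi * fi for xi, fi in zip(x, f))              # average
    return [xi + dt * xi * (fi - fbar) for xi, fi in zip(x, f)]

# Prisoner's dilemma payoffs: defection (strategy 1) takes over, the
# classic mean-field outcome that spatial correlations can overturn.
A = [[3, 0],   # cooperate vs (C, D)
     [5, 1]]   # defect    vs (C, D)
x = [0.5, 0.5]
for _ in range(2000):
    x = replicator_step(x, A)
```

This mean-field outcome (cooperation dies out) is precisely the baseline against which the review's temporal and spatial corrections are measured.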

    Approximate Equilibrium and Incentivizing Social Coordination

    We study techniques to incentivize self-interested agents to form socially desirable solutions in scenarios where they benefit from mutual coordination. Towards this end, we consider coordination games where agents have different intrinsic preferences but stand to gain if others choose the same strategy as they do. For non-trivial versions of our game, stable solutions like Nash equilibrium may not exist, or may be socially inefficient even when they do. This motivates us to focus on designing efficient algorithms to compute (almost) stable solutions, such as approximate equilibria, that can be realized if agents are provided some additional incentives. Our results apply in many settings, such as adoption of new products, project selection, and group formation, where a central authority can direct agents towards a strategy but agents may defect if they have better alternatives. We show that for any given instance we can either compute a high-quality approximate equilibrium or a near-optimal solution that can be stabilized by providing small payments to some players. We then generalize our model to encompass situations where player relationships may exhibit complementarities and present an algorithm to compute an approximate equilibrium whose stability factor is linear in the degree of complementarity. Our results imply that a little influence is necessary to ensure that selfish players coordinate and form socially efficient solutions. Comment: A preliminary version of this work will appear in AAAI-14: Twenty-Eighth Conference on Artificial Intelligence
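The notion of an approximate equilibrium used above can be made concrete with a check: a profile is alpha-approximately stable if no player can multiply their payoff by more than a factor alpha through a unilateral deviation. The payoff structure below (intrinsic value plus a per-neighbor matching bonus) is a hypothetical instance in the spirit of the abstract, not the paper's model.

```python
# Check whether a strategy profile is an alpha-approximate equilibrium
# in a coordination game where player i earns value[i][s] for playing
# strategy s, plus `bonus` for each neighbor playing the same strategy.
# (Illustrative instance and payoff form; assumptions, not the paper's.)
def is_approx_equilibrium(profile, value, neighbors, bonus, alpha):
    def payoff(i, s):
        same = sum(1 for j in neighbors[i] if profile[j] == s)
        return value[i][s] + bonus * same
    for i in range(len(profile)):
        cur = payoff(i, profile[i])
        best = max(payoff(i, s) for s in range(len(value[i])))
        if best > alpha * cur:   # some player gains by factor > alpha
            return False
    return True
```

For example, with two mutually adjacent players, equal intrinsic values, and a unit bonus, a miscoordinated profile fails the exact (alpha = 1) test but passes for alpha = 2, since each player can at most double their payoff by switching.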