
    Pure Nash Equilibria in Concurrent Deterministic Games

    We study pure-strategy Nash equilibria in multi-player concurrent deterministic games, for a variety of preference relations. We provide a novel construction, called the suspect game, which transforms a multi-player concurrent game into a two-player turn-based game, turning Nash equilibria into winning strategies (for some objective that depends on the preference relations of the players in the original game). We use this transformation to design algorithms for computing Nash equilibria in finite games which, in most cases, have optimal worst-case complexity for large classes of preference relations. This includes the purely qualitative framework, where each player has a single omega-regular objective that she wants to satisfy, but also the larger class of semi-quantitative objectives, where each player has several omega-regular objectives equipped with a preorder (for instance, a player may want to satisfy all her objectives, or to maximise the number of objectives that she achieves).
    Comment: 72 pages
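
    As a point of reference (a minimal sketch with assumed notation, not taken from the paper): writing Out(σ) for the play induced by a pure strategy profile σ and ≺_i for player i's strict preference over outcomes, a profile is a pure Nash equilibrium when no unilateral deviation yields a strictly preferred outcome:

        % pure Nash equilibrium w.r.t. preference relations (notation assumed)
        \[
          \sigma \text{ is a pure Nash equilibrium}
          \iff
          \forall i \;\, \forall \sigma_i' :\quad
          \mathrm{Out}(\sigma) \not\prec_i \mathrm{Out}(\sigma_i', \sigma_{-i}).
        \]

    In the purely qualitative framework above, ≺_i only distinguishes whether player i's single objective is satisfied; in the semi-quantitative case it is induced by the preorder over the sets of objectives she achieves.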

    NIRA-GUI: A MATLAB application which solves for coupled-constraint Nash equilibria from a symbolic specification

    A powerful method for computing Nash equilibria in constrained, multi-player games is created when the relaxation algorithm and the Nikaido-Isoda function are used within a MATLAB application. This paper describes that application, which is able to solve static and open-loop dynamic games specified symbolically.
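
    For context (a brief sketch with assumed notation, not the paper's exposition): for payoff functions φ_i over a collective action set X, the Nikaido-Isoda function aggregates every player's gain from a unilateral deviation, and the relaxation algorithm drives this quantity to zero:

        \[
          \Psi(x, y) \;=\; \sum_{i} \bigl( \varphi_i(y_i, x_{-i}) - \varphi_i(x_i, x_{-i}) \bigr),
          \qquad
          \max_{y \in X} \Psi(x^\ast, y) \;=\; 0
          \;\iff\;
          x^\ast \text{ is a (normalized) Nash equilibrium.}
        \]

    When X is a coupled (joint) constraint set, the point characterised this way is usually referred to as a normalized equilibrium.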

    NIRA-3: An improved MATLAB package for finding Nash equilibria in infinite games

    A powerful method for computing Nash equilibria in constrained, multi-player games is created when the relaxation algorithm and the Nikaido-Isoda function are used together in a suite of MATLAB routines. This paper updates the MATLAB suite described in \cite{Berridge97} by adapting it to MATLAB 7. The suite is now capable of solving both static and open-loop dynamic games. An example solving a coupled constraints game using the suite is provided.
    Keywords: Nikaido-Isoda function; coupled constraints
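
    The following is a minimal sketch of the relaxation iteration on a toy two-player Cournot game, not the NIRA code itself; the payoff functions, parameters, step size, and the absence of a coupled constraint are all illustrative assumptions.

        import numpy as np
        from scipy.optimize import minimize

        a, c = 10.0, 1.0                      # toy demand intercept and unit cost (assumed)

        def payoff(i, x):
            # Player i's profit when the joint quantity vector is x = (q1, q2).
            return x[i] * (a - x[0] - x[1]) - c * x[i]

        def nikaido_isoda(x, y):
            # Sum over players of the gain from unilaterally switching x[i] -> y[i].
            total = 0.0
            for i in range(2):
                x_dev = np.array(x, dtype=float)
                x_dev[i] = y[i]
                total += payoff(i, x_dev) - payoff(i, x)
            return total

        def optimum_response(x):
            # Z(x) = argmax_y Psi(x, y), subject to q_i >= 0.
            res = minimize(lambda y: -nikaido_isoda(x, y), np.array(x, dtype=float),
                           bounds=[(0.0, None), (0.0, None)])
            return res.x

        def relaxation(x0, alpha=0.5, tol=1e-8, max_iter=500):
            # Iterate x <- (1 - alpha) * x + alpha * Z(x) until Psi(x, Z(x)) is small.
            x = np.array(x0, dtype=float)
            for _ in range(max_iter):
                z = optimum_response(x)
                if nikaido_isoda(x, z) < tol:
                    break
                x = (1.0 - alpha) * x + alpha * z
            return x

        print(relaxation([1.0, 1.0]))         # should approach the Cournot equilibrium (3, 3)

    For this toy game the inner maximisation separates into per-player best responses, and the averaged iteration converges to the unique equilibrium.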

    Zero-sum Polymatrix Markov Games: Equilibrium Collapse and Efficient Computation of Nash Equilibria

    The works of (Daskalakis et al., 2009, 2022; Jin et al., 2022; Deng et al., 2023) indicate that computing Nash equilibria in multi-player Markov games is a computationally hard task. This fact raises the question of whether or not computational intractability can be circumvented if one focuses on specific classes of Markov games. One such example is two-player zero-sum Markov games, in which efficient ways to compute a Nash equilibrium are known. Inspired by zero-sum polymatrix normal-form games (Cai et al., 2016), we define a class of zero-sum multi-agent Markov games in which there are only pairwise interactions, described by a graph that changes per state. For this class of Markov games, we show that an ε-approximate Nash equilibrium can be found efficiently. To do so, we generalize the techniques of (Cai et al., 2016), by showing that the set of coarse-correlated equilibria collapses to the set of Nash equilibria. Afterwards, it is possible to use any algorithm in the literature that computes Markovian policies forming approximate coarse-correlated equilibria to get an approximate Nash equilibrium.
    Comment: Added missing proofs for the infinite-horizon case
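
    For orientation (a sketch with assumed notation): in the underlying normal-form model of Cai et al. (2016), the players are the nodes of a graph G = (V, E), each edge (i, j) carries a bimatrix game (A^{ij}, A^{ji}), payoffs are summed over incident edges, and the zero-sum condition is imposed globally rather than edge by edge:

        \[
          u_i(x) \;=\; \sum_{j \,:\, (i,j) \in E} x_i^{\top} A^{ij} x_j,
          \qquad
          \sum_{i \in V} u_i(x) \;=\; 0 \ \text{ for every profile } x.
        \]

    In the Markov-game version studied above, the interaction graph and the edge payoffs are allowed to change from state to state.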

    On minmax theorems for multiplayer games

    We prove a generalization of von Neumann's minmax theorem to the class of separable multiplayer zero-sum games, introduced in [Bregman and Fokin 1998]. These games are polymatrix, that is, graphical games in which every edge is a two-player game between its endpoints, and every outcome has zero total sum of the players' payoffs. Our generalization of the minmax theorem implies convexity of equilibria, polynomial-time tractability, and convergence of no-regret learning algorithms to Nash equilibria. Given that computing Nash equilibria in 3-player zero-sum games is already PPAD-complete, this class of games, i.e. games with pairwise separable utility functions, defines essentially the broadest class of multi-player constant-sum games to which we can hope to push tractability results. Our result is obtained by establishing a certain game-class collapse, showing that separable constant-sum games are payoff equivalent to pairwise constant-sum polymatrix games (polymatrix games in which all edges are constant-sum games), and invoking a recent result of [Daskalakis, Papadimitriou 2009] for these games. We also explore generalizations to classes of non-constant-sum multi-player games. A natural candidate is polymatrix games with strictly competitive games on their edges. In the two-player setting, such games are minmax solvable, and recent work has shown that they are merely affine transformations of zero-sum games [Adler, Daskalakis, Papadimitriou 2009]. Surprisingly, we show that a polymatrix game comprising strictly competitive games on its edges is PPAD-complete to solve, proving a striking difference in the complexity of networks of zero-sum and strictly competitive games. Finally, we look at the role of coordination in networked interactions, studying the complexity of polymatrix games with a mixture of coordination and zero-sum games. We show that finding a pure Nash equilibrium in coordination-only polymatrix games is PLS-complete; hence, computing a mixed Nash equilibrium is in PLS ∩ PPAD, but it remains open whether the problem is in P. If, on the other hand, coordination and zero-sum games are combined, we show that the problem becomes PPAD-complete, establishing that coordination and zero-sum games achieve the full generality of PPAD.
    Funding: National Science Foundation (U.S.) CAREER Award CCF-0953960; Alfred P. Sloan Foundation Fellowship
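
    The tractability results above ultimately rest on linear programming. As a hedged illustration of the basic two-player building block only (not the polymatrix LP of [Daskalakis, Papadimitriou 2009]), the sketch below computes a maximin strategy of a zero-sum matrix game with scipy.

        import numpy as np
        from scipy.optimize import linprog

        def maximin_strategy(A):
            # Row player's optimal mixture and the value of the zero-sum game with payoff matrix A.
            m, n = A.shape
            # Variables: x_1, ..., x_m (the mixture) and v (the guaranteed value).
            # maximize v   s.t.   sum_i x_i * A[i, j] >= v  for every column j,
            #                     x >= 0,  sum_i x_i = 1.
            c = np.concatenate([np.zeros(m), [-1.0]])              # linprog minimizes, so use -v
            A_ub = np.hstack([-A.T, np.ones((n, 1))])              # v - (x^T A)_j <= 0
            b_ub = np.zeros(n)
            A_eq = np.concatenate([np.ones(m), [0.0]]).reshape(1, -1)
            b_eq = np.array([1.0])
            bounds = [(0.0, None)] * m + [(None, None)]
            res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
            return res.x[:m], res.x[-1]

        # Matching pennies as a quick check: uniform strategy, value 0.
        x, v = maximin_strategy(np.array([[1.0, -1.0], [-1.0, 1.0]]))
        print(np.round(x, 3), round(v, 3))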

    New Algorithms for Approximate Nash Equilibria in Bimatrix Games

    We consider the problem of computing additively approximate Nash equilibria in noncooperative two-player games. We provide a new polynomial-time algorithm that achieves an approximation guarantee of 0.36392. We first provide a simpler algorithm that achieves a 0.38197-approximation, which is exactly the same factor as the algorithm of Daskalakis, Mehta and Papadimitriou. This algorithm is then tuned, improving the approximation error to 0.36392. Our method is relatively fast and simple, as it requires solving only one linear program and it is based on using the solution of an auxiliary zero-sum game as a starting point. Finally, we also exhibit a simple reduction that allows us to compute approximate equilibria for multi-player games by using algorithms for two-player games.
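
    A hedged sketch of the starting point described above, not the tuned 0.36392 procedure: one plausible choice of auxiliary zero-sum game is R - C (an assumption made here for illustration), and the additive approximation of its equilibrium, viewed as a strategy profile of the original bimatrix game (R, C), can be measured directly. The toy matrices are likewise assumptions.

        import numpy as np
        from scipy.optimize import linprog

        def zero_sum_row_strategy(M):
            # Standard LP for the row player's maximin mixture in the zero-sum game M.
            m, n = M.shape
            c = np.concatenate([np.zeros(m), [-1.0]])
            A_ub = np.hstack([-M.T, np.ones((n, 1))])
            A_eq = np.concatenate([np.ones(m), [0.0]]).reshape(1, -1)
            res = linprog(c, A_ub=A_ub, b_ub=np.zeros(n), A_eq=A_eq, b_eq=[1.0],
                          bounds=[(0.0, None)] * m + [(None, None)])
            return res.x[:m]

        def additive_eps(R, C, x, y):
            # Largest gain either player obtains by a unilateral pure deviation from (x, y).
            eps_row = np.max(R @ y) - x @ R @ y
            eps_col = np.max(x @ C) - x @ C @ y
            return max(eps_row, eps_col)

        # Toy bimatrix game with payoffs in [0, 1] (assumed for illustration).
        R = np.array([[0.0, 1.0], [0.7, 0.3]])
        C = np.array([[1.0, 0.0], [0.3, 0.8]])

        D = R - C                               # auxiliary zero-sum game (one plausible choice)
        x = zero_sum_row_strategy(D)            # row player's strategy in (D, -D)
        y = zero_sum_row_strategy(-D.T)         # column player's strategy in (D, -D)
        print(additive_eps(R, C, x, y))         # additive approximation of the starting point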

    No-Regret Learning and Equilibrium Computation in Quantum Games

    As quantum processors advance, the emergence of large-scale decentralized systems involving interacting quantum-enabled agents is on the horizon. Recent research efforts have explored quantum versions of Nash and correlated equilibria as solution concepts of strategic quantum interactions, but these approaches did not directly connect to decentralized adaptive setups where agents possess limited information. This paper delves into the dynamics of quantum-enabled agents within decentralized systems that employ no-regret algorithms to update their behaviors over time. Specifically, we investigate two-player quantum zero-sum games and polymatrix quantum zero-sum games, showing that no-regret algorithms converge to separable quantum Nash equilibria in time-average. In the case of general multi-player quantum games, our work leads to a novel solution concept, (separable) quantum coarse correlated equilibria (QCCE), as the convergent outcome of the time-averaged behavior of no-regret algorithms, offering a natural solution concept for decentralized quantum systems. Finally, we show that computing QCCEs can be formulated as a semidefinite program and establish the existence of entangled (i.e., non-separable) QCCEs, which cannot be approached via the current paradigm of no-regret learning.
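
    As a purely classical analogue of the time-average convergence statement above (not the paper's quantum matrix multiplicative-weights setup), the sketch below runs multiplicative-weights no-regret updates for both players of a zero-sum matrix game; the game, step size, and horizon are illustrative assumptions.

        import numpy as np

        # Matching pennies: the row player maximizes x^T A y, the column player minimizes it.
        A = np.array([[1.0, -1.0], [-1.0, 1.0]])
        eta, T = 0.005, 100000                   # step size and horizon (assumed)

        x = np.array([0.9, 0.1])                 # deliberately non-uniform starting points
        y = np.array([0.2, 0.8])
        x_sum = np.zeros(2)
        y_sum = np.zeros(2)

        for _ in range(T):
            x_sum += x
            y_sum += y
            gx = A @ y                           # row player's payoff vector against y
            gy = -(x @ A)                        # column player's payoff vector against x
            x = x * np.exp(eta * gx)             # multiplicative-weights updates
            y = y * np.exp(eta * gy)
            x /= x.sum()
            y /= y.sum()

        # The iterates themselves cycle, but the time averages approach the unique
        # (uniform) Nash equilibrium of this game.
        print(np.round(x_sum / T, 2), np.round(y_sum / T, 2))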