
    The Complexity of Nash Equilibria in Simple Stochastic Multiplayer Games

    We analyse the computational complexity of finding Nash equilibria in simple stochastic multiplayer games. We show that restricting the search space to equilibria whose payoffs fall into a certain interval may lead to undecidability. In particular, we prove that the following problem is undecidable: given a game G, does there exist a pure-strategy Nash equilibrium of G where player 0 wins with probability 1? Moreover, this problem remains undecidable if it is restricted to strategies with (unbounded) finite memory. However, if mixed strategies are allowed, decidability remains an open problem. One way to obtain a provably decidable variant of the problem is to restrict the strategies to be positional or stationary. For the complexity of these two problems, we obtain a common lower bound of NP and upper bounds of NP and PSPACE, respectively.
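
For orientation, the undecidable problem above can be stated formally; the notation below is standard for stochastic multiplayer games and is not quoted from the paper.

```latex
\begin{align*}
  &\text{Payoff of player } i \text{ under a strategy profile } \sigma: &
    p_i(\sigma) &= \Pr_G^{\sigma}(\text{player } i \text{ wins})\\
  &\text{Nash equilibrium (no profitable unilateral deviation):} &
    \forall i\ \forall \tau_i:\ p_i(\sigma_{-i},\tau_i) &\le p_i(\sigma)
\end{align*}
% Undecidable problem: given G, decide whether a pure-strategy
% Nash equilibrium \sigma of G with p_0(\sigma) = 1 exists.
```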

    Decision Problems for Nash Equilibria in Stochastic Games

    We analyse the computational complexity of finding Nash equilibria in stochastic multiplayer games with ω-regular objectives. While the existence of an equilibrium whose payoff falls into a certain interval may be undecidable, we single out several decidable restrictions of the problem. First, restricting the search space to stationary, or pure stationary, equilibria results in problems that are typically contained in PSPACE and NP, respectively. Second, we show that the existence of an equilibrium with a binary payoff (i.e. an equilibrium where each player either wins or loses with probability 1) is decidable. We also establish that the existence of a Nash equilibrium with a certain binary payoff entails the existence of an equilibrium with the same payoff in pure, finite-state strategies.

    On the equivalence of game and denotational semantics for the probabilistic mu-calculus

    The probabilistic (or quantitative) modal mu-calculus is a fixed-point logic designed for expressing properties of probabilistic labeled transition systems (PLTS). Two semantics have been studied for this logic, both assigning to every process state a value in the interval [0,1] representing the probability that the property expressed by the formula holds at the state. One semantics is denotational and the other is a game semantics, specified in terms of two-player stochastic games. The two semantics have been proved to coincide on all finite PLTSs, but the equivalence of the two semantics on arbitrary models has remained open in the literature. In this paper we prove that the equivalence indeed holds for arbitrary infinite models, and thus our result strengthens the fruitful connection between denotational and game semantics. Our proof adapts the unraveling or unfolding method, a general technique for proving results about parity games by induction on their complexity.
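
As background not reproduced from the paper, one common presentation of the denotational semantics assigns to every formula φ a map from the state set S of the PLTS to [0,1], with modal operators interpreted by expected values and fixed points taken on the complete lattice [0,1]^S; a sketch of the main clauses:

```latex
\newcommand{\sem}[1]{[\![#1]\!]}   % semantic brackets
\begin{align*}
  \sem{\varphi \lor \psi}(s)  &= \max(\sem{\varphi}(s), \sem{\psi}(s)) &
  \sem{\varphi \land \psi}(s) &= \min(\sem{\varphi}(s), \sem{\psi}(s))\\
  \sem{\langle a\rangle \varphi}(s) &= \sup_{s \xrightarrow{a} \Delta} \sum_{t} \Delta(t)\cdot\sem{\varphi}(t) &
  \sem{[a] \varphi}(s) &= \inf_{s \xrightarrow{a} \Delta} \sum_{t} \Delta(t)\cdot\sem{\varphi}(t)\\
  \sem{\mu X.\varphi} &= \text{least fixed point of } f \mapsto \sem{\varphi}[X \mapsto f] &
  \sem{\nu X.\varphi} &= \text{greatest fixed point of the same map}
\end{align*}
```

The game semantics instead assigns the value of a two-player stochastic game played on the formula and the model; the paper's result is that the two assignments agree on arbitrary, not just finite, models.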

    Synthesising Strategy Improvement and Recursive Algorithms for Solving 2.5 Player Parity Games

    2.5 player parity games combine the challenges posed by 2.5 player reachability games and the qualitative analysis of parity games. These two types of problems are best approached with different types of algorithms: strategy improvement algorithms for 2.5 player reachability games and recursive algorithms for the qualitative analysis of parity games. We present a method that - in contrast to existing techniques - tackles both aspects with the best-suited approach and works exclusively on the 2.5 player game itself. The resulting technique is powerful enough to handle games with several million states.
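
The abstract contains no pseudocode; purely for orientation, the sketch below shows the classical recursive (Zielonka-style) procedure for ordinary two-player parity games, the non-stochastic core that the qualitative analysis mentioned above builds on. The data representation (the `nodes` and `edges` dictionaries) and function names are illustrative assumptions, not taken from the paper.

```python
def attractor(nodes, edges, subset, target, player):
    """Nodes in `subset` from which `player` can force a visit to `target`.
    Assumes every node in `subset` keeps at least one successor inside `subset`."""
    attr = set(target)
    changed = True
    while changed:
        changed = False
        for v in subset - attr:
            if v in attr:
                continue
            succs = [w for w in edges[v] if w in subset]
            owner, _ = nodes[v]
            if (owner == player and any(w in attr for w in succs)) or \
               (owner != player and succs and all(w in attr for w in succs)):
                attr.add(v)
                changed = True
    return attr


def zielonka(nodes, edges, subset=None):
    """Winning regions (W0, W1) of a two-player parity game.
    nodes: v -> (owner, priority); edges: v -> list of successors.
    Convention: player 0 wins a play iff the highest priority seen
    infinitely often is even."""
    if subset is None:
        subset = set(nodes)
    if not subset:
        return set(), set()
    d = max(nodes[v][1] for v in subset)
    p, o = d % 2, 1 - d % 2                 # favoured player and opponent
    top = {v for v in subset if nodes[v][1] == d}
    a = attractor(nodes, edges, subset, top, p)
    w0, w1 = zielonka(nodes, edges, subset - a)
    w_opp = w1 if o == 1 else w0
    if not w_opp:                           # opponent wins nothing: p wins all of `subset`
        return (subset, set()) if p == 0 else (set(), subset)
    b = attractor(nodes, edges, subset, w_opp, o)
    w0, w1 = zielonka(nodes, edges, subset - b)
    return (w0 | b, w1) if o == 0 else (w0, w1 | b)


# Toy game: node -> (owner, priority); every play sees an even top priority,
# so player 0 wins from every node.
nodes = {'a': (0, 2), 'b': (1, 1), 'c': (0, 0)}
edges = {'a': ['b'], 'b': ['a', 'c'], 'c': ['c']}
print(zielonka(nodes, edges))
```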

    Two-Player Perfect-Information Shift-Invariant Submixing Stochastic Games Are Half-Positional

    We consider zero-sum stochastic games with perfect information and finitely many states and actions. The payoff is computed by a payoff function which associates to each infinite sequence of states and actions a real number. We prove that if the payoff function is both shift-invariant and submixing, then the game is half-positional, i.e. the first player has an optimal strategy which is both deterministic and stationary. This result relies on the existence of ε-subgame-perfect equilibria in shift-invariant games, a second contribution of the paper.
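
For reference, the two hypotheses are usually defined along the following lines (these formulations follow the standard literature and are not quoted from the paper); here f maps infinite sequences over the alphabet C of states and actions to real numbers:

```latex
\begin{align*}
  \text{shift-invariant:} &\quad f(c\,u) = f(u)
    \quad \text{for every } c \in C \text{ and } u \in C^{\omega},\\
  \text{submixing:} &\quad f(w) \le \max\bigl(f(u), f(v)\bigr)
    \quad \text{whenever } w \text{ is an interleaving (shuffle) of } u, v \in C^{\omega}.
\end{align*}
```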

    When can you play positionally?

    We consider infinite antagonistic games over finite graphs. We present conditions that, whenever satisfied by the payoff mapping, assure positional (memoryless) optimal strategies for both players. We show that all popular payoff mappings, such as mean payoff, discounted and parity, as well as several other payoffs, satisfy these conditions. This approach allows us to give a uniform treatment of otherwise disparate results concerning the existence of positional optimal strategies.
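
For concreteness, the payoff mappings named above are typically computed from the sequence of rewards r_0 r_1 r_2 ... (priorities, for parity) seen along a play; conventions differ slightly across papers, so the formulas below are only one standard presentation:

```latex
\begin{align*}
  \text{mean payoff:} &\quad \liminf_{n \to \infty} \frac{1}{n}\sum_{i=0}^{n-1} r_i,\\
  \text{discounted payoff } (0 < \lambda < 1): &\quad (1-\lambda)\sum_{i \ge 0} \lambda^{i} r_i,\\
  \text{parity:} &\quad 1 \text{ if the largest priority seen infinitely often is even, and } 0 \text{ otherwise.}
\end{align*}
```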

    Perfect Information Stochastic Priority Games

    We introduce stochastic priority games - a new class of perfect information stochastic games. These games can take two different, but equivalent, forms. In stopping priority games a play can be stopped by the environment after a finite number of stages; however, infinite plays are also possible. In discounted priority games only infinite plays are possible and the payoff is a linear combination of the classical discounted payoff and of a limit payoff evaluating the performance at infinity. Shapley games and parity games are special extreme cases of priority games.

    How to Play in Infinite MDPs (Invited Talk)

    Markov decision processes (MDPs) are a standard model for dynamic systems that exhibit both stochastic and nondeterministic behavior. For MDPs with finite state space it is known that, for a wide range of objectives, there exist optimal strategies that are memoryless and deterministic. In contrast, if the state space is infinite, optimal strategies may not exist, and optimal or ε-optimal strategies may require (possibly infinite) memory. In this paper we consider qualitative objectives: reachability, safety, (co-)Büchi, and other parity objectives. We aim at giving an introduction to a collection of techniques that allow for the construction of strategies with little or no memory in countably infinite MDPs.
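
To fix terminology (standard definitions, not quoted from the talk), for a reachability objective with target set T:

```latex
\begin{align*}
  \text{value of a state } s: &\quad \mathrm{val}(s) = \sup_{\sigma} \Pr_s^{\sigma}(\Diamond T),\\
  \sigma \text{ is } \varepsilon\text{-optimal at } s: &\quad \Pr_s^{\sigma}(\Diamond T) \ge \mathrm{val}(s) - \varepsilon,
  \qquad \text{optimal means } 0\text{-optimal.}
\end{align*}
```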

    From Local to Global Determinacy in Concurrent Graph Games

    In general, finite concurrent two-player reachability games are only determined in a weak sense: the supremum probability to win can be approached via stochastic strategies, but cannot be realized. We introduce a class of concurrent games that are determined in a much stronger sense, and, in a way, it is the largest class with this property. To this end, we introduce the notion of local interaction at a state of a graph game: it is a game form whose outcomes (the entries of a table) are the next states, which depend on the concurrent actions of the players. By definition, a game form is determined iff it always yields games that are determined via deterministic strategies when used as a local interaction in a Nature-free, one-shot reachability game. We show that if all the local interactions of a graph game with a Borel objective are determined game forms, then the game itself is determined: if Nature does not play, one player has a winning strategy; if Nature plays, both players have deterministic strategies that maximize the probability to win. This constitutes a clear-cut separation: either a game form behaves poorly already when used alone with basic objectives, or it behaves well even when used together with other well-behaved game forms and complex objectives. Existing results for positional and finite-memory determinacy in turn-based games are extended in this way to concurrent games with determined local interactions (CG-DLI).
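
A small illustration (our own example, not taken from the paper): the matching-pennies game form below, used as the local interaction of a Nature-free one-shot reachability game with target s_1, is not determined, since no row is constantly s_1 and no column is constantly s_2, so neither player has a deterministic winning strategy; by contrast, any turn-based interaction, in which the outcome depends on a single player's action, is determined.

```latex
\begin{pmatrix}
  s_1 & s_2\\
  s_2 & s_1
\end{pmatrix}
```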