
    Infinite Separation between General and Chromatic Memory

    In this note, we answer a question from [Alexander Kozachinskiy. State Complexity of Chromatic Memory in Infinite-Duration Games, arXiv:2201.09297]. Namely, we construct a winning condition W over a finite set of colors such that, first, every finite arena has a strategy with 2 states of general memory which is optimal with respect to W, and second, there exists no k such that every finite arena has a strategy with k states of chromatic memory which is optimal with respect to W.
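    The distinction at stake is only about what the memory update is allowed to observe. Below is a minimal sketch of the two memory models, assuming illustrative names (GeneralMemory, ChromaticMemory) and types that are not taken from the paper:

        from dataclasses import dataclass
        from typing import Callable, Hashable

        Vertex = Hashable   # vertices of the arena
        Color = Hashable    # colors labelling the edges

        @dataclass
        class GeneralMemory:
            """Memory whose update may observe the visited vertex as well as the color."""
            state: int
            update: Callable[[int, Vertex, Color], int]

            def step(self, vertex: Vertex, color: Color) -> None:
                self.state = self.update(self.state, vertex, color)

        @dataclass
        class ChromaticMemory:
            """Memory whose update observes only the color of the traversed edge."""
            state: int
            update: Callable[[int, Color], int]

            def step(self, vertex: Vertex, color: Color) -> None:
                self.state = self.update(self.state, color)  # the vertex is ignored

    The separation result then says that, for the constructed condition W, two states of the first kind always suffice in finite arenas, while no uniform bound on the number of states of the second kind exists.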

    From Local to Global Determinacy in Concurrent Graph Games

    In general, finite concurrent two-player reachability games are only determined in a weak sense: the supremum probability to win can be approached via stochastic strategies, but cannot be realized. We introduce a class of concurrent games that are determined in a much stronger sense, and which is, in a way, the largest class with this property. To this end, we introduce the notion of local interaction at a state of a graph game: it is a game form whose outcomes (i.e., the entries of a table) are the next states, which depend on the concurrent actions of the players. By definition, a game form is determined iff it always yields games that are determined via deterministic strategies when used as a local interaction in a Nature-free, one-shot reachability game. We show that if all the local interactions of a graph game with a Borel objective are determined game forms, the game itself is determined: if Nature does not play, one player has a winning strategy; if Nature plays, both players have deterministic strategies that maximize the probability to win. This constitutes a clear-cut separation: either a game form behaves poorly already when used alone with basic objectives, or it behaves well even when used together with other well-behaved game forms and complex objectives. Existing results for positional and finite-memory determinacy in turn-based games are extended this way to concurrent games with determined local interactions (CG-DLI).
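    To make the notion concrete: a game form is a table of outcomes, one row per action of the first player and one column per action of the second, and it is determined when every win/lose labelling of the outcomes gives one of the players a deterministic winning action. A brute-force check over a small table, as a sketch (the function name and the list-of-rows representation are assumptions, not the paper's):

        from itertools import combinations

        def is_determined(form):
            """form: a list of rows, each row a list of outcomes (the table entries).
            Determined: for every set W of outcomes declared winning for the row
            player, either some row lies entirely inside W (a winning action for
            the row player) or some column lies entirely outside W (a winning
            action for the column player)."""
            outcomes = {o for row in form for o in row}
            n_rows, n_cols = len(form), len(form[0])
            for k in range(len(outcomes) + 1):
                for W in map(set, combinations(sorted(outcomes), k)):
                    row_wins = any(all(o in W for o in row) for row in form)
                    col_wins = any(all(form[i][j] not in W for i in range(n_rows))
                                   for j in range(n_cols))
                    if not (row_wins or col_wins):
                        return False
            return True

        # A "matching pennies" game form is not determined: with W = {"a"},
        # neither player has a deterministic winning action.
        print(is_determined([["a", "b"], ["b", "a"]]))   # False
        print(is_determined([["a", "a"], ["b", "c"]]))   # True

    The paper's dichotomy then says that graph games built only from such determined local interactions inherit the strong determinacy behaviour described in the abstract.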

    Characterizing Omega-Regularity Through Finite-Memory Determinacy of Games on Infinite Graphs

    We consider zero-sum games on infinite graphs, with objectives specified as sets of infinite words over some alphabet of colors. A well-studied class of objectives is that of ω-regular objectives, due to its relation to many natural problems in theoretical computer science. We focus on the strategy complexity question: given an objective, how much memory does each player require to play as well as possible? A classical result is that finite-memory strategies suffice for both players when the objective is ω-regular. We show a converse of that statement: when both players can play optimally with a chromatic finite-memory structure (i.e., whose updates can only observe colors) in all infinite game graphs, then the objective must be ω-regular. This provides a game-theoretic characterization of ω-regular objectives, and this characterization can help in obtaining memory bounds. Moreover, a by-product of our characterization is a new one-to-two-player lift: to show that chromatic finite-memory structures suffice to play optimally in two-player games on infinite graphs, it suffices to show it in the simpler case of one-player games on infinite graphs. We illustrate our results with the family of discounted-sum objectives, for which ω-regularity depends on the value of some parameters.
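    For concreteness, the discounted-sum objectives mentioned at the end are built from the standard discounted-sum value of an infinite sequence of numerical colors; the notation below is the usual one and not necessarily the paper's:

        \[
          \mathsf{DS}^{\lambda}(c_0 c_1 c_2 \cdots) \;=\; \sum_{i \ge 0} \lambda^{i}\, c_i,
          \qquad \lambda \in (0, 1),
        \]

    with the objective typically requiring this value to stay above some threshold; whether the resulting objective is ω-regular, and hence falls under the characterization, depends on the discount factor and the threshold.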

    One-To-Two-Player Lifting for Mildly Growing Memory

    We investigate a phenomenon of "one-to-two-player lifting" in infinite-duration two-player games on graphs with zero-sum objectives. More specifically, let S be a class of strategies. It turns out that in many cases, to show that all two-player games on graphs with a given payoff function are determined in S, it is sufficient to do so for one-player games. That is, in many cases the determinacy in S can be "lifted" from one-player games to two-player games. For instance, Gimbert and Zielonka (CONCUR 2005) have shown this for the class of positional strategies. Recently, Bouyer et al. (CONCUR 2020) have extended this to the classes of arena-independent finite-memory strategies. Informally, these are finite-memory strategies that use the same way of storing memory in all game graphs. In this paper, we put the lifting technique into the context of memory complexity. The memory complexity of a payoff function measures how many states of memory we need to play optimally in game graphs with up to n nodes, depending on n. We address the following question. Assume that we know the memory complexity of our payoff function in one-player games. Then what can be said about its memory complexity in two-player games? In particular, when is it finite? In this paper, we answer this question for strategies with "chromatic" memory. These are strategies that only accumulate sequences of colors of edges in their memory. We obtain the following results.
    - Assume that the chromatic memory complexity in one-player games is sublinear in n on some infinite subsequence. Then the chromatic memory complexity in two-player games is finite.
    - We provide an example in which (a) the chromatic memory complexity in one-player games is linear in n, and (b) the memory complexity in two-player games is infinite.
    Thus, we obtain the exact barrier for one-to-two-player lifting theorems in the setting of chromatic finite-memory strategies. Previous results only cover payoff functions with constant chromatic memory complexity.
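    Writing M_1(n) and M_2(n) for the chromatic memory complexity in one- and two-player games on graphs with up to n nodes (this notation is ours, for illustration only), the first result listed above can be read as

        \[
          \liminf_{n \to \infty} \frac{M_1(n)}{n} \;=\; 0
          \quad\Longrightarrow\quad
          \sup_{n} M_2(n) \;<\; \infty,
        \]

    while the second result exhibits a payoff function with M_1(n) growing linearly in n for which M_2(n) is unbounded, so sublinear growth along a subsequence is exactly the barrier.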

    Time-aware uniformization of winning strategies

    Two-player win/lose games of infinite duration are involved in several disciplines including computer science and logic. If such a game has deterministic winning strategies, one may ask how simple such strategies can get. The answer may help with actual implementation, or to win despite imperfect information, or to conceal sensitive information, especially if the game is repeated. Given a concurrent two-player win/lose game of infinite duration, this article considers equivalence relations over histories of played actions. A classical restriction used here is that equivalent histories have equal length, hence time awareness. A sufficient condition is given such that if a player has winning strategies, she has one that prescribes the same action at equivalent histories, hence uniformization. The proof is fairly constructive and preserves finiteness of strategy memory, and counterexamples show the relative tightness of the result. Several corollaries follow for games with states and colors.
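    What uniformization asks for can be phrased very simply: the strategy must prescribe one action per equivalence class of histories. A minimal sketch of that requirement (not the paper's construction; `strategy` and `key`, a function computing the class of a history, are assumptions), with time awareness reflected by keying also on the length:

        def uniformize(strategy, key):
            """Return a strategy that, at every history, plays the action that
            `strategy` plays at the first-seen representative of the history's class."""
            chosen = {}

            def uniform_strategy(history):
                cls = (len(history), key(history))  # time-aware: classes never mix lengths
                if cls not in chosen:
                    chosen[cls] = strategy(history)
                return chosen[cls]

            return uniform_strategy

        # Example: histories are tuples of actions, and two histories are equivalent
        # when they have the same length and the same last action.
        base = lambda h: "left" if len([a for a in h if a == "x"]) % 2 == 0 else "right"
        uniform = uniformize(base, key=lambda h: h[-1] if h else None)
        print(uniform(("x", "b")), uniform(("c", "b")))   # same class, hence same action

    Whether such a uniformized strategy remains winning is exactly what the article's sufficient condition on the equivalence relation is meant to guarantee.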