Strategy Complexity of Parity Objectives in Countable MDPs
We study countably infinite MDPs with parity objectives. Unlike in finite
MDPs, optimal strategies need not exist, and may require infinite memory if
they do. We provide a complete picture of the exact strategy complexity of
ε-optimal strategies (and optimal strategies, where they exist) for
all subclasses of parity objectives in the Mostowski hierarchy. Either
MD-strategies, Markov strategies, or 1-bit Markov strategies are necessary and
sufficient, depending on the number of colors, the branching degree of the MDP,
and whether one considers ε-optimal or optimal strategies. In
particular, 1-bit Markov strategies are necessary and sufficient for
ε-optimal (resp. optimal) strategies for general parity objectives.
Comment: This is the full version of a paper presented at CONCUR 202
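A parity objective assigns a color to each state, and a run is winning when the extremal color seen infinitely often is even. The following minimal sketch (our illustration, not code from the paper) checks the min-even convention on an eventually-periodic run; conventions (min vs. max) vary across the literature:

```python
# Illustrative sketch (not from the paper): under the min-even convention,
# the Maximizer wins a run iff the minimum color occurring infinitely
# often is even.

def parity_winner(prefix, cycle):
    """Decide an eventually-periodic run prefix.cycle^omega.
    The colors seen infinitely often are exactly those in `cycle`;
    the finite `prefix` is irrelevant to the parity condition."""
    assert cycle, "the periodic part must be non-empty"
    return min(cycle) % 2 == 0  # True iff the Maximizer wins

# A run cycling through colors 3, 2, 5 forever: the minimum color seen
# infinitely often is 2, which is even, so the Maximizer wins.
print(parity_winner([1, 4], [3, 2, 5]))  # True
```

In a countable MDP the run need not be eventually periodic, which is exactly why optimal strategies may need infinite memory; this sketch only illustrates the winning condition itself.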
Randomness for Free
We consider two-player zero-sum games on graphs. These games can be
classified on the basis of the information of the players and on the mode of
interaction between them. On the basis of information the classification is as
follows: (a) partial-observation (both players have partial view of the game);
(b) one-sided complete-observation (one player has complete observation); and
(c) complete-observation (both players have complete view of the game). On the
basis of mode of interaction we have the following classification: (a)
concurrent (both players interact simultaneously); and (b) turn-based (both
players interact in turn). The two sources of randomness in these games are
randomness in transition function and randomness in strategies. In general,
randomized strategies are more powerful than deterministic strategies, and
randomness in transitions gives more general classes of games. In this work we
present a complete characterization for the classes of games where randomness
is not helpful in: (a) the transition function (probabilistic transitions can be
simulated by deterministic transitions); and (b) strategies (pure strategies are
as powerful as randomized strategies). As a consequence of our characterization
we obtain new undecidability results for these games.
A survey of stochastic ω-regular games
We summarize classical and recent results about two-player games played on graphs with ω-regular objectives. These games have applications in the verification and synthesis of reactive systems. Important distinctions are whether a graph game is turn-based or concurrent; deterministic or stochastic; zero-sum or not. We cluster known results and open problems according to these classifications.
Approximating the Value of Energy-Parity Objectives in Simple Stochastic Games
We consider simple stochastic games G with energy-parity objectives, a combination of quantitative rewards with a qualitative parity condition. The Maximizer tries to avoid running out of energy while simultaneously satisfying a parity condition.
We present an algorithm to approximate the value of a given configuration in 2-NEXPTIME. Moreover, ε-optimal strategies for either player require at most O(2-EXP(|G|) · log(1/ε)) memory modes.
Verification problems for timed and probabilistic extensions of Petri Nets
In the first part of the thesis, we prove the decidability (and PSPACE-completeness) of
the universal safety property on a timed extension of Petri Nets, called Timed Petri Nets.
Every token has a real-valued clock (a.k.a. its age), and transition firing is
constrained by comparing clock values against integer bounds (using strict and
non-strict inequalities). Each newly created token either inherits the age of
an input token of the transition or has its clock reset to zero.
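The token/age discipline described above can be sketched as follows (a hypothetical model of our own for illustration; the names and encoding are not from the thesis): each token carries a real-valued age, a transition is enabled only if its input tokens' ages lie in integer-bounded intervals, and each output token either inherits an input token's age or is reset to zero.

```python
# Illustrative sketch of the Timed Petri Net token model (our encoding,
# not from the thesis).

from dataclasses import dataclass

@dataclass
class Token:
    place: str
    age: float  # real-valued clock

def enabled(tokens, constraints):
    """Each input token must satisfy an interval (lo, hi, strict_lo,
    strict_hi) with integer bounds lo, hi; the strictness flags encode
    strict vs. non-strict inequalities."""
    for tok, (lo, hi, s_lo, s_hi) in zip(tokens, constraints):
        ok_lo = tok.age > lo if s_lo else tok.age >= lo
        ok_hi = tok.age < hi if s_hi else tok.age <= hi
        if not (ok_lo and ok_hi):
            return False
    return True

def fire(inputs, outputs):
    """outputs: list of (place, mode), where mode is 'reset' (new token
    starts at age 0) or an index into `inputs` whose age is inherited."""
    return [Token(p, 0.0 if mode == 'reset' else inputs[mode].age)
            for p, mode in outputs]

t = Token('p1', 1.5)
print(enabled([t], [(1, 2, False, True)]))       # True: 1 <= 1.5 < 2
print(fire([t], [('p2', 0), ('p3', 'reset')]))   # inherit 1.5; reset to 0.0
```

Because ages range over the reals, the state space is uncountable, which is what makes decidability of universal safety non-trivial.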
In the second part of the thesis, we study systems with controlled behaviour that
are probabilistic extensions of VASS and One-Counter Automata. Firstly, we consider
infinite state Markov Decision Processes (MDPs) that are induced by probabilistic
extensions of VASS, called VASS-MDPs. We show that most of the qualitative problems
for general VASS-MDPs are undecidable, and consider a monotone subclass in which
only the controller can change the counter values, called 1-VASS-MDPs. In particular,
we show that limit-sure control state reachability for 1-VASS-MDPs is decidable, i.e.,
checking whether one can reach a set of control states with probability arbitrarily close
to 1. Unlike for finite state MDPs, the control state reachability property may hold limit
surely (i.e., for every ε > 0 there is a strategy achieving the objective with
probability ≥ 1 − ε), but not almost surely (i.e. with probability 1).
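The gap between limit-sure and almost-sure can be illustrated numerically with a toy family of strategies (our example, not one from the thesis): suppose strategy number n attains the objective with probability 1 − 2^(−n). Then every ε > 0 has a witness strategy, yet no single strategy attains probability 1.

```python
# Toy illustration (our example) of limit-sure vs. almost-sure: a family
# of strategies whose success probabilities approach 1 without any single
# strategy reaching it.

def success_prob(n):
    """Success probability of the (hypothetical) n-th strategy."""
    return 1 - 2 ** (-n)

def witness_for(eps):
    """Smallest n whose strategy achieves probability >= 1 - eps."""
    n = 1
    while success_prob(n) < 1 - eps:
        n += 1
    return n

for eps in (0.1, 0.01, 0.001):
    n = witness_for(eps)
    print(eps, n, success_prob(n))
# The required n grows as eps shrinks: the objective holds limit surely,
# but not almost surely, since success_prob(n) < 1 for every fixed n.
```

This is only a caricature of the phenomenon; in 1-VASS-MDPs the family of strategies arises from the counter dynamics rather than being given explicitly.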
Secondly, we consider infinite state MDPs that are induced by probabilistic extensions of
One-Counter Automata, called One-Counter Markov Decision Processes (OC-MDPs).
We show that the almost-sure {1,2,3}-Parity problem for OC-MDPs is at least as hard
as the limit-sure selective termination problem for OC-MDPs, in which one would
like to reach a particular set of control states and counter value zero with probability
arbitrarily close to 1.