Qualitative Analysis of VASS-Induced MDPs
We consider infinite-state Markov decision processes (MDPs) that are induced
by extensions of vector addition systems with states (VASS). Verification
conditions for these MDPs are described by reachability and Büchi objectives
w.r.t. given sets of control-states. We study the decidability of some
qualitative versions of these objectives, i.e., the decidability of whether
such objectives can be achieved surely, almost-surely, or limit-surely. While
most such problems are undecidable in general, some are decidable for large
subclasses in which either only the controller or only the random environment
can change the counter values (while the other side can only change
control-states).
Comment: Extended version (including all proofs) of material presented at
FOSSACS 201
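The counter discipline underlying VASS can be made concrete with a minimal sketch (the encoding below, including state names and the vector representation, is my own, not from the paper): control states plus integer counter vectors, where a transition is enabled only if every counter stays non-negative.

```python
# Minimal sketch (encoding assumed) of a VASS: control states plus
# integer counter vectors; a transition is enabled only if it keeps
# every counter non-negative.
from typing import Dict, List, Tuple

Config = Tuple[str, Tuple[int, ...]]

class VASS:
    def __init__(self, transitions: Dict[str, List[Tuple[Tuple[int, ...], str]]]):
        # transitions[q] = list of (counter update vector, target state)
        self.transitions = transitions

    def successors(self, config: Config) -> List[Config]:
        q, counters = config
        result = []
        for delta, target in self.transitions.get(q, []):
            updated = tuple(c + d for c, d in zip(counters, delta))
            if all(c >= 0 for c in updated):   # counters must stay >= 0
                result.append((target, updated))
        return result

# Two counters: "p" can increment counter 1, or move one unit from
# counter 1 to counter 2 while switching to "q".
vass = VASS({
    "p": [((1, 0), "p"), ((-1, 1), "q")],
    "q": [((0, -1), "q")],
})
print(vass.successors(("p", (0, 0))))   # the decrement is disabled at 0
```

In the MDP setting, some of these transitions would be controlled and others resolved probabilistically; the non-negativity check above is what makes the induced state space infinite but monotone.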
Verification problems for timed and probabilistic extensions of Petri Nets
In the first part of the thesis, we prove the decidability (and PSPACE-completeness) of
the universal safety property on a timed extension of Petri Nets, called Timed Petri Nets.
Every token has a real-valued clock (a.k.a. age), and transition firing is constrained by
inequalities comparing the clock values to integer bounds (both strict and non-strict). A
newly created token can either inherit the age of an input token of the transition or
have its age reset to zero.
In the second part of the thesis, we turn to systems with controlled behaviour that
are probabilistic extensions of VASS and One-Counter Automata. Firstly, we consider
infinite state Markov Decision Processes (MDPs) that are induced by probabilistic
extensions of VASS, called VASS-MDPs. We show that most of the qualitative problems
for general VASS-MDPs are undecidable, and consider a monotone subclass in which
only the controller can change the counter values, called 1-VASS-MDPs. In particular,
we show that limit-sure control state reachability for 1-VASS-MDPs is decidable, i.e.,
checking whether one can reach a set of control states with probability arbitrarily close
to 1. Unlike for finite state MDPs, the control state reachability property may hold
limit-surely (i.e., via an infinite family of strategies, each achieving the objective
with probability ≥ 1-ε, for every ε > 0), but not almost-surely (i.e., with probability 1).
Secondly, we consider infinite state MDPs that are induced by probabilistic extensions of
One-Counter Automata, called One-Counter Markov Decision Processes (OC-MDPs).
We show that the almost-sure {1, 2, 3}-Parity problem for OC-MDPs is at least as hard
as the limit-sure selective termination problem for OC-MDPs, in which one would
like to reach a particular set of control states and counter value zero with probability
arbitrarily close to 1.
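The gap between limit-sure and almost-sure reachability can be illustrated with a toy example (my own, not from the thesis): the controller may pump a counter up to some value n and then commit; the commit reaches the target with probability 1 - 2^(-n), and otherwise the run is lost.

```python
# Toy example (not from the thesis): under the "pump the counter n times,
# then commit" strategy, the target is reached with probability 1 - 2**(-n).
def success_probability(n: int) -> float:
    return 1 - 2.0 ** (-n)

for n in (1, 5, 20):
    print(n, success_probability(n))
# The supremum over n is 1, so the objective holds limit-surely; but no
# single n achieves exactly 1, and pumping forever never commits, so the
# objective does not hold almost-surely.
```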
Zero-Reachability in Probabilistic Multi-Counter Automata
We study the qualitative and quantitative zero-reachability problem in
probabilistic multi-counter systems (pMC). We identify the undecidable variants of
the problems, and then we concentrate on the remaining two cases. In the first
case, when we are interested in the probability of all runs that visit zero in
some counter, we show that the qualitative zero-reachability is decidable in
time which is polynomial in the size of a given pMC and doubly exponential in
the number of counters. Further, we show that the probability of all
zero-reaching runs can be effectively approximated up to an arbitrarily small
given error epsilon > 0 in time which is polynomial in log(epsilon),
exponential in the size of a given pMC, and doubly exponential in the number of
counters. In the second case, we are interested in the probability of all runs
that visit zero in some counter different from the last counter. Here we show
that the qualitative zero-reachability is decidable and SquareRootSum-hard, and
the probability of all zero-reaching runs can be effectively approximated up to
an arbitrarily small given error epsilon > 0 (these results apply to pMCs
satisfying a suitable technical condition that can be verified in polynomial
time). The proof techniques invented in the second case allow us to construct
counterexamples for some classical results about ergodicity in stochastic Petri
nets.
Comment: 20 pages
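The zero-reachability event itself is easy to state operationally. A rough Monte Carlo sketch (the encoding is assumed, not taken from the paper) that estimates the probability of visiting zero in some counter within a bounded number of steps:

```python
# Monte Carlo sketch (encoding assumed): estimate the probability that
# some counter hits zero within a bounded number of steps of a
# probabilistic multi-counter system.
import random

def visits_zero(transitions, start, max_steps, rng):
    q, counters = start
    for _ in range(max_steps):
        if any(c == 0 for c in counters):
            return True
        r, acc = rng.random(), 0.0
        for prob, delta, target in transitions[q]:   # sample a transition
            acc += prob
            if r <= acc:
                counters = tuple(c + d for c, d in zip(counters, delta))
                q = target
                break
    return any(c == 0 for c in counters)

# One counter, random walk: down with probability 0.6, up with 0.4;
# the downward drift makes the zero-reachability probability 1.
transitions = {"q": [(0.6, (-1,), "q"), (0.4, (1,), "q")]}
rng = random.Random(0)
estimate = sum(visits_zero(transitions, ("q", (3,)), 1000, rng)
               for _ in range(2000)) / 2000
print(estimate)   # close to 1
```

Sampling of course gives no error guarantees; the point of the paper's results is that the approximation can be made effective with a provable error bound epsilon.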
Taming denumerable Markov decision processes with decisiveness
Decisiveness has proven to be an elegant concept for denumerable Markov
chains: it is general enough to encompass several natural classes of
denumerable Markov chains, and is a sufficient condition for simple qualitative
and approximate quantitative model checking algorithms to exist. In this paper,
we explore how to extend the notion of decisiveness to Markov decision
processes. Compared to Markov chains, the extra non-determinism can be resolved
in an adversarial or cooperative way, yielding two natural notions of
decisiveness. We then explore whether these notions yield model checking
procedures concerning the infimum and supremum probabilities of reachability
properties.
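The approximate quantitative scheme mentioned above can be sketched as follows (a standard path-exploration scheme for decisive chains; the details and the demo chain are my own, not from the paper): explore runs step by step, and rely on decisiveness to guarantee that the mass of runs that have neither reached the target nor a state from which the target is unreachable tends to 0, so the loop terminates.

```python
# Sketch of approximate reachability for a decisive Markov chain
# (details assumed, not from the paper).
def approx_reach(step, start, target, dead, eps):
    # step(s) returns a list of (probability, successor) pairs
    frontier = {start: 1.0}   # probability mass per current state
    lo = 0.0                  # mass of runs that reached the target
    lost = 0.0                # mass of runs that can no longer reach it
    while 1.0 - lo - lost > eps:        # undecided mass still too large
        nxt = {}
        for s, p in frontier.items():
            for prob, t in step(s):
                q = p * prob
                if t == target:
                    lo += q
                elif t in dead:
                    lost += q
                else:
                    nxt[t] = nxt.get(t, 0.0) + q
        frontier = nxt
    return lo, 1.0 - lost     # interval containing the true probability

# Demo chain: from "a", reach the target with 0.5, a losing sink with 0.3,
# or stay in "a" with 0.2; the true reachability value is 0.5/0.8 = 0.625.
def step(s):
    return [(0.5, "t"), (0.3, "d"), (0.2, "a")]

lo, hi = approx_reach(step, "a", "t", {"d"}, 1e-6)
print(lo, hi)   # an interval of width at most 1e-6 around 0.625
```

For MDPs, the adversarial and cooperative resolutions of non-determinism discussed in the paper would bound the infimum and supremum of such reachability values, respectively.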
Simple Stochastic Games with Almost-Sure Energy-Parity Objectives are in NP and coNP
We study stochastic games with energy-parity objectives, which combine
quantitative rewards with a qualitative ω-regular condition: The
maximizer aims to avoid running out of energy while simultaneously satisfying a
parity condition. We show that the corresponding almost-sure problem, i.e.,
checking whether there exists a maximizer strategy that achieves the
energy-parity objective with probability 1 when starting at a given energy
level k, is decidable and in NP ∩ coNP. The same holds for checking if
such a k exists and if a given k is minimal.
Strategy Complexity of Threshold Payoff with Applications to Optimal Expected Payoff
We study countably infinite Markov decision processes (MDPs) with transition
rewards. The lim sup (resp. lim inf) threshold objective is to maximize the
probability that the lim sup (resp. lim inf) of the infinite sequence of
transition rewards is non-negative. We establish the complete picture of the
strategy complexity of these objectives, i.e., the upper and lower bounds on
the memory required by ε-optimal (resp. optimal) strategies. We
then apply these results to solve two open problems from [Sudderth, Decisions
in Economics and Finance, 2020] about the strategy complexity of optimal
strategies for the expected lim sup (resp. lim inf) payoff.
Comment: 53 pages
Parity Objectives in Countable MDPs
We study countably infinite MDPs with parity objectives, and special cases with a bounded number of colors in the Mostowski hierarchy (including reachability, safety, Büchi and co-Büchi). In finite MDPs there always exist optimal memoryless deterministic (MD) strategies for parity objectives, but this does not generally hold for countably infinite MDPs. In particular, optimal strategies need not exist. For countably infinite MDPs, we provide a complete picture of the memory requirements of optimal (resp., ε-optimal) strategies for all objectives in the Mostowski hierarchy. In particular, there is a strong dichotomy between two different types of objectives. For the first type, optimal strategies, if they exist, can be chosen MD, while for the second type optimal strategies require infinite memory. (I.e., for all objectives in the Mostowski hierarchy, if finite-memory randomized strategies suffice then MD-strategies also suffice.) Similarly, some objectives admit ε-optimal MD-strategies, while for others ε-optimal strategies require infinite memory. Such a dichotomy also holds for the subclass of countably infinite MDPs that are finitely branching, though more objectives admit MD-strategies there.
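As a reminder of the underlying winning condition: a run satisfies the parity objective iff the least color occurring infinitely often is even. For an ultimately periodic ("lasso") run, this depends only on the colors on the cycle. A tiny illustration (my own encoding, with colors as non-negative integers):

```python
# Parity objective on a lasso run: the prefix colors occur only finitely
# often, so only the cycle determines the least color seen infinitely often.
def parity_satisfied(prefix_colors, cycle_colors):
    return min(cycle_colors) % 2 == 0

print(parity_satisfied([3, 1], [2, 4]))   # True: least infinite color is 2
print(parity_satisfied([0], [1, 3]))      # False: least infinite color is 1
```

Restricting the number of available colors yields the special cases in the Mostowski hierarchy that the abstract mentions, such as Büchi and co-Büchi objectives.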
MDPs with Energy-Parity Objectives
Energy-parity objectives combine ω-regular objectives with the quantitative
objectives of reward MDPs. The controller needs to avoid running out of energy
while satisfying a parity objective.
We refute the common belief that, if an energy-parity objective holds
almost-surely, then this can be realised by some finite memory strategy. We
provide a surprisingly simple counterexample that only uses co-Büchi
conditions.
We introduce the new class of bounded (energy) storage objectives that, when
combined with parity objectives, preserve the finite memory property. Based on
these, we show that almost-sure and limit-sure energy-parity objectives, as
well as almost-sure and limit-sure storage parity objectives, are in
NP ∩ coNP and can be solved in pseudo-polynomial time for
energy-parity MDPs.
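The energy condition along a single run, and the stricter bounded-storage variant, can be sketched in a few lines (a toy illustration of my own, not the paper's construction): starting from level k, the accumulated reward must never drop below 0, and the storage variant additionally caps the level at a bound s, losing any excess energy.

```python
# Toy illustration: check the energy condition along a finite run.
def energy_safe(rewards, k, cap=None):
    level = k
    for r in rewards:
        level += r
        if cap is not None:
            level = min(level, cap)   # bounded storage loses excess energy
        if level < 0:
            return False              # ran out of energy
    return True

print(energy_safe([-1, 2, -2], k=1))         # True: levels 0, 2, 0
print(energy_safe([-1, 2, -2], k=0))         # False: drops below 0
print(energy_safe([2, 2, -3], k=0))          # True: levels 2, 4, 1
print(energy_safe([2, 2, -3], k=0, cap=2))   # False: the cap loses energy
```

The last two lines show why storage objectives are more demanding than plain energy objectives, which is what makes their finite-memory property noteworthy.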
Model Checking Population Protocols
Population protocols are a model for parameterized systems in which a set of identical, anonymous, finite-state processes interact pairwise through rendezvous synchronization. In each step, the pair of interacting processes is chosen by a random scheduler. Angluin et al. (PODC 2004) studied population protocols as a distributed computation model. They characterized the computational power in the limit (semi-linear predicates) of a subclass of protocols (the well-specified ones). However, the modeling power of population protocols goes beyond the computation of semi-linear predicates: they can be used to study a wide range of distributed protocols, such as asynchronous leader election or consensus, stochastic evolutionary processes, or chemical reaction networks. Correspondingly, one is interested in checking specifications on these protocols that go beyond the well-specified computation of predicates.
In this paper, we characterize the decidability frontier for the model checking problem for population protocols against probabilistic linear-time specifications. We show that the model checking problem is decidable for qualitative objectives, but as hard as the reachability problem for Petri nets - a well-known hard problem without known elementary algorithms. On the other hand, model checking is undecidable for quantitative properties.
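The semantics just described, identical agents interacting pairwise under a uniformly random scheduler, can be sketched with the standard one-way "epidemic" toy protocol (the encoding below is mine, not from the paper): an informed agent (state 1) informs the other agent of the pair.

```python
# Minimal sketch of population-protocol semantics: identical finite-state
# agents, pairwise interactions chosen by a uniformly random scheduler.
# Protocol: the one-way epidemic, with interaction rule (1, 0) -> (1, 1).
import random

def run_epidemic(n_agents, n_informed, rng, max_steps=100000):
    states = [1] * n_informed + [0] * (n_agents - n_informed)
    for step in range(max_steps):
        if all(states):
            return step                          # every agent informed
        i, j = rng.sample(range(n_agents), 2)    # random scheduler
        if states[i] == 1 or states[j] == 1:
            states[i] = states[j] = 1
    return None

rng = random.Random(1)
print(run_epidemic(50, 1, rng))   # interactions until convergence
```

A qualitative specification here would ask, e.g., whether all agents become informed with probability 1; the paper's results delimit for which such linear-time properties model checking is decidable.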