Expectations or Guarantees? I Want It All! A crossroads between games and MDPs
When reasoning about the strategic capabilities of an agent, it is important
to consider the nature of its adversaries. In the particular context of
controller synthesis for quantitative specifications, the usual problem is to
devise a strategy for a reactive system which yields some desired performance,
taking into account the possible impact of the system's environment. There
are at least two ways to look at this environment. In the classical analysis of
two-player quantitative games, the environment is purely antagonistic and the
problem is to provide strict performance guarantees. In Markov decision
processes, the environment is seen as purely stochastic: the aim is then to
optimize the expected payoff, with no guarantee on individual outcomes.
In this expository work, we report on recent results introducing the beyond
worst-case synthesis problem, which is to construct strategies that guarantee
some quantitative requirement in the worst case while providing a higher
expected value against a particular stochastic model of the environment given
as input. This problem is relevant for producing system controllers that
provide good expected performance in everyday situations while ensuring a
strict (but relaxed) performance threshold even in the event of very bad
(though unlikely) circumstances. It has been studied for both the mean-payoff
and the shortest path quantitative measures.
Comment: In Proceedings SR 2014, arXiv:1404.041
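To make the tension between the two criteria concrete, here is a minimal
Python sketch invented for illustration (not taken from the paper; the route
names and numbers are hypothetical): a one-shot shortest path choice, where
each route is evaluated both against the stochastic model and against a
purely antagonistic environment.

    # Hypothetical illustration of the trade-off: a one-shot shortest path
    # choice between two routes, each given as a list of (probability, cost)
    # outcomes under an assumed stochastic model of the environment.
    routes = {
        "risky": [(0.9, 10), (0.1, 100)],  # usually fast, rarely terrible
        "safe":  [(1.0, 30)],              # always mediocre
    }

    def expected_cost(outcomes):
        # Performance against the given stochastic model of the environment.
        return sum(p * c for p, c in outcomes)

    def worst_case_cost(outcomes):
        # Cost guaranteed even against a purely antagonistic environment.
        return max(c for p, c in outcomes if p > 0)

    for name, outcomes in routes.items():
        print(name, expected_cost(outcomes), worst_case_cost(outcomes))
    # risky: expected 19.0, worst case 100
    # safe:  expected 30.0, worst case  30
    # A beyond worst-case requirement such as "worst case at most 40, with
    # expected cost as low as possible" must reject "risky" despite its
    # better expectation and settle for "safe".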
Symblicit Exploration and Elimination for Probabilistic Model Checking
Binary decision diagrams can compactly represent vast sets of states,
mitigating the state space explosion problem in model checking. Probabilistic
systems, however, require multi-terminal diagrams storing rational numbers.
Such diagrams are inefficient for models with many distinct probabilities and for
iterative numeric algorithms like value iteration. In this paper, we present a
new "symblicit" approach to checking Markov chains and related probabilistic
models: We first generate a decision diagram that symbolically collects all
reachable states and their predecessors. We then concretise states one-by-one
into an explicit partial state space representation. Whenever all predecessors
of a state have been concretised, we eliminate it from the explicit state space
in a way that preserves all relevant probabilities and rewards. We thus keep
only a few explicit states in memory at any time. Experiments show that very
large models can be model-checked in this way with very low memory
consumption.
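The elimination step resembles Gaussian elimination on the transition matrix.
The following minimal Python sketch shows one such step under assumed details
(a dict-of-dicts transition matrix, probabilities only); it is not the paper's
implementation, which also preserves rewards and operates on a partial
explicit state space built from the decision diagram.

    def eliminate(P, s):
        # Remove state s from P while preserving, for every remaining pair
        # (u, v), the probability of eventually moving from u to v.
        self_loop = P[s].get(s, 0.0)
        assert self_loop < 1.0, "cannot eliminate an absorbing state"
        scale = 1.0 / (1.0 - self_loop)  # sums over any number of s-loops
        preds = [u for u in P if u != s and P[u].get(s, 0.0) > 0.0]
        for u in preds:
            p_us = P[u].pop(s)
            for v, p_sv in P[s].items():
                if v != s:  # route u -> s -> v paths directly to v
                    P[u][v] = P[u].get(v, 0.0) + p_us * scale * p_sv
        del P[s]

    # Toy chain: after eliminating "mid", "init" reaches "goal" with
    # probability 0.5 + 0.5 * (0.5 / 0.75) = 5/6 and "fail" with 1/6.
    P = {
        "init": {"mid": 0.5, "goal": 0.5},
        "mid":  {"mid": 0.25, "goal": 0.5, "fail": 0.25},
        "goal": {"goal": 1.0},
        "fail": {"fail": 1.0},
    }
    eliminate(P, "mid")
    print(P["init"])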