Incremental Verification of Parametric and Reconfigurable Markov Chains
The analysis of parametrised systems is a growing field in verification, but
the analysis of parametrised probabilistic systems is still in its infancy.
This is partly because it is much harder: while there are beautiful cut-off
results for non-stochastic systems that allow one to focus only on small
instances, there is little hope that such approaches extend to the quantitative
analysis of probabilistic systems, as the probabilities depend on the size of
the system.
The unicorn would be an automatic transformation of a parametrised system into
a formula that allows one to plot, say, the likelihood of reaching a goal, or
the expected cost of doing so, against the parameters of the system. While such
analyses exist for narrow classes of systems, such as waiting queues, we aim
both lower (stepwise exploring the parameter space) and higher (considering
general systems).
The novelty is to heavily exploit the similarity between instances of
parametrised systems. When the parameter grows, the system for the smaller
parameter is, broadly speaking, contained in the larger system. We use this
observation to guide the elegant state-elimination method for parametric Markov
chains in such a way that the model transformations start with those parts of
the system that are stable under increasing the parameter. We argue that this
can lead to a very cheap iterative way to analyse parametric systems, show how
this approach extends to reconfigurable systems, and demonstrate on two
benchmarks that this approach scales.
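The state-elimination method mentioned above can be illustrated in a few lines. The sketch below is not the paper's implementation; it is a minimal, hypothetical rendering of the classical idea: transition probabilities are symbolic expressions in the parameters, and removing a state (summing out its self-loop as a geometric series) leaves a smaller chain whose transition probabilities are rational functions. The `eliminate` helper and the two-state example chain are illustrative assumptions.

```python
# Minimal sketch of state elimination for parametric Markov chains,
# using sympy expressions as transition probabilities.
import sympy as sp

def eliminate(trans, s):
    """Remove state s from the chain, redirecting probability mass around it.

    trans maps each state u to a dict {v: symbolic probability of u -> v}.
    A self-loop on s contributes the geometric-series factor 1/(1 - p(s,s)).
    """
    succs = trans.pop(s)
    loop = succs.pop(s, sp.Integer(0))
    factor = 1 / (1 - loop)
    for out in trans.values():
        if s not in out:
            continue
        p_us = out.pop(s)
        for v, p_sv in succs.items():
            out[v] = sp.simplify(out.get(v, 0) + p_us * factor * p_sv)

p, q = sp.symbols("p q", positive=True)
# Example chain: 0 --p--> 1, 1 --q--> 0, 1 --(1-q)--> "goal"
trans = {0: {1: p}, 1: {0: q, "goal": 1 - q}}
eliminate(trans, 1)  # only state 0 and "goal" remain
# Resolve the remaining self-loop on the initial state:
reach = sp.simplify(trans[0]["goal"] / (1 - trans[0].get(0, 0)))
print(reach)  # equivalent to p*(1 - q)/(1 - p*q)
```

Evaluating the resulting rational function at concrete parameter values (e.g. `reach.subs({p: 1/2, q: 1/2})`) is then cheap, which is exactly what makes reusing elimination work across instances attractive: the elimination order can start with the parts of the chain that do not change as the parameter grows.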
Symblicit Exploration and Elimination for Probabilistic Model Checking
Binary decision diagrams can compactly represent vast sets of states,
mitigating the state space explosion problem in model checking. Probabilistic
systems, however, require multi-terminal diagrams storing rational numbers.
They are inefficient for models with many distinct probabilities and for
iterative numeric algorithms like value iteration. In this paper, we present a
new "symblicit" approach to checking Markov chains and related probabilistic
models: We first generate a decision diagram that symbolically collects all
reachable states and their predecessors. We then concretise states one-by-one
into an explicit partial state space representation. Whenever all predecessors
of a state have been concretised, we eliminate it from the explicit state space
in a way that preserves all relevant probabilities and rewards. We thus keep
few explicit states in memory at any time. Experiments show that very large
models can be model-checked in this way with very low memory consumption.
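The core invariant of the approach, that a state can be eliminated as soon as all its predecessors have been concretised, can be sketched numerically. The toy below is an illustrative assumption, not the paper's symblicit implementation (there is no decision diagram here): on a simple line-shaped chain it concretises one state at a time and immediately eliminates it, so at most two explicit states are ever held in memory. The names `eliminate` and `reach_probability` are hypothetical.

```python
# Sketch of "concretise, then eliminate" on a line chain
# 0 -> 1 -> ... -> n, where each step succeeds with probability p_step
# and otherwise moves to an absorbing "fail" state.

def eliminate(trans, s):
    """Bypass state s, preserving reachability probabilities."""
    succs = trans.pop(s)
    loop = succs.pop(s, 0.0)
    factor = 1.0 / (1.0 - loop)  # geometric series for a self-loop on s
    for out in trans.values():
        if s not in out:
            continue
        p_us = out.pop(s)
        for v, p_sv in succs.items():
            out[v] = out.get(v, 0.0) + p_us * factor * p_sv

def reach_probability(n, p_step=0.9):
    """Probability of reaching state n from state 0.

    State i is eliminated right after it is concretised, because its
    only predecessor (i-1, folded into state 0 by earlier eliminations)
    is already explicit; the explicit state space never exceeds two states.
    """
    trans = {0: {1: p_step, "fail": 1.0 - p_step}}
    for i in range(1, n):
        trans[i] = {i + 1: p_step, "fail": 1.0 - p_step}  # concretise i
        eliminate(trans, i)  # all predecessors explicit: bypass i now
        assert len(trans) <= 2
    return trans[0].get(n, 0.0)

print(reach_probability(10))  # close to 0.9 ** 10
```

In the paper's setting the order in which states become eliminable is derived from the predecessor information collected in the decision diagram; the line chain merely makes that order trivial.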