An expectation transformer approach to predicate abstraction and data independence for probabilistic programs
In this paper we revisit the well-known technique of predicate abstraction to
characterise performance attributes of system models incorporating probability.
We recast the theory using expectation transformers, and identify transformer
properties corresponding to abstractions that nevertheless yield exact bounds
on the performance of infinite-state probabilistic systems. In addition, we
extend the developed technique to the special case of "data independent"
programs incorporating probability. Finally, we demonstrate the subtlety of
the extended technique by using the PRISM model checking tool to analyse an
infinite-state protocol, obtaining exact bounds on its performance.
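The expectation-transformer view described in the abstract can be illustrated with a tiny interpreter. This is a minimal sketch, assuming a pGCL-style encoding of assignment and probabilistic choice; the helper names `assign` and `pchoice` are illustrative assumptions, not the paper's formalisation.

```python
# A minimal expectation-transformer sketch for pGCL-style programs.
# Helper names (assign, pchoice) are illustrative, not the paper's notation.
# An "expectation" maps program states (dicts) to non-negative reals; each
# program constructor returns its weakest pre-expectation transformer wp.

def assign(var, expr):
    """wp of `var := expr`: evaluate the post-expectation in the updated state."""
    def wp(post):
        return lambda s: post({**s, var: expr(s)})
    return wp

def pchoice(p, left, right):
    """wp of probabilistic choice `left [p] right`: the p-weighted average."""
    def wp(post):
        return lambda s: p * left(post)(s) + (1 - p) * right(post)(s)
    return wp

# Program: set x to 1 with probability 0.3, otherwise set x to 0.
prog = pchoice(0.3, assign("x", lambda s: 1), assign("x", lambda s: 0))

# Post-expectation [x = 1]; its pre-expectation is the probability x ends up 1.
post = lambda s: 1.0 if s["x"] == 1 else 0.0
print(prog(post)({"x": 0}))  # 0.3
```

Computing the pre-expectation of the characteristic expectation [x = 1] through the program gives exactly the probability 0.3 that the post-state satisfies x = 1, which is the sense in which transformers can yield exact rather than merely safe bounds.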
Model exploration and analysis for quantitative safety refinement in probabilistic B
The role played by counterexamples in standard system analysis is well known;
but less common is a notion of counterexample in probabilistic systems
refinement. In this paper we build on previous work using counterexamples for
inductive invariant properties of probabilistic systems, demonstrating how they
can be used to extend bounded model checking-style analysis to the refinement
of quantitative safety specifications in the probabilistic B language. In
particular, we show how the method can be adapted to cope with
refinements incorporating probabilistic loops. Finally, we demonstrate the
technique on pB models summarising a one-step refinement of a randomised
algorithm for finding the minimum cut of undirected graphs, and on a model
for the dependability analysis of a controller design.
Comment: In Proceedings Refine 2011, arXiv:1106.348
Generating counterexamples for quantitative safety specifications in probabilistic B
Probabilistic annotations generalise standard Hoare Logic [20] to quantitative properties of probabilistic programs. They can be used to express critical expected values over program variables that must be maintained during program execution. As in standard program development, probabilistic assertions can be checked mechanically relative to an appropriate program semantics. If a mechanical prover is unable to complete such a validity check, a counterexample showing that the annotation is incorrect can provide useful diagnostic information. In this paper, we define counterexamples as failure traces for probabilistic assertions within the context of the pB language [19], an extension of the standard B method [1] to probabilistic programs. In addition, we propose algorithmic techniques to find counterexamples where they exist, and suggest a ranking mechanism that returns 'the most useful diagnostic information' to the pB developer to aid resolution of the problem.
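The counterexample search and ranking described above can be sketched for finite toy models. This is a hedged illustration only: the inductive "does not decrease in expectation" condition, the helper names, and the violation-size ranking are assumptions made for the example, not pB's actual algorithms.

```python
# A hedged sketch of counterexample generation for a quantitative annotation:
# an expectation A over states that should not decrease in expectation across
# one program step (an inductive condition in the spirit of pB's assertions).
# All names and the ranking heuristic here are illustrative assumptions.

def step_expectation(transitions, A, s):
    """Expected value of A after one probabilistic step from state s."""
    return sum(p * A(t) for p, t in transitions[s])

def counterexamples(transitions, A, states):
    """States where A is not inductive, ranked by the size of the violation
    (largest expected drop first -- 'the most useful diagnostic information')."""
    fails = [(A(s) - step_expectation(transitions, A, s), s)
             for s in states
             if step_expectation(transitions, A, s) < A(s)]
    return [s for _, s in sorted(fails, reverse=True)]

# Toy random walk on {0, 1, 2, 3}: states 1 and 2 move up or down with
# probability 1/2; states 0 and 3 are absorbing.
T = {0: [(1.0, 0)], 3: [(1.0, 3)],
     1: [(0.5, 0), (0.5, 2)], 2: [(0.5, 1), (0.5, 3)]}

# Annotation claiming the walk stays in {1, 2} in expectation -- it fails.
A = lambda s: 1.0 if s in (1, 2) else 0.0
print(counterexamples(T, A, [0, 1, 2, 3]))  # [2, 1]
```

Each returned state is the root of a failure trace: one probabilistic step from it already drops the expected value of the annotation, so exhibiting that step to the developer pinpoints where the assertion breaks.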