On Quantitative Software Verification
Software verification has made great progress in recent years, resulting in several tools capable of working directly from source code, for example SLAM and Astrée. Typical properties that can be verified are expressed as Boolean assertions or temporal logic properties, and include whether the program eventually terminates, or whether its executions never violate a safety property. The underlying techniques crucially rely on the ability to extract finite-state abstract models from programs, using compiler tools and predicate abstraction; these models are then iteratively refined to either demonstrate the violation of a safety property (e.g. a buffer overflow) or guarantee the absence of such faults. An established method achieves this automatically by executing an abstraction-refinement loop guided by counterexample traces [1].
Statistical Model Checking: An Overview
Quantitative properties of stochastic systems are usually specified in logics
that allow one to compare the measure of executions satisfying certain temporal
properties with thresholds. The model checking problem for stochastic systems
with respect to such logics is typically solved by a numerical approach that
iteratively computes (or approximates) the exact measure of paths satisfying
relevant subformulas; the algorithms themselves depend on the class of systems
being analyzed as well as the logic used for specifying the properties. Another
approach to solving the model checking problem is to simulate the system
for finitely many runs, and to use hypothesis testing to infer whether the
samples provide statistical evidence for the satisfaction or violation
of the specification. In this short paper, we survey the statistical approach,
and outline its main advantages in terms of efficiency, uniformity, and
simplicity.
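The hypothesis-testing core of the statistical approach can be illustrated with Wald's sequential probability ratio test (SPRT), one of the standard tests used in statistical model checking. The sketch below is a generic illustration, not any particular tool's implementation; `simulate` stands for an arbitrary (hypothetical) procedure that executes one random run of the system and reports whether it satisfies the property.

```python
import random

def sprt(simulate, theta, delta=0.01, alpha=0.01, beta=0.01):
    """Wald's sequential probability ratio test: decide between
    H0: P(property) >= theta + delta and H1: P(property) <= theta - delta,
    with error probabilities bounded (approximately) by alpha and beta."""
    p0, p1 = theta + delta, theta - delta
    accept_h0 = beta / (1 - alpha)       # lower boundary: accept H0
    accept_h1 = (1 - beta) / alpha       # upper boundary: accept H1
    ratio = 1.0                          # likelihood ratio L(p1)/L(p0)
    runs = 0
    while True:
        runs += 1
        if simulate():                   # run satisfied the property
            ratio *= p1 / p0
        else:                            # run violated the property
            ratio *= (1 - p1) / (1 - p0)
        if ratio <= accept_h0:
            return True, runs            # statistical evidence for P >= theta
        if ratio >= accept_h1:
            return False, runs           # statistical evidence for P <= theta

# toy sampler: a run "satisfies" the property with probability 0.8
accept, n = sprt(lambda: random.random() < 0.8, theta=0.7, delta=0.05)
```

Note the hallmark of the statistical approach: the number of runs `n` is not fixed in advance but adapts to how far the true probability lies from the threshold.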
Automatic Probabilistic Program Verification through Random Variable Abstraction
The weakest pre-expectation calculus has proven to be a mature theory for
analyzing quantitative properties of probabilistic and nondeterministic programs.
We present an automatic method for proving quantitative linear properties on
any denumerable state space using iterative backwards fixed point calculation
in the general framework of abstract interpretation. In order to accomplish
this task we present the technique of random variable abstraction (RVA) and we
also postulate a sufficient condition to achieve exact fixed point computation
in the abstract domain. The feasibility of our approach is shown with two
examples, one obtaining the expected running time of a probabilistic program,
and the other the expected gain of a gambling strategy.
Our method works on general guarded probabilistic and nondeterministic
transition systems instead of plain pGCL programs, allowing us to easily model
a wide range of systems including distributed ones and unstructured programs.
We present the operational and weakest-precondition semantics for these
programs and prove their equivalence.
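The backwards fixed-point style of reasoning the abstract describes can be seen on the smallest possible example: for the loop `while not done: done := bernoulli(p)`, the expected number of iterations is the least fixed point of E = 1 + (1 - p)·E, which Kleene iteration from 0 approximates from below. This is a minimal sketch of the iteration scheme only; the paper's random variable abstraction handles far richer denumerable state spaces.

```python
def expected_iterations(p, tol=1e-12, max_iter=10**6):
    """Least fixed point of the expected-runtime equation
    E = 1 + (1 - p) * E, computed by Kleene iteration from 0
    (each step: one backwards application of the transformer)."""
    e = 0.0
    for _ in range(max_iter):
        new = 1.0 + (1.0 - p) * e
        if abs(new - e) < tol:
            return new
        e = new
    return e

# fair coin: the iterates 0, 1, 1.5, 1.75, ... converge to 1/p = 2
expected_iterations(0.5)
```

The monotone iterates mirror the abstract-interpretation view: each step is a sound under-approximation, and the limit is the exact expected running time.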
An expectation transformer approach to predicate abstraction and data independence for probabilistic programs
In this paper we revisit the well-known technique of predicate abstraction to
characterise performance attributes of system models incorporating probability.
We recast the theory using expectation transformers, and identify transformer
properties which correspond to abstractions that nevertheless yield exact
bounds on the performance of infinite-state probabilistic systems. In addition, we
extend the developed technique to the special case of "data independent"
programs incorporating probability. Finally, we demonstrate the subtlety of
the extended technique by using the PRISM model checking tool to analyse an
infinite-state protocol, obtaining exact bounds on its performance.
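The expectation-transformer view can be made concrete with a tiny weakest pre-expectation interpreter. The sketch below is a generic illustration of the calculus, representing expectations as functions from states to reals; it is not the paper's abstraction machinery, and the combinator names are invented for this example.

```python
def wp_assign(var, expr):
    """Pre-expectation transformer for 'var := expr(state)':
    substitute the assigned value into the post-expectation."""
    def transform(post):
        return lambda s: post({**s, var: expr(s)})
    return transform

def wp_pchoice(p, left, right):
    """Pre-expectation transformer for the probabilistic choice
    'left [p] right': the p-weighted average of both branches."""
    def transform(post):
        lp, rp = left(post), right(post)
        return lambda s: p * lp(s) + (1 - p) * rp(s)
    return transform

# program: { x := x + 1 } [0.5] { x := 0 }, post-expectation: x
prog = wp_pchoice(0.5,
                  wp_assign("x", lambda s: s["x"] + 1),
                  wp_assign("x", lambda s: 0))
pre = prog(lambda s: s["x"])
pre({"x": 4})   # 0.5 * (4 + 1) + 0.5 * 0 = 2.5
```

Predicate abstraction in this setting replaces the concrete state `s` by a finite set of predicate valuations; the transformer properties the paper identifies characterise when this replacement loses no precision in the computed bounds.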
Explicit Model Checking of Very Large MDP using Partitioning and Secondary Storage
The applicability of model checking is hindered by the state space explosion
problem in combination with limited amounts of main memory. To extend its
reach, the large available capacities of secondary storage such as hard disks
can be exploited. Due to the specific performance characteristics of secondary
storage technologies, specialised algorithms are required. In this paper, we
present a technique to use secondary storage for probabilistic model checking
of Markov decision processes. It combines state space exploration based on
partitioning with a block-iterative variant of value iteration over the same
partitions for the analysis of probabilistic reachability and expected-reward
properties. A sparse matrix-like representation is used to store partitions on
secondary storage in a compact format. All file accesses are sequential, and
compression can be used without affecting runtime. The technique has been
implemented within the Modest Toolset. We evaluate its performance on several
benchmark models of up to 3.5 billion states. In the analysis of time-bounded
properties on real-time models, our method neutralises the state space
explosion induced by the time bound in its entirety. (The final publication is
available at Springer via http://dx.doi.org/10.1007/978-3-319-24953-7_1.)
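The block-iterative idea can be sketched in a few lines: value iteration for maximum probabilistic reachability that sweeps the state space one partition at a time. This is a simplified, in-memory analogue for illustration only; the paper's algorithm additionally streams each block's sparse-matrix representation from secondary storage, which this sketch does not model.

```python
def block_value_iteration(mdp, goal, blocks, tol=1e-10):
    """Pr_max(reach goal) by value iteration, sweeping the state
    space block by block (Gauss-Seidel style: updates within a
    sweep immediately use the freshest values)."""
    v = {s: 1.0 if s in goal else 0.0 for s in mdp}
    changed = True
    while changed:
        changed = False
        for block in blocks:               # one partition at a time
            for s in block:
                if s in goal:
                    continue
                best = max(sum(p * v[t] for t, p in dist)
                           for dist in mdp[s].values())
                if abs(best - v[s]) > tol:
                    changed = True
                v[s] = best
    return v

# toy MDP: from s0, action 'a' reaches goal g with probability 0.5
# and loops back otherwise; action 'b' moves to a losing sink.
mdp = {
    "s0": {"a": [("g", 0.5), ("s0", 0.5)], "b": [("sink", 1.0)]},
    "g": {"stay": [("g", 1.0)]},
    "sink": {"stay": [("sink", 1.0)]},
}
v = block_value_iteration(mdp, goal={"g"}, blocks=[["s0"], ["g", "sink"]])
```

Because each block is processed to completion before the next is touched, all memory traffic within a sweep is confined to one partition, which is exactly what makes sequential file access possible in the disk-based setting.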
Probabilistic Model-Based Safety Analysis
Model-based safety analysis approaches aim at finding critical failure
combinations by analysis of models of the whole system (i.e. software,
hardware, failure modes and environment). The advantage of these methods
compared to traditional approaches is that the analysis of the whole system
gives more precise results. Only a few model-based approaches have been applied
to answer quantitative questions in safety analysis, and they are often limited
to specific failure propagation models or limited types of failure modes, or
ignore system dynamics and behavior, because direct quantitative analysis uses
large amounts of computing resources. New achievements in the domain of
(probabilistic) model-checking now allow for overcoming this problem.
This paper shows how functional models based on synchronous parallel
semantics, which can be used for system design, implementation and qualitative
safety analysis, can be directly re-used for (model-based) quantitative safety
analysis. Accurate modeling of different types of probabilistic failure
occurrence is shown as well as accurate interpretation of the results of the
analysis. This allows for reliable and expressive assessment of the safety of a
system in early design stages.
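The kind of quantitative question such an analysis answers can be illustrated on a deliberately tiny synchronous model (an invented example, not one from the paper): a 2-out-of-2-redundant system whose hazard is the failure of both components, each failing independently with a fixed per-step probability.

```python
def hazard_probability(q, steps):
    """Probability that both components of a two-component
    redundant system have failed within 'steps' synchronous steps,
    each component failing independently with per-step probability
    q, failures being permanent (the hazard state is absorbing)."""
    dist = {2: 1.0, 1: 0.0, 0: 0.0}   # start with both components working
    for _ in range(steps):
        p2, p1, p0 = dist[2], dist[1], dist[0]
        dist = {
            2: p2 * (1 - q) ** 2,                  # both survive the step
            1: p2 * 2 * q * (1 - q) + p1 * (1 - q),  # exactly one working
            0: p2 * q ** 2 + p1 * q + p0,          # hazard: both failed
        }
    return dist[0]

hazard_probability(q=0.01, steps=100)
```

This transient-probability computation over the functional model is exactly what a probabilistic model checker automates for realistic systems, where the state space is far too large to enumerate by hand.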
Dependability Analysis of Control Systems using SystemC and Statistical Model Checking
Stochastic Petri nets are commonly used for modeling distributed systems in
order to study their performance and dependability. This paper proposes a
realization of stochastic Petri nets in SystemC for modeling large embedded
control systems. Then statistical model checking is used to analyze the
dependability of the constructed model. Our verification framework allows users
to express a wide range of useful properties to be verified, which we
illustrate through a case study.
The COMICS Tool - Computing Minimal Counterexamples for Discrete-time Markov Chains
This report presents the tool COMICS, which performs model checking and
generates counterexamples for DTMCs. For an input DTMC, COMICS computes an
abstract system that carries the model checking information and uses this
result to compute a critical subsystem, which induces a counterexample. This
abstract subsystem can be refined and concretized hierarchically. The tool
comes with a command-line version as well as a graphical user interface that
allows the user to interactively influence the refinement process of the
counterexample.
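A common way to obtain such a critical subsystem, sketched below under simplifying assumptions, is to enumerate the most probable paths from the initial state to the target until their combined probability exceeds the violated bound; the states those paths visit induce the subsystem. This is an illustrative best-first search, not COMICS's hierarchical refinement algorithm, and it assumes states that cannot reach the target have been pruned (here, the absorbing `ok` state is simply omitted from the transition map, so paths entering it are dropped).

```python
import heapq

def critical_subsystem(dtmc, init, target, bound):
    """Enumerate paths from init to target in order of decreasing
    probability (paths stop at their first target visit, so they
    are disjoint events and their probabilities may be summed),
    until the total exceeds 'bound'. The visited states induce a
    critical subsystem witnessing the violation of P <= bound."""
    heap = [(-1.0, [init])]          # (negated path probability, path)
    total, states = 0.0, set()
    while heap and total <= bound:
        neg_p, path = heapq.heappop(heap)
        s = path[-1]
        if s == target:
            total += -neg_p          # disjoint event: add its probability
            states.update(path)
            continue
        for succ, p in dtmc.get(s, []):   # states absent from the map
            if p > 0:                     # are absorbing: path is dropped
                heapq.heappush(heap, (neg_p * p, path + [succ]))
    return total, states

# toy DTMC violating P(reach 'bad') <= 0.3 (true probability: 0.5)
dtmc = {
    "s0": [("s1", 0.6), ("s2", 0.4)],
    "s1": [("bad", 0.5), ("ok", 0.5)],
    "s2": [("bad", 0.5), ("ok", 0.5)],
}
prob, sub = critical_subsystem(dtmc, "s0", "bad", bound=0.3)
```

The returned state set already exceeds the probability bound on its own, which is what makes it a counterexample: any extension of it violates the property as well.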