Quantitative Verification: Formal Guarantees for Timeliness, Reliability and Performance
Computerised systems appear in almost all aspects of our daily lives, often in safety-critical scenarios such as embedded control systems in cars and aircraft, or medical devices such as pacemakers and sensors. We are thus increasingly reliant on these systems working correctly, despite often operating in unpredictable or unreliable environments. Designers of such devices need ways to guarantee that they will operate in a reliable and efficient manner.
Quantitative verification is a technique for analysing quantitative aspects of a system's design, such as timeliness, reliability or performance. It applies formal methods, based on a rigorous analysis of a mathematical model of the system, to automatically prove certain precisely specified properties, e.g. "the airbag will always deploy within 20 milliseconds after a crash" or "the probability of both sensors failing simultaneously is less than 0.001".
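To make the flavour of such a check concrete, here is a minimal Python sketch of the kind of computation a probabilistic model checker performs: bounded reachability in a small discrete-time Markov chain, solved by value iteration. The three-state chain, its transition probabilities, the 100-step horizon and the bound are illustrative assumptions, not taken from the report.

```python
# States: 0 = ok, 1 = degraded, 2 = fail (absorbing).
# Rows are per-state transition probability distributions.
P = [
    [0.990, 0.009, 0.001],  # from "ok"
    [0.100, 0.850, 0.050],  # from "degraded"
    [0.000, 0.000, 1.000],  # "fail" is absorbing
]

def reach_prob(P, target, steps):
    """Probability of reaching `target` within `steps` transitions,
    computed by value iteration over the chain."""
    n = len(P)
    x = [1.0 if s == target else 0.0 for s in range(n)]
    for _ in range(steps):
        x = [1.0 if s == target else sum(P[s][t] * x[t] for t in range(n))
             for s in range(n)]
    return x

# Check a bounded property in the style quoted above.
p = reach_prob(P, target=2, steps=100)[0]
print(f"P(fail within 100 steps | start ok) = {p:.4f}")
print("bound p < 0.001:", "holds" if p < 0.001 else "violated")
```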
The ability to formally guarantee quantitative properties of this kind is beneficial across a wide range of application domains. For example, in safety-critical systems, it may be essential to establish credible bounds on the probability with which certain failures or combinations of failures can occur. In embedded control systems, it is often important to comply with strict constraints on timing or resources. More generally, being able to derive guarantees on precisely specified levels of performance or efficiency is a valuable tool in the design of, for example, wireless networking protocols, robotic systems or power management algorithms, to name but a few.
This report gives a short introduction to quantitative verification, focusing in particular on a widely used technique called model checking, and its generalisation to the analysis of quantitative aspects of a system such as timing, probabilistic behaviour or resource usage.
The intended audience is industrial designers and developers of systems such as those highlighted above who could benefit from the application of quantitative verification, but lack expertise in formal verification or modelling.
On Formal Methods for Collective Adaptive System Engineering. Scalable Approximated, Spatial Analysis Techniques. Extended Abstract
In this extended abstract, a view on the role of formal methods in system engineering is briefly presented. Two examples of useful analysis techniques based on solid mathematical theories are then discussed, as well as the software tools which have been built to support them. The first technique is Scalable Approximated Population DTMC Model-checking; the second is Spatial Model-checking for Closure Spaces. Both techniques have been developed in the context of the EU-funded project QUANTICOL.
Comment: In Proceedings FORECAST 2016, arXiv:1607.0200
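As a rough illustration of the first technique, the sketch below shows the mean-field ("fast simulation") idea that underlies scalable approximated analysis of population DTMCs: rather than exploring the joint state space of N agents, one iterates the occupancy measure (the fraction of agents in each local state), which evolves deterministically in the large-population limit. The SIR-style model and its rates are illustrative assumptions, not the abstract's case study.

```python
def step(mu, alpha=0.3, beta=0.1):
    """One synchronous step of the limit occupancy vector.

    mu = (s, i, r): fractions of susceptible, infected, recovered agents.
    An S agent becomes I with probability alpha * i (a rate that depends
    on the occupancy measure itself); an I agent recovers with
    probability beta.
    """
    s, i, r = mu
    p_inf = alpha * i
    return (s * (1.0 - p_inf),
            i * (1.0 - beta) + s * p_inf,
            r + i * beta)

mu = (0.95, 0.05, 0.0)          # initial occupancy measure
for _ in range(50):
    mu = step(mu)               # deterministic limit dynamics
print("occupancy after 50 steps (S, I, R):",
      tuple(round(x, 4) for x in mu))
```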
Formal Verification of Probabilistic SystemC Models with Statistical Model Checking
Transaction-level modeling with SystemC has been very successful in describing the behavior of embedded systems by providing high-level executable models, many of which have inherent probabilistic behaviors, e.g., random data and unreliable components. It is thus crucial to have both quantitative and qualitative analysis of the probabilities of system properties. Such analysis can be conducted by constructing a formal model of the system under verification and using Probabilistic Model Checking (PMC). However, this method is infeasible for large systems, due to the state space explosion. In this article, we demonstrate the successful use of Statistical Model Checking (SMC) to carry out such analysis directly from large SystemC models and to allow designers to express a wide range of useful properties. The first contribution of this work is a framework to verify properties expressed in Bounded Linear Temporal Logic (BLTL) for SystemC models with both timed and probabilistic characteristics. Second, the framework allows users to expose a rich set of user-code primitives as atomic propositions in BLTL. Moreover, users can define their own fine-grained time resolution rather than the boundary of clock cycles in the SystemC simulation. The third contribution is an implementation of a statistical model checker. It contains an automatic monitor generation for producing execution traces of the model-under-verification (MUV), a mechanism for automatically instrumenting the MUV, and the interaction with statistical model checking algorithms.
Comment: Journal of Software: Evolution and Process. Wiley, 2017. arXiv admin note: substantial text overlap with arXiv:1507.0818
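The core statistical idea is easy to sketch: sample finitely many execution traces of the model, check the property on each, and bound the estimation error. The Python sketch below uses the Chernoff-Hoeffding bound to fix the number of samples for a given precision and confidence; the trace generator is a hypothetical stand-in for instrumented SystemC simulation runs, not the paper's implementation.

```python
import math
import random

def sample_size(epsilon, delta):
    """Chernoff-Hoeffding: enough samples so that
    P(|estimate - p| >= epsilon) <= delta."""
    return math.ceil(math.log(2.0 / delta) / (2.0 * epsilon ** 2))

def run_trace():
    """Hypothetical stand-in for one monitored simulation run of the
    model under verification; returns True iff the bounded property
    held on the generated trace."""
    return random.random() < 0.97

n = sample_size(epsilon=0.01, delta=0.01)   # about 26,492 traces
estimate = sum(run_trace() for _ in range(n)) / n
print(f"estimated satisfaction probability: {estimate:.3f} ({n} traces)")
```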
BioScript: programming safe chemistry on laboratories-on-a-chip
This paper introduces BioScript, a domain-specific language (DSL) for programmable biochemistry which executes on emerging microfluidic platforms. The goal of this research is to provide a simple, intuitive, and type-safe DSL that is accessible to life science practitioners. The novel feature of the language is its syntax, which aims to optimize human readability; the technical contributions of the paper include the BioScript type system and relevant portions of its compiler. The type system ensures that certain types of errors, specific to biochemistry, do not occur, including the interaction of chemicals that may be unsafe. The compiler includes novel optimizations that place biochemical operations to execute concurrently on a spatial 2D array platform at the granularity of the control flow graph, as opposed to individual basic blocks. Results are obtained using both a cycle-accurate microfluidic simulator and a software interface to a real-world platform.
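To illustrate the kind of guarantee such a type system gives, here is a minimal Python sketch of a `mix` rule that rejects chemically unsafe combinations by consulting a reactivity table before the operation is accepted. The chemical groups and the table itself are illustrative assumptions, not BioScript's actual classification or API.

```python
# Hypothetical reactivity table: pairs of chemical groups whose
# interaction the checker must reject.
UNSAFE = {
    frozenset({"acid", "base"}),
    frozenset({"oxidizer", "flammable"}),
}

def mix(a_type, b_type):
    """Type rule for `mix`: reject statically if the pair is unsafe."""
    if frozenset({a_type, b_type}) in UNSAFE:
        raise TypeError(f"unsafe interaction: {a_type} + {b_type}")
    return f"mixture({a_type}, {b_type})"

print(mix("buffer", "sample"))        # accepted
try:
    mix("acid", "base")               # rejected by the checker
except TypeError as err:
    print("rejected:", err)
```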
Fault diagnostic instrumentation design for environmental control and life support systems
As a development phase moves toward flight hardware, system availability becomes an important design aspect, requiring high reliability and maintainability. As part of continuous development efforts, a program to evaluate, design, and demonstrate advanced instrumentation fault diagnostics was successfully completed. Fault-tolerance designs for reliability, and other instrumentation capabilities to increase maintainability, were evaluated and studied.
Fault-tolerant computer study
A set of building-block circuits is described which can be used with commercially available microprocessors and memories to implement fault-tolerant distributed computer systems. Each building-block circuit is intended for VLSI implementation as a single chip. Several building blocks and associated processor and memory chips form a self-checking computer module with self-contained input/output and interfaces to redundant communication buses. Fault tolerance is achieved by connecting self-checking computer modules into a redundant network in which backup buses and computer modules are provided to circumvent failures. The requirements and design methodology which led to the definition of the building-block circuits are discussed.
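As a rough sketch of the self-checking idea, the Python below runs each computation on two independent copies and compares the results; a module that detects a mismatch declares itself failed so that a redundant backup can take over. The class names and the injected fault are illustrative, not the building-block design described above.

```python
class SelfCheckingModule:
    """Computes each result on two independent copies and compares."""

    def __init__(self, fn, twin_fn):
        self.fn = fn            # primary copy of the computation
        self.twin_fn = twin_fn  # independent duplicate
        self.failed = False

    def compute(self, x):
        a, b = self.fn(x), self.twin_fn(x)
        if a != b:              # disagreement: flag the fault, stay silent
            self.failed = True
            return None
        return a

def with_backup(modules, x):
    """Try each redundant module in turn, skipping self-detected faults."""
    for m in modules:
        result = m.compute(x)
        if not m.failed:
            return result
    raise RuntimeError("all redundant modules have failed")

square = lambda x: x * x
primary = SelfCheckingModule(square, lambda x: x * x + 1)  # injected fault
backup = SelfCheckingModule(square, square)
print(with_backup([primary, backup], 7))   # 49, served by the backup
```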