Anytime system level verification via parallel random exhaustive hardware in the loop simulation
System level verification of cyber-physical systems has the goal of verifying that the whole (i.e., software + hardware) system meets the given specifications. Model checkers for hybrid systems cannot handle system level verification of actual systems. Thus, Hardware In the Loop Simulation (HILS) is currently the main workhorse for system level verification. By using model checking driven exhaustive HILS, System Level Formal Verification (SLFV) can be effectively carried out for actual systems.
We present a parallel random exhaustive HILS-based model checker for hybrid systems that, by simulating all operational scenarios exactly once in a uniform random order, can provide, at any time during the verification process, an upper bound on the probability that the System Under Verification exhibits an error in a yet-to-be-simulated scenario (the Omission Probability).
We show the effectiveness of the proposed approach by presenting experimental results on SLFV of the Inverted Pendulum on a Cart and the Fuel Control System examples from the Simulink distribution. To the best of our knowledge, no previously published model checker can exhaustively verify hybrid systems of this size while providing, at any time, an upper bound on the Omission Probability.
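As a rough illustration of the idea (not the paper's actual algorithm), the sketch below simulates each scenario exactly once in a uniform random order and reports, after each run, a simple anytime bound of this flavour: under a uniform prior placing a single faulty scenario among the n scenarios, the chance that it has not yet been drawn after k runs is (n - k)/n. The scenario set, the `simulate` callback, and the bound itself are hypothetical placeholders, not the paper's definitions.

```python
import random

def anytime_verification(scenarios, simulate):
    """Simulate every scenario exactly once, in uniform random order.

    After each simulation, print an illustrative Omission Probability
    bound: with a single faulty scenario placed uniformly among n
    scenarios, the chance it has not yet been drawn after k runs is
    (n - k) / n.  (The paper derives its own, sharper bound.)
    """
    order = list(scenarios)
    random.shuffle(order)          # uniform random simulation order
    n = len(order)
    for k, scenario in enumerate(order, start=1):
        if simulate(scenario):     # returns True on an error trace
            return f"error found in scenario {scenario!r}"
        print(f"{k}/{n} scenarios checked, OP bound <= {(n - k) / n:.3f}")
    return "all scenarios passed (Omission Probability = 0)"

# Hypothetical usage: 100 operational scenarios, none erroneous.
print(anytime_verification(range(100), lambda s: False))
```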
Simulator Semantics for System Level Formal Verification
Many simulation-based Bounded Model Checking approaches to System Level Formal Verification (SLFV) have been devised. Typically, such approaches exploit the capability of simulators to save computation time by saving and restoring the state of the system under simulation. However, even though such approaches aim at (bounded) formal verification, the simulator behaviour is not formally modelled, and the correctness proofs of the proposed approaches basically rely on an intuitive notion of simulator behaviour. This gap makes it hard to check whether the optimisations introduced to speed up the simulation omit checking relevant behaviours of the system under verification.
The aim of this paper is to fill the above gap by presenting a formal semantics for simulators. (In Proceedings GandALF 2015, arXiv:1509.0685)
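To make the save/restore capability concrete, here is a minimal, hypothetical simulator interface of the kind such a semantics would formalise. The command names (`run`, `save`, `restore`) and the state representation are illustrative, not the paper's definitions.

```python
import copy

class Simulator:
    """Toy simulator whose observable behaviour is three commands:
    run(t) advances the simulated system, save(l) checkpoints the
    current state under label l, and restore(l) rewinds to a saved
    checkpoint.  A formal semantics pins down exactly these
    state transitions, so that optimisations reusing checkpoints
    can be proved not to skip behaviours."""

    def __init__(self, initial_state):
        self.state = initial_state
        self._checkpoints = {}

    def run(self, horizon, step):
        for _ in range(horizon):
            self.state = step(self.state)  # one step of the dynamics
        return self.state

    def save(self, label):
        self._checkpoints[label] = copy.deepcopy(self.state)

    def restore(self, label):
        self.state = copy.deepcopy(self._checkpoints[label])

# Hypothetical usage: explore two input branches from a shared prefix
# without re-simulating the prefix.
sim = Simulator(initial_state=0)
sim.run(5, step=lambda s: s + 1)    # shared prefix
sim.save("fork")
branch_a = sim.run(3, step=lambda s: s + 2)
sim.restore("fork")
branch_b = sim.run(3, step=lambda s: s - 2)
print(branch_a, branch_b)           # 11 -1
```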
SOTER: A Runtime Assurance Framework for Programming Safe Robotics Systems
The recent drive towards greater autonomy and intelligence in robotics has led to high levels of complexity. Autonomous robots increasingly depend on third-party off-the-shelf components and complex machine-learning techniques. This trend makes it challenging to provide strong design-time certification of correct operation.
To address these challenges, we present SOTER, a robotics programming framework with two key components: (1) a programming language for implementing and testing high-level reactive robotics software, and (2) an integrated runtime assurance (RTA) system that helps enable the use of uncertified components while still providing safety guarantees. SOTER provides language primitives to declaratively construct an RTA module consisting of an advanced, high-performance controller (uncertified), a safe, lower-performance controller (certified), and the desired safety specification. The framework formally guarantees that a well-formed RTA module always satisfies the safety specification without completely sacrificing performance, by using the higher-performance uncertified component whenever it is safe to do so. SOTER allows the complex robotics software stack to be constructed as a composition of RTA modules, where each uncertified component is protected by an RTA module.
To demonstrate the efficacy of our framework, we consider a real-world case study of building a safe drone surveillance system. Our experiments, both in simulation and on actual drones, show that the SOTER-enabled RTA ensures the safety of the system, including when untrusted third-party components have bugs or deviate from the desired behavior.
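The RTA pattern described above is essentially the classic Simplex architecture; the sketch below shows the switching logic in schematic form. The controller interfaces, the `is_safe` monitor, and the toy 1-D dynamics are hypothetical, not SOTER's actual language primitives.

```python
def rta_step(state, setpoint, advanced, certified, is_safe):
    """One step of a runtime-assurance (Simplex-style) module: use the
    uncertified controller's output only if the monitor predicts it
    keeps the system safe; otherwise the certified controller, which
    is verified to maintain safety from any safe state, takes over."""
    action = advanced(state, setpoint)
    if not is_safe(state, action):      # monitor predicts a violation
        action = certified(state, setpoint)
    return state + action               # toy dynamics: pos += velocity

# Hypothetical 1-D example: the safety spec is |position| <= 10.
advanced  = lambda s, sp: 3.0 * (sp - s)           # aggressive, uncertified
certified = lambda s, sp: max(-1, min(1, sp - s))  # slow but certified
is_safe   = lambda s, a: abs(s + a) <= 10          # one-step lookahead monitor

state = 0.0
for target in [8, 12, 4]:       # targets 8 and 12 trigger the fallback
    state = rta_step(state, target, advanced, certified, is_safe)
    print(f"target={target}, state={state}")
```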
Using formal methods to support testing
Formal methods and testing are two important approaches that assist in the development of high-quality software. While traditionally these approaches have been seen as rivals, in recent years a new consensus has developed in which they are seen as complementary. This article reviews the state of the art regarding ways in which the presence of a formal specification can be used to assist testing.
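As a minimal illustration of one such use (a formal specification acting as a test oracle for randomly generated inputs), consider the hypothetical sketch below; the precondition/postcondition style and the function under test are illustrative, not drawn from the article.

```python
import random

# A formal specification of integer square root, written as a
# precondition/postcondition pair, doubles as a test oracle.
pre  = lambda n: n >= 0
post = lambda n, r: r >= 0 and r * r <= n < (r + 1) * (r + 1)

def isqrt(n):                    # implementation under test
    r = 0
    while (r + 1) * (r + 1) <= n:
        r += 1
    return r

# Specification-driven random testing: generate inputs satisfying the
# precondition, run the implementation, check the postcondition.
for _ in range(1_000):
    n = random.randrange(10_000)
    assert pre(n)
    assert post(n, isqrt(n)), f"spec violated for n={n}"
print("1,000 spec-based tests passed")
```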
Time-Staging Enhancement of Hybrid System Falsification
Optimization-based falsification employs stochastic optimization algorithms to search for error inputs of hybrid systems. In this paper we introduce a simple idea to enhance falsification, namely time staging, that allows the time-causal structure of time-dependent signals to be exploited by the optimizers. Time staging consists of running a falsification solver multiple times, from one time interval to the next, incrementally constructing an input signal candidate. Our experiments show that time staging can dramatically increase performance on some realistic examples. We also present theoretical results that suggest the kinds of models and specifications for which time staging is likely to be effective.
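A schematic version of the idea (not the paper's implementation) might look like the sketch below: the per-stage search is a stand-in for a stochastic optimiser, and the system and robustness measure are hypothetical placeholders. Robustness below zero means the specification is falsified.

```python
import random

def falsify_time_staged(robustness, n_stages, samples_per_stage=200):
    """Incrementally build an input signal one time stage at a time.
    At each stage, keep the prefix found so far fixed and search only
    over the current stage's input value, keeping the candidate that
    minimises the robustness of the specification."""
    prefix = []
    for stage in range(n_stages):
        best_u, best_rob = None, float("inf")
        for _ in range(samples_per_stage):   # stand-in for an optimiser
            u = random.uniform(-1.0, 1.0)
            rob = robustness(prefix + [u])
            if rob < best_rob:
                best_u, best_rob = u, rob
        prefix.append(best_u)
        if best_rob < 0:
            return prefix, best_rob          # falsifying input found
    return prefix, best_rob

# Hypothetical system/spec: violated when the running sum of inputs
# exceeds 2; robustness = 2 - (maximum partial sum so far).
def robustness(signal):
    total, worst = 0.0, float("-inf")
    for u in signal:
        total += u
        worst = max(worst, total)
    return 2.0 - worst

print(falsify_time_staged(robustness, n_stages=5))
```

Because each stage is optimised with the earlier stages frozen, the search respects the time-causal structure of the signal instead of optimising the whole signal at once.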
Evaluating Model Testing and Model Checking for Finding Requirements Violations in Simulink Models
MATLAB/Simulink is a development and simulation language that is widely used by the Cyber-Physical System (CPS) industry to model dynamical systems. There are two mainstream approaches to verifying CPS Simulink models: model testing, which attempts to identify failures in models by executing them on a number of sampled test inputs, and model checking, which attempts to exhaustively check the correctness of models against some given formal properties. In this paper, we present an industrial Simulink model benchmark, provide a categorization of the different model types in the benchmark, describe the recurring logical patterns in the model requirements, and discuss the results of applying model checking and model testing to identify requirements violations in the benchmarked models. Based on the results, we discuss the strengths and weaknesses of model testing and model checking. Our results further suggest that model checking and model testing are complementary, and that by combining them we can significantly enhance the capabilities of each approach individually. We conclude by providing guidelines on how the two approaches can best be applied together. (10 pages + 2 page reference)
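In the terms used above, model testing amounts to a sampling loop like the hypothetical sketch below (the model, input distribution, and requirement check are placeholders), whereas model checking would explore the input space exhaustively up to a bound and can therefore also prove absence of violations.

```python
import random

def model_testing(model, requirement, sample_input, n_tests=1_000):
    """Model testing: execute the model on sampled inputs and report
    any input whose output violates the requirement."""
    for _ in range(n_tests):
        x = sample_input()
        if not requirement(x, model(x)):
            return x                  # concrete failure witness
    return None                       # no violation found (no proof!)

# Hypothetical model and requirement: output must stay below 100.
model       = lambda x: x * x
requirement = lambda x, y: y < 100
sample      = lambda: random.uniform(-20, 20)
print(model_testing(model, requirement, sample))
```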
Comparing Automated Unit Testing Strategies
Software testing plays a critical role in the software development lifecycle. Automated unit testing strategies allow a tester to execute a large number of test cases to detect faulty behaviours in a piece of software. Many different automated unit testing strategies can be applied to test a program. In order to better understand the relationship between these strategies, "explorative" strategies are defined as those which select unit tests by exploring a large search space with a relatively simple data structure. This thesis focuses on comparing three such explorative strategies: bounded-exhaustive, randomized, and a combined strategy. To compare the three strategies precisely, a test program is developed to provide a universal framework for generating and executing test cases; the test program implements all three strategies. In addition, we perform several experiments on the three strategies using the test program, and the experimental data is collected and analyzed to illustrate the relationships between them.
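As a hypothetical illustration of the two base strategies being compared (the input generators and the unit under test are placeholders, not the thesis's framework): bounded-exhaustive testing enumerates every input up to a size bound, while randomized testing samples from a much larger space.

```python
import itertools
import random

def bounded_exhaustive(bound):
    """Enumerate every pair of integers with components in [0, bound)."""
    yield from itertools.product(range(bound), repeat=2)

def randomized(n_tests, limit):
    """Sample n_tests pairs uniformly from a much larger space."""
    for _ in range(n_tests):
        yield (random.randrange(limit), random.randrange(limit))

def buggy_add(a, b):              # unit under test, with a seeded fault
    return a + b if (a, b) != (3, 4) else 0

def failures(tests):
    return [(a, b) for a, b in tests if buggy_add(a, b) != a + b]

print("bounded-exhaustive:", failures(bounded_exhaustive(5)))  # finds (3, 4)
print("randomized:", failures(randomized(25, 1_000)))          # likely misses it
```

A combined strategy in this spirit would run the bounded-exhaustive pass first and then continue with random sampling beyond the bound, trading the small-scope guarantee of the former against the reach of the latter.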