
    Contributions to Statistical Model Checking

    Statistical Model Checking (SMC) is a powerful and widely used approach that consists in estimating the probability for a system to satisfy a temporal property. This is done by monitoring a finite number of executions of the system and then extrapolating the result using statistics. The answer is correct up to some confidence that can be parameterized by the user. It is known that SMC mitigates the state-space explosion problem and allows us to handle requirements that cannot be expressed in classical temporal logics. The approach has been implemented in several toolsets and successfully applied in a wide range of diverse areas such as systems biology, robotics, or automotive systems. Unfortunately, SMC is not a panacea, and many important classes of systems and properties are still out of its scope. In addition, SMC indirectly suffers from an explosion linked to the number of simulations needed to converge when estimating small probabilities. Finally, the approach has not yet been lifted to a professional toolset directly usable by industry. In this thesis we propose several contributions to increase the efficiency of SMC and to widen its applicability to a larger class of systems. We show how to extend SMC to estimate the probability of rare events, whose probability is so small that classical estimators such as Monte Carlo would almost always estimate it to be null. We then show how to apply SMC to systems that combine both non-deterministic and stochastic aspects. Contrary to existing work, we do not use a learning-based approach for the non-deterministic aspects, but rather exploit a smart sampling strategy. We then show that SMC can be extended to a new class of problems, namely detecting probability changes at runtime, which we solve by exploiting an algorithm from the signal processing area. We also propose an extension of SMC to real-time stochastic systems: we provide a stochastic semantics for such systems and show how to exploit it in a simulation-based approach. Finally, we consider an extension of the approach to Systems of Systems. Our results have been implemented in Plasma Lab, a powerful yet flexible toolset. The thesis illustrates the efficiency of this tool on several case studies, ranging from classical verification to more exotic applications such as robotics.
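    The core estimation loop described above is simple enough to sketch. The following Python fragment is only an illustrative Monte Carlo estimator, not the thesis's Plasma Lab implementation: the number of runs is derived from the Chernoff-Hoeffding bound for a user-chosen precision and confidence, and `run_and_check` is an assumed name for any routine that simulates one execution and checks the bounded property on it.
```python
import math
import random

def smc_estimate(run_and_check, epsilon=0.01, delta=0.05):
    """Monte Carlo estimation of the probability that a bounded property holds.

    run_and_check -- simulates one execution and returns True iff the
                     property holds on that execution (assumed name).
    epsilon       -- absolute error tolerated on the estimate.
    delta         -- probability of exceeding that error.
    """
    # Chernoff-Hoeffding bound: N >= ln(2/delta) / (2 * epsilon^2) runs
    # guarantee P(|estimate - p| > epsilon) <= delta.
    n = math.ceil(math.log(2.0 / delta) / (2.0 * epsilon ** 2))
    successes = sum(1 for _ in range(n) if run_and_check())
    return successes / n, n

if __name__ == "__main__":
    # Toy stand-in for a real simulator: the property holds on ~30% of runs.
    p_hat, runs = smc_estimate(lambda: random.random() < 0.3)
    print(f"{runs} simulations, estimated probability ~ {p_hat:.3f}")
```
    As the abstract stresses, this naive estimator is hopeless for rare events: with the precision fixed, an event whose probability is far below it is almost always estimated as zero, which is what the rare-event contribution of the thesis addresses.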

    Bayesian Verification under Model Uncertainty

    Machine learning enables systems to build and update domain models based on runtime observations. In this paper, we study statistical model checking and runtime verification for systems with this ability. Two challenges arise: (1) models built from limited runtime data carry uncertainty that has to be dealt with, and (2) there is no definition of satisfaction with respect to uncertain hypotheses. We propose such a definition of subjective satisfaction based on recently introduced satisfaction functions. We also propose the BV algorithm as a Bayesian solution to runtime verification of subjective satisfaction under model uncertainty. BV provides user-definable stochastic bounds on type I and type II errors. We discuss empirical results from an example application to illustrate our ideas. Comment: Accepted at SEsCPS @ ICSE 201
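    To fix ideas about Bayesian acceptance with user-definable error bounds, here is a generic sequential Bayesian test in Python. It is not the paper's BV algorithm (which additionally handles model uncertainty through satisfaction functions); it merely updates a Beta prior on the unknown satisfaction probability and stops once the posterior puts enough mass on one hypothesis. The names `run_and_check`, `theta`, and `certainty` are assumptions made for the sketch.
```python
import random

def bayesian_verify(run_and_check, theta, prior_a=1.0, prior_b=1.0,
                    certainty=0.99, max_runs=5000, posterior_samples=2000):
    """Sequentially decide H1: P(phi) >= theta vs H0: P(phi) < theta.

    A Beta(prior_a, prior_b) prior on the unknown satisfaction probability
    is updated after each simulated run; we stop as soon as the posterior
    puts at least `certainty` mass on one hypothesis.  Illustrative only.
    """
    a, b = prior_a, prior_b
    mass = 0.5
    for n in range(1, max_runs + 1):
        if run_and_check():
            a += 1
        else:
            b += 1
        # Posterior P(p >= theta), estimated by sampling the Beta posterior.
        mass = sum(random.betavariate(a, b) >= theta
                   for _ in range(posterior_samples)) / posterior_samples
        if mass >= certainty:
            return "accept H1", n
        if 1.0 - mass >= certainty:
            return "accept H0", n
    return "undecided", max_runs

if __name__ == "__main__":
    verdict, runs = bayesian_verify(lambda: random.random() < 0.9, theta=0.8)
    print(verdict, "after", runs, "runs")
```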

    A Modeling Framework for Schedulability Analysis of Distributed Avionics Systems

    This paper presents a modeling framework for schedulability analysis of distributed integrated modular avionics (DIMA) systems, which consist of spatially distributed ARINC-653 modules connected by a unified AFDX network. We model a DIMA system as a set of stopwatch automata (SWA) in UPPAAL and analyze its schedulability by classical model checking (MC) and statistical model checking (SMC). The framework has been designed to enable three types of analysis: global SMC, global MC, and compositional MC. This supports an effective methodology combining (1) quick schedulability falsification using global SMC analysis, (2) direct schedulability proofs using global MC analysis in simple cases, and (3) strict schedulability proofs using compositional MC analysis for larger state spaces. The framework is applied to the analysis of a concrete DIMA system. Comment: In Proceedings MARS/VPT 2018, arXiv:1803.0866
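    The "quick falsification by simulation" step can be illustrated with a deliberately simplified sketch: a fixed-priority preemptive scheduler simulated under random release jitter, reporting how often a deadline miss is observed. The task set, jitter model, and numbers below are invented for illustration and are unrelated to the paper's UPPAAL stopwatch-automata models or its concrete DIMA case study.
```python
import random

# Hypothetical periodic task set (period, worst-case execution time),
# listed by increasing period so the index is the rate-monotonic priority.
# Utilization is deliberately high so misses are plausible under jitter.
TASKS = [(5, 2), (10, 3), (20, 6)]

def deadline_missed(horizon=200, max_jitter=1):
    """Simulate fixed-priority preemptive scheduling with random release
    jitter for `horizon` time units; return True iff some job misses the
    deadline given by its period."""
    jobs = []  # each job: [priority, release time, absolute deadline, remaining work]
    for t in range(horizon):
        for prio, (period, wcet) in enumerate(TASKS):
            if t % period == 0:
                jobs.append([prio, t + random.randint(0, max_jitter),
                             t + period, wcet])
        ready = [j for j in jobs if j[1] <= t and j[3] > 0]
        if ready:
            min(ready)[3] -= 1          # run the highest-priority ready job
        if any(j[3] > 0 and j[2] <= t + 1 for j in jobs):
            return True                  # some unfinished job is past its deadline
    return False

if __name__ == "__main__":
    runs = 1000
    misses = sum(deadline_missed() for _ in range(runs))
    print(f"deadline miss observed in {misses}/{runs} random runs")
```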

    Simulator Semantics for System Level Formal Verification

    Many simulation-based Bounded Model Checking approaches to System Level Formal Verification (SLFV) have been devised. Typically, such approaches exploit the capability of simulators to save computation time by saving and restoring the state of the system under simulation. However, even though such approaches aim at (bounded) formal verification, the simulator behaviour is not formally modelled, and the proof of correctness of the proposed approaches essentially relies on an intuitive notion of simulator behaviour. This gap makes it hard to check whether the optimisations introduced to speed up the simulation omit checking relevant behaviours of the system under verification. The aim of this paper is to fill this gap by presenting a formal semantics for simulators. Comment: In Proceedings GandALF 2015, arXiv:1509.0685
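    The simulator capability exploited here, stepping plus state save/restore so that scenarios sharing a prefix are simulated only once, can be pictured with a small Python sketch. This is an informal illustration under assumed names (`Simulator`, `step`, `save`, `restore`), not the formal semantics defined in the paper.
```python
import copy

class Simulator:
    """A stepping engine whose state can be saved and later restored,
    so that simulation scenarios sharing a common prefix need to be
    simulated only once (illustrative sketch)."""

    def __init__(self, initial_state, transition):
        self.state = initial_state
        self.transition = transition   # state x disturbance -> state
        self._store = {}

    def step(self, disturbance):
        self.state = self.transition(self.state, disturbance)

    def save(self, label):
        self._store[label] = copy.deepcopy(self.state)

    def restore(self, label):
        self.state = copy.deepcopy(self._store[label])

if __name__ == "__main__":
    # Toy use: explore two scenarios that share the disturbance prefix [1, 1].
    sim = Simulator(0, lambda s, d: s + d)
    for d in [1, 1]:
        sim.step(d)
    sim.save("prefix")
    for suffix in ([2, 2], [3, 3]):
        sim.restore("prefix")
        for d in suffix:
            sim.step(d)
        print(suffix, "->", sim.state)
```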

    Verification of interlocking systems using statistical model checking

    In the railway domain, an interlocking is the system that ensures safe train traffic inside a station by controlling its active elements, such as signals or points. Modern interlockings are configured using particular data, called application data, reflecting the track layout and defining the actions that the interlocking can take. The safety of the train traffic thereby relies on the correctness of the application data: errors in them can cause safety issues such as derailments or collisions. Given the high level of safety required of such a system, its verification is a critical concern. In addition to safety, an interlocking must also satisfy availability properties, stating that no train will be stopped forever in a station. Most of the research dealing with this verification relies on model checking. However, due to the state-space explosion problem, this approach does not scale to large stations. More recently, a discrete-event simulation approach, limiting the verification to a set of likely scenarios, was proposed. The simulation enables the verification of larger stations, but with no proof that all the interesting scenarios are covered. In this paper, we apply an intermediate statistical model checking approach, offering the advantages of both model checking and simulation. Even if exhaustiveness is not obtained, statistical model checking evaluates, with a parametrizable confidence, the reliability and the availability of the entire system. Comment: 12 pages, 3 figures, 2 tables
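    One standard statistical model checking procedure that delivers this kind of parametrizable confidence is Wald's sequential probability ratio test; a generic Python sketch is given below purely to make the idea concrete. It is not claimed to be the algorithm or tool used in the paper, and `run_and_check` is an assumed name for one simulated scenario checked against a safety or availability property.
```python
import math
import random

def sprt(run_and_check, theta, delta=0.02, alpha=0.05, beta_err=0.05,
         max_runs=1_000_000):
    """Wald's sequential probability ratio test: decide
    H0: p >= theta + delta against H1: p <= theta - delta with
    type I / type II error bounds alpha and beta_err."""
    p0, p1 = theta + delta, theta - delta
    accept_h1 = math.log((1 - beta_err) / alpha)
    accept_h0 = math.log(beta_err / (1 - alpha))
    llr = 0.0                                   # log likelihood ratio H1 vs H0
    for n in range(1, max_runs + 1):
        if run_and_check():
            llr += math.log(p1 / p0)
        else:
            llr += math.log((1 - p1) / (1 - p0))
        if llr >= accept_h1:
            return f"accept H1: p <= {p1:.3f}", n
        if llr <= accept_h0:
            return f"accept H0: p >= {p0:.3f}", n
    return "undecided", max_runs

if __name__ == "__main__":
    # Toy availability check: a random run leaves no train blocked ~99% of the time.
    print(sprt(lambda: random.random() < 0.99, theta=0.95))
```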

    Specification and Verification of Distributed Embedded Systems: A Traffic Intersection Product Family

    Distributed embedded systems (DESs) are no longer the exception; they are the rule in many application areas such as avionics, the automotive industry, traffic systems, sensor networks, and medical devices. Formal DES specification and verification is challenging due to state-space explosion and the need to support real-time features. This paper reports on an extensive industry-based case study involving a DES product family for a pedestrian and car 4-way traffic intersection, in which autonomous devices communicate by asynchronous message passing without a centralized controller. All the safety requirements and a liveness requirement informally specified in the requirements document have been formally verified using Real-Time Maude and its model checking features. Comment: In Proceedings RTRTS 2010, arXiv:1009.398
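    As a toy counterpart to the kind of safety property verified in such a case study (conflicting directions are never green simultaneously), the sketch below performs explicit-state reachability over a crude, untimed Python abstraction of two crossing signals coordinated by a token. It is purely illustrative and is not the paper's Real-Time Maude model; all names and the token protocol are invented for the example.
```python
from collections import deque

# Untimed toy abstraction: each light is 'R' or 'G'; a light may only turn
# green after acquiring a shared token and releases it when turning red.

def successors(state):
    ns, ew, token = state              # token in {'NS', 'EW', 'free'}
    succ = []
    if ns == 'R' and token == 'free':
        succ.append(('G', ew, 'NS'))   # NS acquires the token and goes green
    if ns == 'G':
        succ.append(('R', ew, 'free')) # NS goes red and releases the token
    if ew == 'R' and token == 'free':
        succ.append((ns, 'G', 'EW'))
    if ew == 'G':
        succ.append((ns, 'R', 'free'))
    return succ

def check_safety():
    """Exhaustive reachability: the two lights are never green together."""
    init = ('R', 'R', 'free')
    seen, frontier = {init}, deque([init])
    while frontier:
        s = frontier.popleft()
        if s[0] == 'G' and s[1] == 'G':
            return False, s            # counterexample state
        for t in successors(s):
            if t not in seen:
                seen.add(t)
                frontier.append(t)
    return True, len(seen)             # safe, with the number of reachable states

if __name__ == "__main__":
    print(check_safety())
```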

    On minimising the maximum expected verification time

    Cyber-Physical Systems (CPSs) consist of hardware and software components. To verify that the whole (i.e., software + hardware) system meets the given specifications, exhaustive simulation-based approaches (Hardware In the Loop Simulation, HILS) can be effectively used by first generating all relevant simulation scenarios (i.e., sequences of disturbances) and then actually simulating all of them (the verification phase). Within the whole verification activity, this verification phase is repeated until no error is found. Accordingly, to minimise the time taken by the whole verification activity, each verification phase should, ideally, start by simulating the scenarios witnessing errors (counterexamples). Of course, knowing this set of scenarios beforehand is not feasible. In this paper we show how to select scenarios so as to minimise the Worst Case Expected Verification Time.
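    One classical way to bias a verification phase toward early counterexamples, offered here only as background and not as the paper's algorithm, is an index rule: if each scenario has an estimated simulation cost and a rough prior probability of exposing an error, running scenarios in decreasing probability-to-cost order minimises the expected time until the first counterexample under an independence assumption. A minimal sketch with invented scenario data follows.
```python
def order_scenarios(scenarios):
    """Order simulation scenarios before a verification phase.

    Each scenario is (name, cost, p_fail), where `cost` is its estimated
    simulation time and `p_fail` a rough prior probability that it exposes
    an error.  Sorting by decreasing p_fail/cost is a classical index rule
    for minimising the expected time to the first counterexample; it is an
    illustration, not the ordering derived in the paper.
    """
    return sorted(scenarios, key=lambda s: s[2] / s[1], reverse=True)

if __name__ == "__main__":
    scenarios = [("nominal", 10.0, 0.01),        # hypothetical data
                 ("sensor-glitch", 4.0, 0.05),
                 ("double-fault", 30.0, 0.20)]
    for name, cost, p_fail in order_scenarios(scenarios):
        print(f"{name}: cost={cost}, p_fail={p_fail}")
```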

    Modelling and analyzing adaptive self-assembling strategies with Maude

    Building adaptive systems with predictable emergent behavior is a challenging task, and it is becoming a critical need. The research community has taken up the challenge by introducing approaches of various natures: from software architectures, to programming paradigms, to analysis techniques. We recently proposed a conceptual framework for adaptation centered around the role of control data. In this paper we show that it can be naturally realized in a reflective logical language like Maude by using the Reflective Russian Dolls model. Moreover, we exploit this model to specify and analyse a prominent example of adaptive systems: robot swarms equipped with obstacle-avoidance self-assembly strategies. The analysis exploits the statistical model checker PVeStA.
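    The "control data" idea can be pictured with a small Python sketch in which a meta-level manager adapts a base-level robot solely by rewriting its control data, never its code. This is a loose illustration of the conceptual framework under invented names, not the Maude / Reflective Russian Dolls realization used in the paper.
```python
class Robot:
    """Base level: behaviour is a function of sensor input and of the
    current control data (here, the active strategy and its parameters)."""
    def __init__(self, control_data):
        self.control_data = control_data

    def step(self, obstacle_ahead):
        if self.control_data["strategy"] == "straight":
            return "move forward"
        return "turn " + self.control_data["turn_direction"]

class AdaptationManager:
    """Meta level (the outer 'doll'): observes the base level and adapts it
    purely by rewriting its control data."""
    def adapt(self, robot, obstacle_ahead):
        if obstacle_ahead and robot.control_data["strategy"] == "straight":
            robot.control_data = {"strategy": "avoid", "turn_direction": "left"}
        elif not obstacle_ahead:
            robot.control_data = {"strategy": "straight"}

if __name__ == "__main__":
    robot, manager = Robot({"strategy": "straight"}), AdaptationManager()
    for sensed in (False, True, True, False):
        manager.adapt(robot, sensed)
        print("obstacle:", sensed, "->", robot.step(sensed))
```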