
    Towards Practical Graph-Based Verification for an Object-Oriented Concurrency Model

    To harness the power of multi-core and distributed platforms, and to make the development of concurrent software more accessible to software engineers, different object-oriented concurrency models such as SCOOP have been proposed. Despite the practical importance of analysing SCOOP programs, there are currently no general verification approaches that operate directly on program code without additional annotations. One reason for this is the multitude of partially conflicting semantic formalisations for SCOOP (either in theory or by implementation). Here, we propose a simple graph transformation system (GTS) based run-time semantics for SCOOP that captures the most common features of all known semantics of the language. This run-time model is implemented in the state-of-the-art GTS tool GROOVE, which allows us to simulate, analyse, and verify a subset of SCOOP programs with respect to deadlocks and other behavioural properties. Besides proposing the first approach to verify SCOOP programs by automatic translation to GTS, we also highlight our experiences of applying GTS (and especially GROOVE) for specifying semantics in the form of a run-time model, which should be transferable to GTS models for other concurrent languages and libraries. Comment: In Proceedings GaM 2015, arXiv:1504.0244
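
    The paper's run-time model is implemented in GROOVE; the Python sketch below is only a rough illustration of the underlying idea (states as graphs of labelled edges, rewrite rules as transitions, deadlocks as reachable states with no enabled rule). It explores a tiny invented lock-acquisition model, not actual SCOOP semantics.

        from collections import deque

        # A state is a frozenset of labelled edges (holder, "holds", region).
        # Two toy "processors" acquire two shared regions in opposite order and
        # release both once they hold both: the classic deadlock pattern.
        ORDERS = {"p1": ("r1", "r2"), "p2": ("r2", "r1")}

        def held_by(state, proc):
            return {r for p, _, r in state if p == proc}

        def holder_of(state, region):
            return next((p for p, _, r in state if r == region), None)

        def successors(state):
            """Apply every enabled rewrite rule once; yield the resulting graphs."""
            for proc, order in ORDERS.items():
                held = held_by(state, proc)
                if held == set(order):                          # release rule
                    yield frozenset(e for e in state if e[0] != proc)
                    continue
                wanted = next(r for r in order if r not in held)
                if holder_of(state, wanted) is None:            # acquire rule
                    yield state | {(proc, "holds", wanted)}

        def find_deadlocks(initial):
            """Breadth-first exploration of the reachable rewrite space."""
            seen, frontier, deadlocks = {initial}, deque([initial]), []
            while frontier:
                state = frontier.popleft()
                succs = list(successors(state))
                if not succs:
                    deadlocks.append(state)
                for nxt in succs:
                    if nxt not in seen:
                        seen.add(nxt)
                        frontier.append(nxt)
            return deadlocks

        for dead in find_deadlocks(frozenset()):
            print("deadlock:", sorted(dead))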

    Modeling a distributed Heterogeneous Communication System using Parametric Timed Automata

    In this report, we study the application of the Parametric Timed Automata (PTA) tool to a concrete case of a distributed Heterogeneous Communication System (HCS). The description and requirements of the HCS are presented, and the system modeling is explained in detail. The system models are developed in UPPAAL and validated by different test cases. Part of the system models is then converted into parametric timed automata, and schedulability checking is run to produce the schedulability regions.
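
    The report's models are parametric timed automata analysed in UPPAAL; as a much simpler, hedged illustration of what a schedulability region is, the sketch below sweeps two invented execution-time parameters of a toy preemptive fixed-priority task set and marks the grid points where exact response-time analysis succeeds.

        import math

        # Hypothetical task set: two periodic tasks on one processor, deadlines
        # equal to periods, task 1 has the higher priority.  The two unknown
        # parameters are the execution times C1 and C2; a grid sweep with exact
        # response-time analysis approximates the schedulability region.
        PERIODS = (10, 25)                        # T1, T2

        def response_time(i, costs, periods):
            """Fixed point of R = C_i + sum over higher-priority j of ceil(R/T_j)*C_j."""
            r = costs[i]
            while True:
                nxt = costs[i] + sum(math.ceil(r / periods[j]) * costs[j]
                                     for j in range(i))
                if nxt == r or nxt > periods[i]:  # converged, or deadline missed
                    return nxt
                r = nxt

        def schedulable(costs):
            return all(response_time(i, costs, PERIODS) <= PERIODS[i]
                       for i in range(len(costs)))

        # Print the region: '#' = schedulable, '.' = not, over integer C1, C2.
        for c2 in range(1, 16):
            row = "".join("#" if schedulable((c1, c2)) else "." for c1 in range(1, 10))
            print(f"C2={c2:2d}  {row}")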

    A Holistic Approach in Embedded System Development

    We present pState, a tool for developing "complex" embedded systems by integrating validation into the design process. The goal is to reduce validation time. To this end, qualitative and quantitative properties are specified in system models expressed as pCharts, an extended version of hierarchical state machines. These properties are specified in an intuitive way, such that they can be written by engineers who are domain experts without needing to be familiar with temporal logic. From the system model, executable code that preserves the verified properties is generated. The design is documented on the model, and the documentation is passed as comments into the generated code. On a series of examples, we illustrate how models and properties are specified using pState. Comment: In Proceedings F-IDE 2015, arXiv:1508.0338
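
    pState itself works on pCharts and generates code together with the verified properties; the following toy (an invented microwave controller, not pState output) only illustrates the basic idea of checking a qualitative safety property by exhaustively exploring a small state machine model.

        from collections import deque

        # Toy model in the spirit of a (here flattened) hierarchical state
        # machine: a microwave controller with a door region and an oven
        # region (all names invented).  The qualitative property checked on
        # the model is "the oven is never cooking while the door is open".
        EVENTS = ("open_door", "close_door", "start", "timeout")

        def step(state, event):
            door, oven = state
            if event == "open_door":
                return ("open", "idle")          # opening the door aborts cooking
            if event == "close_door":
                return ("closed", oven)
            if event == "start" and door == "closed":
                return (door, "cooking")
            if event == "timeout":
                return (door, "idle")
            return state                         # event has no effect here

        def reachable(initial):
            seen, frontier = {initial}, deque([initial])
            while frontier:
                state = frontier.popleft()
                for event in EVENTS:
                    nxt = step(state, event)
                    if nxt not in seen:
                        seen.add(nxt)
                        frontier.append(nxt)
            return seen

        bad = [s for s in reachable(("closed", "idle")) if s == ("open", "cooking")]
        print("property holds" if not bad else f"violated in {bad}")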

    A Multi-Core Solver for Parity Games

    We describe a parallel algorithm for solving parity games, with applications in, e.g., modal mu-calculus model checking with arbitrary alternations, and (branching) bisimulation checking. The algorithm is based on Jurdzinski's Small Progress Measures; in fact, it is a class of algorithms, parameterised by a selection heuristic. Our algorithm operates lock-free, and mostly wait-free (except for infrequent termination detection), and thus allows maximum parallelism. Additionally, we conserve memory by avoiding storage of predecessor edges for the parity graph through strictly forward-looking heuristics. We evaluate our multi-core implementation's behaviour on parity games obtained from mu-calculus model checking problems for a set of communication protocols, randomly generated problem instances, and parametric problem instances from the literature.
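
    A faithful lock-free Small Progress Measures implementation is beyond a short sketch; to make the problem itself concrete, the toy below solves small parity games sequentially with Zielonka's classical recursive algorithm instead (max-priority convention, invented example game).

        # A game maps each vertex to (owner, priority, successors); every vertex
        # has at least one successor.  Player 0 (Even) wins a play iff the
        # highest priority seen infinitely often is even.

        def attractor(game, verts, target, player):
            """Vertices in `verts` from which `player` can force a visit to `target`."""
            attr, changed = set(target), True
            while changed:
                changed = False
                for v in verts - attr:
                    owner, _, succ = game[v]
                    succ_in = [w for w in succ if w in verts]
                    if (owner == player and any(w in attr for w in succ_in)) or \
                       (owner != player and succ_in and all(w in attr for w in succ_in)):
                        attr.add(v)
                        changed = True
            return attr

        def zielonka(game, verts):
            """Winning regions (win0, win1) of the subgame induced by `verts`."""
            if not verts:
                return set(), set()
            d = max(game[v][1] for v in verts)
            player = d % 2
            top = {v for v in verts if game[v][1] == d}
            w = list(zielonka(game, verts - attractor(game, verts, top, player)))
            if not w[1 - player]:
                w[player] = set(verts)           # opponent wins nothing: player takes all
            else:
                trap = attractor(game, verts, w[1 - player], 1 - player)
                w = list(zielonka(game, verts - trap))
                w[1 - player] |= trap
            return w[0], w[1]

        # Tiny example game: vertex -> (owner, priority, successors).
        GAME = {
            "a": (0, 2, ("b",)),
            "b": (1, 1, ("a", "c")),
            "c": (0, 3, ("c", "b")),
        }
        win0, win1 = zielonka(GAME, set(GAME))
        print("Even wins from:", sorted(win0), " Odd wins from:", sorted(win1))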

    Towards the Correctness of Software Behavior in UML: A Model Checking Approach Based on Slicing

    Embedded systems are systems which have ongoing interactions with their environments, accepting requests and producing responses. Such systems are increasingly used in applications where failure is unacceptable: traffic control systems, avionics, automobiles, etc. Correct and highly dependable construction of such systems is particularly important and challenging. A very promising and increasingly attractive method for achieving this goal is the approach of formal verification. A formal verification method consists of three major components: a model for describing the behavior of the system, a specification language to embody correctness requirements, and an analysis method to verify the behavior against the correctness requirements. This Ph.D. thesis addresses the correctness of the behavioral design of embedded systems, using model checking as the verification technology. More precisely, we present a UML-based verification method that checks whether the conditions on the evolution of the embedded system are met by the model. Unfortunately, model checking is limited to medium-sized systems because of its high space requirements. To overcome this problem, this thesis proposes integrating the slicing (reduction) technique into the verification method.
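
    Real model slicing for UML behavioural models must handle control dependences, hierarchy and communication; the sketch below only shows the core reduction idea on invented straight-line code: keep the statements the slicing criterion transitively depends on and drop the rest.

        # Minimal backward-slicing sketch on straight-line code (invented example).
        # Each statement: (label, set of defined variables, set of used variables).
        PROGRAM = [
            ("s1: x = input()",    {"x"}, set()),
            ("s2: y = input()",    {"y"}, set()),
            ("s3: z = x + 1",      {"z"}, {"x"}),
            ("s4: w = y * 2",      {"w"}, {"y"}),
            ("s5: alarm = z > 10", {"alarm"}, {"z"}),
        ]

        def data_dependences(program):
            """Map each statement index to the indices it is data-dependent on."""
            deps, last_def = {i: set() for i in range(len(program))}, {}
            for i, (_, defs, uses) in enumerate(program):
                for var in uses:
                    if var in last_def:
                        deps[i].add(last_def[var])
                for var in defs:
                    last_def[var] = i
            return deps

        def backward_slice(program, criterion_index):
            deps = data_dependences(program)
            sliced, todo = set(), [criterion_index]
            while todo:
                i = todo.pop()
                if i not in sliced:
                    sliced.add(i)
                    todo.extend(deps[i])
            return sorted(sliced)

        # Criterion: the statement computing `alarm` (the property's variable).
        for i in backward_slice(PROGRAM, 4):
            print(PROGRAM[i][0])                  # s2 and s4 are sliced away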

    Schedulability analysis of timed CSP models using the PAT model checker

    Timed CSP can be used to model and analyse real-time and concurrent behaviour of embedded control systems. Practical CSP implementations combine the CSP model of a real-time control system with prioritized scheduling to achieve efficient and orderly use of limited resources. Schedulability analysis of a timed CSP model of a system with respect to a scheduling scheme and a particular execution platform is important to ensure that the system design satisfies its timing requirements. In this paper, we propose a framework to analyse the schedulability of CSP-based designs for non-preemptive fixed-priority multiprocessor scheduling. The framework is based on the PAT model checker, and the analysis is done with dense-time model checking on timed CSP models. We also provide a schedulability analysis workflow to construct and analyse, using the proposed framework, a timed CSP model with scheduling from an initial untimed CSP model without scheduling. We demonstrate our schedulability analysis workflow on a case study of control software design for a mobile robot. The proposed approach provides non-pessimistic schedulability results.
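
    The paper's analysis is dense-time model checking of timed CSP models in PAT; the sketch below is only a naive discrete-time simulation of non-preemptive fixed-priority scheduling on two processors for an invented task set, and it checks a single synchronous-release scenario rather than providing a schedulability guarantee.

        import math
        from functools import reduce

        # Naive discrete-time simulation of non-preemptive fixed-priority
        # scheduling on M identical processors (task set invented).  It only
        # exercises the synchronous-release scenario over one hyperperiod, so
        # unlike the dense-time analysis above it is not a sufficient test,
        # merely an illustration of the scheduling policy.
        TASKS = [                    # (name, period = deadline, wcet);
            ("nav",    10, 4),       # earlier in the list = higher priority
            ("sensor", 20, 6),
            ("logger", 40, 9),
        ]
        M = 2

        def simulate(tasks, m):
            hyper = reduce(lambda a, b: a * b // math.gcd(a, b), (t[1] for t in tasks))
            horizon = hyper + max(t[1] for t in tasks)
            running, ready, misses = [], [], []     # running holds finish times
            for now in range(horizon):
                running = [f for f in running if f > now]          # jobs complete
                for prio, (name, period, wcet) in enumerate(tasks):
                    if now < hyper and now % period == 0:          # release jobs
                        ready.append((prio, now, now + period, wcet, name))
                ready.sort()                                       # priority order
                while ready and len(running) < m:                  # start, no preemption
                    prio, rel, deadline, wcet, name = ready.pop(0)
                    if now + wcet > deadline:
                        misses.append((name, rel))
                    running.append(now + wcet)
                misses += [(n, r) for p, r, d, w, n in ready if d <= now]
                ready = [job for job in ready if job[2] > now]     # drop expired jobs
            return misses

        missed = simulate(TASKS, M)
        print("no deadline misses in this scenario" if not missed else f"missed: {missed}")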

    New results on rewrite-based satisfiability procedures

    Program analysis and verification require decision procedures to reason on theories of data structures. Many problems can be reduced to the satisfiability of sets of ground literals in a theory T. If a sound and complete inference system for first-order logic is guaranteed to terminate on T-satisfiability problems, any theorem-proving strategy with that system and a fair search plan is a T-satisfiability procedure. We prove termination of a rewrite-based first-order engine on the theories of records, integer offsets, integer offsets modulo and lists. We give a modularity theorem stating sufficient conditions for termination on a combination of theories, given termination on each. The above theories, as well as others, satisfy these conditions. We introduce several sets of benchmarks on these theories and their combinations, including both parametric synthetic benchmarks to test scalability, and real-world problems to test performance on huge sets of literals. We compare the rewrite-based theorem prover E with the validity checkers CVC and CVC Lite. Contrary to the folklore that a general-purpose prover cannot compete with reasoners with built-in theories, the experiments are overall favorable to the theorem prover, showing that the rewriting approach is not only elegant and conceptually simple, but also has important practical implications. Comment: To appear in the ACM Transactions on Computational Logic, 49 pages
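
    The paper derives T-satisfiability procedures from a rewrite-based first-order prover; as a much smaller, mechanically unrelated example of deciding ground equality literals (plain equality with uninterpreted functions, not the paper's theories of records, offsets, or lists), here is a naive congruence closure sketch with an invented input.

        # Naive congruence closure over ground terms, deciding satisfiability of
        # a set of ground equalities and disequalities.  Terms are nested
        # tuples: ("f", ("a",)) is f(a).

        def subterms(term, acc=None):
            acc = acc if acc is not None else set()
            acc.add(term)
            for arg in term[1:]:
                subterms(arg, acc)
            return acc

        def congruence_closure(equalities, disequalities):
            """Return True iff the literals are satisfiable."""
            terms = set()
            for s, t in equalities + disequalities:
                subterms(s, terms)
                subterms(t, terms)
            parent = {t: t for t in terms}

            def find(t):
                while parent[t] != t:
                    parent[t] = parent[parent[t]]          # path halving
                    t = parent[t]
                return t

            def union(s, t):
                parent[find(s)] = find(t)

            for s, t in equalities:                        # merge asserted equalities
                union(s, t)
            changed = True                                 # propagate congruence:
            while changed:                                 # f(a) = f(b) whenever a = b
                changed = False
                ts = list(terms)
                for i, u in enumerate(ts):
                    for v in ts[i + 1:]:
                        if (u[0] == v[0] and len(u) == len(v) and find(u) != find(v)
                                and all(find(x) == find(y) for x, y in zip(u[1:], v[1:]))):
                            union(u, v)
                            changed = True
            return all(find(s) != find(t) for s, t in disequalities)

        a, b = ("a",), ("b",)
        ffa, ffb = ("f", ("f", a)), ("f", ("f", b))
        # a = b together with f(f(a)) != f(f(b)) is unsatisfiable:
        print(congruence_closure([(a, b)], [(ffa, ffb)]))   # False: unsatisfiable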

    Predicting Memory Demands of BDD Operations using Maximum Graph Cuts (Extended Paper)

    The BDD package Adiar manipulates Binary Decision Diagrams (BDDs) in external memory. This enables handling big BDDs, but the performance suffers when dealing with moderate-sized BDDs. This is mostly due to initializing expensive external memory data structures, even if their contents can fit entirely inside internal memory. The contents of these auxiliary data structures always correspond to a graph cut in an input or output BDD. Specifically, these cuts respect the levels of the BDD. We formalise the shape of these cuts and prove sound upper bounds on their maximum size for each BDD operation. We have implemented these upper bounds within Adiar. With these bounds, it can predict whether a faster internal memory variant of the auxiliary data structures can be used. In practice, this improves Adiar's running time across the board. Specifically for the moderate-sized BDDs, this results in an average reduction of the computation time by 86.1% (median of 89.7%). In some cases, the difference is even 99.9%. When checking equivalence of hardware circuits from the EPFL Benchmark Suite, for one of the instances the time was decreased by 52 hours. Comment: 25 pages, 11 figures, 2 tables. Extended version of a paper published at ATVA 202
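
    As a small illustration of the central notion, the sketch below computes the maximum level cut of a hand-built BDD, i.e. the largest number of arcs crossing any boundary between consecutive levels; Adiar's actual bounds distinguish several cut families per operation, which this toy ignores.

        # Node table: name -> (level, low child, high child); "F"/"T" are terminals.
        BDD = {                          # the function x0 AND (x1 OR x2)
            "n0": (0, "F", "n1"),
            "n1": (1, "n2", "T"),
            "n2": (2, "F", "T"),
        }
        TERMINALS = {"F", "T"}

        def max_level_cut(bdd):
            levels = {name: lvl for name, (lvl, _, _) in bdd.items()}
            deepest = max(levels.values())
            for term in TERMINALS:
                levels[term] = deepest + 1            # terminals sit below all levels
            # cut[i] = number of arcs crossing the boundary between level i and i+1
            cut = [0] * (deepest + 1)
            for name, (lvl, low, high) in bdd.items():
                for child in (low, high):
                    for boundary in range(lvl, levels[child]):
                        cut[boundary] += 1
            return max(cut), cut

        widest, per_boundary = max_level_cut(BDD)
        print("cut sizes per level boundary:", per_boundary, " maximum:", widest)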