
    Construction and Verification of Performance and Reliability Models

    Over the last two decades, formal methods have been extended towards performance and reliability evaluation. This paper provides a rather intuitive explanation of the basic concepts and features in this area. Instead of striving for mathematical rigour, the intention is to give an illustrative introduction to the basics of stochastic models, to stochastic modelling using process algebra, and to model checking as a technique to analyse stochastic models.
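
    As a minimal illustration of the kind of stochastic model the paper introduces (a sketch not taken from the paper itself; the two-state chain and its failure and repair rates are hypothetical), the following Python fragment computes the steady-state distribution of a continuous-time Markov chain by solving pi·Q = 0 together with the normalisation constraint:

        import numpy as np

        # Hypothetical two-state CTMC: state 0 = "up", state 1 = "down".
        # Failure rate lam and repair rate mu are made-up values.
        lam, mu = 0.1, 2.0

        # Generator matrix Q: off-diagonal entries are transition rates,
        # each diagonal entry makes its row sum to zero.
        Q = np.array([[-lam,  lam],
                      [  mu,  -mu]])

        # The steady-state distribution pi solves pi Q = 0 with sum(pi) = 1;
        # replace one balance equation by the normalisation constraint.
        A = np.vstack([Q.T[:-1], np.ones(2)])
        b = np.array([0.0, 1.0])
        pi = np.linalg.solve(A, b)

        print("long-run availability P[up]:", pi[0])   # mu / (lam + mu), about 0.952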

    Analysing the Performance of GPU Hash Tables for State Space Exploration

    In the past few years, General Purpose Graphics Processors (GPUs) have been used to significantly speed up numerous applications. One of the areas in which GPUs have recently led to a significant speed-up is model checking. In model checking, state spaces, i.e., large directed graphs, are explored to verify whether models satisfy desirable properties. GPUexplore is a GPU-based model checker that uses a hash table to efficiently keep track of already explored states. As a large number of states is discovered and stored during such an exploration, the hash table should be able to quickly handle many inserts and queries concurrently. In this paper, we experimentally compare two different hash tables optimised for the GPU, one being the GPUexplore hash table, and the other using Cuckoo hashing. We compare the performance of both hash tables using random and non-random data obtained from model checking experiments, to analyse the applicability of the two hash tables for state space exploration. We conclude that Cuckoo hashing is three times faster than GPUexplore hashing for random data, and that Cuckoo hashing is five to nine times faster for non-random data. This suggests great potential to further speed up GPUexplore in the near future. Comment: In Proceedings GaM 2017, arXiv:1712.0834
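
    As a rough, sequential illustration of the cuckoo-hashing scheme compared in the paper (a Python sketch of the general idea, not the GPU-optimised implementation; the table size, hash functions, and eviction bound are hypothetical):

        # Sequential sketch of cuckoo hashing with two tables. The GPU variants
        # compared in the paper additionally use atomics and warp-level cooperation.
        SIZE = 1024            # hypothetical table size
        MAX_EVICTIONS = 32     # give up (and normally rehash) after this many kicks

        tables = [[None] * SIZE, [None] * SIZE]

        def h(i, key):
            # Two simple, hypothetical hash functions, one per table.
            return hash((i, key)) % SIZE

        def lookup(key):
            return any(tables[i][h(i, key)] == key for i in (0, 1))

        def insert(key):
            if lookup(key):
                return True            # already stored (the common case in exploration)
            i = 0
            for _ in range(MAX_EVICTIONS):
                slot = h(i, key)
                if tables[i][slot] is None:
                    tables[i][slot] = key
                    return True
                # Slot occupied: kick out the occupant and try to place it
                # in the other table on the next iteration.
                tables[i][slot], key = key, tables[i][slot]
                i = 1 - i
            return False               # too many evictions; a real table would rehash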

    Is CADP an Applicable Formal Method?

    CADP is a comprehensive toolbox implementing results of concurrency theory. This paper addresses the question of whether CADP qualifies as an applicable formal method, based on the experience of the authors and feedback reported by users.

    Quantifying the Parallelism in BPMN Processes using Model Checking

    A business process is a set of structured, related activities that aims at fulfilling a specific organizational goal for a customer or market. An important metric when developing a business process is its degree of parallelism, i.e., the maximum number of tasks that are executable in parallel in that process. The degree of parallelism determines the peak demand on tasks, providing a valuable guide for the problem of resource allocation in business processes. In this paper, we investigate how to automatically measure the degree of parallelism of business processes described in the standard BPMN notation. We first present a formal model of BPMN processes in terms of Labelled Transition Systems, obtained through process algebra encodings. We then propose an approach for automatically computing the degree of parallelism using model checking techniques and dichotomic search. We implemented a tool automating this check and applied it successfully to more than one hundred BPMN processes.
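
    The dichotomic search mentioned in the abstract can be pictured as follows; at_least_k_parallel is a hypothetical stand-in for the model-checking query "is a state with at least k concurrently executable tasks reachable in the LTS?", and the search is sound because that property is monotone in k:

        def degree_of_parallelism(at_least_k_parallel, max_tasks):
            # Largest k such that the oracle confirms k tasks can run in parallel.
            # at_least_k_parallel(k) stands in for a model-checking call on the LTS
            # of the BPMN process; the property is monotone in k, which is what
            # makes the dichotomic (binary) search valid.
            if not at_least_k_parallel(1):
                return 0                     # no task is ever executable
            lo, hi = 1, max_tasks            # invariant: at_least_k_parallel(lo) holds
            while lo < hi:
                mid = (lo + hi + 1) // 2     # round up so the loop terminates
                if at_least_k_parallel(mid):
                    lo = mid
                else:
                    hi = mid - 1
            return lo

        # Toy oracle: a process whose actual degree of parallelism is 3.
        print(degree_of_parallelism(lambda k: k <= 3, max_tasks=10))   # prints 3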

    CADP: A Toolbox for the Construction and Analysis of Distributed Processes

    Asynchronous concurrency is becoming increasingly present in a large spectrum of systems, spanning from the level of systems- and networks-on-chip, over multi-processor architectures, up to the level of grid and cloud computing. Due to the intrinsic complexity of asynchronous concurrency, the correct design of such systems is notoriously difficult, requiring the support of formal methods and verification tools. CADP (Construction and Analysis of Distributed Processes) is a toolbox for the design, functional verification, and performance evaluation of asynchronous concurrent systems. Currently, CADP consists of about fifty interconnected tools and software libraries. The toolbox is distributed free of charge to academia and public research institutes, and is already used by more than 440 research institutions and companies worldwide in many application domains. Given the increasing number of systems featuring asynchronous concurrency, CADP could be used still more widely in research, industry, and education (in particular for teaching the concepts of concurrency theory). This tutorial presents the architecture and main functionalities of CADP, with a twofold objective. On the one hand, the tutorial illustrates the application of CADP to the modeling, functional verification, and performance evaluation of asynchronous concurrent systems. On the other hand, the tutorial presents the various languages accepted as input by CADP, together with software libraries that enable users to develop their own analysis tools. The well-known but fundamental problem of mutual exclusion will serve to illustrate the principal functionalities of CADP: formal modeling of protocols, compositional state space generation, graph visualization, interactive step-by-step simulation, formulation and verification of temporal logic properties, as well as performance evaluation by compositional insertion of latency constraints and transformation into interactive Markov chains.
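
    As an example of the kind of temporal-logic property involved, the mutual-exclusion safety requirement can be stated as an action-based formula over the LTS. The sketch below is written in the regular-expression style of box modality used by alternation-free mu-calculi; the action names ENTER_i and LEAVE_i are hypothetical labels, not the ones of the actual tutorial model:

        % Safety: after process 1 enters its critical section and before it leaves,
        % process 2 must not enter. No path matching the regular expression may exist.
        \[
          \bigl[\, \mathsf{true}^{*} \cdot \mathit{ENTER\_1} \cdot
                   (\neg\,\mathit{LEAVE\_1})^{*} \cdot \mathit{ENTER\_2} \,\bigr]\;
          \mathsf{false}
        \]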

    On the Scalability of the GPUexplore Explicit-State Model Checker

    The use of graphics processors (GPUs) is a promising approach to speed up model checking to such an extent that it becomes feasible to instantly verify software systems during development. GPUexplore is an explicit-state model checker that runs all its computations on the GPU. Over the years it has been extended with various techniques, and the possibilities to further improve its performance have been continuously investigated. In this paper, we discuss how the hash table of the tool works, which is at the heart of its functionality. We propose an alteration of the hash table that in isolated experiments seems promising, and analyse its effect when integrated in the tool. Furthermore, we investigate the current scalability of GPUexplore, both by experimenting with input models of varying sizes and by running the tool on one of the latest NVIDIA GPUs. Comment: In Proceedings GaM 2017, arXiv:1712.0834
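
    The bucket-based hash table at the heart of the tool can be pictured, very roughly, as the following sequential Python sketch. It assumes fixed-size buckets of 32 slots (mirroring the warp width) and omits the warp-cooperative scanning and atomic compare-and-swap operations that the actual GPU implementation relies on; the table size and other details are hypothetical:

        # Simplified, sequential sketch of a bucketed "find-or-put" hash table for
        # state storage; the bucket size of 32 mirrors the warp width on NVIDIA GPUs.
        NUM_BUCKETS = 1 << 16      # hypothetical table size
        BUCKET_SIZE = 32

        buckets = [[None] * BUCKET_SIZE for _ in range(NUM_BUCKETS)]

        def find_or_put(state):
            # Returns True if the state was already stored, False if newly inserted.
            # On the GPU the scan is performed cooperatively by the 32 threads of a
            # warp and the write uses an atomic compare-and-swap; here it is sequential.
            bucket = buckets[hash(state) % NUM_BUCKETS]
            for slot in range(BUCKET_SIZE):
                if bucket[slot] == state:
                    return True             # state already explored
                if bucket[slot] is None:
                    bucket[slot] = state    # claim the first free slot
                    return False
            raise MemoryError("bucket full; a real table probes further or resizes")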

    Quantitative Timed Analysis of Interactive Markov Chains

    This paper presents new algorithms and accompanying tool support for analyzing interactive Markov chains (IMCs), a stochastic timed 1½-player game in which delays are exponentially distributed. IMCs are compositional and act as a semantic model for engineering formalisms such as AADL and dynamic fault trees. We provide algorithms for determining the extremal expected time of reaching a set of states, and the long-run average of time spent in a set of states. The prototypical tool Imca supports these algorithms as well as the synthesis of ε-optimal piecewise constant timed policies for timed reachability objectives. Two case studies show the feasibility and scalability of the algorithms.
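
    For the special case of an IMC without nondeterminism, i.e. a continuous-time Markov chain, the minimal and maximal expected times coincide and satisfy a plain linear system: E[s] = 0 for goal states, and E[s] = 1/r(s) + sum over t of P(s,t)·E[t] otherwise, where r(s) is the exit rate and P the embedded transition probabilities. The Python sketch below solves that system for a hypothetical three-state chain (not a case study from the paper); the algorithms in the paper additionally handle the nondeterministic case and long-run averages:

        import numpy as np

        # Hypothetical 3-state CTMC given by its generator matrix Q (rows sum to 0);
        # state 2 is absorbing and is the goal.
        Q = np.array([[-3.0,  2.0,  1.0],
                      [ 0.5, -0.5,  0.0],
                      [ 0.0,  0.0,  0.0]])
        goal = {2}

        n = Q.shape[0]
        non_goal = [s for s in range(n) if s not in goal]

        # Linear system E[s] - sum_{t not in goal} P(s, t) E[t] = 1 / r(s), with exit
        # rate r(s) = -Q[s, s] and embedded probabilities P(s, t) = Q[s, t] / r(s).
        # (Assumes the goal is reached with probability 1 from every non-goal state.)
        A = np.eye(len(non_goal))
        b = np.zeros(len(non_goal))
        for i, s in enumerate(non_goal):
            r = -Q[s, s]
            b[i] = 1.0 / r
            for j, t in enumerate(non_goal):
                if t != s:
                    A[i, j] -= Q[s, t] / r

        E = np.linalg.solve(A, b)
        for i, s in enumerate(non_goal):
            print(f"expected time from state {s} to the goal: {E[i]:.3f}")   # 5.000, 7.000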

    Formal methods for functional verification of cache-coherent systems-on-chip

    State-of-the-art System-on-Chip (SoC) architectures integrate many different components, such as processors, accelerators, memories, and I/O blocks; some of these components, but not all, may have caches. Because the effort of validation with the simulation-based techniques currently used in industry grows exponentially with the complexity of the SoC, this thesis investigates the use of formal verification techniques in this context. More precisely, we use the CADP toolbox to develop and validate a generic formal model of a heterogeneous cache-coherent SoC compliant with the recent AMBA 4 ACE specification proposed by ARM. We use a constraint-oriented specification style to model the general requirements of the specification. We verify system properties on both the constrained and the unconstrained model to detect cache-coherency corner cases. We take advantage of the parametrization of the proposed model to produce a comprehensive set of counterexamples for properties that are not satisfied in the unconstrained model. The results of formal verification are then used to improve the industrial simulation-based verification techniques in two respects. On the one hand, we suggest using the formal model to assess the sanity of an interface verification unit. On the other hand, in order to generate clever semi-directed test cases from temporal logic properties, we propose a two-step approach: one step consists in generating system-level abstract test cases using the model-based testing tools of the CADP toolbox; the other step consists in refining those tests into interface-level concrete test cases that can be executed at RTL level with a commercial Coverage-Directed Test Generation tool. We found that our approach helps in the transition between interface-level and system-level verification, facilitates the validation of system-level properties, and enables early detection of bugs in both the SoC and the commercial test bench.
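
    To give a flavour of the kind of invariant involved, the following toy Python sketch explores the reachable configurations of a hypothetical two-cache, MSI-like model (far simpler than the AMBA 4 ACE model built in the thesis) and asserts that at most one cache ever holds the line in the Modified state:

        # Toy model: two caches, each holding one line in state 'I' (Invalid),
        # 'S' (Shared) or 'M' (Modified). Hypothetical example, not the ACE model.
        def successors(config):
            for i in (0, 1):
                other = 1 - i
                # Read: cache i obtains the line in Shared; a Modified copy in the
                # other cache is downgraded (write-back omitted).
                r = list(config)
                r[i] = 'S'
                if r[other] == 'M':
                    r[other] = 'S'
                yield tuple(r)
                # Write: cache i obtains the line in Modified; the other copy is invalidated.
                w = ['I', 'I']
                w[i] = 'M'
                yield tuple(w)
                # Eviction: cache i drops the line.
                e = list(config)
                e[i] = 'I'
                yield tuple(e)

        def coherent(config):
            return sum(1 for s in config if s == 'M') <= 1   # at most one Modified copy

        # Exhaustive exploration from the all-Invalid configuration.
        init = ('I', 'I')
        seen, worklist = {init}, [init]
        while worklist:
            cfg = worklist.pop()
            assert coherent(cfg), f"coherence violated in {cfg}"
            for nxt in successors(cfg):
                if nxt not in seen:
                    seen.add(nxt)
                    worklist.append(nxt)

        print(f"explored {len(seen)} configurations; the invariant holds in all of them")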