11 research outputs found

    BEval: A Plug-in to Extend Atelier B with Current Verification Technologies

    This paper presents BEval, an extension of Atelier B that improves automation of verification activities in the B method and Event-B. It combines a tool for managing and verifying software projects (Atelier B) with a model checker/animator (ProB), so that the verification conditions generated by the former are evaluated with the latter. In our experiments, the two main verification strategies (manual and automatic) showed significant improvement, as ProB's evaluator proved complementary to Atelier B's built-in provers. We conducted experiments with the B model of a micro-controller instruction set; several verification conditions that we were not able to discharge automatically or manually with Atelier B's provers were automatically verified using BEval. (In Proceedings LAFM 2013, arXiv:1401.056)
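
    For orientation, a minimal sketch (not drawn from the paper) of the kind of verification condition involved: in Event-B, showing that an event with guard G and before-after predicate BA preserves an invariant I yields a proof obligation of the shape

        \[ I(v) \land G(v) \land BA(v, v') \;\Rightarrow\; I(v') \]

    where v and v' denote the state before and after the event. Atelier B generates such conditions and tries to discharge them with its built-in provers; BEval's contribution is to hand the remainder to ProB's evaluator.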

    Methods and examples of model validation: an annotated bibliography

    Discovering ePassport Vulnerabilities using Bisimilarity

    We uncover privacy vulnerabilities in the ICAO 9303 standard implemented by ePassports worldwide. These vulnerabilities, confirmed by ICAO, enable an ePassport holder who recently passed through a checkpoint to be reidentified without their ePassport being opened. This paper explains how bisimilarity was used to discover these vulnerabilities, which exploit the BAC protocol (the original ICAO 9303 standard ePassport authentication protocol) and remain valid for the PACE protocol, which improves on the security of BAC in the latest ICAO 9303 standards. To tackle such bisimilarity problems, we develop a chain of methods for the applied π-calculus, including a symbolic under-approximation of bisimilarity, called open bisimilarity, and a modal logic, called classical FM, for describing and certifying attacks. Evidence is provided to argue for a new scheme for specifying such unlinkability problems that more accurately reflects the capabilities of an attacker.
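
    As a rough schematic (one scheme from the literature, not a formula taken from the paper), unlinkability is often specified as a bisimilarity between the real system, in which an ePassport with identity id may run many sessions, and an idealised system in which each identity runs at most one session:

        \[ !\,\nu id.\,!\,P(id) \;\approx\; !\,\nu id.\,P(id) \]

    A strategy distinguishing the two sides links sessions to the same passport. Since open bisimilarity under-approximates bisimilarity, establishing it proves the equivalence, while a distinguishing formula of a modal logic such as classical FM certifies an attack.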

    Computer Aided Verification

    This open access two-volume set LNCS 10980 and 10981 constitutes the refereed proceedings of the 30th International Conference on Computer Aided Verification, CAV 2018, held in Oxford, UK, in July 2018. The 52 full and 13 tool papers presented together with 3 invited papers and 2 tutorials were carefully reviewed and selected from 215 submissions. The papers cover a wide range of topics and techniques, from algorithmic and logical foundations of verification to practical applications in distributed, networked, cyber-physical, and autonomous systems. They are organized in topical sections on model checking; program analysis using polyhedra; synthesis; learning; runtime verification; hybrid and timed systems; tools; probabilistic systems; static analysis; theory and security; SAT, SMT and decision procedures; concurrency; and CPS, hardware, and industrial applications.

    Invariant discovery and refinement plans for formal modelling in Event-B

    The continuous growth of complex systems makes the development of correct software increasingly challenging. In order to address this challenge, formal methods offer rigorous mathematical techniques to model and verify the correctness of systems. Refinement is one of these techniques. By allowing a developer to incrementally introduce design details, refinement provides a powerful mechanism for mastering the complexities that arise when formally modelling systems. Here the focus is on a posit-and-prove style of refinement, where a design is developed as a series of abstract models introduced via refinement steps. Each refinement step generates proof obligations which must be discharged in order to verify its correctness, typically requiring a user to understand the relationship between modelling and reasoning.

    This thesis focuses on techniques to aid refinement-based formal modelling, specifically when a user requires guidance in order to overcome a failed refinement step. An integrated approach has been followed, combining the complementary strengths of bottom-up theory formation, in which theories about domains are built from basic background information, and top-down planning, in which meta-level reasoning is used to guide the search for correct models.

    On the theory formation side, we developed a technique for the automatic discovery of invariants. Refinement requires the definition of properties, called invariants, which relate to the design. Formulating correct and meaningful invariants can be a tedious and challenging task. A heuristic approach to the automatic discovery of invariants has been developed, building upon simulation, proof-failure analysis and automated theory formation. This approach exploits the close interplay between modelling and reasoning in order to provide systematic guidance, tailoring the search for invariants to a given model.

    On the planning side, we propose a new technique called refinement plans. Refinement plans provide a basis for automatically generating modelling guidance when a step fails but is close to a known pattern of refinement. This technique combines both modelling and reasoning knowledge and, contrary to traditional pattern techniques, allows the analysis of failure and partial matching. Moreover, when the guidance is only partially instantiated and it is suitable, refinement plans provide specialised knowledge to further tailor the theory formation process in an attempt to fully instantiate the guidance.

    We also report on a series of experiments undertaken in order to evaluate the approaches, and on the implementation of both techniques in prototype tools. We believe the techniques presented here allow the developer to focus on design decisions rather than on analysing low-level proof failures.
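
    As a concrete illustration (placeholder names, not an example from the thesis): when a concrete model refines an abstract one, a gluing invariant J relates the abstract state v to the concrete state w, and each concrete event must be shown to simulate its abstract counterpart, giving proof obligations of roughly the shape

        \[ I(v) \land J(v, w) \land G_C(w) \land BA_C(w, w') \;\Rightarrow\; \exists v'.\, BA_A(v, v') \land J(v', w') \]

    Finding a J that makes such obligations provable is precisely the invariant-formulation task that the thesis sets out to automate.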

    Automated discovery of inductive lemmas

    The discovery of unknown lemmas, case-splits and other so-called eureka steps is a challenging problem for automated theorem proving and has generally been assumed to require user intervention. This thesis is mainly concerned with the automated discovery of inductive lemmas. We have explored two approaches, based on failure recovery and theory formation, with the aim of improving automation of first- and higher-order inductive proofs in the IsaPlanner system. We have implemented a lemma speculation critic which attempts to find a missing lemma using information from a failed proof attempt. However, we found few proofs for which this critic was applicable and successful. We have also developed a program for inductive theory formation, which we call IsaCoSy. IsaCoSy was evaluated on different inductive theories about natural numbers, lists and binary trees, and found to successfully produce many relevant theorems and lemmas. Using a background theory produced by IsaCoSy, IsaPlanner was able to automatically prove more new theorems than with lemma speculation. In addition to the lemma discovery techniques, we also implemented an automated technique for case analysis. This allows IsaPlanner to deal with proofs involving conditionals, expressed as if- or case-statements.
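
    A standard example of the phenomenon (a textbook case, not necessarily the thesis's own): proving rev (rev xs) = xs by induction on xs gets stuck in the step case and goes through only once the auxiliary lemma

        \[ rev\,(xs \mathbin{@} [x]) = x \mathbin{\#} rev\,xs \]

    is available, where @ is list append and # is cons. Lemma speculation attempts to conjecture such a lemma from the failed goal, whereas IsaCoSy-style theory formation generates and filters candidate equations about rev, @ and # ahead of time.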

    Towards a Critically Performative Foundation for Inquiry

    The thesis sets out to study regulatory innovation inside government from the perspective of user innovation, and to do so in a way that is critically performative. The empirical subject matter is ‘laboratories’ (Danish: Styringslaboratorier): a form of innovation process focused on developing ‘regulatory innovations’ (i.e. administrative innovations used for purposes of regulating public sector organizations) in collaboration between regulators and users. This particular form of innovation process has been the subject of considerable debate in Denmark and has been suggested as a way forward in public sector modernization after New Public Management. It is, however, also an underspecified phenomenon: while it is attributed some potential, it is unclear what this potential is and what it is about laboratories that makes this potential plausible. We have only vague ideas about what gets done when people do laboratories.