
    BEval: A Plug-in to Extend Atelier B with Current Verification Technologies

    This paper presents BEval, an extension of Atelier B that improves automation of the verification activities in the B method and Event-B. It combines a tool for managing and verifying software projects (Atelier B) with a model checker/animator (ProB), so that the verification conditions generated by the former are evaluated with the latter. In our experiments, the two main verification strategies (manual and automatic) showed significant improvement, as ProB's evaluator proves complementary to Atelier B's built-in provers. We conducted experiments with the B model of a micro-controller instruction set; several verification conditions that we were not able to discharge automatically or manually with Atelier B's provers were automatically verified using BEval.
    Comment: In Proceedings LAFM 2013, arXiv:1401.056

    Encoding TLA+ into Many-Sorted First-Order Logic

    This paper presents an encoding of a non-temporal fragment of the TLA+ language, which includes untyped set theory, functions, arithmetic expressions, and Hilbert's ε operator, into many-sorted first-order logic, the input language of state-of-the-art SMT solvers. This translation, based on encoding techniques such as boolification, injection of unsorted expressions into sorted languages, term rewriting, and abstraction, is the core component of a back-end prover based on SMT solvers for the TLA+ Proof System.
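    To give a flavour of what such an encoding can look like (a rough sketch; the sort U and the operators mem, u2b, u2i, i2u are illustrative assumptions, not the paper's actual notation or rules), untyped TLA+ values can be collected into a single uninterpreted sort U, set membership can become an uninterpreted predicate, boolification can coerce U-sorted terms used in formula positions into the Bool sort, and arithmetic can be handled through injections between Int and U:

        x ∈ S                ~>  mem(x, S)              with mem : U × U → Bool
        IF p THEN a ELSE b   ~>  ite(u2b(p), a, b)      boolification of the condition p : U
        n + 1                ~>  i2u(u2i(n) + 1)        injection between the sorts Int and U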

    Live Programming for Finite Model Finders

    Finite model finders give users the ability to specify properties of a system in mathematical logic and then automatically find concrete examples, called solutions, that satisfy the properties. These solutions are often viewed as a key benefit of model finders, as they create an exploratory environment for developers to engage with their model. In practice, users find less benefit from these solutions than expected. For years, researchers believed that the problem was that too many solutions are produced. However, a recent user study found that users actually prefer enumerating a broad set of solutions. Inspired by this study, conducted on Alloy, a modeling language backed by a finite model finder, we believe that the issue is that solutions are too far removed from the logical constraints that generate them to help users build an understanding of the constraints themselves. In this paper, we outline a proof-of-concept for live programming of Alloy models in which writing the model and exploring solutions are intertwined. We highlight how this development environment enables more productive feedback loops between the developer, the model, and the solutions.

    TLA+ Model Checking Made Symbolic

    TLA+ is a language for the formal specification of all kinds of computer systems. System designers use this language to specify concurrent, distributed, and fault-tolerant protocols, which are traditionally presented in pseudo-code. TLA+ is extremely concise yet expressive: the language primitives include Booleans, integers, functions, tuples, records, sequences, and sets thereof, which can also be nested. This is probably why the only model checker for TLA+ (called TLC) relies on explicit enumeration of values and states. In this paper, we present APALACHE, the first symbolic model checker for TLA+. Like TLC, it assumes that all specification parameters are fixed and all states are finite structures. Unlike TLC, APALACHE translates the underlying transition relation into quantifier-free SMT constraints, which allows us to exploit the power of SMT solvers. Designing this translation is the central challenge that we address in this paper. Our experiments show that APALACHE outperforms TLC on examples with large state spaces.
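    To illustrate the kind of input such a checker consumes, the following minimal TLA+ specification (illustrative, not taken from the paper) has a fixed bound and finite states; a symbolic checker in this style would translate the action Next into quantifier-free SMT constraints relating x and x' instead of enumerating reachable states one by one:

        ---- MODULE Counter ----
        EXTENDS Integers
        VARIABLE x

        Init == x = 0                    \* initial states
        Next == x < 10 /\ x' = x + 1     \* transition relation, translated to SMT constraints
        Inv  == x <= 10                  \* invariant to be checked
        ====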

    A Comprehensive Study of Declarative Modelling Languages

    Declarative behavioural modelling is a powerful modelling paradigm that enables users to model system functionality abstractly and formally. An abstract model is a concise and compact representation of key characteristics of a system, and enables the stakeholders to reason about the correctness of the system in the early stages of development. There are many different declarative languages with greatly varying constructs for representing a transition system, and they sometimes differ in rather subtle ways. In this thesis, we compare seven formal declarative modelling languages, B, Event-B, Alloy, Dash, TLA+, PlusCal, and AsmetaL, on several criteria. We classify these criteria under three main categories: structuring transition systems (control modelling), data descriptions in transition systems (data modelling), and modularity aspects of modelling. We developed this comparison by completing a set of case studies across the data- vs. control-oriented spectrum in all of the above languages. Structurally, a transition system comprises a snapshot declaration and snapshot space, initialization, and a transition relation, which is potentially composed of individual transitions. We meticulously outline the differences between the languages with respect to how the modeller would express each of these components in each language, and include discussions regarding stuttering and inconsistencies in the transition relation. Data-related aspects of a formal model include the use of basic and composite datatypes, well-formedness and typechecking, and separation of name spaces with respect to global and local variables. Modularity criteria include sub-transition systems and data decomposition. We employ a series of small and concise exemplars we have devised to highlight these differences in each language. To help modellers answer the important question of which declarative modelling language may be most suited to modelling their system, we present recommendations based on our observations about the differentiating characteristics of each of these languages.
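    As an illustration of this structural vocabulary, the following minimal TLA+ fragment (an illustrative exemplar of ours, not one taken from the thesis; TLA+ is one of the seven compared languages) labels the components named above; each of the other languages expresses the same ingredients with its own constructs:

        ---- MODULE Switch ----
        VARIABLE on                              \* snapshot declaration: a snapshot is a value of on

        Init == on = FALSE                       \* initialization

        TurnOn  == on = FALSE /\ on' = TRUE      \* individual transitions
        TurnOff == on = TRUE  /\ on' = FALSE

        Next == TurnOn \/ TurnOff                \* transition relation composed of the transitions
        ====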

    Debugging Relational Declarative Models with Discriminating Examples

    Models, especially those with mathematical or logical foundations, have proven valuable to engineering practice in a wide range of disciplines, including software engineering. Models, sometimes also referred to as logical specifications in this context, enable software engineers to focus on essential abstractions, while eliding less important details of their software design. Like any human-created artifact, a model might have imperfections at certain stages of the design process: it might have internal inconsistencies, or it might not properly express the engineer’s design intentions. Validating that the model is a true expression of the engineer’s intent is an important and difficult problem. One of the key challenges is that there is typically no other written artifact to compare the model to: the engineer’s intention is a mental object. One successful approach to this challenge has been automated example-generation tools, such as the Alloy Analyzer. These tools produce examples (satisfying valuations of the model) for the engineer to accept or reject. These examples, along with the engineer’s judgment of them, serve as crucial written artifacts of the engineer’s true intentions. Examples, like test-cases for programs, are more valuable if they reveal a discrepancy between the expressed model and the engineer’s design intentions. We propose the idea of discriminating examples for this purpose. A discriminating example is synthesized from a combination of the engineer’s expressed model and a machine-generated hypothesis of the engineer’s true intentions. A discriminating example either satisfies the model but not the hypothesis, or satisfies the hypothesis but not the model. It shows the difference between the model and the hypothesized alternative. The key to producing high-quality discriminating examples is to generate high-quality hypotheses. This dissertation explores three general forms of such hypotheses: mistakes that happen near borders; the expressed model is stronger than the engineer intends; or the expressed model is weaker than the engineer intends. We additionally propose a number of heuristics to guide the hypothesis-generation process. We demonstrate the usefulness of discriminating examples and our hypothesis-generation techniques through a case study of an Alloy model of Dijkstra’s Dining Philosophers problem. This model was written by Alloy experts and shipped with the Alloy Analyzer for several years. Previous researchers discovered the existence of a bug, but there has been no prior published account explaining how to fix it, nor has any prior tool been shown effective for assisting an engineer with this task. Generating high-quality discriminating examples and their underlying hypotheses is computationally demanding. This dissertation shows how to make it feasible.
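    In logical notation (writing M for the expressed model and H for the machine-generated hypothesis of the engineer's intent), an example e is discriminating exactly when it separates the two:

        (e ⊨ M ∧ e ⊭ H)  ∨  (e ⊨ H ∧ e ⊭ M)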