6,973 research outputs found

    Applicative matching logic

    Get PDF
    This paper proposes a logic for programming languages, which is both simple and expressive, to serve as a foundation for language semantics frameworks. Matching mu-logic has been recently proposed as a unifying foundation for programming languages, specification and verification. It has been shown to capture several logics important for programming languages, including first-order logic with least fixpoints, separation logic, temporal logics, modal mu-logic, and importantly, reachability logic, a language-independent logic for program verification that subsumes Hoare logic. This paper identifies a fragment of matching mu-logic called applicative matching logic (AML), which is much simpler and thus more appealing from a foundational perspective, yet as expressive as matching mu-logic. Several additional logical frameworks fundamental for programming languages are shown to be faithfully captured by AML, including many- and order-sorted algebras, lambda-calculus, (dependent) type systems, evaluation contexts, and rewriting. Finally, it is shown how all these make AML an appropriate underlying logic foundation for complex language semantics frameworks, such as K.
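
    The pattern syntax summarised in the abstract (element and set variables, symbols, binary application, implication, existentials, and a least-fixpoint binder) can be written down as a small datatype. The sketch below is only an illustration of that shape, assuming the standard presentation of AML patterns; the constructor and example names are not taken from the paper.

```haskell
-- A minimal sketch of AML pattern syntax, assuming the usual presentation:
-- element and set variables, constant symbols, binary application, bottom,
-- implication, existential quantification and least fixpoints.
module AML where

type EVar = String  -- element variables (bound by Exists)
type SVar = String  -- set variables (bound by Mu)
type Sym  = String  -- constant symbols

data Pattern
  = EV EVar                  -- element variable x
  | SV SVar                  -- set variable X
  | Symbol Sym               -- constant symbol sigma
  | App Pattern Pattern      -- application  phi1 phi2
  | Bottom                   -- the pattern matched by nothing
  | Implies Pattern Pattern  -- phi1 -> phi2
  | Exists EVar Pattern      -- exists x . phi
  | Mu SVar Pattern          -- mu X . phi  (X must occur positively)
  deriving (Eq, Show)

-- Derived connectives, defined classically from implication and bottom.
negP :: Pattern -> Pattern
negP p = Implies p Bottom

orP :: Pattern -> Pattern -> Pattern
orP a b = Implies (negP a) b

-- Example: natural numbers as the least set containing zero and closed
-- under succ, written with application and the least-fixpoint binder.
natLike :: Pattern
natLike = Mu "Nat" (Symbol "zero" `orP` App (Symbol "succ") (SV "Nat"))
```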

    Abstract Certification of Java Programs in Rewriting Logic

    Full text link
    In this thesis we propose an abstraction-based certification technique for Java programs which is based on rewriting logic, a very general logical and semantic framework efficiently implemented in the functional programming language Maude. We focus on safety properties, i.e. properties of a system that are defined in terms of certain events not happening, which we characterize as unreachability problems in rewriting logic. The safety policy is expressed in the style of JML, a standard property specification language for Java modules. In order to provide a decision procedure, we enforce finite-state models of programs by using abstract interpretation. Starting from a specification of the Java semantics written in Maude, we develop an abstraction-based, finite-state operational semantics, also written in Maude, which is appropriate for program verification. As a by-product of the verification based on abstraction, a dependable safety certificate is delivered which consists of a set of rewriting proofs that can be easily checked by the code consumer using a standard rewriting logic engine. The abstraction-based proof-carrying code technique, called JavaPCC, has been implemented and successfully tested on several examples, which demonstrate the feasibility of our approach. We analyse local properties of Java methods, i.e. properties of methods regarding their parameters and results. We also study global confidentiality properties of complete Java classes, by first considering non-interference and then erasure, with and without non-interference. Non-interference is a semantic program property that assigns confidentiality levels to data objects and prevents illicit information flows from high to low security levels. In this thesis, we present a novel security model for global non-interference which approximates non-interference as a safety property. Alba Castro, MF. (2011). Abstract Certification of Java Programs in Rewriting Logic [Unpublished doctoral thesis]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/13617
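
    The central reduction, i.e. checking a safety property by showing that no forbidden abstract state is reachable in the finite-state abstract semantics, can be illustrated with a generic worklist search. The sketch below is a minimal illustration under that reading; the function and parameter names are hypothetical and not part of JavaPCC.

```haskell
-- A minimal sketch of safety checking as unreachability over a finite
-- abstract state space. Names are hypothetical, not taken from JavaPCC.
module Unreach where

import qualified Data.Set as Set

unreachable :: Ord s
            => (s -> [s])   -- abstract one-step transition (rewrite) relation
            -> (s -> Bool)  -- states forbidden by the safety policy
            -> s            -- abstract initial state
            -> Bool
unreachable step bad s0 = go Set.empty [s0]
  where
    go _ [] = True                       -- whole abstraction explored: safe
    go visited (s : rest)
      | bad s                  = False   -- a forbidden event is reachable
      | s `Set.member` visited = go visited rest
      | otherwise              = go (Set.insert s visited) (step s ++ rest)
```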

    Trustworthy Refactoring via Decomposition and Schemes: A Complex Case Study

    Get PDF
    Widely used complex code refactoring tools lack solid reasoning about the correctness of the transformations they implement, whilst interest in provably correct refactoring is ever increasing, as only formal verification can provide true confidence in applying tool-automated refactoring to industrial-scale code. Using our strategic rewriting based refactoring specification language, we present the decomposition of a complex transformation into smaller steps that can be expressed as instances of refactoring schemes, and then demonstrate the semi-automatic formal verification of the components based on a theoretical understanding of the semantics of the programming language. The extensible and verifiable refactoring definitions can be executed in our interpreter built on top of a static analyser framework. Comment: In Proceedings VPT 2017, arXiv:1708.0688
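
    The idea of decomposing a refactoring into small local rewrites lifted by generic traversals can be illustrated with a toy rename step. The sketch below assumes a Stratego-style top-down strategy and a tiny expression language; it is not the paper's refactoring specification language or its schemes.

```haskell
-- A toy illustration of strategic rewriting for refactoring: a local rewrite
-- (renaming a called function) lifted to whole programs by a generic
-- traversal. This is not the paper's specification language or its schemes.
module Refactor where

data Expr
  = Var String
  | Call String [Expr]
  | Lam String Expr
  deriving (Eq, Show)

-- Local rewrite: one instance of a hypothetical "rename function" scheme.
renameCall :: String -> String -> Expr -> Expr
renameCall old new (Call f args) | f == old = Call new args
renameCall _   _   e                        = e

-- Generic top-down strategy: apply a local rewrite at every node.
topdown :: (Expr -> Expr) -> Expr -> Expr
topdown f e = descend (f e)
  where
    descend (Call g args) = Call g (map (topdown f) args)
    descend (Lam x body)  = Lam x (topdown f body)
    descend other         = other

-- The composed refactoring: rename `old` to `new` throughout a program.
renameEverywhere :: String -> String -> Expr -> Expr
renameEverywhere old new = topdown (renameCall old new)
```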

    Rewriting Logic Semantics of a Plan Execution Language

    Get PDF
    The Plan Execution Interchange Language (PLEXIL) is a synchronous language developed by NASA to support autonomous spacecraft operations. In this paper, we propose a rewriting logic semantics of PLEXIL in Maude, a high-performance logical engine. The rewriting logic semantics is by itself a formal interpreter of the language and can be used as a semantic benchmark for the implementation of PLEXIL executives. The implementation in Maude has the additional benefit of making available to PLEXIL designers and developers all the formal analysis and verification tools provided by Maude. The formalization of the PLEXIL semantics in rewriting logic poses an interesting challenge due to the synchronous nature of the language and the prioritized rules defining its semantics. To overcome this difficulty, we propose a general procedure for simulating synchronous set relations in rewriting logic that is sound and, for deterministic relations, complete. We also report on two issues at the design level of the original PLEXIL semantics that were identified with the help of the executable specification in Maude.
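
    The synchronous step, in which every enabled component fires in the same global transition, can be illustrated in a much-simplified form as a single atomic pass over the components. The sketch below only conveys that idea; it is not the sound-and-complete procedure for simulating synchronous set relations described in the paper, and all names are illustrative.

```haskell
-- A simplified illustration of a synchronous step: every enabled component
-- fires its local rule within one atomic global step. This sketches the
-- idea only; it is not the sound-and-complete simulation from the paper.
module SyncStep where

syncStep :: (c -> Maybe c)  -- local rule; Nothing means the component is idle
         -> [c]             -- global state: one entry per component
         -> [c]             -- next global state, computed in one pass
syncStep localRule = map fire
  where fire c = maybe c id (localRule c)

-- Example: a bank of counters where only even counters are enabled.
example :: [Int]
example = syncStep step [0, 1, 2, 3]   -- evaluates to [1,1,3,3]
  where step n | even n    = Just (n + 1)
               | otherwise = Nothing
```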

    A Graph-Based Semantics Workbench for Concurrent Asynchronous Programs

    Get PDF
    A number of novel programming languages and libraries have been proposed that offer simpler-to-use models of concurrency than threads. It is challenging, however, to devise execution models that successfully realise their abstractions without forfeiting performance or introducing unintended behaviours. This is exemplified by SCOOP, a concurrent object-oriented message-passing language, which has seen multiple semantics proposed and implemented over its evolution. We propose a "semantics workbench" with fully and semi-automatic tools for SCOOP, which can be used to analyse and compare programs with respect to different execution models. We demonstrate its use in checking the consistency of semantics by applying it to a set of representative programs, highlighting a deadlock-related discrepancy between the principal execution models of the language. Our workbench is based on a modular and parameterisable graph transformation semantics implemented in the GROOVE tool. We discuss how graph transformations are leveraged to atomically model intricate language abstractions, and how the visual yet algebraic nature of the model can be used to ascertain soundness. Comment: Accepted for publication in the proceedings of FASE 2016 (to appear).
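
    The flavour of a graph transformation step, i.e. matching a pattern of labelled edges and atomically replacing it, can be conveyed with a toy rule applier. The sketch below is an illustration only, assuming rules given as explicit edge lists; it is not the GROOVE rule format or the SCOOP execution models discussed above.

```haskell
-- A toy graph-transformation step: if a fixed pattern of labelled edges is
-- present, delete it and add replacement edges in one atomic application.
-- This is an illustration only, not the GROOVE rule format.
module GraphStep where

import qualified Data.Set as Set

type Node  = Int
type Edge  = (Node, String, Node)   -- (source, label, target)
type Graph = Set.Set Edge

applyRule :: [Edge]         -- left-hand side: edges that must be present
          -> [Edge]         -- right-hand side: edges created by the rule
          -> Graph
          -> Maybe Graph    -- Nothing if the rule does not match
applyRule lhs rhs g
  | all (`Set.member` g) lhs =
      Just (Set.fromList rhs `Set.union` (g `Set.difference` Set.fromList lhs))
  | otherwise = Nothing

-- Example: a pending request edge is consumed and replaced by a served edge.
demo :: Maybe Graph
demo = applyRule [(1, "requests", 2)]
                 [(2, "served",   1)]
                 (Set.fromList [(1, "requests", 2), (2, "owns", 3)])
```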

    Model Checking Linear Logic Specifications

    Full text link
    The overall goal of this paper is to investigate the theoretical foundations of algorithmic verification techniques for first-order linear logic specifications. The fragment of linear logic we consider in this paper is based on the linear logic programming language LO enriched with universally quantified goal formulas. Although LO was originally introduced as a theoretical foundation for extensions of logic programming languages, it can also be viewed as a very general language for specifying a wide range of infinite-state concurrent systems. Our approach is based on the relation between backward reachability and provability highlighted in our previous work on propositional LO programs. Following this line of research, we define here a general framework for the bottom-up evaluation of first-order linear logic specifications. The evaluation procedure is based on an effective fixpoint operator working on a symbolic representation of infinite collections of first-order linear logic formulas. The theory of well quasi-orderings can be used to provide sufficient conditions for the termination of the evaluation of non-trivial fragments of first-order linear logic. Comment: 53 pages, 12 figures. Under consideration for publication in Theory and Practice of Logic Programming.
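
    The backward-reachability view underlying the bottom-up evaluation, i.e. saturating a set of target configurations under a predecessor operator until a fixpoint is reached, can be sketched generically. The code below works over a plain finite set and is only an illustration; the paper's procedure operates on a symbolic representation of infinite families of first-order LO formulas, with well quasi-orderings providing termination conditions.

```haskell
-- A minimal sketch of the backward-reachability fixpoint: saturate a set of
-- target configurations under a predecessor operator until nothing new is
-- added. The paper works with a symbolic representation of infinite families
-- of formulas; here the representation is just a finite set.
module BackReach where

import qualified Data.Set as Set

backwardFixpoint :: Ord c
                 => (c -> [c])   -- predecessor (pre-image) operator
                 -> [c]          -- initial target configurations
                 -> Set.Set c    -- all configurations that can reach a target
backwardFixpoint pre targets = go (Set.fromList targets)
  where
    go acc
      | Set.null new = acc                   -- fixpoint reached
      | otherwise    = go (Set.union acc new)
      where
        new = Set.fromList (concatMap pre (Set.toList acc))
                `Set.difference` acc
```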

    Finite Countermodel Based Verification for Program Transformation (A Case Study)

    Get PDF
    Both automatic program verification and program transformation are based on program analysis. In the past decade, a number of approaches using various automatic general-purpose program transformation techniques (partial deduction, specialization, supercompilation) for the verification of unreachability properties of computing systems were introduced and demonstrated. On the other hand, semantics-based unfold-fold program transformation methods themselves pose diverse kinds of reachability tasks and try to solve them, aiming at improving the semantics tree of the program being transformed. This means that some general-purpose verification methods may be used to strengthen program transformation techniques. This paper considers how the finite-countermodel method for safety verification might be used in Turchin's supercompilation. We extract a number of supercompilation sub-algorithms that try to solve reachability problems and demonstrate the use of an external countermodel finder for solving some of these problems. Comment: In Proceedings VPT 2015, arXiv:1512.0221
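
    A finite countermodel for an unreachability property can be read as a finite over-approximation of the reachable states that contains the initial states, is closed under transitions, and excludes the bad states. The sketch below checks such a certificate for an explicitly listed finite system; it only illustrates this reading and is not the first-order encoding handed to a model finder in the finite-countermodel method.

```haskell
-- A sketch of checking a safety certificate in the countermodel spirit: a
-- set of states that contains the initial states, is closed under the
-- transition relation, and excludes every bad state witnesses unreachability.
-- Illustration only; not the first-order encoding used by a model finder.
module FcmCheck where

import qualified Data.Set as Set

validCertificate :: Ord s
                 => [s]          -- initial states
                 -> [(s, s)]     -- transition relation, listed explicitly
                 -> (s -> Bool)  -- bad (forbidden) states
                 -> Set.Set s    -- candidate over-approximation of reachability
                 -> Bool
validCertificate inits trans bad cert =
     all (`Set.member` cert) inits                    -- contains the initial states
  && and [ t `Set.member` cert                        -- closed under transitions
         | (s, t) <- trans, s `Set.member` cert ]
  && not (any bad (Set.toList cert))                  -- excludes all bad states
```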

    Memory usage verification using Hip/Sleek.

    Get PDF
    Embedded systems often come with constrained memory footprints. It is therefore essential to ensure that software running on such platforms fulfils memory usage specifications at compile-time, to prevent memory-related software failure after deployment. Previous proposals on memory usage verification are not satisfactory, as they usually can only handle restricted subsets of programs, especially when shared mutable data structures are involved. In this paper, we propose a simple but novel solution. We instrument programs with explicit memory operations so that memory usage verification can be done along with the verification of other properties, using the automated verification system Hip/Sleek recently developed by Chin et al. [10,19]. The instrumentation can be done automatically and is proven sound with respect to an underlying semantics. One immediate benefit is that we do not need to develop from scratch a specific system for memory usage verification. Another benefit is that we can verify more programs, especially those involving shared mutable data structures, which previous systems failed to handle, as evidenced by our experimental results.
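
    The instrumentation idea, i.e. making allocation and disposal explicit so that a memory bound becomes an ordinary checkable property, can be illustrated with a tiny trace evaluator. The sketch below is an illustration only; it is not the Hip/Sleek specification language or the instrumentation used in the paper, and all names are hypothetical.

```haskell
-- A minimal sketch of instrumenting a program with explicit memory
-- operations: allocation and disposal update a visible counter, so a memory
-- bound becomes an ordinary checkable property. Illustration only; this is
-- not the Hip/Sleek specification language or its instrumentation.
module MemInstr where

data Op = Alloc Int   -- acquire n cells
        | Free  Int   -- release n cells
        | Work        -- any memory-neutral step

-- Replay an instrumented trace, tracking current and peak usage.
peakUsage :: [Op] -> Int
peakUsage = go 0 0
  where
    go _   peak []          = peak
    go cur peak (op : rest) =
      case op of
        Alloc n -> let cur' = cur + n in go cur' (max peak cur') rest
        Free  n -> go (cur - n) peak rest
        Work    -> go cur peak rest

-- The memory usage specification: the trace never needs more than `bound` cells.
withinBound :: Int -> [Op] -> Bool
withinBound bound ops = peakUsage ops <= bound
```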