    Generating Efficient, Terminating Logic Programs

    The objective of control generation in logic programming is to automatically derive a computation rule for a program that is efficient and yet does not compromise program correctness. Progress in solving this important problem has been slow and, to date, only partial solutions have been proposed in which the generated programs are either incorrect or inefficient. We show how the control generation problem can be tackled with a simple automatic transformation that relies on information about the depths of derivations. To prove the correctness of our transform we introduce the notion of a semi delay recurrent program, which generalises previous ideas in the termination literature for reasoning about logic programs with dynamic selection rules.
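
    As a rough illustration of why derivation depths matter (a generic depth-bounding sketch, not the paper's semi delay recurrent construction), a recursive clause can be given an extra depth argument that strictly decreases, so that every derivation is bounded whatever the selection rule:

        \text{original clause:}\quad p(\bar{t}) \leftarrow q(\bar{s}),\; p(\bar{u})
        \text{transformed:}\quad p(\bar{t}, d) \leftarrow d > 0,\; q(\bar{s}, d - 1),\; p(\bar{u}, d - 1)

    Any derivation for the transformed predicate has depth at most d, at the price of supplying a sufficiently large bound up front.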

    Fifty years of Hoare's Logic

    We present a history of Hoare's logic. Comment: 79 pages. To appear in Formal Aspects of Computing.

    Permission-Based Separation Logic for Multithreaded Java Programs

    This paper motivates and presents a program logic for reasoning about multithreaded Java-like programs with concurrency primitives such as dynamic thread creation, thread joining and reentrant object monitors. The logic is based on concurrent separation logic. It is the first detailed adaptation of concurrent separation logic to a multithreaded Java-like language. The program logic associates a unique static access permission with each heap location, ensuring exclusive write accesses and ruling out data races. Concurrent reads are supported through fractional permissions. Permissions can be transferred between threads upon thread starting, thread joining, initial monitor entrancies and final monitor exits. This paper presents the basic principles to reason about thread creation and thread joining. It finishes with an outlook on how this logic will evolve into a full-fledged verification technique for Java (and possibly other multithreaded languages).
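
    A minimal Java sketch of the permission discipline the abstract describes (the class and field below are invented for illustration; the logic's actual annotations, such as permission predicates and resource invariants, are only paraphrased in comments):

        // Hypothetical example: comments mark where the logic would transfer permissions.
        public class Counter extends Thread {
            private int value;                 // heap location guarded by a permission

            @Override
            public void run() {
                // After start(), this thread holds the write permission to 'value',
                // so the increment cannot race with the parent thread.
                value = value + 1;
            }

            public static void main(String[] args) throws InterruptedException {
                Counter c = new Counter();
                c.value = 0;       // parent initially holds the full (write) permission
                c.start();         // permission to c.value is transferred to the new thread
                c.join();          // joining transfers the permission back to the parent,
                System.out.println(c.value);   // so this read is again allowed
                synchronized (c) {
                    // Entering a monitor would acquire the permissions stored in its
                    // invariant; the final exit hands them back.
                }
            }
        }

    Fractional permissions would additionally allow several threads to read a location concurrently, as long as no thread holds a full (write) permission for it.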

    Computer-Assisted Program Reasoning Based on a Relational Semantics of Programs

    We present an approach to program reasoning which inserts between a program and its verification conditions an additional layer, the denotation of the program expressed in a declarative form. The program is first translated into its denotation, from which the verification conditions are subsequently generated. However, even before (and independently of) any verification attempt, one may investigate the denotation itself to gain insight into the "semantic essence" of the program, in particular to see whether the denotation indeed gives reason to believe that the program has the expected behavior. Errors in the program and in the meta-information may thus be detected and fixed prior to actually performing the formal verification. More concretely, following the relational approach to program semantics, we model the effect of a program as a binary relation on program states. A formal calculus is devised to derive from a program a logic formula that describes this relation and is subject to inspection and manipulation. We have implemented this idea in a comprehensive form in the RISC ProgramExplorer, a new program reasoning environment for educational purposes which encompasses the previously developed RISC ProofNavigator as an interactive proving assistant. Comment: In Proceedings THedu'11, arXiv:1202.453
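
    For reference, the relational view of program meaning can be sketched as follows (standard textbook notation, not the ProgramExplorer's concrete calculus): the denotation of a command is a relation between pre- and post-states, and sequential composition of commands is relational composition.

        \llbracket x := e \rrbracket \;=\; \{\, (s, s') \mid s' = s[x \mapsto \llbracket e \rrbracket\, s] \,\}
        \llbracket c_1 ; c_2 \rrbracket \;=\; \{\, (s, s'') \mid \exists s'.\ (s, s') \in \llbracket c_1 \rrbracket \wedge (s', s'') \in \llbracket c_2 \rrbracket \,\}

    Inspecting such a formula before any verification attempt is what lets one check whether the program's "semantic essence" matches the intended behavior.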

    Correctness and completeness of logic programs

    We discuss proving correctness and completeness of definite clause logic programs. We propose a method for proving completeness, while for proving correctness we employ a method which should be well known but is often neglected. Also, we show how to prove completeness and correctness in the presence of SLD-tree pruning, and point out that approximate specifications simplify specifications and proofs. We compare the proof methods to declarative diagnosis (algorithmic debugging), showing that approximate specifications eliminate a major drawback of the latter. We argue that our proof methods reflect natural declarative thinking about programs, and that they can be used, formally or informally, in everyday programming. Comment: 29 pages, 2 figures; with editorial modifications, small corrections and extensions. arXiv admin note: text overlap with arXiv:1411.3015. Overlaps explained in "Related Work" (p. 21).
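
    The declarative reading behind these two notions can be stated in one line each (the standard formulation for definite programs; the paper's precise definitions may differ in detail). With S a specification, i.e. a set of ground atoms, and \mathcal{M}_P the least Herbrand model of program P:

        \text{$P$ is correct w.r.t. } S:\quad \mathcal{M}_P \subseteq S \qquad \text{(every computed answer is allowed by $S$)}
        \text{$P$ is complete w.r.t. } S:\quad S \subseteq \mathcal{M}_P \qquad \text{(everything $S$ requires is computed)}

    Roughly, an approximate specification allows different sets to be used for the two inclusions, which is what simplifies both the specifications and the proofs.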

    The Meaning of Memory Safety

    We give a rigorous characterization of what it means for a programming language to be memory safe, capturing the intuition that memory safety supports local reasoning about state. We formalize this principle in two ways. First, we show how a small memory-safe language validates a noninterference property: a program can neither affect nor be affected by unreachable parts of the state. Second, we extend separation logic, a proof system for heap-manipulating programs, with a memory-safe variant of its frame rule. The new rule is stronger because it applies even when parts of the program are buggy or malicious, but also weaker because it demands a stricter form of separation between parts of the program state. We also consider a number of pragmatically motivated variations on memory safety and the reasoning principles they support. As an application of our characterization, we evaluate the security of a previously proposed dynamic monitor for memory safety of heap-allocated data. Comment: POST'18 final version.
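
    For context, the ordinary frame rule of separation logic reads as follows (this is the standard rule, not the memory-safe variant the paper introduces; mod(c) denotes the variables c may modify):

        \frac{\{P\}\ c\ \{Q\}}{\{P * R\}\ c\ \{Q * R\}} \qquad \mathrm{mod}(c) \cap \mathrm{fv}(R) = \emptyset

    The paper's variant keeps the same shape but remains sound even when c is untrusted, in exchange for a stricter notion of separation between the framed-off resource R and the rest of the state.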