
    Synthesizing Modular Invariants for Synchronous Code

    In this paper, we explore different techniques to synthesize modular invariants for synchronous code encoded as Horn clauses. Modular invariants are sets of formulas that characterize the validity of predicates. They are very useful for many aspects of analysis, synthesis, testing, and program transformation. We describe two techniques to generate modular invariants for code written in the synchronous dataflow language Lustre. The first technique directly encodes the synchronous code in a modular fashion, while the second synthesizes modular invariants starting from a monolithic invariant. Both techniques build on analyses based on property-directed reachability. We also describe a technique to minimize the synthesized invariants. Comment: In Proceedings HCVS 2014, arXiv:1412.082
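    To make the encoding concrete, here is a minimal sketch (Python, assuming the z3-solver package) of how a Lustre-style node might be rendered as Horn clauses and checked by a property-directed-reachability engine. The counter node, predicate names, and bound are invented for illustration; this is not the paper's encoding.

    from z3 import Fixedpoint, Function, Ints, IntSort, BoolSort, Bool, Implies, And, If

    # Hypothetical Lustre-style node: a counter that increments and wraps at 10.
    # Property to certify: the counter never exceeds 9.
    fp = Fixedpoint()
    fp.set(engine='spacer')            # property-directed reachability (PDR)

    n, m = Ints('n m')
    inv = Function('inv', IntSort(), BoolSort())   # one predicate per node
    err = Bool('err')

    fp.register_relation(inv, err.decl())
    fp.declare_var(n, m)

    fp.add_rule(Implies(n == 0, inv(n)))                                  # init
    fp.add_rule(Implies(And(inv(n), m == If(n == 9, 0, n + 1)), inv(m)))  # step
    fp.add_rule(Implies(And(inv(n), n > 9), err))                         # bad state

    print(fp.query(err))       # unsat: the property holds
    print(fp.get_answer())     # a modular invariant characterizing 'inv'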

    Smart test data generators via logic programming

    We present a novel counterexample generator for the interactive theorem prover Isabelle, based on a compiler that synthesizes test data generators for functional programming languages (e.g., Standard ML, OCaml) from specifications in Isabelle. In contrast to naive type-based test data generators, the smart generators take preconditions into account and generate only tests that fulfill them. The smart generators are constructed by a compiler that reformulates the preconditions as logic programs and analyzes them with an enriched mode inference. From this inference, the compiler constructs the desired generators in the functional programming language. These test data generators are applied to find errors in specifications, as we show in a case study of a hotel key card system.
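    The contrast between naive and precondition-aware generation can be sketched in a few lines of Python (illustrative only: the paper's generators are compiled from Isabelle specifications, and the sorted-list precondition below is a made-up example):

    import random

    def naive_sorted_list():
        # Type-based generation: draw random lists and retry until the
        # precondition (sortedness) happens to hold -- mostly wasted work.
        while True:
            xs = [random.randint(0, 100) for _ in range(random.randint(0, 8))]
            if xs == sorted(xs):
                return xs

    def smart_sorted_list():
        # Precondition-aware generation: build the list so sortedness holds
        # by construction, as a mode-analysed logic program would.
        xs, lo = [], 0
        for _ in range(random.randint(0, 8)):
            lo = random.randint(lo, 100)
            xs.append(lo)
        return xs

    def insert(x, xs):
        # Code under test: insertion into a sorted list.
        i = 0
        while i < len(xs) and xs[i] < x:
            i += 1
        return xs[:i] + [x] + xs[i:]

    # Property: inserting into a sorted list yields a sorted list.
    for _ in range(1000):
        xs = smart_sorted_list()
        x = random.randint(0, 100)
        assert insert(x, xs) == sorted(insert(x, xs))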

    Spatial Interpolants

    We propose Splinter, a new technique for proving properties of heap-manipulating programs that marries (1) a new separation-logic-based analysis for heap reasoning with (2) an interpolation-based technique for refining heap-shape invariants with data invariants. Splinter is property directed, precise, and produces counterexample traces when a property does not hold. Using the novel notion of spatial interpolants modulo theories, Splinter can infer complex invariants over general recursive predicates, e.g., of the form "all elements in a linked list are even" or "a binary tree is sorted". Furthermore, we treat interpolation as a black box, which gives us the freedom to encode data manipulation in any suitable theory for a given program (e.g., bit vectors, arrays, or linear arithmetic), so that our technique immediately benefits from any future advances in SMT solving and interpolation. Comment: Short version published in ESOP 201
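    To give a feel for the invariants involved, the following Python predicate expresses "a binary tree is sorted" as a recursive predicate combining shape (the tree structure) and data (value bounds). It is only an executable illustration of the invariant form, not Splinter's internal representation:

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class Node:
        val: int
        left: Optional['Node'] = None
        right: Optional['Node'] = None

    def sorted_tree(t, lo=float('-inf'), hi=float('inf')):
        # Shape + data invariant: t is a binary tree whose values all lie
        # strictly within (lo, hi), i.e. the tree is sorted.
        if t is None:
            return True
        return (lo < t.val < hi
                and sorted_tree(t.left, lo, t.val)
                and sorted_tree(t.right, t.val, hi))

    assert sorted_tree(Node(2, Node(1), Node(3)))       # sorted
    assert not sorted_tree(Node(2, Node(3), Node(1)))   # not sorted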

    Semantic optimisation in Datalog programs

    Datalog is the fusion of Prolog and database technologies, aimed at producing an efficient, logic-based, declarative language for databases. This fusion takes the best of logic programming for the syntax of Datalog, and the best of database systems for its operational part. As with all declarative languages, optimisation is necessary to improve the efficiency of programs. Semantic optimisation uses meta-knowledge describing the data in the database to optimise queries and rules, aiming to reduce the resources required to answer queries. In this thesis, I analyse prior work on semantic optimisation and then propose an optimisation system for Datalog that includes optimisation of recursive programs and a semantic knowledge management module. A language, DatalogiC, an extension of Datalog that allows semantic knowledge to be expressed, has also been devised as an implementation vehicle. Finally, empirical results concerning the benefits of semantic optimisation are reported.
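    A toy Python example conveys the core idea of semantic optimisation: meta-knowledge (here, a hypothetical integrity constraint on salaries) lets the evaluator answer some queries without touching the data at all. This illustrates the concept only; it is not DatalogiC:

    # Meta-knowledge: a hypothetical integrity constraint the optimiser
    # may assume: every salary in 'employees' is below 100000.
    MAX_SALARY = 100000
    employees = [("ann", 52000), ("bob", 71000)]

    def high_earners(threshold):
        # Semantic optimisation: if the constraint already implies the
        # answer is empty, skip evaluating the query against the data.
        if threshold >= MAX_SALARY:
            return []                    # decided from meta-knowledge alone
        return [name for name, s in employees if s > threshold]

    print(high_earners(150000))   # [] -- no data access needed
    print(high_earners(60000))    # ['bob']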

    Design, application and implementation of a parallel logic programming language


    Incremental and Modular Context-sensitive Analysis

    Context-sensitive global analysis of large code bases can be expensive, which can make its use impractical during software development. However, there are many situations in which modifications are small and isolated within a few components, and it is desirable to reuse previous analysis results as much as possible. To date, this has been achieved through incremental fixpoint algorithms for global analysis that obtain cost reductions at fine levels of granularity, such as changes in program lines. However, these fine-grained techniques are neither directly applicable to modular programs nor designed to take advantage of modular structure. This paper describes, implements, and evaluates an algorithm that performs efficient context-sensitive analysis incrementally on modular partitions of programs. The experimental results show that the proposed modular algorithm achieves significant improvements, in both time and memory consumption, over existing non-modular, fine-grained incremental analysis techniques. Furthermore, thanks to the proposed inter-modular propagation of analysis information, our algorithm also outperforms traditional modular analysis even when analyzing from scratch. Comment: 56 pages, 27 figures. To be published in Theory and Practice of Logic Programming (TPLP). v3 corresponds to the extended version of the ICLP2018 Technical Communication; v4 is the revised version submitted to TPLP; v5 (this one) is the final author version to be published in TPLP.
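    The gist of inter-modular propagation can be sketched as a small worklist over module summaries. The module names and the stand-in 'analyse' function below are invented for illustration; this is not the paper's algorithm:

    # Toy inter-modular worklist: re-analyse a module only when the
    # summary of something it imports changes.
    imports = {"app": ["lib"], "lib": ["core"], "core": []}
    importers = {m: [n for n, deps in imports.items() if m in deps]
                 for m in imports}

    def analyse(mod, summaries):
        # Stand-in for context-sensitive analysis of one module: here a
        # module's summary is 1 + the largest summary among its imports.
        return 1 + max((summaries.get(d, 0) for d in imports[mod]), default=0)

    def incremental_update(changed, summaries):
        work = set(changed)
        while work:
            mod = work.pop()
            new = analyse(mod, summaries)
            if summaries.get(mod) != new:       # summary changed, so
                summaries[mod] = new
                work.update(importers[mod])     # re-analyse importers only

    summaries = {}
    incremental_update(imports, summaries)      # analysis from scratch
    print(summaries)                            # app: 3, lib: 2, core: 1
    incremental_update(["core"], summaries)     # small, isolated change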

    From MinX to MinC: Semantics-Driven Decompilation of Recursive Datatypes

    Reconstructing the meaning of a program from its binary executable is known as reverse engineering; it has a wide range of applications in software security, exposing piracy, legacy systems, etc. Since reversing is ultimately a search for meaning, there is much interest in inferring a type (a meaning) for the elements of a binary in a consistent way. Unfortunately, existing approaches do not guarantee any semantic relevance for their reconstructed types. This paper presents a new and semantically founded approach that provides strong guarantees for the reconstructed types. Key to our approach is the derivation of a witness program in a high-level language alongside the reconstructed types. This witness has the same semantics as the binary, is type correct by construction, and induces a (justifiable) type assignment on the binary. Moreover, the approach effectively yields a type-directed decompiler. We formalise and implement the approach for reversing MinX, an abstraction of x86, to MinC, a type-safe dialect of C with recursive datatypes. Our evaluation compiles a range of textbook C algorithms to MinX and then recovers the original structures.
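    The general flavour of type reconstruction over a binary can be sketched as constraint generation plus unification. The three-instruction snippet, register names, and union-find solver below are illustrative assumptions, not the MinX-to-MinC method:

    # Each register gets a type variable, instructions emit equality
    # constraints, and union-find solves them.
    parent = {}

    def find(t):
        while parent.setdefault(t, t) != t:
            t = parent[t]
        return t

    def unify(a, b):
        ra, rb = find(a), find(b)
        if ra == rb:
            return
        if ra.startswith("t_"):     # point type variables at the other side
            parent[ra] = rb
        else:
            parent[rb] = ra         # (conflicting concrete types not handled)

    # Constraints from: r0 = load(r1); r2 = r0 + 1; store(r1, r2)
    unify("t_r1", "ptr")            # r1 is dereferenced -> pointer
    unify("t_r0", "t_r2")           # r2 copies r0's type
    unify("t_r2", "int")            # +1 with an immediate -> integer

    for v in ("t_r0", "t_r1", "t_r2"):
        print(v, "=>", find(v))     # t_r0 => int, t_r1 => ptr, t_r2 => int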

    Trading inference effort versus size in CNF Knowledge Compilation

    Knowledge Compilation (KC) studies the compilation of Boolean functions f into some formalism F that allows all queries of a certain kind to be answered in polynomial time. Due to its relevance for SAT solving, we concentrate on the query type "clausal entailment" (CE), i.e., whether a clause C follows from f or not, and we consider subclasses of CNF, i.e., clause-sets F with special properties. In this report we do not allow auxiliary variables (except in the Outlook), and thus F needs to be equivalent to f. We consider the hierarchies UC_k <= WC_k, which were introduced by the authors in 2012. Each level allows CE queries. The first two levels are well-known classes for KC: UC_0 = WC_0 is the same as PI as studied in KC, that is, f is represented by the set of all its prime implicates, while UC_1 = WC_1 is the same as UC, the class of unit-refutation complete clause-sets introduced by del Val in 1994. We show that for each k there are (sequences of) Boolean functions with polysize representations in UC_{k+1}, but with an exponential lower bound on representations in WC_k. Such a separation was previously only known for k=0. We also consider PC < UC, the class of propagation-complete clause-sets. We show that there are (sequences of) Boolean functions with polysize representations in UC, while there is an exponential lower bound for representations in PC. These separations are steps towards a general conjecture determining the representation power of the hierarchies PC_k < UC_k <= WC_k. The strong form of this conjecture also allows auxiliary variables, as discussed in depth in the Outlook. Comment: 43 pages, second version with literature updates. Continues the separation results from the discontinued arXiv:1302.442
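    The CE query that makes unit-refutation completeness (UC_1) attractive is cheap to decide: to ask whether F entails a clause C, assume the negation of each literal of C and run unit propagation; a conflict certifies entailment. A minimal Python sketch follows (the two-clause implication chain is an invented example, which happens to be unit-refutation complete):

    def unit_propagate(clauses, assignment):
        # Repeatedly assign forced unit literals; return False on conflict.
        changed = True
        while changed:
            changed = False
            for clause in clauses:
                open_lits = [l for l in clause if -l not in assignment]
                if not open_lits:
                    return False                   # clause falsified: conflict
                if len(open_lits) == 1 and open_lits[0] not in assignment:
                    assignment.add(open_lits[0])   # forced unit literal
                    changed = True
        return True

    def entails(clauses, c):
        # CE query: assume the negation of clause c, then propagate.
        return not unit_propagate(clauses, {-l for l in c})

    F = [[-1, 2], [-2, 3]]        # clauses (x1 -> x2) and (x2 -> x3)
    print(entails(F, [-1, 3]))    # True:  F entails (x1 -> x3)
    print(entails(F, [3]))        # False: F does not entail x3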

    The SeaHorn Verification Framework

    In this paper, we present SeaHorn, a software verification framework. The key distinguishing feature of SeaHorn is its modular design, which separates the concerns of the syntax of the programming language, its operational semantics, and the verification semantics. SeaHorn encompasses several novelties: it (a) encodes verification conditions using an efficient yet precise inter-procedural technique, (b) provides flexibility in the verification semantics to allow different levels of precision, (c) leverages the state of the art in software model checking and abstract interpretation for verification, and (d) uses Horn clauses as an intermediate language to represent verification conditions, which simplifies interfacing with multiple verification tools based on Horn clauses. SeaHorn provides users with a powerful verification tool and researchers with an extensible and customizable framework for experimenting with new software verification techniques. The effectiveness and scalability of SeaHorn are demonstrated by an extensive experimental evaluation using benchmarks from SV-COMP 2015 and real avionics code.
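    In the same style as the earlier Horn-clause sketch, here is what an inter-procedural verification condition can look like as Horn clauses, with one predicate acting as a procedure summary. The two-procedure program and names are invented; this is not SeaHorn's actual encoding:

    from z3 import Fixedpoint, Function, Ints, IntSort, BoolSort, Bool, Implies, And

    # Hypothetical program:
    #   int inc(int x) { return x + 1; }
    #   int main()     { int y = inc(41); assert(y > 0); }
    fp = Fixedpoint()
    fp.set(engine='spacer')

    x, r = Ints('x r')
    inc = Function('inc', IntSort(), IntSort(), BoolSort())  # summary of inc
    err = Bool('err')

    fp.register_relation(inc, err.decl())
    fp.declare_var(x, r)

    fp.add_rule(Implies(r == x + 1, inc(x, r)))           # body of inc
    fp.add_rule(Implies(And(inc(41, r), r <= 0), err))    # assertion failure

    print(fp.query(err))   # unsat: the assertion cannot fail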