
    Decomposition by tree dimension in Horn clause verification

    This volume contains the papers selected among those presented at the 3rd International Workshop on Verification and Program Transformation (VPT 2015), held in London, UK, on April 11th, 2015. Previous editions of the workshop were held in Saint Petersburg (Russia) in 2013 and in Vienna (Austria) in 2014. These papers show that methods and tools developed in the field of program transformation, such as partial evaluation, fold/unfold transformations, and supercompilation, can be applied to the verification of software systems. They also show how program verification methods, such as model checking, abstract interpretation, SAT and SMT solving, and automated theorem proving, can be used to enhance program transformation techniques, thereby making them more powerful and useful in practice.

    Proving Correctness of Imperative Programs by Linearizing Constrained Horn Clauses

    We present a method for verifying the correctness of imperative programs which is based on the automated transformation of their specifications. Given a program prog, we consider a partial correctness specification of the form {φ} prog {ψ}, where the assertions φ and ψ are predicates defined by a set Spec of possibly recursive Horn clauses with linear arithmetic (LA) constraints in their premises (also called constrained Horn clauses). The verification method consists in constructing a set PC of constrained Horn clauses whose satisfiability implies that {φ} prog {ψ} is valid. We highlight some limitations of state-of-the-art constrained Horn clause solving methods, here called LA-solving methods, which prove the satisfiability of the clauses by looking for linear arithmetic interpretations of the predicates. In particular, we prove that there exist specifications that cannot be proved valid by any of those LA-solving methods. These specifications require proving the satisfiability of a set PC of constrained Horn clauses that contains nonlinear clauses (that is, clauses with more than one atom in their premise). We then present a transformation, called linearization, that converts PC into a set of linear clauses (that is, clauses with at most one atom in their premise). We show that several specifications that could not be proved valid by LA-solving methods can be proved valid after linearization. We also present a strategy for performing linearization in an automatic way, and we report on experimental results obtained with a preliminary implementation of our method. (To appear in Theory and Practice of Logic Programming (TPLP), Proceedings of ICLP 2015.)
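
    To make the distinction concrete, here is a small schematic example (ours, not taken from the paper). In the set of clauses below, the second clause for p is nonlinear, because its premise contains two atoms:

        p(X) ← X = 0
        p(X) ← X >= 1 ∧ p(X-1) ∧ q(X-1)
        q(X) ← X = 0
        q(X) ← X >= 1 ∧ q(X-1)

    A simple tupling step, in the spirit of (though much simpler than) the paper's transformation strategy, introduces a new predicate pq(X), intended to hold exactly when both p(X) and q(X) hold. Unfolding the clauses for p and q inside the definition of pq yields a set in which every premise contains at most one atom:

        p(X)  ← X = 0
        p(X)  ← X >= 1 ∧ pq(X-1)
        pq(X) ← X = 0
        pq(X) ← X >= 1 ∧ pq(X-1)

    All clauses are now linear, so solvers that search for a linear arithmetic interpretation of each predicate can be applied to the transformed set.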

    Memoized zipper-based attribute grammars and their higher order extension

    Attribute grammars are a powerful, well-known formalism for implementing and reasoning about programs in a way that is, by design, conveniently modular. In this work we focus on a state-of-the-art zipper-based embedding of classic attribute grammars and higher-order attribute grammars. We improve their execution performance by controlling attribute (re)evaluation by means of memoization techniques. We present the results of our optimizations by comparing their impact on various implementations of different, well-studied attribute grammars and their higher-order extensions.
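
    The underlying idea can be illustrated outside the paper's Haskell setting. The following Python sketch (ours, purely illustrative) shows memoized, demand-driven attribute evaluation: each (node, attribute) pair is computed at most once and cached, so later demands reuse the stored value instead of re-traversing the tree.

        class Node:
            def __init__(self, *children, value=None):
                self.children = children
                self.value = value

        memo = {}  # (node id, attribute name) -> cached value

        def attr(node, name):
            # Demand an attribute; compute it once, reuse it afterwards.
            key = (id(node), name)
            if key not in memo:
                memo[key] = RULES[name](node)
            return memo[key]

        # Semantic rule for a toy synthesized attribute: the minimum leaf value.
        RULES = {
            "locmin": lambda n: n.value if not n.children
                      else min(attr(c, "locmin") for c in n.children),
        }

        tree = Node(Node(value=3), Node(Node(value=1), Node(value=5)))
        print(attr(tree, "locmin"))  # -> 1; all subtree results are now cached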

    Improving dynamic code analysis by code abstraction

    In this paper, we propose a model for code abstraction, based on abstract interpretation, that allows us to improve the precision of a recently proposed static analysis by abstract interpretation of dynamic languages. The problem we tackle is that the analysis may add spurious code to the string-to-execute abstract value, and this code may need an abstract representation in order to remain analyzable. This is precisely what we propose here: we drive the code abstraction by the analysis we have to perform.
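
    As a miniature illustration of the problem (our example, not the paper's abstract domain): when an analysis joins the string-to-execute values coming from two branches, the joined value may denote code that no concrete execution ever builds, and that spurious code must still have an abstract representation for the analysis to proceed.

        def join(a, b):
            # Character-wise join of two constant strings into a pattern:
            # positions where the strings disagree become the wildcard '?'.
            n = max(len(a), len(b))
            a, b = a.ljust(n, '?'), b.ljust(n, '?')
            return ''.join(x if x == y else '?' for x, y in zip(a, b))

        # Two branches build different code strings for eval:
        print(join("x = 1", "x = 2"))  # 'x = ?'
        # Besides 'x = 1' and 'x = 2', the pattern also denotes spurious
        # strings such as 'x = )', which only an abstract representation
        # of code can make analyzable.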

    Interpolant tree automata and their application in Horn clause verification

    This paper investigates the combination of abstract interpretation over the domain of convex polyhedra with interpolant tree automata, in an abstraction-refinement scheme for Horn clause verification. These techniques have been applied separately before, but are combined in a new way in this paper. The role of an interpolant tree automaton is to provide a generalisation of a spurious counterexample during refinement, capturing a possibly infinite set of spurious counterexample traces. In our approach these traces are then eliminated using a transformation of the Horn clauses. We compare this approach with two other methods: one uses interpolant tree automata in an algorithm for trace abstraction and refinement, while the other uses abstract interpretation over the domain of convex polyhedra without the generalisation step. Evaluation of experiments on a number of Horn clause verification problems indicates that combining interpolant tree automata with abstract interpretation increases the power of the verification tool, while sometimes incurring a performance overhead. (In Proceedings VPT 2016, arXiv:1607.0183)
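
    Schematically, the refinement loop combines the two techniques as in the Python skeleton below (ours, purely illustrative; solve, feasible, interpolant_automaton, and difference are hypothetical stand-ins for the polyhedral analyser, the trace feasibility check, the automaton construction, and the clause transformation).

        def verify(clauses, solve, feasible, interpolant_automaton, difference):
            # Abstraction-refinement for Horn clause verification (schematic).
            while True:
                safe, cex = solve(clauses)          # abstract interpretation over polyhedra
                if safe:
                    return "safe"                   # the clauses are satisfiable
                if feasible(cex):
                    return "unsafe"                 # the counterexample trace is real
                aut = interpolant_automaton(cex)    # generalise the spurious trace to a
                                                    # (possibly infinite) set of traces
                clauses = difference(clauses, aut)  # transform the clauses so that every
                                                    # trace accepted by aut is eliminated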

    Optimal Dyck reachability for data-dependence and alias analysis

    A fundamental algorithmic problem at the heart of static analysis is Dyck reachability. The input is a graph whose edges are labeled with different types of opening and closing parentheses, and reachability information is computed via paths whose parentheses are properly matched. We present new results for Dyck reachability problems with applications to alias analysis and data-dependence analysis. Our main contributions, which include improved upper bounds as well as lower bounds that establish optimality guarantees, are as follows. First, we consider Dyck reachability on bidirected graphs, which is the standard way of performing field-sensitive points-to analysis. Given a bidirected graph with n nodes and m edges, we present: (i) an algorithm with worst-case running time O(m + n · α(n)), where α(n) is the inverse Ackermann function, improving the previously known O(n²) time bound; (ii) a matching lower bound showing that our algorithm is optimal with respect to worst-case complexity; and (iii) an optimal average-case upper bound of O(m) time, improving the previously known O(m · log n) bound. Second, we consider the problem of context-sensitive data-dependence analysis, where the task is to obtain analysis summaries of library code in the presence of callbacks. Our algorithm preprocesses libraries in almost linear time, after which the contribution of the library to the complexity of the client analysis is only linear in the number of call sites. Third, we prove that combinatorial algorithms for Dyck reachability on general graphs with truly sub-cubic bounds cannot be obtained without also obtaining sub-cubic combinatorial algorithms for Boolean Matrix Multiplication, a long-standing open problem. We thus establish that the existing combinatorial algorithms for Dyck reachability are (conditionally) optimal for general graphs, and we show that the same hardness holds for graphs of constant treewidth. Finally, we provide a prototype implementation of our algorithms for both alias analysis and data-dependence analysis. Our experimental evaluation demonstrates that the new algorithms significantly outperform all existing methods on the two problems, over real-world benchmarks.
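
    For the bidirected case, the key structural fact is that Dyck reachability is an equivalence relation, which makes union-find applicable. The Python sketch below (ours) illustrates the classical melding rule rather than the paper's optimal algorithm, whose O(m + n · α(n)) bound requires more careful bookkeeping: whenever one node has two outgoing edges carrying the same opening parenthesis, their endpoints are merged, and merges are propagated to a fixed point.

        class DSU:
            def __init__(self, n):
                self.parent = list(range(n))
            def find(self, x):
                while self.parent[x] != x:        # path halving
                    self.parent[x] = self.parent[self.parent[x]]
                    x = self.parent[x]
                return x

        def bidirected_dyck(n, edges):
            # edges: triples (u, label, v), one per open-parenthesis edge u --(label--> v;
            # bidirectedness supplies the matching close-parenthesis reverse edges.
            dsu = DSU(n)
            out = [dict() for _ in range(n)]      # per class: label -> one representative successor
            pending = []                          # endpoint pairs that must be merged
            for u, lab, v in edges:
                w = out[u].setdefault(lab, v)
                if w != v:
                    pending.append((v, w))        # same source, same label: merge the targets
            while pending:
                a, b = pending.pop()
                ra, rb = dsu.find(a), dsu.find(b)
                if ra == rb:
                    continue
                if len(out[ra]) > len(out[rb]):   # merge the smaller edge map into the larger
                    ra, rb = rb, ra
                dsu.parent[ra] = rb
                for lab, v in out[ra].items():
                    w = out[rb].setdefault(lab, v)
                    if w != v:
                        pending.append((v, w))    # a new same-label collision: keep merging
                out[ra] = {}
            return dsu                            # u, v reach each other iff dsu.find(u) == dsu.find(v)

        # x --(1--> y and x --(1--> z force y and z into the same class:
        dsu = bidirected_dyck(3, [(0, "(1", 1), (0, "(1", 2)])
        print(dsu.find(1) == dsu.find(2))         # True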

    Components for automatic Horn clause verification
