1,391 research outputs found

    Components for automatic Horn clause verification


    Combining Forward and Backward Abstract Interpretation of Horn Clauses

    Alternation of forward and backward analyses is a standard technique in abstract interpretation of programs, and it is particularly useful when we wish to prove unreachability of some undesired program states. The current state-of-the-art technique for combining forward (bottom-up, in logic programming terms) and backward (top-down) abstract interpretation of Horn clauses is the query-answer transformation. It transforms a system of Horn clauses so that standard forward analysis can propagate constraints both forward and backward from a goal. Query-answer transformation is effective, but it has issues that we wish to address. To that end, we introduce a new backward collecting semantics, which is suitable for alternating forward and backward abstract interpretation of Horn clauses. We show how the alternation can be used to prove unreachability of the goal and how every subsequent run of an analysis yields a refined model of the system. Experimentally, we observe that combining forward and backward analyses is important for analysing systems that encode questions about reachability in C programs. In particular, the combination that follows our new semantics improves the precision of our own abstract interpreter, including when compared to a forward analysis of a query-answer-transformed system.
    Comment: Francesco Ranzato. 24th International Static Analysis Symposium (SAS), Aug 2017, New York City, United States. Springer, Static Analysis.
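
    As a rough illustration of the query-answer transformation mentioned above, here is a minimal sketch (not the paper's implementation). It assumes a toy representation in which a clause is a pair (head, body) of atom strings, and it glosses over constraints, variable renaming, and proper predicate-symbol renaming; it only shows the shape of the generated clauses: one answer clause per original clause, one query clause per body atom, and an unconditional query for the goal.

        # Minimal sketch of the query-answer (magic-set style) transformation
        # for Horn clauses. A clause is (head, body) with atoms as plain strings;
        # constraints and variable renaming are ignored in this toy version.

        def qa_transform(clauses, goal):
            """Return the query-answer transformed program of `clauses` w.r.t. `goal`."""
            def q(atom):            # "query" version of an atom
                return "query_" + atom
            def a(atom):            # "answer" version of an atom
                return "ans_" + atom

            transformed = [(q(goal), [])]   # the goal is queried unconditionally
            for head, body in clauses:
                # answer clause: the head is derivable only if it was queried
                # and every body atom has an answer
                transformed.append((a(head), [q(head)] + [a(b) for b in body]))
                # query clauses: body atom i is queried once the head has been
                # queried and the preceding body atoms have answers
                for i, b in enumerate(body):
                    transformed.append((q(b), [q(head)] + [a(x) for x in body[:i]]))
            return transformed

        # tiny usage example
        program = [("p(X)", ["q(X)", "r(X)"]),
                   ("q(X)", []),
                   ("r(X)", [])]
        for head, body in qa_transform(program, "p(X)"):
            print(head, ":-", ", ".join(body) if body else "true")

    Running a standard forward (bottom-up) analysis on the transformed clauses then propagates information both from the facts and from the goal, which is the role the query-answer transformation plays in the abstract above.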

    Solving non-linear Horn clauses using a linear Horn clause solver

    In this paper we show that checking satisfiability of a set of non-linear Horn clauses (also called a non-linear Horn clause program) can be achieved using a solver for linear Horn clauses. We achieve this by interleaving a program transformation with a satisfiability checker for linear Horn clauses (also called a solver for linear Horn clauses). The program transformation is based on the notion of tree dimension, which we apply to a set of non-linear clauses, yielding a set whose derivation trees have bounded dimension. Such a set of clauses can be linearised. The main algorithm then proceeds by applying the linearisation transformation and the linear Horn clause solver to a sequence of sets of clauses with successively increasing dimension bound. The approach is further developed by using a solution of clauses of lower dimension to (partially) linearise clauses of higher dimension. We constructed a prototype implementation of this approach and performed experiments on a set of verification problems; the results show some promise.
    Comment: In Proceedings HCVS2016, arXiv:1607.0403
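
    As a hedged illustration of the tree-dimension notion the transformation is built on (this is not the authors' tooling), the sketch below computes the dimension of a derivation tree given as nested (label, children) tuples: a leaf has dimension 0, and an internal node has the maximum dimension of its children, increased by one when that maximum is attained by two or more children.

        # Sketch: tree dimension (a Strahler-number style measure) of a
        # derivation tree represented as (label, [children]) tuples.

        def dimension(tree):
            _label, children = tree
            if not children:
                return 0                      # a leaf has dimension 0
            dims = sorted((dimension(c) for c in children), reverse=True)
            if len(dims) >= 2 and dims[0] == dims[1]:
                return dims[0] + 1            # maximum reached by at least two children
            return dims[0]

        # usage: a derivation tree for a non-linear clause p :- q, r
        tree = ("p", [("q", [("s", [])]),
                      ("r", [("s", []), ("s", [])])])
        print(dimension(tree))                # prints 1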

    Decomposition by tree dimension in Horn clause verification

    This volume contains the papers selected from those presented at the 3rd International Workshop on Verification and Program Transformation (VPT 2015), held in London, UK, on April 11th, 2015. Previous editions of the workshop were held in Saint Petersburg (Russia) in 2013 and in Vienna (Austria) in 2014. The papers show that methods and tools developed in the field of program transformation, such as partial evaluation, fold/unfold transformations, and supercompilation, can be applied in the verification of software systems. They also show how some program verification methods, such as model checking techniques, abstract interpretation, SAT and SMT solving, and automated theorem proving, can be used to enhance program transformation techniques, thereby making these techniques more powerful and useful in practice.

    Precondition Inference via Partitioning of Initial States

    Precondition inference is a non-trivial task with several applications in program analysis and verification. We present a novel iterative method for automatically deriving sufficient preconditions for safety and unsafety of programs, which introduces a new dimension of modularity. Each iteration maintains over-approximations of the sets of safe and unsafe initial states. We repeatedly use the current abstractions to partition the program's initial states into those known to be safe, those known to be unsafe, and the unknown, and we construct a revised program focusing on those initial states that are not yet known to be safe or unsafe. An experimental evaluation of the method on a set of software verification benchmarks shows that it can solve problems which are not solvable using previous methods.
    Comment: 19 pages, 8 figures
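
    The iteration described above can be sketched structurally under toy assumptions: initial states are modelled as a finite set of integers, and over_approx_safe / over_approx_unsafe are hypothetical stand-ins for the analyses, each returning an over-approximation of the safe (respectively unsafe) initial states among those still unclassified. The real method works on programs and abstract domains rather than explicit integer sets; this sketch only shows the partitioning logic.

        # Structural sketch of precondition inference by partitioning initial
        # states. The two analysis functions are hypothetical stand-ins.

        def infer_preconditions(init_states, over_approx_safe, over_approx_unsafe,
                                max_iters=10):
            known_safe, known_unsafe = set(), set()
            unknown = set(init_states)
            for _ in range(max_iters):
                if not unknown:
                    break
                safe_over = over_approx_safe(unknown)      # superset of the safe states in `unknown`
                unsafe_over = over_approx_unsafe(unknown)  # superset of the unsafe states in `unknown`
                # a state outside the unsafe over-approximation is definitely safe,
                # and vice versa; the rest stay unknown for the next, restricted program
                known_safe |= unknown - unsafe_over
                known_unsafe |= unknown - safe_over
                next_unknown = unknown & safe_over & unsafe_over
                if next_unknown == unknown:                # no progress; give up
                    break
                unknown = next_unknown
            return known_safe, known_unsafe, unknown

        # toy usage: states 0..9 are "really" unsafe iff >= 7; the fake analyses
        # lose one state of precision on large sets and are exact on small ones,
        # mimicking how re-analysing the restricted program refines the result
        def fake_safe(unknown):
            slack = 1 if len(unknown) > 4 else 0
            return {x for x in unknown if x < 7 + slack}

        def fake_unsafe(unknown):
            slack = 1 if len(unknown) > 4 else 0
            return {x for x in unknown if x >= 7 - slack}

        print(infer_preconditions(range(10), fake_safe, fake_unsafe))
        # -> ({0, 1, 2, 3, 4, 5, 6}, {7, 8, 9}, set())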