
    Spatial Interpolants

    We propose Splinter, a new technique for proving properties of heap-manipulating programs that marries (1) a new separation logic-based analysis for heap reasoning with (2) an interpolation-based technique for refining heap-shape invariants with data invariants. Splinter is property directed, precise, and produces counterexample traces when a property does not hold. Using the novel notion of spatial interpolants modulo theories, Splinter can infer complex invariants over general recursive predicates, e.g., of the form "all elements in a linked list are even" or "a binary tree is sorted". Furthermore, we treat interpolation as a black box, which gives us the freedom to encode data manipulation in any suitable theory for a given program (e.g., bit vectors, arrays, or linear arithmetic), so that our technique immediately benefits from any future advances in SMT solving and interpolation.
    Comment: Short version published in ESOP 2015
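
    To make the flavour of such a refined invariant concrete, here is a minimal Python sketch (not Splinter's code; the names Node and list_all_even are invented for illustration) of a heap-shape predicate strengthened with a data invariant, in the spirit of "all elements in a linked list are even":

        # Illustrative only: a recursive predicate mirroring
        #   list(x) <=> x = nil  \/  (x |-> (d, n) * list(n) /\ even(d))
        # i.e. a shape invariant (acyclic list) refined with a data
        # invariant (every element is even).
        from dataclasses import dataclass
        from typing import Optional

        @dataclass
        class Node:
            data: int
            next: "Optional[Node]"

        def list_all_even(x: Optional[Node]) -> bool:
            if x is None:
                return True                        # nil case: holds trivially
            return x.data % 2 == 0 and list_all_even(x.next)

        print(list_all_even(Node(2, Node(4, None))))   # True
        print(list_all_even(Node(2, Node(3, None))))   # False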

    Combining Forward and Backward Abstract Interpretation of Horn Clauses

    Alternation of forward and backward analyses is a standard technique in abstract interpretation of programs, which is particularly useful when we wish to prove unreachability of some undesired program states. The current state-of-the-art technique for combining forward (bottom-up, in logic programming terms) and backward (top-down) abstract interpretation of Horn clauses is the query-answer transformation. It transforms a system of Horn clauses such that standard forward analysis can propagate constraints both forward and backward from a goal. Query-answer transformation is effective but has issues that we wish to address. To that end, we introduce a new backward collecting semantics, which is suitable for alternating forward and backward abstract interpretation of Horn clauses. We show how the alternation can be used to prove unreachability of the goal and how every subsequent run of an analysis yields a refined model of the system. Experimentally, we observe that combining forward and backward analyses is important for analysing systems that encode questions about reachability in C programs. In particular, the combination that follows our new semantics improves the precision of our own abstract interpreter, including when compared to a forward analysis of a query-answer-transformed system.
    Comment: Published in Francesco Ranzato (ed.), 24th International Static Analysis Symposium (SAS), Aug 2017, New York City, United States. Springer, Static Analysis
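
    For readers unfamiliar with it, the following Python sketch shows the shape of the query-answer transformation the abstract refers to (the clause representation, heads plus body-atom lists with constraints elided, is our assumption, not the paper's code):

        # For each clause  h :- b1,...,bn  the transformation emits
        #   answer clause:  h_ans    :- h_query, b1_ans, ..., bn_ans
        #   query clauses:  bi_query :- h_query, b1_ans, ..., b(i-1)_ans
        # plus an unconditional query for the goal, so that a standard
        # forward (bottom-up) analysis of the result also propagates
        # information top-down from the goal.
        def query_answer(clauses, goal):
            out = [(f"{goal}_query", [])]
            for head, body in clauses:
                out.append((f"{head}_ans",
                            [f"{head}_query"] + [f"{b}_ans" for b in body]))
                for i, b in enumerate(body):
                    out.append((f"{b}_query",
                                [f"{head}_query"] + [f"{x}_ans" for x in body[:i]]))
            return out

        # Tiny example with goal "unsafe":
        print(query_answer([("unsafe", ["p"]), ("p", ["q", "r"])], "unsafe"))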

    Learning to Understand by Evolving Theories

    In this paper, we describe an approach that enables an autonomous system to infer the semantics of a command (i.e. a symbol sequence representing an action) in terms of the relations between changes in the observations and the action instances. We present a method for inducing a theory (i.e. a semantic description) of the meaning of a command in terms of a minimal set of background knowledge. All we are given is a sequence of observations, from which we extract what kinds of effects were caused by performing the command. In this way, we obtain a description of the semantics of the action and, hence, a definition.
    Comment: KRR Workshop at ICLP 201
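
    As a rough illustration of the induction step (the representation below, ground facts diffed across before/after observation pairs, is our simplification, not the paper's method):

        # Induce a command's effect theory by intersecting, over all
        # observed instances, the facts the command added and removed.
        def induce_effects(instances):
            """instances: list of (before, after) sets of ground facts."""
            adds = set.intersection(*[after - before for before, after in instances])
            dels = set.intersection(*[before - after for before, after in instances])
            return adds, dels

        obs = [({"at(a)"}, {"at(b)"}),
               ({"at(a)", "light_on"}, {"at(b)", "light_on"})]
        print(induce_effects(obs))   # ({'at(b)'}, {'at(a)'})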

    Tree automata-based refinement with application to Horn clause verification


    Convex polyhedral abstractions, specialisation and property-based predicate splitting in Horn clause verification

    We present an approach to constrained Horn clause (CHC) verification combining three techniques: abstract interpretation over a domain of convex polyhedra, specialisation of the constraints in CHCs using abstract interpretation of query-answer transformed clauses, and refinement by splitting predicates. The purpose of the work is to investigate how analysis and transformation tools developed for constraint logic programs (CLP) can be applied to the Horn clause verification problem. Abstract interpretation over convex polyhedra is capable of deriving sophisticated invariants and, when used in conjunction with specialisation for propagating constraints, can frequently solve challenging verification problems. This is a contribution in itself, but refinement is needed when it fails, and the question of how to refine convex polyhedral analyses has not been studied much. We present a refinement technique based on interpolants derived from a counterexample trace; these are used to drive a property-based specialisation that splits predicates, leading in turn to more precise convex polyhedral analyses. The process of specialisation, analysis and splitting can be repeated, in a manner similar to the CEGAR and iterative specialisation approaches.
    Comment: In Proceedings HCVS 2014, arXiv:1412.082
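
    A schematic view of the predicate-splitting step (clause and constraint encodings here are illustrative assumptions; a full transformation would also rename body occurrences of the split predicate):

        # Given an interpolant phi from a counterexample trace, split the
        # clauses defining predicate p into a phi-version and a not-phi
        # version, so the convex polyhedral analysis no longer joins the
        # two cases into one convex hull.
        def split_predicate(clauses, pred, phi):
            out = []
            for head, constraint, body in clauses:
                if head == pred:
                    out.append((f"{pred}_1", f"{constraint}, {phi}", body))
                    out.append((f"{pred}_2", f"{constraint}, not({phi})", body))
                else:
                    out.append((head, constraint, body))
            return out

        # Splitting p on x>=0 keeps the polyhedra for x>=0 and x<0 apart
        # instead of taking their convex hull (which covers all of x).
        print(split_predicate([("p", "x=y", ["q"]), ("q", "y>0", [])],
                              "p", "x>=0"))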

    Interpolant tree automata and their application in Horn clause verification

    This paper investigates the combination of abstract interpretation over the domain of convex polyhedra with interpolant tree automata, in an abstraction-refinement scheme for Horn clause verification. These techniques have been applied separately before, but are combined in a new way in this paper. The role of an interpolant tree automaton is to provide a generalisation of a spurious counterexample during refinement, capturing a possibly infinite set of spurious counterexample traces. In our approach these traces are then eliminated using a transformation of the Horn clauses. We compare this approach with two other methods: one uses interpolant tree automata in an algorithm for trace abstraction and refinement, while the other uses abstract interpretation over the domain of convex polyhedra without the generalisation step. Evaluation of the results of experiments on a number of Horn clause verification problems indicates that combining interpolant tree automata with abstract interpretation gives some increase in the power of the verification tool, while sometimes incurring a performance overhead.
    Comment: In Proceedings VPT 2016, arXiv:1607.0183
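
    The following sketch shows what recognising a trace with a bottom-up tree automaton looks like; the states and rules are invented for illustration, whereas real interpolant automata are derived from the interpolants of a spurious trace:

        # A derivation tree is (clause_id, [subtrees]); a transition maps
        # (clause_id, child_states) -> state. The automaton stands for a
        # (possibly infinite) set of counterexample traces to eliminate.
        def run(aut, tree):
            sym, kids = tree
            states = tuple(run(aut, k) for k in kids)
            return None if None in states else aut["rules"].get((sym, states))

        def accepts(aut, tree):
            return run(aut, tree) in aut["final"]

        A = {"rules": {("c2", ()): "q0",          # a fact clause
                       ("c1", ("q0",)): "q1"},    # a clause with one body atom
             "final": {"q1"}}
        print(accepts(A, ("c1", [("c2", [])])))   # True: trace is recognised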

    Precondition Inference via Partitioning of Initial States

    Precondition inference is a non-trivial task with several applications in program analysis and verification. We present a novel iterative method for automatically deriving sufficient preconditions for safety and unsafety of programs, which introduces a new dimension of modularity. Each iteration maintains over-approximations of the sets of safe and unsafe initial states. We then repeatedly use the current abstractions to partition the program's initial states into those known to be safe, those known to be unsafe, and the unknown, and construct a revised program focusing on those initial states that are not yet known to be safe or unsafe. An experimental evaluation of the method on a set of software verification benchmarks shows that it can solve problems which are not solvable using previous methods.
    Comment: 19 pages, 8 figures
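
    A sketch of the iteration as we read it from the abstract (the analyse helper is a placeholder for the underlying safety/unsafety analysis, and finite sets stand in for symbolic abstractions; this is not the paper's code):

        # Maintain over-approximations of the safe and unsafe initial
        # states; states outside the unsafe over-approximation are known
        # safe, and vice versa. Re-analyse a program restricted to the
        # still-unknown initial states until none remain (or a budget
        # runs out), yielding sufficient preconditions for both safety
        # and unsafety.
        def infer_preconditions(program, init, analyse, max_iters=10):
            safe_over, unsafe_over = set(init), set(init)   # trivially sound
            known_safe, known_unsafe = set(), set()
            for _ in range(max_iters):
                known_safe = init - unsafe_over
                known_unsafe = init - safe_over
                unknown = init - known_safe - known_unsafe
                if not unknown:
                    break
                safe_over, unsafe_over = analyse(program, unknown)
                safe_over |= known_safe
                unsafe_over |= known_unsafe
            return known_safe, known_unsafe

        # Toy use: a one-shot analysis that classifies states exactly.
        demo = lambda prog, states: ({s for s in states if s >= 0},
                                     {s for s in states if s < 0})
        print(infer_preconditions(None, {-2, -1, 0, 1}, demo))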

    Symbolic methodology for numeric data mining

    Currently, statistical and artificial neural network methods dominate data mining applications. Alternative relational (symbolic) data mining methods have shown their effectiveness in robotics, drug design, and other areas. Neural networks and decision tree methods have serious limitations in capturing relations that may take a variety of forms. Learning systems based on symbolic first-order logic (FOL) representations capture relations naturally. The learned regularities are understandable directly in domain terms, which helps to build a domain theory. This paper describes relational data mining methodology and develops it further for numeric data such as financial and spatial data. This includes (1) comparing the attribute-value representation with the relational representation, (2) defining a new concept of joint relational representations, (3) describing a process for their use, and (4) the Discovery algorithm. The methodology handles numerical and interval forecasting tasks uniformly, as well as classification tasks. It is shown that Relational Data Mining (RDM) can handle multiple constraints, initial rules, and background knowledge very naturally to reduce the search space, in contrast with attribute-based data mining. Theoretical concepts are illustrated with examples from the financial and image processing domains.
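
    A toy contrast between the two representations discussed above (data invented for illustration):

        # Attribute-value view: one fixed-width record per example.
        prices = {1: 100, 2: 105, 3: 103}        # day -> closing price

        # Relational view: background predicates over pairs of objects,
        # which a first-order learner can chain into rules such as
        #   uptrend(T) :- greater(price(T), price(T-1)).
        def greater(t1, t2):
            return prices[t1] > prices[t2]

        # A relational regularity quantifies over pairs of days, which a
        # flat attribute vector cannot express without encoding every pair.
        uptrend = [t for t in prices if t - 1 in prices and greater(t, t - 1)]
        print(uptrend)                           # [2]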