
    Interpolation Properties and SAT-based Model Checking

    Craig interpolation is a widespread method in verification, with important applications such as Predicate Abstraction, CounterExample Guided Abstraction Refinement and Lazy Abstraction With Interpolants. Most state-of-the-art model checking techniques based on interpolation require collections of interpolants to satisfy particular properties, which we refer to as "collectives"; these properties do not hold in general for all interpolation systems and have to be established for each particular system and verification environment. Nevertheless, no systematic approach exists that correlates the individual interpolation systems and compares the necessary collectives. This paper proposes a uniform framework, which encompasses (and generalizes) the most common collectives exploited in verification. We use it for a systematic study of the collectives and of the constraints they pose on propositional interpolation systems used in SAT-based model checking.
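    For reference, the standard requirements on a Craig interpolant, together with one typical collective (path interpolation), can be written as follows; this is textbook material, not a result specific to the paper above.

```latex
% Craig interpolant I for an unsatisfiable conjunction A /\ B:
\[
  A \models I, \qquad I \wedge B \models \bot, \qquad
  \mathrm{vars}(I) \subseteq \mathrm{vars}(A) \cap \mathrm{vars}(B).
\]
% One common collective, path interpolation, for an unsatisfiable
% sequence A_1 /\ ... /\ A_n, with I_0 = \top and I_n = \bot:
\[
  I_{k-1} \wedge A_k \;\models\; I_k \qquad (1 \le k \le n).
\]
```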

    Domain-Type-Guided Refinement Selection Based on Sliced Path Prefixes

    Abstraction is a successful technique in software verification, and interpolation on infeasible error paths is a successful approach to automatically detect the right level of abstraction in counterexample-guided abstraction refinement. Because the interpolants have a significant influence on the quality of the abstraction, and thus the effectiveness of the verification, an algorithm for deriving the best possible interpolants is desirable. We present an analysis-independent technique that makes it possible to extract several alternative sequences of interpolants from one given infeasible error path, if there are several reasons for infeasibility in the error path. We take as input the given infeasible error path and apply a slicing technique to obtain a set of error paths that are more abstract than the original error path but still infeasible, each for a different reason. The (more abstract) constraints of the new paths can be passed to a standard interpolation engine, in order to obtain a set of interpolant sequences, one for each new path. The analysis can then choose from this set of interpolant sequences and select the most appropriate one, instead of being bound to the single interpolant sequence that the interpolation engine would normally return. For example, we can select based on domain types of variables in the interpolants, prefer to avoid loop counters, or compare with templates for potential loop invariants, and thus control what kind of information occurs in the abstraction of the program. We implemented the new algorithm in the open-source verification framework CPAchecker and show that our proof-technique-independent approach yields a significant improvement of the effectiveness and efficiency of the verification process.
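    As a rough illustration of the idea (a minimal sketch, not CPAchecker's implementation), the following Python snippet uses Z3 unsat cores to extract several more abstract but still infeasible sub-paths from one infeasible error path. The example path, the function name sliced_infeasible_paths, and the use of the z3-solver package are assumptions of this sketch; the interpolation step itself is left to an external interpolation engine.

```python
from z3 import Int, Bool, Solver, Implies, unsat

x, y, i = Int('x'), Int('y'), Int('i')

# One infeasible error path; it is infeasible for two independent reasons
# (the constraints on x and the constraints on y).
path = [x == 0, y == 5, i == 0, x > 7, y < 3]

def sliced_infeasible_paths(constraints):
    """Collect sub-paths of the error path that are infeasible, each for a different reason."""
    solver = Solver()
    marks = [Bool('c%d' % k) for k in range(len(constraints))]
    for m, c in zip(marks, constraints):
        solver.add(Implies(m, c))            # marker m enables constraint c
    enabled, slices = list(marks), []
    while enabled and solver.check(*enabled) == unsat:
        core = {str(m) for m in solver.unsat_core()}     # one reason (not necessarily minimal)
        slices.append([c for m, c in zip(marks, constraints) if str(m) in core])
        # Disable this reason's constraints so the next round must find another reason.
        enabled = [m for m in enabled if str(m) not in core]
    return slices

for sliced in sliced_infeasible_paths(path):
    print(sliced)   # each slice could be handed to an interpolation engine
```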

    Controlled and effective interpolation

    Model checking is a well-established technique for verifying systems exhaustively and automatically. The state-space explosion, known as the main difficulty in model checking scalability, has been successfully approached by symbolic model checking, which represents programs using logic, usually at the level of propositional logic or first-order theories. Craig interpolation is one of the most successful abstraction techniques used in symbolic methods. Interpolants can be efficiently generated from proofs of unsatisfiability, and have been used as a means of over-approximation to generate inductive invariants, refinement predicates, and function summaries. However, interpolation is still not fully understood. For several theories it is only possible to generate one interpolant, giving the interpolation-based application no chance of further optimization via interpolation. For the theories that have interpolation systems able to generate different interpolants, it is not understood what makes one interpolant better than another, or how to generate the most suitable ones for a particular verification task. The goal of this thesis is to address the problems of how to generate multiple interpolants for theories that still lack this flexibility in their interpolation algorithms, and how to aim at good interpolants. This thesis extends the state of the art by introducing novel interpolation frameworks for different theories. For propositional logic, this work provides a thorough theoretical analysis showing which properties are desirable in a labeling function for the Labeled Interpolation Systems framework (LIS). The Proof-Sensitive labeling function is presented, and we prove that it generates interpolants with the smallest number of Boolean connectives in the entire LIS framework. Two variants that aim at controlling the logical strength of propositional interpolants while maintaining a small size are given. The new interpolation algorithms are compared to previous ones from the literature in different model checking settings, showing that they consistently lead to better overall verification performance. The Equalities and Uninterpreted Functions (EUF)-interpolation system, presented in this thesis, is a duality-based interpolation framework capable of generating multiple interpolants for a single proof of unsatisfiability, and provides control over the logical strength of the interpolants it generates using labeling functions. The labeling functions can be theoretically compared with respect to their strength, and we prove that two of them generate the interpolants with the smallest number of equalities. Our experiments follow the theory, showing that the generated interpolants indeed have different logical strength. We combine propositional and EUF interpolation in a model checking setting, and show that the strength of the interpolation algorithms for different theories has to be aligned in order to generate smaller interpolants. This work also introduces the Linear Real Arithmetic (LRA)-interpolation system, an interpolation framework for LRA. The framework is able to generate infinitely many interpolants of different logical strength using the duality of interpolants. The strength of the LRA interpolants can be controlled by a normalized strength factor, which makes it straightforward for an interpolation-based application to choose the level of strength it wants for the interpolants.
    Our experiments with the LRA-interpolation system and a model checker show that it is very important for the application to be able to fine-tune the strength of the LRA interpolants in order to achieve optimal performance. The interpolation frameworks were implemented and form the interpolation module of OpenSMT2, an efficient open-source SMT solver. OpenSMT2 has been integrated into the propositional interpolation-based model checkers FunFrog and eVolCheck, and into the first-order interpolation-based model checker HiFrog. This thesis presents real-life model checking experiments using the novel interpolation frameworks and the aforementioned tools, showing the viability and strengths of the techniques.
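    To make the Labeled Interpolation Systems framework mentioned above concrete, here is a minimal self-contained Python sketch of the standard LIS rules (an illustration only, not the OpenSMT2 implementation); the toy refutation, the Node class, and the tuple encoding of formulas are assumptions made for this sketch.

```python
from dataclasses import dataclass

P, Q = 1, 2                      # variables p and q; literals are signed ints

@dataclass
class Node:
    clause: frozenset            # set of literals of this proof node
    source: str = None           # 'A' or 'B' for leaf clauses, None for resolvents
    pivot: int = None            # pivot variable; the left premise holds the positive pivot
    left: 'Node' = None
    right: 'Node' = None

def label(var, a_vars, b_vars, shared_label):
    """Uniform labelings: A-local -> 'a', B-local -> 'b', shared -> shared_label
    ('b' gives McMillan's system, 'ab' Pudlak's, 'a' the dual of McMillan's)."""
    if var in a_vars and var in b_vars:
        return shared_label
    return 'a' if var in a_vars else 'b'

def itp(node, a_vars, b_vars, shared_label):
    """Partial interpolant of a proof node; ('or', []) is false, ('and', []) is true."""
    lab = lambda lit: label(abs(lit), a_vars, b_vars, shared_label)
    if node.source == 'A':       # leaf from A: disjunction of its b-labeled literals
        return ('or', [('lit', l) for l in node.clause if lab(l) == 'b'])
    if node.source == 'B':       # leaf from B: conjunction of negated a-labeled literals
        return ('and', [('lit', -l) for l in node.clause if lab(l) == 'a'])
    i1 = itp(node.left, a_vars, b_vars, shared_label)
    i2 = itp(node.right, a_vars, b_vars, shared_label)
    p = lab(node.pivot)
    if p == 'a':
        return ('or', [i1, i2])
    if p == 'b':
        return ('and', [i1, i2])
    return ('and', [('or', [('lit', node.pivot), i1]),    # pivot labeled 'ab'
                    ('or', [('lit', -node.pivot), i2])])

# Toy problem: A = {p} /\ {-p, q}, B = {-q}.  Refutation: resolve on p, then on q.
n1 = Node(frozenset({P}), source='A')
n2 = Node(frozenset({-P, Q}), source='A')
n3 = Node(frozenset({-Q}), source='B')
n4 = Node(frozenset({Q}), pivot=P, left=n1, right=n2)
root = Node(frozenset(), pivot=Q, left=n4, right=n3)

a_vars, b_vars = {P, Q}, {Q}
for name, shared in [('McMillan', 'b'), ('Pudlak', 'ab'), ('dual McMillan', 'a')]:
    print(name, itp(root, a_vars, b_vars, shared))
```

    On this tiny refutation all three labelings simplify to the interpolant q; on larger proofs they differ in strength and size, which is precisely the degree of freedom the thesis studies.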

    Instantiation-Based Interpolation for Quantified Formulae

    Interpolation has proven highly effective in program analysis and verification, e.g., to derive invariants or new abstractions. While interpolation for quantifier-free formulae is understood quite well, it turns out to be challenging in the presence of quantifiers. In this talk we present modifications to instantiation-based SMT solvers and to McMillan's interpolation algorithm in order to compute quantified interpolants.
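    As a small illustrative instance (constructed here, not taken from the talk) of why quantifiers are unavoidable: if the only symbol shared by A and B is the predicate q, any interpolant must itself be quantified.

```latex
\[
  A = \forall x.\, q(x, f(x)), \qquad
  B = \exists y.\, \forall z.\, \neg q(y, z), \qquad
  I = \forall x.\, \exists w.\, q(x, w)
\]
% A |= I (take w = f(x)); I /\ B is unsatisfiable; I mentions only the shared
% predicate q, and no quantifier-free formula over q alone separates A from B.
```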

    Quantifier-Free Interpolation of a Theory of Arrays

    The use of interpolants in model checking is becoming an enabling technology to allow fast and robust verification of hardware and software. The application of encodings based on the theory of arrays, however, is limited by the impossibility of deriving quantifier-free interpolants in general. In this paper, we show that it is possible to obtain quantifier-free interpolants for a Skolemized version of the extensional theory of arrays. We prove this in two ways: (1) non-constructively, by using the model-theoretic notion of amalgamation, which is known to be equivalent to admitting quantifier-free interpolation for universal theories; and (2) constructively, by designing an interpolating procedure based on solving equations between array updates. (Interestingly, rewriting techniques are used in the key steps of the solver and its proof of correctness.) To the best of our knowledge, this is the first successful attempt at computing quantifier-free interpolants for a variant of the theory of arrays with extensionality.
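    For orientation, the flavour of the Skolemized setting can be sketched as follows: the usual read/write axioms plus a Skolem function diff that witnesses extensionality (this is a paraphrase; the exact axiomatization is the one given in the paper).

```latex
\begin{align*}
  & \mathit{rd}(\mathit{wr}(a, i, v), i) = v \\
  & i \neq j \;\rightarrow\; \mathit{rd}(\mathit{wr}(a, i, v), j) = \mathit{rd}(a, j) \\
  & a \neq b \;\rightarrow\;
    \mathit{rd}(a, \mathit{diff}(a, b)) \neq \mathit{rd}(b, \mathit{diff}(a, b))
\end{align*}
% The binary Skolem function diff replaces the quantified extensionality axiom,
% so equations between wr-terms can be solved without introducing quantifiers.
```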

    Finding Kernels in Non-Linear Data-Driven CHC Solving

    Program verification has seen a lot of progress, but it is still unable to automatically find proofs for industrial programs. This paper builds on data-driven approaches from previous work [11] to provide a more robust automatic prover for programs with non-linear loop invariants. It does so by attempting to find the correct kernel for the relation, i.e., one under which the invariant becomes linear. This is an easy addition to existing systems and can be used with any data-driven approach, allowing it to be implemented easily on top of them. By finding a suitable kernel, many difficult non-linear invariants can be found easily.
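    A minimal sketch of the kernel idea (an illustration only, not the prover from the paper; the sample data, the monomial feature map, and the use of numpy/scikit-learn are assumptions): lift loop-state samples with non-linear features so that a non-linear invariant becomes a half-space in the lifted space.

```python
import numpy as np
from sklearn.svm import LinearSVC

# Samples of loop states (x, y): 'reachable' states satisfy roughly x*y <= 20,
# 'bad' states violate it.  In a data-driven CHC solver these would come from
# concrete executions and counterexamples.
reachable = np.array([[1, 2], [2, 5], [4, 4], [3, 6], [5, 4]])
bad       = np.array([[5, 6], [7, 4], [6, 8], [9, 3], [4, 9]])

def feature_map(points):
    """Monomial 'kernel': (x, y) -> (x, y, x*y, x^2, y^2)."""
    x, y = points[:, 0], points[:, 1]
    return np.column_stack([x, y, x * y, x ** 2, y ** 2])

X = np.vstack([feature_map(reachable), feature_map(bad)])
labels = np.array([1] * len(reachable) + [0] * len(bad))

# A separator that is linear in the lifted features corresponds to a
# non-linear (here quadratic) candidate invariant over x and y.
clf = LinearSVC(C=100.0, max_iter=10000).fit(X, labels)
w, b = clf.coef_[0], clf.intercept_[0]
print(dict(zip(['x', 'y', 'x*y', 'x^2', 'y^2'], w.round(3))), round(float(b), 3))
# The half-space  w . phi(x, y) + b >= 0  is only a *candidate* invariant;
# a verifier still has to check that it is inductive.
```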

    Uniform Interpolants in EUF: Algorithms using DAG-representations

    The concept of a uniform interpolant for a quantifier-free formula with respect to a given list of symbols to be eliminated, while well known in the logic literature, has long been unknown to the formal methods and automated reasoning community. This concept is precisely defined here. Two algorithms for computing quantifier-free uniform interpolants in the theory of equality over uninterpreted symbols (EUF), endowed with a list of symbols to be eliminated, are proposed. The first algorithm is non-deterministic and generates a uniform interpolant expressed as a disjunction of conjunctions of literals, whereas the second algorithm gives a compact representation of a uniform interpolant as a conjunction of Horn clauses. Both algorithms exploit efficient dedicated DAG representations of terms. Correctness and completeness proofs are supplied, using arguments combining rewrite techniques with model theory.
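    A tiny worked instance of the concept (a standard style of example, not one taken from the paper): in EUF, eliminating the symbol a from F below yields the uniform interpolant on the right, which entails every quantifier-free consequence of F that does not mention a.

```latex
\[
  F \;=\; a = f(b) \,\wedge\, g(a) = c
  \qquad\leadsto\qquad
  \mathit{UI}(F) \;=\; g(f(b)) = c
\]
% Any model of g(f(b)) = c extends to a model of F by setting a := f(b),
% so UI(F) is equivalent to the second-order quantification "exists a. F".
```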