
    New results on rewrite-based satisfiability procedures

    Program analysis and verification require decision procedures to reason on theories of data structures. Many problems can be reduced to the satisfiability of sets of ground literals in a theory T. If a sound and complete inference system for first-order logic is guaranteed to terminate on T-satisfiability problems, any theorem-proving strategy with that system and a fair search plan is a T-satisfiability procedure. We prove termination of a rewrite-based first-order engine on the theories of records, integer offsets, integer offsets modulo and lists. We give a modularity theorem stating sufficient conditions for termination on a combination of theories, given termination on each. The above theories, as well as others, satisfy these conditions. We introduce several sets of benchmarks on these theories and their combinations, including both parametric synthetic benchmarks to test scalability, and real-world problems to test performance on huge sets of literals. We compare the rewrite-based theorem prover E with the validity checkers CVC and CVC Lite. Contrary to the folklore that a general-purpose prover cannot compete with reasoners with built-in theories, the experiments are overall favorable to the theorem prover, showing that the rewriting approach is not only elegant and conceptually simple, but also has important practical implications. Comment: To appear in the ACM Transactions on Computational Logic, 49 pages.
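
    As an illustration of the kind of T-satisfiability problem discussed above, the sketch below encodes a small set of ground literals in the theory of lists (cons/car/cdr over integers) and checks it with Z3's Python API rather than with a rewrite-based first-order prover such as E; the literals and the constant names are invented for the example.

```python
from z3 import Datatype, IntSort, Int, Const, Solver, sat

# Theory of lists over integers, modeled as an algebraic datatype.
List = Datatype('List')
List.declare('nil')
List.declare('cons', ('car', IntSort()), ('cdr', List))
List = List.create()

x = Int('x')
l1, l2 = Const('l1', List), Const('l2', List)

s = Solver()
# A small set of ground literals in the theory of lists:
s.add(l1 == List.cons(x, l2))   # l1 = cons(x, l2)
s.add(List.car(l1) == 5)        # car(l1) = 5
s.add(l2 != List.nil)           # l2 is not nil
assert s.check() == sat         # the literals are T-satisfiable
```

    A rewrite-based procedure would instead saturate such literals together with the list axioms; the point here is only to show the shape of the input problems.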

    Conflict-driven reasoning in unions of theories

    Many applications of automated reasoning require decision procedures for the satisfiability of a formula in a theory given by the union of a few theories. Reasoning in a union of theories can be approached in more than one way. The equality-sharing method, also known as the Nelson-Oppen scheme, combines decision procedures for the component theories. Superposition-based theorem-proving strategies unite the presentations of the theories to reason about their union. CDSAT, which stands for Conflict-Driven SATisfiability, assumes that each theory is equipped with an inference system, called a theory module, and coordinates the theory modules to reason in a conflict-driven manner in the union of the theories. A theory module is an abstraction of a decision procedure, made of inference rules that may correspond to axioms of the theory. Conflict-driven means that the system maintains a representation of a candidate partial model of the formula, and performs nontrivial inferences only to explain conflicts between the candidate model and the formula, so that the conflict can be solved by updating the partial model. CDSAT provides a framework where the theory modules cooperate to build the candidate model and to explain the conflicts. This talk presents CDSAT, placing it in the big picture of multi-theory reasoning and conflict-driven reasoning.
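
    A minimal sketch of the kind of conflict that arises in a union of theories, here linear integer arithmetic plus uninterpreted functions, checked with Z3's Python API (not with CDSAT itself); the constants x, y and the function f are invented for the example.

```python
from z3 import Int, Function, IntSort, Solver, unsat

x, y = Int('x'), Int('y')
f = Function('f', IntSort(), IntSort())

s = Solver()
# The arithmetic literals force x = y, while the uninterpreted-function
# literal insists f(x) != f(y): the conflict spans both theories.
s.add(x <= y, y <= x, f(x) != f(y))
assert s.check() == unsat
```

    In a conflict-driven scheme, such a contradiction would be detected against a candidate partial model and explained by inferences contributed by both theory modules.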

    Weakly Equivalent Arrays

    The (extensional) theory of arrays is widely used to model systems. Hence, efficient decision procedures are needed to model check such systems. Current decision procedures for the theory of arrays saturate the read-over-write and extensionality axioms originally proposed by McCarthy. Various filters are used to limit the number of axiom instantiations while preserving completeness. We present an algorithm that lazily instantiates lemmas based on weak equivalence classes. These lemmas are easier to interpolate as they only contain existing terms. We formally define weak equivalence and show correctness of the resulting decision procedure.
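
    For illustration only, the following sketch states the read-over-write fact that such array lemmas capture, using Z3's Python API and its extensional arrays (this is not the lazy algorithm of the paper); the array a and the indices i, j are invented names.

```python
from z3 import Array, IntSort, Int, Store, Select, Solver, unsat

a = Array('a', IntSort(), IntSort())
i, j, v = Int('i'), Int('j'), Int('v')

s = Solver()
# Read-over-write at a different index: if i != j, then reading
# Store(a, i, v) at j must give the same value as reading a at j.
s.add(i != j)
s.add(Select(Store(a, i, v), j) != Select(a, j))
assert s.check() == unsat
```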

    What's Decidable About Sequences?

    We present a first-order theory of sequences with integer elements, Presburger arithmetic, and regular constraints, which can model significant properties of data structures such as arrays and lists. We give a decision procedure for the quantifier-free fragment, based on an encoding into the first-order theory of concatenation; the procedure has PSPACE complexity. The quantifier-free fragment of the theory of sequences can express properties such as sortedness and injectivity, as well as Boolean combinations of periodic and arithmetic facts relating the elements of the sequence and their positions (e.g., "for all even i's, the element at position i has value i+3 or 2i"). The resulting expressive power is orthogonal to that of the most expressive decidable logics for arrays. Some examples demonstrate that the fragment is also suitable to reason about sequence-manipulating programs within the standard framework of axiomatic semantics. Comment: Fixed a few lapses in the Mergesort example.
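
    The sketch below shows the flavor of quantifier-free sequence constraints, using Z3's built-in theory of sequences through its Python API; note that this built-in theory is related to, but not the same as, the theory with Presburger arithmetic and regular constraints presented in the paper, and the sequence names s, t are invented.

```python
from z3 import SeqSort, IntSort, Const, Unit, IntVal, Concat, Length, Solver, unsat

IntSeq = SeqSort(IntSort())
s, t = Const('s', IntSeq), Const('t', IntSeq)

solver = Solver()
# s is t with one extra element prepended, yet is claimed to have the
# same length: the arithmetic fact Length(s) = Length(t) + 1 refutes this.
solver.add(s == Concat(Unit(IntVal(7)), t))
solver.add(Length(s) == Length(t))
assert solver.check() == unsat
```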

    Instantiation of SMT problems modulo Integers

    Many decision procedures for SMT problems rely more or less implicitly on an instantiation of the axioms of the theories under consideration, and differ by making use of the additional properties of each theory in order to increase efficiency. We present a new technique for devising complete instantiation schemes for SMT problems over a combination of linear arithmetic with another theory T. The method consists in first instantiating the arithmetic part of the formula, and then getting rid of the remaining variables in the problem by using an instantiation strategy which is complete for T. We provide examples evidencing that not only is this technique generic (in the sense that it applies to a wide range of theories) but it is also efficient, even compared to state-of-the-art instantiation schemes for specific theories. Comment: Research report; long version of our AISC 2010 paper.
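
    A deliberately simple sketch of the instantiation idea in its easiest special case: when the arithmetic part bounds a variable to a finite range, the quantified axiom can be replaced by finitely many ground instances and handed to a quantifier-free solver (here Z3's Python API); the predicate P and the bound 3 are invented for the example, and the paper's scheme is considerably more general.

```python
from z3 import Function, IntSort, BoolSort, Solver, Not, unsat

# Hypothetical axiom: "P(x) holds for every integer x with 0 <= x < 3".
# Instantiating x with 0, 1, 2 gives an equisatisfiable ground problem.
P = Function('P', IntSort(), BoolSort())

s = Solver()
s.add(*[P(k) for k in range(3)])   # ground instances P(0), P(1), P(2)
s.add(Not(P(1)))                   # negated goal
assert s.check() == unsat          # matches the quantified problem
```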

    A Calculus for Generating Ground Explanations

    We present a modification of the superposition calculus that is meant to generate explanations of why a set of clauses is satisfiable. This process is related to abductive reasoning, and the explanations generated are clauses constructed over so-called abductive constants. We prove the correctness and completeness of the calculus in the presence of redundancy elimination rules, and develop a sufficient condition guaranteeing its termination; this sufficient condition is then used to prove that all possible explanations can be generated in finite time for several classes of clause sets, including many of interest to the SMT community. We propose a procedure that generates a set of explanations that should be useful to a human user, and conclude by suggesting several extensions to this novel approach.
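
    A brute-force sketch, not the superposition-based calculus of the paper: it enumerates candidate ground literals over designated abducible constants and keeps those entailed by the clause set (S entails a literal exactly when S together with its negation is unsatisfiable), using Z3's Python API; the set S, the constants a, b and the candidate literals are all invented for the example.

```python
from z3 import Int, Function, IntSort, Solver, Not, unsat

f = Function('f', IntSort(), IntSort())
a, b = Int('a'), Int('b')          # the designated abducible constants

S = [a == b, f(a) == 3]            # a satisfiable set of ground clauses
candidates = [a == b, a != b, f(b) == 3, f(b) != 3]

entailed = []
for lit in candidates:
    s = Solver()
    s.add(*S)
    s.add(Not(lit))                # S entails lit iff S and not(lit) is unsat
    if s.check() == unsat:
        entailed.append(lit)

print(entailed)                    # [a == b, f(b) == 3]
```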

    A Rewriting Approach to the Combination of Data Structures with Bridging Theories

    We introduce a combination method à la Nelson-Oppen to solve the satisfiability problem modulo a non-disjoint union of theories connected with bridging functions. The combination method is particularly useful to handle verification conditions involving functions defined over inductive data structures. We investigate the problem of determining the data structure theories for which this combination method is sound and complete. Our completeness proof is based on a rewriting approach where the bridging function is defined as a term rewrite system, and the data structure theory is given by a basic congruence relation. Our contribution is to introduce a class of data structure theories that are combinable with a disjoint target theory via an inductively defined bridging function. This class includes the theory of equality, the theory of absolutely free data structures, and all the theories in between. Hence, our non-disjoint combination method applies to many classical data structure theories admitting a rewrite-based satisfiability procedure.
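
    A minimal sketch of a bridging function given as a term rewrite system, written in plain Python rather than in the formal setting of the paper: the rules length(nil) -> 0 and length(cons(x, l)) -> 1 + length(l) connect the list data structure to the integers; the constructor names nil and cons and the tuple encoding are illustrative choices.

```python
# Constructor terms of the list data structure, encoded as nested tuples.
NIL = ('nil',)

def cons(head, tail):
    return ('cons', head, tail)

def length(term):
    """Normalize length(term) using the two rewrite rules of the bridge."""
    if term == NIL:
        return 0                     # length(nil) -> 0
    if term[0] == 'cons':
        return 1 + length(term[2])   # length(cons(x, l)) -> 1 + length(l)
    raise ValueError('not a constructor term of the list theory')

print(length(cons(7, cons(3, NIL))))   # 2
```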

    LNCS

    Extensionality axioms are common when reasoning about data collections, such as arrays and functions in program analysis, or sets in mathematics. An extensionality axiom asserts that two collections are equal if they consist of the same elements at the same indices. Using extensionality is often required to show that two collections are equal. A typical example is the set theory theorem (∀x)(∀y) x ∪ y = y ∪ x. Interestingly, while humans have no problem proving such set identities using extensionality, they are very hard for superposition theorem provers because of the calculi they use. In this paper we show how the addition of a new inference rule, called extensionality resolution, allows first-order theorem provers to easily solve problems no modern first-order theorem prover can solve. We illustrate this by running the VAMPIRE theorem prover with extensionality resolution on a number of set theory and array problems. Extensionality resolution helps VAMPIRE solve problems from the TPTP library of first-order problems that were never solved before by any prover.
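
    For illustration, the set identity quoted above can be checked with an SMT solver whose sets are extensional arrays (Z3's Python API); this relies on the solver's built-in handling of extensionality, not on the extensionality resolution rule added to VAMPIRE, and the set names x, y are invented.

```python
from z3 import IntSort, Const, SetSort, SetUnion, Solver, Not, unsat

IntSet = SetSort(IntSort())          # sets of integers as extensional arrays
x, y = Const('x', IntSet), Const('y', IntSet)

s = Solver()
s.add(Not(SetUnion(x, y) == SetUnion(y, x)))   # negate x ∪ y = y ∪ x
assert s.check() == unsat                      # the identity is valid
```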