8 research outputs found

    Combining decision procedures for the reals

    Full text link
    We address the general problem of determining the validity of Boolean combinations of equalities and inequalities between real-valued expressions. In particular, we consider methods of establishing such assertions using only restricted forms of distributivity. At the same time, we explore ways in which "local" decision or heuristic procedures for fragments of the theory of the reals can be amalgamated into global ones. Let Tadd[Q] be the first-order theory of the real numbers in the language of ordered groups, with negation, a constant 1, and function symbols for multiplication by rational constants. Let Tmult[Q] be the analogous theory for the multiplicative structure, and let T[Q] be the union of the two. We show that although T[Q] is undecidable, the universal fragment of T[Q] is decidable. We also show that terms of T[Q] can fruitfully be put in a normal form. We prove analogous results for theories in which Q is replaced, more generally, by suitable subfields F of the reals. Finally, we consider practical methods of establishing quantifier-free validities that approximate our (impractical) decidability results. Comment: Will appear in Logical Methods in Computer Science.
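
    To fix intuitions, here is a small example of our own, not taken from the paper: Tadd[Q] speaks about the ordered additive group of the reals with rational scalar multiplication, and Tmult[Q] about the ordered multiplicative structure. A universal validity of the union T[Q] that mixes the two signatures is

        \forall x \, \forall y \; ( 1 < x \land x \le y \;\rightarrow\; x + x \le y + y \;\land\; x \cdot x \le y \cdot y ),

    where the additive conjunct follows by monotonicity of + in Tadd[Q] and the multiplicative conjunct by monotonicity of multiplication on the positive reals in Tmult[Q]; deciding such universal (equivalently, quantifier-free) assertions is what the decidability result covers.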

    New results on rewrite-based satisfiability procedures

    Full text link
    Program analysis and verification require decision procedures to reason about theories of data structures. Many problems can be reduced to the satisfiability of sets of ground literals in a theory T. If a sound and complete inference system for first-order logic is guaranteed to terminate on T-satisfiability problems, any theorem-proving strategy with that system and a fair search plan is a T-satisfiability procedure. We prove termination of a rewrite-based first-order engine on the theories of records, integer offsets, integer offsets modulo, and lists. We give a modularity theorem stating sufficient conditions for termination on a combination of theories, given termination on each. The above theories, as well as others, satisfy these conditions. We introduce several sets of benchmarks on these theories and their combinations, including both parametric synthetic benchmarks to test scalability and real-world problems to test performance on huge sets of literals. We compare the rewrite-based theorem prover E with the validity checkers CVC and CVC Lite. Contrary to the folklore that a general-purpose prover cannot compete with reasoners with built-in theories, the experiments are overall favorable to the theorem prover, showing that the rewriting approach is not only elegant and conceptually simple but also has important practical implications. Comment: To appear in the ACM Transactions on Computational Logic, 49 pages.
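
    As a toy instance of the kind of problem meant here, our own and not one of the paper's benchmarks: take the theory of lists axiomatized by car(cons(x, y)) ≈ x and cdr(cons(x, y)) ≈ y. The ground set

        { cons(a, b) ≈ c,   car(c) ≉ a }

    is unsatisfiable in this theory: superposing cons(a, b) ≈ c into the first axiom rewrites car(c) to a, so the disequation collapses to a ≉ a and the saturation derives the empty clause. Termination of all such saturations on ground inputs is exactly what turns a fair theorem-proving strategy into a T-satisfiability procedure.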

    Combination of convex theories: Modularity, deduction completeness, and explanation

    Get PDF
    Decision procedures are key components of theorem provers and constraint satisfaction systems. Their modular combination is of prime interest for building efficient systems, but their effective use is often limited by poor interface capabilities, when such procedures only provide a simple “sat/unsat” answer. In this paper, we develop a framework to design cooperation schemas between such procedures while maintaining modularity of their interfaces. First, we use the framework to specify and prove the correctness of classic combination schemas by Nelson–Oppen and Shostak. Second, we introduce the concept of deduction-complete satisfiability procedures, we show how to build them for large classes of theories, and we provide a schema to modularly combine them. Third, we consider the problem of modularly constructing explanations for combinations by re-using available proof-producing procedures for the component theories.
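
    To make the combination schema concrete, the following Python sketch, ours and not the paper's calculus, implements the classic Nelson–Oppen loop for two convex theories; each component is assumed to be a deduction-complete procedure that either reports unsatisfiability or returns the equalities between shared variables it entails. All names and the literal representation are illustrative assumptions.

        # Minimal sketch of the Nelson-Oppen loop for two convex theories.
        # proc_i(lits) is assumed deduction complete: it returns None if its
        # literals are unsatisfiable, else the set of entailed shared equalities.
        def nelson_oppen(proc1, lits1, proc2, lits2):
            lits1, lits2 = set(lits1), set(lits2)
            while True:
                eqs1, eqs2 = proc1(lits1), proc2(lits2)
                if eqs1 is None or eqs2 is None:
                    return False              # one component refuted its part
                fresh = (eqs1 - lits2) | (eqs2 - lits1)
                if not fresh:
                    return True               # fixpoint: nothing left to exchange
                lits1 |= fresh                # propagate entailed equalities
                lits2 |= fresh

    A bare "sat/unsat" interface would stop at the None check; deduction completeness is exactly what supplies the equalities that keep the exchange loop going.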

    Decidable fragments of first-order logic and of first-order linear arithmetic with uninterpreted predicates

    Get PDF
    First-order logic is one of the most prominent formalisms in computer science and mathematics. Since there is no algorithm capable of solving its satisfiability problem, first-order logic is said to be undecidable. The classical decision problem is the quest for a delineation between the decidable and the undecidable parts. The results presented in this thesis shed more light on the boundary and open new perspectives on the landscape of known decidable fragments. In the first part we focus on the new concept of separateness of variables and explore its applicability to the classical decision problem and beyond. Two disjoint sets of first-order variables are separated in a given formula if none of its atoms contains variables from both sets. This notion facilitates the definition of decidable extensions of many well-known decidable first-order fragments. We demonstrate this for several prefix fragments, several guarded fragments, the two-variable fragment, and the fluted fragment. Although the extensions exhibit the same expressive power as the respective originals, certain logical properties can be expressed much more succinctly. In two cases the succinctness gap cannot be bounded using elementary functions. This fact already hints at computationally hard satisfiability problems. Indeed, we derive non-elementary lower bounds for the separated fragment, an extension of the Bernays-Schönfinkel-Ramsey fragment (∃*∀*-prefix sentences). On the semantic level, separateness of quantified variables may lead to weaker dependences than we encounter in general. We investigate this property in the context of model-checking games. The focus of the second part of the thesis is on linear arithmetic with uninterpreted predicates. Two novel decidable fragments are presented, both based on the Bernays-Schönfinkel-Ramsey fragment. On the negative side, we identify several small fragments of the language for which satisfiability is undecidable.

    Investigations of first-order logic look back on a long tradition. It is well known that its satisfiability problem cannot, in general, be solved algorithmically; the logic is therefore called undecidable. This observation highlights the principal limits of the capabilities of computers in general, and of automated reasoning in particular. Hilbert's Entscheidungsproblem is understood today as the exploration of the boundary between the decidable and undecidable parts of first-order logic, where the fragments under investigation are described by clearly defined and computable syntactic properties. Many researchers have contributed to this endeavor and have discovered and explored numerous decidable and undecidable fragments. The present dissertation continues this tradition with a series of mainly positive results and opens new perspectives on a number of fragments that have been studied over the last hundred years. The first part of the thesis centers on the syntactic concept of separateness of variables and explores its applicability to the decision problem and beyond. Two sets of individual variables count as separated with respect to a given formula if every atom of the formula contains variables from at most one of the two sets. With the help of this easily grasped notion, many well-known decidable fragments of first-order logic can be extended to larger classes of formulas that remain decidable. This approach is set out in detail for nine fragments, among them several prefix fragments, the two-variable fragment, and the so-called guarded and fluted fragments. It turns out that all the extended fragments also contain the monadic first-order fragment without equality. Although the extended syntax does not, in the cases considered, come with greater expressive power, certain relationships can be stated far more succinctly in it. In at least two cases this discrepancy cannot be bounded by an elementary function. This gives a first hint that algorithmically solving the satisfiability problem for the extended fragments requires very high computational effort. Indeed, a non-elementary lower bound on the corresponding running time is derived for the so-called separated fragment, an extension of the well-known Bernays-Schönfinkel-Ramsey fragment. Furthermore, the influence of separateness of individual variables is investigated on the semantic level, where dependences between quantified variables can be substantially weakened by separateness. Such dependences, termed weak, are examined formally by means of so-called Hintikka games. The second part of the thesis focuses on the decision problem for linear arithmetic over the rational numbers combined with uninterpreted predicates. Two previously unknown decidable fragments of this language are presented, both building on the Bernays-Schönfinkel-Ramsey fragment. In addition, new negative results are developed and several undecidable fragments are presented that require only a very restricted part of the language.
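
    A schematic example of our own, in the spirit of the thesis: in the sentence

        \forall x \, \exists y \, \forall u \, \exists v \; ( R(x, u) \rightarrow Q(y, v) )

    the universally quantified variables {x, u} and the existentially quantified variables {y, v} are separated, since no atom mentions variables from both sets. Its ∀∃∀∃ prefix places it outside the Bernays-Schönfinkel-Ramsey class, yet it belongs to the separated fragment and is therefore decidable.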

    Strategies for combining decision procedures

    No full text
    Implementing efficient algorithms for combining decision procedures has been a challenge and their correctness precarious. In this paper we describe an inference system that has the classical Nelson-Oppen procedure at its core and includes several optimizations: variable abstraction with sharing, canonization of terms at the theory level, and Shostak’s streamlined generation of new equalities for theories with solvers. The transitions of our system are fine-grained enough to model most of the mechanisms currently used in designing combination procedures. In particular, with a simple language of regular expressions we are able to describe several combination algorithms as strategies for our inference system, from the basic Nelson-Oppen to the very highly optimized one recently given by Shankar and Rueß. Presenting the basic system at a high level of generality and nondeterminism allows transparent correctness proofs that can be extended in a modular fashion whenever a new feature is introduced in the system. Similarly, the correctness proof of any new strategy requires only minimal additional proof effort.
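
    To illustrate one of these mechanisms, here is a small Python sketch, ours rather than the paper's inference system, of variable abstraction ("purification"): alien subterms are replaced by fresh variables so that every literal becomes pure in a single theory. The term representation and helper names are illustrative assumptions.

        # Terms are strings (variables/constants) or (op, [args]) tuples.
        import itertools

        _fresh = (f"v{i}" for i in itertools.count())

        def purify(term, theory_of):
            """Return (pure_term, defs), where defs lists (var, subterm) equations."""
            if isinstance(term, str):
                return term, []               # variables and constants are pure
            op, args = term
            home, defs, new_args = theory_of(op), [], []
            for arg in args:
                pure_arg, d = purify(arg, theory_of)
                defs += d
                if isinstance(pure_arg, tuple) and theory_of(pure_arg[0]) != home:
                    v = next(_fresh)          # abstract the alien argument
                    defs.append((v, pure_arg))
                    new_args.append(v)
                else:
                    new_args.append(pure_arg)
            return (op, new_args), defs

        # f(x + 1) becomes f(v0) plus the definition v0 = x + 1:
        theory = lambda op: "arith" if op in ("+", "-", "*") else "euf"
        purify(("f", [("+", ["x", "1"])]), theory)

    The paper's "variable abstraction with sharing" would additionally reuse one fresh variable per repeated alien subterm instead of minting a new one each time.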
