
    Abstract Canonical Inference

    An abstract framework of canonical inference is used to explore how different proof orderings induce different variants of saturation and completeness. Notions like completion, paramodulation, saturation, redundancy elimination, and rewrite-system reduction are connected to proof orderings. Fairness of deductive mechanisms is defined in terms of proof orderings, distinguishing between (ordinary) "fairness," which yields completeness, and "uniform fairness," which yields saturation. (Comment: 28 pages, no figures; to appear in ACM Transactions on Computational Logic.)
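    As a rough gloss (my paraphrase of the abstract's terminology, not the paper's notation): proofs are compared by a well-founded proof ordering, normal-form proofs are the minimal ones, a presentation is complete when every theorem has at least one normal-form proof from it, and saturated when it supports all normal-form proofs. The two fairness notions then line up as

    \[
    \text{fair derivation} \;\Longrightarrow\; \text{limit is complete},
    \qquad
    \text{uniformly fair derivation} \;\Longrightarrow\; \text{limit is saturated}.
    \]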

    Set of support, demodulation, paramodulation: a historical perspective

    This article is a tribute to the scientific legacy of automated reasoning pioneer and JAR founder Lawrence T. (Larry) Wos. Larry's main technical contributions were the set-of-support strategy for resolution theorem proving, and the demodulation and paramodulation inference rules for building equality into resolution. Starting from the original definitions of these concepts in Larry's papers, this survey traces their evolution, unearthing the often forgotten trails that connect Larry's original definitions to those that became standard in the field.
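    For readers who have not seen them, textbook formulations of the two equality rules look roughly as follows (modern notation; as the survey shows, the original definitions and their side conditions evolved considerably before settling into this form):

    \[
    \text{Paramodulation:}\quad
    \frac{C \lor l \approx r \qquad D[u]}{(C \lor D[r])\sigma}
    \qquad \sigma = \mathrm{mgu}(l, u)
    \]

    \[
    \text{Demodulation:}\quad
    \frac{l \approx r \qquad D[l\sigma]}{D[r\sigma]}
    \qquad l\sigma \succ r\sigma
    \]

    Paramodulation is a generating inference that unifies a subterm of one clause with one side of an equation in another clause; demodulation is a simplifying inference that rewrites a clause with an oriented unit equation, replacing the rewritten clause.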

    New results on rewrite-based satisfiability procedures

    Program analysis and verification require decision procedures to reason about theories of data structures. Many problems can be reduced to the satisfiability of sets of ground literals in a theory T. If a sound and complete inference system for first-order logic is guaranteed to terminate on T-satisfiability problems, any theorem-proving strategy with that system and a fair search plan is a T-satisfiability procedure. We prove termination of a rewrite-based first-order engine on the theories of records, integer offsets, integer offsets modulo, and lists. We give a modularity theorem stating sufficient conditions for termination on a combination of theories, given termination on each. The above theories, as well as others, satisfy these conditions. We introduce several sets of benchmarks on these theories and their combinations, including both parametric synthetic benchmarks to test scalability and real-world problems to test performance on huge sets of literals. We compare the rewrite-based theorem prover E with the validity checkers CVC and CVC Lite. Contrary to the folklore that a general-purpose prover cannot compete with reasoners with built-in theories, the experiments are overall favorable to the theorem prover, showing that the rewriting approach is not only elegant and conceptually simple, but also has important practical implications. (Comment: to appear in the ACM Transactions on Computational Logic, 49 pages.)
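    The following toy sketch illustrates the rewrite-based idea on the simplest instance, ground equality literals over uninterpreted functions: orient the equations into a terminating ground rewrite system (ground completion needs no critical-pair computation) and compare normal forms. It is only an illustration of the approach, not the paper's procedure, which runs a full superposition-based prover (E) on first-order presentations of records, offsets, and lists.

```python
# Toy rewrite-based satisfiability check for ground equality literals
# over uninterpreted functions. Terms are nested tuples:
# ('f', ('a',), ('b',)) stands for f(a, b); constants are nullary tuples.

def size(t):
    return 1 + sum(size(arg) for arg in t[1:])

def key(t):
    # A total reduction order on ground terms: by size, then lexicographically.
    return (size(t), t)

def normalize(t, rules):
    # Innermost rewriting to a normal form; terminates because every
    # rule replaces a term by one that is strictly smaller under `key`.
    while True:
        u = (t[0],) + tuple(normalize(arg, rules) for arg in t[1:])
        for lhs, rhs in rules:
            if u == lhs:
                u = rhs
                break
        if u == t:
            return t
        t = u

def complete(equations):
    # Ground Knuth-Bendix completion: orient each equation by `key` and
    # keep the rules interreduced; no unification is needed on ground terms.
    rules, todo = [], list(equations)
    while todo:
        s, t = todo.pop()
        s, t = normalize(s, rules), normalize(t, rules)
        if s == t:
            continue
        lhs, rhs = (s, t) if key(s) > key(t) else (t, s)
        kept = [(lhs, rhs)]
        for l, r in rules:
            if normalize(l, [(lhs, rhs)]) != l:
                todo.append((l, r))   # old left side now reducible: reprocess
            else:
                kept.append((l, normalize(r, [(lhs, rhs)])))
        rules = kept
    return rules

def satisfiable(equalities, disequalities):
    # The literals are satisfiable iff no disequation collapses:
    # after completion, equal normal forms mean the equality is entailed.
    rules = complete(equalities)
    return all(normalize(s, rules) != normalize(t, rules)
               for s, t in disequalities)

a, b = ('a',), ('b',)
print(satisfiable([(a, b)], [(('f', a), ('f', b))]))  # False: contradicts a = b
print(satisfiable([(a, b)], [(('f', a), ('g', b))]))  # True
```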

    Polygraphs: From Rewriting to Higher Categories

    Polygraphs are a higher-dimensional generalization of the notion of directed graph. Taking polygraphs as a unifying concept, this monograph revisits the theory of rewriting in the context of strict higher categories, adopting the abstract point of view offered by homotopical algebra. The first half explores the theory of polygraphs in low dimensions and its applications to computing the coherence of algebraic structures. It is meant to be progressive, with few prerequisites beyond basic category theory, and is illustrated with algorithmic computations on algebraic structures. The second half introduces and studies the general notion of n-polygraph, together with the homotopy theory of such objects. It constructs the folk model structure on the category of strict higher categories and exhibits polygraphs as cofibrant objects. This makes it possible to extend the coherence results of the first half to higher-dimensional structures.
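    To make "higher-dimensional generalization of a directed graph" concrete, here is the standard low-dimensional example (my illustration in the usual conventions of the field, not an excerpt from the monograph). A 1-polygraph is exactly a directed graph; a 2-polygraph adds 2-cells that rewrite paths, and with a single 0-cell it presents a monoid by generators and rewriting rules. For instance,

    \[
    \Sigma_0 = \{\ast\}, \qquad \Sigma_1 = \{a, b\}, \qquad \Sigma_2 = \{\, ba \Rightarrow ab \,\}
    \]

    presents the free commutative monoid on two generators, with the 2-cell playing the role of a rewriting rule on words.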

    A complete transformational toolkit for compilers

    In an earlier paper, one of the present authors presented a preliminary account of an equational logic called PIM. PIM is intended to function as a 'transformational toolkit' to be used by compilers and analysis tools for imperative languages, and has been applied to such problems as program slicing, symbolic evaluation, conditional constant propagation, and dependence analysis. PIM consists of the untyped lambda calculus extended with an algebraic rewriting system that characterizes the behavior of lazy stores and generalized conditionals. A major question left open in the earlier paper was whether there existed a complete equational axiomatization of PIM's semantics. In this paper, we answer this question in the affirmative for PIM's core algebraic component, PIMt, under certain reasonable restrictions on term formation. We systematically derive the complete PIM logic as the culmination of a sequence of increasingly powerful equational systems, starting from a straightforward 'interpreter' for closed PIM terms.
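    For a flavor of what an equational characterization of store behavior looks like, consider the classical McCarthy-style read/write laws (a generic illustration only; PIM's actual store operators are lazy and its notation differs):

    \[
    \mathit{read}(\mathit{write}(s, a, v), a) = v,
    \qquad
    a \neq b \implies \mathit{read}(\mathit{write}(s, a, v), b) = \mathit{read}(s, b).
    \]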

    Term rewriting systems


    Unification Procedures in Automated Deduction Methods Based on Matings: A Survey

    Unification procedures arising in methods for automated theorem proving based on matings are surveyed. We begin by reviewing some fundamentals of automated deduction, including the Skolem form and the Skolem-Herbrand-Gödel theorem. Next, the method of matings for first-order languages without equality, due to Andrews and Bibel, is presented. Standard unification is described in terms of transformations on systems (following the approach of Martelli and Montanari, anticipated by Herbrand). Some fast unification algorithms are also sketched, in particular a unification closure algorithm inspired by Paterson and Wegman's method. The method of matings is then extended to languages with equality. This extension leads naturally to a generalization of standard unification called rigid E-unification (due to Gallier, Narendran, Plaisted, and Snyder). The main properties of rigid E-unification (decidability, NP-completeness, and finiteness of complete sets) are discussed.
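    As a concrete rendering of "unification described in terms of transformations on systems", the following is a minimal, runnable sketch of the standard rule set (Delete, Decompose, Orient, Eliminate, plus clash and occurs-check failure). The term encoding and names are mine, and no attempt is made at the near-linear efficiency of the fast algorithms the survey sketches:

```python
# Variables are Python strings; compound terms and constants are tuples
# ('f', t1, ..., tn) -- a constant is a nullary tuple like ('a',).

def is_var(t):
    return isinstance(t, str)

def occurs(x, t):
    return t == x if is_var(t) else any(occurs(x, arg) for arg in t[1:])

def substitute(t, x, s):
    # Replace every occurrence of variable x in t by the term s.
    if is_var(t):
        return s if t == x else t
    return (t[0],) + tuple(substitute(arg, x, s) for arg in t[1:])

def unify(equations):
    """Transform the system until it is in solved form; return a most
    general unifier as a dict, or None if unification fails."""
    eqs = list(equations)
    solved = {}
    while eqs:
        s, t = eqs.pop()
        if s == t:                                  # Delete
            continue
        if not is_var(s) and is_var(t):             # Orient
            s, t = t, s
        if is_var(s):
            if occurs(s, t):                        # Occurs-check failure
                return None
            eqs = [(substitute(l, s, t), substitute(r, s, t))
                   for l, r in eqs]                 # Eliminate
            solved = {x: substitute(u, s, t) for x, u in solved.items()}
            solved[s] = t
        elif s[0] == t[0] and len(s) == len(t):     # Decompose
            eqs.extend(zip(s[1:], t[1:]))
        else:                                       # Clash
            return None
    return solved

# unify f(x, g(y)) with f(g(z), g(z))  ->  {x: g(z), y: z}
print(unify([(('f', 'x', ('g', 'y')), ('f', ('g', 'z'), ('g', 'z')))]))
```

    Naive substitution makes this worst-case exponential; the unification-closure algorithms mentioned in the survey avoid the blow-up through sharing.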

    Acta Cybernetica: Volume 22, Number 2.


    Quantifier-Free Interpolation of a Theory of Arrays

    The use of interpolants in model checking is becoming an enabling technology for fast and robust verification of hardware and software. The applicability of encodings based on the theory of arrays, however, is limited by the impossibility of deriving quantifier-free interpolants in general. In this paper, we show that it is possible to obtain quantifier-free interpolants for a Skolemized version of the extensional theory of arrays. We prove this in two ways: (1) non-constructively, by using the model-theoretic notion of amalgamation, which is known to be equivalent to admitting quantifier-free interpolation for universal theories; and (2) constructively, by designing an interpolating procedure based on solving equations between array updates. (Interestingly, rewriting techniques are used in the key steps of the solver and its proof of correctness.) To the best of our knowledge, this is the first successful attempt at computing quantifier-free interpolants for a variant of the theory of arrays with extensionality.
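    Concretely, the Skolemization referred to here is usually rendered with a binary "diff" operation witnessing extensionality (my paraphrase of the standard construction; the paper's exact signature may differ): instead of the quantified axiom stating that distinct arrays differ at some index, one adds

    \[
    \mathit{rd}(a, \mathit{diff}(a, b)) = \mathit{rd}(b, \mathit{diff}(a, b)) \;\implies\; a = b,
    \]

    so that, together with the usual read-over-write axioms, an interpolating solver can manipulate equations between updates, such as wr(a, i, e) = b, in a purely quantifier-free setting.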