On the ground reducibility problem for word rewriting systems with variables
Projet EURECA
Given a word rewriting system with variables R and a word with variables w, the question we are interested in is whether all instances of w obtained by substituting its variables by non-empty words are reducible by R. We prove this problem to be undecidable in general, even for a very simple word w, namely axa, where a is a letter and x a variable. When R is left-linear, the question is decidable for arbitrary (linear or non-linear)
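The bounded version of this question is easy to state in code. The sketch below (illustrative only; all names are hypothetical, not from the paper) enumerates the instances axa up to a length bound and tests each for reducibility, using the fact that a word is reducible by a string rewriting system exactly when some left-hand side occurs in it as a factor. This is only an approximation: as the abstract states, the general problem is undecidable, so no bounded search of this kind can decide it.

```python
from itertools import product

def reducible(word, lhss):
    """A word is reducible iff some left-hand side occurs as a factor."""
    return any(l in word for l in lhss)

def bounded_ground_reducibility(alphabet, lhss, bound):
    """Check that every instance a·x·a, with x a non-empty word of
    length <= bound, is reducible.  Returns (False, witness) on the
    first irreducible instance found.  Bounded approximation only:
    the unrestricted problem is undecidable."""
    for n in range(1, bound + 1):
        for x in product(alphabet, repeat=n):
            inst = "a" + "".join(x) + "a"
            if not reducible(inst, lhss):
                return False, inst
    return True, None

# Example: left-hand sides {"aa", "ab"} over the alphabet {a, b}.
# Every instance starts with "aa" or "ab", so all are reducible.
ok, witness = bounded_ground_reducibility("ab", ["aa", "ab"], 4)
```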
Testing for the ground (co-)reducibility property in term-rewriting systems
Given a term-rewriting system R, a term t is ground-reducible by R if every ground instance tσ of it is R-reducible. A pair (t, s) of terms is ground-co-reducible by R if every ground instance (tσ, sσ) of it for which tσ and sσ are distinct is R-reducible. Ground (co-)reducibility has been proved to be the fundamental tool for mechanizing inductive proofs, together with the Knuth-Bendix completion procedure presented by Jouannaud and Kounalis (1986, 1989). Jouannaud and Kounalis (1986, 1989) also presented an algorithm for testing ground reducibility which is tractable in practical cases but restricted to left-linear term-rewriting systems. The solution of the ground (co-)reducibility problem for the general case turned out to be surprisingly complicated. Decidability of ground reducibility for arbitrary term-rewriting systems was first proved by Plaisted (1985) and independently by Kapur (1987). However, the algorithms of Plaisted and Kapur amount to intractable computations, even in very simple cases. We present here a new algorithm for the general case which outperforms the algorithms of Plaisted and Kapur, and even our previous algorithm in the case of left-linear term-rewriting systems. We then show how to adapt it to check for ground co-reducibility.
Higher-Order Termination: from Kruskal to Computability
Termination is a major question in both logic and computer science. In logic,
termination is at the heart of proof theory where it is usually called strong
normalization (of cut elimination). In computer science, termination has always
been an important issue for showing programs correct. In the early days of
logic, strong normalization was usually shown by assigning ordinals to
expressions in such a way that eliminating a cut would yield an expression with
a smaller ordinal. In the early days of verification, computer scientists used
similar ideas, interpreting the arguments of a program call by a natural
number, such as their size. Showing the size of the arguments to decrease for
each recursive call gives a termination proof of the program, which is however
rather weak since it can only yield quite small ordinals. In the sixties, Tait
invented a new method for showing cut elimination of natural deduction, based
on a predicate over the set of terms, such that the membership of an expression
to the predicate implied the strong normalization property for that expression.
The predicate being defined by induction on types, or even as a fixpoint, this
method could yield much larger ordinals. Later generalized by Girard under the
name of reducibility or computability candidates, it turned out to be very
effective in proving the strong normalization property of typed lambda-calculi.
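Tait's predicate, defined by induction on types, can be stated for the simply typed λ-calculus as follows (a standard textbook formulation, not taken from this paper):

```latex
\begin{align*}
\mathrm{Red}_{\iota}   &= \{\, t \mid t \text{ is strongly normalizing} \,\} \\
\mathrm{Red}_{A \to B} &= \{\, t \mid \forall u \in \mathrm{Red}_A,\; t\,u \in \mathrm{Red}_B \,\}
\end{align*}
```

One then shows that every term in $\mathrm{Red}_A$ is strongly normalizing and, by induction on typing derivations, that every well-typed term of type $A$ belongs to $\mathrm{Red}_A$.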
Ordering constraints on trees
We survey recent results about ordering constraints on trees and discuss their applications. Our main interest lies in the family of recursive path orderings, which enjoy the properties of being total, well-founded and compatible with the tree constructors. The paper includes some new results, in particular the undecidability of the theory of lexicographic path orderings in the case of a non-unary signature.
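As an illustration of the kind of ordering the survey studies, here is a minimal sketch (not code from the paper; the representation and names are assumptions) of the lexicographic path ordering on ground terms over a user-supplied precedence:

```python
# A ground term is a pair (symbol, tuple_of_subterms); constants have ().
def lpo_gt(prec, s, t):
    """Return True iff s >_lpo t under the lexicographic path ordering.
    `prec` maps each function symbol to an integer precedence."""
    f, ss = s
    g, ts = t
    # Case 1: some direct subterm of s is >= t.
    if any(si == t or lpo_gt(prec, si, t) for si in ss):
        return True
    # In the remaining cases s must dominate every direct subterm of t.
    if not all(lpo_gt(prec, s, tj) for tj in ts):
        return False
    # Case 2: head of s is greater in the precedence.
    if prec[f] > prec[g]:
        return True
    # Case 3: equal heads, compare argument tuples lexicographically.
    if f == g:
        for si, ti in zip(ss, ts):
            if si != ti:
                return lpo_gt(prec, si, ti)
    return False

prec = {"s": 2, "0": 1}          # precedence: s > 0
zero = ("0", ())
one  = ("s", (zero,))            # the term s(0)
```

For example, `lpo_gt(prec, one, zero)` holds because `0` is a subterm of `s(0)`, matching the intuition that the ordering is compatible with the tree constructors.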
Inductive Theorem Proving Using Refined Unfailing Completion Techniques
We present a brief overview of completion-based inductive theorem proving techniques, point out the key concepts of the underlying "proof by consistency" paradigm and isolate an abstract description of what is necessary for an algorithmic realization of such methods.
In particular, we give several versions of proof orderings which, under certain conditions, are well-suited for that purpose. Together with corresponding notions of (positive and negative) covering sets we get abstract "positive" and "negative" characterizations of inductive validity. As a consequence we can generalize known criteria for inductive validity, even for the cases where some of the conjectures may not be orientable or where the base system is terminating but not necessarily ground confluent.
Furthermore we consider several refinements and optimizations of completion-based inductive theorem proving techniques. In particular, sufficient criteria for being a covering set, including restrictions of critical pairs (and the usage of non-equational inductive knowledge), are discussed.
Moreover, a couple of lemma generation methods are briefly summarized and classified. A new technique of safe generalization is particularly interesting, since it provides means for syntactic generalizations, i.e. simplifications, of conjectures without losing semantic equivalence.
Finally we present the main features and characteristics of UNICOM, an inductive theorem prover with refined unfailing completion techniques, built on top of TRSPEC, a term-rewriting-based system for investigating algebraic specifications.
On (in)tractability of OBDA with OWL 2 QL
We show that, although conjunctive queries over OWL 2 QL ontologies are reducible to database queries, no algorithm can construct such a reduction in polynomial time without changing the data. On the other hand, we give a polynomial reduction for OWL 2 QL ontologies without role inclusions.
The HOM problem is EXPTIME-complete
We define a new class of tree automata with constraints and prove decidability of the emptiness problem for this class in exponential time. As a consequence, we obtain several EXPTIME-completeness results for problems on images of regular tree languages under tree homomorphisms, like set inclusion, regularity (HOM problem), and finiteness of set difference. Our result also has implications in term rewriting, since the set of reducible terms of a term rewrite system can be described as the image of a tree homomorphism. In particular, we prove that inclusion of sets of normal forms of term rewrite systems can be decided in exponential time. Analogous consequences arise in the context of XML typechecking, since types are defined by tree automata and some type transformations are homomorphic.
Computability in constructive type theory
We give a formalised and machine-checked account of computability theory in the Calculus of Inductive Constructions (CIC), the constructive type theory underlying the Coq proof assistant. We first develop synthetic computability theory, pioneered by Richman, Bridges, and Bauer, where one treats all functions as computable, eliminating the need for a model of computation. We assume a novel parametric axiom for synthetic computability and give proofs of results like Rice's theorem, the Myhill isomorphism theorem, and the existence of Post's simple and hypersimple predicates, relying on no other axioms such as Markov's principle or choice axioms. As a second step, we introduce models of computation. We give a concise overview of definitions of various standard models and contribute machine-checked simulation proofs, posing a non-trivial engineering effort. We identify a notion of synthetic undecidability relative to a fixed halting problem, allowing axiom-free machine-checked proofs of undecidability. We contribute such undecidability proofs for the historical foundational problems of computability theory, which require the identification of invariants left out in the literature and now form the basis of the Coq Library of Undecidability Proofs. We then identify the weak call-by-value λ-calculus L as a sweet spot for programming in a model of computation. We introduce a certifying extraction framework and analyse an axiom stating that every function of type ℕ → ℕ is L-computable.