    Synthesis of sup-interpretations: a survey

    In this paper, we survey the complexity of distinct methods that allow the programmer to synthesize a sup-interpretation, a function providing an upper bound on the size of the output values computed by a program. It is a static space-analysis tool that does not take time consumption into account. Although clearly related, sup-interpretation is independent of termination, since it only provides an upper bound on terminating computations. First, we study some undecidable properties of sup-interpretations from a theoretical point of view. Next, we fix term rewriting systems as our computational model and show that a sup-interpretation can be obtained through the use of a well-known termination technique, polynomial interpretations. The drawback is that such a method only applies to total functions (strongly normalizing programs). To overcome this problem we also study sup-interpretations through the notion of quasi-interpretation. Quasi-interpretations, however, suffer from a drawback of their own: the subterm property, which drastically restricts the shape of the considered functions. Again, we overcome this problem by introducing a new notion of interpretation based mainly on the dependency pair method. We study the decidability and complexity of the sup-interpretation synthesis problem for all three tools over sets of polynomials. Finally, we build on previous work on termination and runtime complexity to infer sup-interpretations.
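
    A minimal worked example, not taken from the survey itself, of how a polynomial interpretation yields a sup-interpretation: consider the rewrite rules

        add(0, y) -> y
        add(s(x), y) -> s(add(x, y))

    over unary numerals. The interpretation [0] = 1, [s](x) = x + 1, [add](x, y) = 2x + y + 1 is monotone and strictly decreases on both rules, so it proves termination; moreover [s^n(0)] = n + 1 coincides with the size of the numeral s^n(0), so the same interpretation also bounds the size of every computed value: add(s^m(0), s^n(0)) normalizes to a term of size m + n + 1 <= [add](m + 1, n + 1) = 2m + n + 4. In this way the termination technique mentioned above directly provides a sup-interpretation, at least for total functions.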

    Decreasing Diagrams for Confluence and Commutation

    Like termination, confluence is a central property of rewrite systems. Unlike for termination, however, there exists no known complexity hierarchy for confluence. In this paper we investigate whether the decreasing diagrams technique can be used to obtain such a hierarchy. The decreasing diagrams technique is one of the strongest and most versatile methods for proving confluence of abstract rewrite systems: it is complete for countable systems, and it has many well-known confluence criteria as corollaries. So what makes decreasing diagrams so powerful? In contrast to other confluence techniques, decreasing diagrams employ a labelling of the steps with labels from a well-founded order in order to conclude confluence of the underlying unlabelled relation. Hence it is natural to ask how the size of the label set influences the strength of the technique. In particular, what class of abstract rewrite systems can be proven confluent using decreasing diagrams restricted to 1 label, 2 labels, 3 labels, and so on? Surprisingly, we find that two labels suffice for proving confluence of every abstract rewrite system having the cofinality property, thus in particular of every confluent, countable system. Secondly, we show that this result stands in sharp contrast to the situation for commutation of rewrite relations, where the hierarchy does not collapse. Thirdly, investigating the possibility of a confluence hierarchy, we determine the first-order (non-)definability of the notion of confluence and related properties, using techniques from finite model theory. We find that in particular Hanf's theorem is fruitful for elegant proofs of undefinability of properties of abstract rewrite systems.
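
    For orientation, the local condition underlying the technique can be recalled in its usual form (a restatement of standard definitions, not a result of the paper): steps are labelled with elements of a set I equipped with a well-founded order <, and a local peak b <-_alpha a ->_beta c is called decreasing if it can be joined as

        b ->*_{<alpha} . ->=_{beta} . ->*_{<alpha,beta} d    and    c ->*_{<beta} . ->=_{alpha} . ->*_{<alpha,beta} d,

    where ->*_{<alpha} stands for finitely many steps whose labels lie strictly below alpha, ->=_{beta} for at most one step labelled beta, and <alpha,beta for labels strictly below alpha or beta. If every local peak is decreasing, the underlying unlabelled relation is confluent; the question studied above is how small the label set I can be kept.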

    Computability in constructive type theory

    We give a formalised and machine-checked account of computability theory in the Calculus of Inductive Constructions (CIC), the constructive type theory underlying the Coq proof assistant. We first develop synthetic computability theory, pioneered by Richman, Bridges, and Bauer, where one treats all functions as computable, eliminating the need for a model of computation. We assume a novel parametric axiom for synthetic computability and give proofs of results like Rice's theorem, the Myhill isomorphism theorem, and the existence of Post's simple and hypersimple predicates, relying on no other axioms such as Markov's principle or choice axioms. As a second step, we introduce models of computation. We give a concise overview of definitions of various standard models and contribute machine-checked simulation proofs, which pose a non-trivial engineering effort. We identify a notion of synthetic undecidability relative to a fixed halting problem, allowing axiom-free machine-checked proofs of undecidability. We contribute such undecidability proofs for the historical foundational problems of computability theory; these proofs require the identification of invariants left out in the literature and now form the basis of the Coq Library of Undecidability Proofs. We then identify the weak call-by-value λ-calculus L as a sweet spot for programming in a model of computation. We introduce a certifying extraction framework and analyse an axiom stating that every function of type ℕ → ℕ is L-computable.
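
    To make the synthetic setting concrete, the central notions can be phrased without reference to any machine model (a paraphrase for intuition, not the exact definitions used in the Coq development): a predicate p on X is decidable if there is a function f : X -> bool with p x <-> f x = true for all x, and p many-one reduces to q if there is a function f with p x <-> q (f x) for all x. Undecidability of p relative to a fixed halting problem H then means that decidability of p would entail decidability of H, which a many-one reduction from H to p establishes. Because every definable function counts as computable in this setting, such reductions are ordinary functions of the type theory, and the resulting undecidability proofs go through without additional axioms.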

    Proving Looping and Non-Looping Non-Termination by Finite Automata

    A new technique is presented to prove non-termination of term rewriting. The basic idea is to find a non-empty regular language of terms that is closed under rewriting and does not contain normal forms. The technique is automated by representing the language by a tree automaton with a fixed number of states, and expressing the mentioned requirements as a SAT formula. Satisfiability of this formula implies non-termination. Our approach succeeds for many examples where all earlier techniques fail, for instance for the S-rule from combinatory logic.
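
    The correctness argument behind this criterion is short and worth spelling out (a sketch, not the paper's formal statement): let L be a non-empty language of terms that is closed under rewriting and contains no normal forms. Pick any term t0 in L; since t0 is not a normal form, it rewrites to some t1, which lies in L by closure, and so on, yielding an infinite reduction t0 -> t1 -> t2 -> ... and hence non-termination. The automation then searches for a tree automaton with a fixed number of states whose language satisfies exactly these three conditions, with non-emptiness, closure under the rewrite rules, and absence of normal forms all encoded in the SAT formula mentioned above.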

    Equivalence-Checking on Infinite-State Systems: Techniques and Results

    The paper presents a selection of recently developed and/or applied techniques for equivalence-checking on infinite-state systems, together with an up-to-date overview of existing results (as of September 2004).

    Nominal Unification from a Higher-Order Perspective

    Nominal Logic is a version of first-order logic with equality, name-binding, renaming via name-swapping, and freshness of names. In contrast to higher-order logic, bindable names, called atoms, and instantiable variables are treated as distinct entities. Moreover, atoms can be captured by instantiation, breaking a fundamental principle of the lambda-calculus. Despite these differences, nominal unification can be seen from a higher-order perspective. From this view, we show that nominal unification can be reduced to a particular fragment of higher-order unification problems: Higher-Order Pattern Unification. This reduction proves that nominal unification can be decided in quadratic deterministic time, using the linear algorithm for Higher-Order Pattern Unification. We also prove that the translation preserves most general unifiers.
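
    The flavour of such a reduction can be conveyed by how it treats binders and variables (a rough sketch for intuition, not the paper's exact translation): a nominal abstraction [a]t becomes a lambda abstraction λa. t', and a nominal variable X occurring in the scope of atoms a1, ..., ak is replaced by a fresh higher-order variable applied to those atoms, X' a1 ... ak, so that the capturing instantiation allowed in nominal logic is simulated by ordinary capture-avoiding substitution followed by beta-reduction. Since the arguments a1, ..., ak are pairwise distinct bound variables, the translated problems fall into the pattern fragment, for which unification is decidable in linear time; the quadratic overall bound then reflects the size increase caused by the translation.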

    The exact hardness of deciding derivational and runtime complexity

    For any class C of computable total functions satisfying some mild conditions, we prove that the following decision problems are complete for the existential part of the second level of the arithmetical hierarchy: (A) deciding whether a term rewriting system (TRS for short) has runtime complexity bounded by a function in C; (B) deciding whether a TRS has derivational complexity bounded by a function in C. In particular, the problems of deciding whether a TRS has polynomially (exponentially) bounded runtime complexity (respectively derivational complexity) are complete for this level of the arithmetical hierarchy. This places deciding polynomial derivational or runtime complexity of TRSs at the same level as deciding non-termination or non-confluence of TRSs. We proceed to show that the related problem of deciding, for a single computable function f, whether a TRS has runtime complexity bounded from above by f is complete for the universal part of the first level of the arithmetical hierarchy. We further prove that analysing the implicit complexity of TRSs is even more difficult: the problem of deciding whether a TRS accepts a language of terms accepted by some TRS with runtime complexity bounded by a function in C is complete for the existential part of the third level of the arithmetical hierarchy. All of our results extend easily to the notion of minimal complexity (where the length of shortest reductions to normal form is considered) and remain valid under any computable reduction strategy. Finally, all results hold both for unrestricted TRSs and for the class of orthogonal TRSs.
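
    For readers less familiar with the arithmetical hierarchy, the levels involved can be spelled out (standard definitions, not specific to this paper): the existential part of the second level, Sigma^0_2, consists of problems of the form { n | exists x forall y R(n, x, y) } for a decidable relation R; the universal part of the first level, Pi^0_1, of problems of the form { n | forall y R(n, y) }; and the existential part of the third level, Sigma^0_3, of problems of the form { n | exists x forall y exists z R(n, x, y, z) }. So problems (A) and (B) sit exactly one quantifier alternation above the Pi^0_1-complete problem of checking a single fixed bound f, and the implicit-complexity problem adds one more alternation on top of that.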

    On the Termination Problem for Probabilistic Higher-Order Recursive Programs

    In the last two decades, there has been much progress on model checking of both probabilistic systems and higher-order programs. In spite of the emergence of higher-order probabilistic programming languages, not much has been done to combine those two approaches. In this paper, we initiate a study of the probabilistic higher-order model checking problem by giving some first theoretical and experimental results. As a first step towards our goal, we introduce PHORS, a probabilistic extension of higher-order recursion schemes (HORS), as a model of probabilistic higher-order programs. PHORS may alternatively be viewed as a higher-order extension of recursive Markov chains. We then investigate the probabilistic termination problem -- or, equivalently, the probabilistic reachability problem. We prove that almost-sure termination of order-2 PHORS is undecidable. We also provide a fixpoint characterization of the termination probability of PHORS, and develop a sound (but possibly incomplete) procedure for approximately computing the termination probability. We have implemented the procedure for order-2 PHORS and confirmed, through preliminary experiments reported at the end of the article, that the procedure works well.
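
    The fixpoint characterization generalizes a familiar first-order phenomenon, which a deliberately simple example (not taken from the article) may illustrate: a recursive process that halts with probability p and otherwise spawns two independent copies of itself has termination probability equal to the least solution in [0, 1] of x = p + (1 - p) x^2, namely min(1, p / (1 - p)). For PHORS the equation system ranges over higher-order objects rather than single reals, but the same least-fixpoint reading applies, and approximating the least fixpoint from below is one way to obtain the kind of sound but possibly incomplete estimate of the termination probability described above.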