
    Strong Normalization for HA + EM1 by Non-Deterministic Choice

    We study the strong normalization of a new Curry-Howard correspondence for HA + EM1, constructive Heyting Arithmetic with the excluded middle restricted to Sigma-0-1 formulas. The proof-term language of HA + EM1 consists of the lambda calculus plus an operator ||_a, which represents, from the viewpoint of programming, an exception operator with a delimited scope, and from the viewpoint of logic, a restricted version of the excluded middle. We give a strong normalization proof for the system based on a technique of "non-deterministic immersion".
    Comment: In Proceedings COS 2013, arXiv:1309.092
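    A minimal sketch of the delimited-exception reading of EM1, in Haskell rather than the paper's proof-term calculus (the names em1, universal and existential are ours): the universal branch runs under the assumption that the decidable matrix p has no witness, and if it ever queries a counterexample, its delimited scope aborts and the witness is handed to the existential branch.

        -- Sketch (ours, not the paper's calculus): EM1 on a decidable
        -- Sigma-0-1 matrix p, read as a delimited exception.
        em1 :: (Int -> Bool)                              -- decidable matrix p
            -> ((Int -> Either Int ()) -> Either Int r)   -- universal branch
            -> (Int -> r)                                 -- existential branch
            -> r
        em1 p universal existential =
          case universal check of
            Left n  -> existential n   -- the exception carried a witness of p
            Right r -> r               -- the assumption was never contradicted
          where
            check n = if p n then Left n else Right ()

    Here Left plays the role of the delimited exception: raising it can never escape past the enclosing em1, mirroring the delimited scope of ||_a.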

    Dimension Extractors and Optimal Decompression

    A *dimension extractor* is an algorithm designed to increase the effective dimension -- i.e., the amount of computational randomness -- of an infinite binary sequence, turning a "partially random" sequence into a "more random" one. Extractors are exhibited for various effective dimensions, including constructive, computable, space-bounded, time-bounded, and finite-state dimension. Using similar techniques, the Kučera-Gács theorem is examined from the perspective of decompression: every infinite sequence S is Turing reducible to a Martin-Löf random sequence R such that the asymptotic number of bits of R needed to compute n bits of S, divided by n, is precisely the constructive dimension of S. This is shown to be the optimal ratio of query bits to computed bits achievable with Turing reductions. The extractors and decompressors developed here lead directly to new characterizations of some effective dimensions in terms of optimal decompression by Turing reductions.
    Comment: This report was combined with a different conference paper, "Every Sequence is Decompressible from a Random One" (cs.IT/0511074, http://dx.doi.org/10.1007/11780342_17), and both titles were changed, with the conference paper incorporated as Section 5 of the new combined paper. The combined paper was accepted to the journal Theory of Computing Systems as part of a special issue of invited papers from the second conference on Computability in Europe, 200
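    For orientation (standard background, not an additional claim of the paper): constructive dimension has a well-known Kolmogorov-complexity characterization, and the decompression result above says the query ratio of the reduction attains exactly this value. Writing K for prefix-free Kolmogorov complexity, S ↾ n for the first n bits of S, and q_R(n) for the number of bits of R queried when computing S ↾ n (our notation), in LaTeX:

        \dim(S) \;=\; \liminf_{n\to\infty} \frac{K(S \upharpoonright n)}{n},
        \qquad
        \liminf_{n\to\infty} \frac{q_R(n)}{n} \;=\; \dim(S)

    The abstract's "asymptotic number of bits" is rendered here as a liminf; the precise limit notion is the one fixed in the paper.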

    Analysis of methods for extraction of programs from non-constructive proofs

    The present thesis compares two computational interpretations of non-constructive proofs: refined A-translation and Gödel's functional ("Dialectica") interpretation. The behaviour of the extraction methods is evaluated in the light of several case studies, in which the resulting programs are analysed and compared. It is argued that the two interpretations correspond to specific backtracking implementations and that programs obtained via the refined A-translation tend to be simpler, faster and more readable than programs obtained via Gödel's interpretation. Three layers of optimisation are suggested in order to produce faster and more readable programs. First, it is shown that syntactic repetition of subterms can be reduced by using let-constructions instead of meta-level substitutions, yielding a near-linear size bound on extracted terms (see the sketch below). Second, syntactically computational parts of the proof can be declared irrelevant, which can be used to remove redundant parameters, possibly improving the efficiency of the program. Finally, a special case of induction is identified for which a more efficient recursive extracted term can be defined. It is shown that the outcome of case distinctions can be memoised, which can result in an exponential improvement in the average time complexity of the extracted program.
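    A minimal sketch of the let-optimisation, in Haskell rather than the thesis's extraction setting (expensive, duplicated and shared are illustrative names): meta-level substitution copies a subterm into every occurrence of the variable it replaces, while a let-construction names it once.

        expensive :: Int -> Int
        expensive n = sum [1 .. n * n]   -- stand-in for a large extracted subterm

        -- Meta-level substitution writes the subterm out at each occurrence,
        -- so the extracted term grows with the number of uses:
        duplicated :: Int
        duplicated = expensive 1000 + expensive 1000 + expensive 1000

        -- A let-construction introduces the subterm once and shares it,
        -- which is what yields the near-linear size bound mentioned above:
        shared :: Int
        shared = let e = expensive 1000 in e + e + e

    The same sharing idea underlies the memoisation of case distinctions: binding the outcome of a case analysis once and reusing it avoids recomputing it in every recursive call.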

    Mathematische Logik

    [no abstract available]

    The Computability-Theoretic Content of Emergence

    In dealing with emergent phenomena, a common task is to identify useful descriptions of them in terms of the underlying atomic processes, and to extract enough computational content from these descriptions to enable predictions to be made. Generally, the underlying atomic processes are quite well understood and (with important exceptions) captured by mathematics from which it is relatively easy to extract algorithmic content. A widespread view is that the difficulty in describing transitions from algorithmic activity to the emergence associated with chaotic situations is a simple case of complexity outstripping computational resources and human ingenuity; or, alternatively, that phenomena transcending the standard Turing model of computation, if they exist, must necessarily lie outside the domain of classical computability theory. In this article we suggest that much of the current confusion arises from conceptual gaps and the lack of a suitably fundamental model within which to situate emergence. We examine the potential for placing emergent relations in a familiar context based on Turing's 1939 model for interactive computation over structures described in terms of reals. The explanatory power of this model is explored, formalising informal descriptions in terms of mathematical definability and invariance, and relating a range of basic scientific puzzles to results and intractable problems in computability theory.

    Computability in constructive type theory

    We give a formalised and machine-checked account of computability theory in the Calculus of Inductive Constructions (CIC), the constructive type theory underlying the Coq proof assistant. We first develop synthetic computability theory, pioneered by Richman, Bridges, and Bauer, in which all functions are treated as computable, eliminating the need for a model of computation. We assume a novel parametric axiom for synthetic computability and give proofs of results such as Rice's theorem, the Myhill isomorphism theorem, and the existence of Post's simple and hypersimple predicates, relying on no other axioms such as Markov's principle or choice axioms. As a second step, we introduce models of computation. We give a concise overview of definitions of various standard models and contribute machine-checked simulation proofs between them, a non-trivial engineering effort. We identify a notion of synthetic undecidability relative to a fixed halting problem, allowing axiom-free machine-checked proofs of undecidability. We contribute such undecidability proofs for the historical foundational problems of computability theory; these require the identification of invariants left out in the literature and now form the basis of the Coq Library of Undecidability Proofs. We then identify the weak call-by-value λ-calculus L as a sweet spot for programming in a model of computation. We introduce a certifying extraction framework and analyse an axiom stating that every function of type ℕ → ℕ is L-computable.
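    The synthetic notions can be stated without mentioning any machine model; the following shorthand is ours, not the thesis's Coq definitions. A predicate p over X is synthetically decidable if some function of the type theory itself decides it, and undecidability relative to a fixed halting problem H is witnessed by a many-one reduction from H:

        \mathsf{dec}(p) \;:=\; \exists f \colon X \to \mathbb{B}.\ \forall x.\ \bigl(p\,x \leftrightarrow f\,x = \mathsf{true}\bigr)

        p \preceq_m q \;:=\; \exists f \colon X \to Y.\ \forall x.\ \bigl(p\,x \leftrightarrow q\,(f\,x)\bigr)

        \mathsf{undec}_H(q) \;:=\; H \preceq_m q

    Under this reading, a proof of H \preceq_m q is axiom-free: it exhibits a concrete reduction function inside CIC and needs no decidability assumption about H.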

    The lambda-mu-T-calculus

    Calculi with control operators have been studied as extensions of simple type theory. Real programming languages contain datatypes, so to really understand control operators, one should also include these in the calculus. As a first step in that direction, we introduce lambda-mu-T, a combination of Parigot's lambda-mu-calculus and Gödel's T, which extends a calculus with control operators by a datatype of natural numbers with a primitive recursor. We consider the problem of confluence on raw terms, and that of strong normalization for the well-typed terms. Observing some problems with extending the proofs of Baba et al. and Parigot's original confluence proof, we provide new, improved proofs of confluence (by complete developments) and strong normalization (by reducibility and a postponement argument) for our system. We conclude with some remarks about extensions, choices, and prospects for an improved presentation.
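    A minimal sketch of the datatype fragment just described, in Haskell (the lambda-mu control operators have no direct Haskell counterpart and are omitted): Gödel's T natural numbers with their primitive recursor.

        data Nat = Zero | Succ Nat

        -- recNat z s n computes by primitive recursion on n:
        --   recNat z s Zero     = z
        --   recNat z s (Succ m) = s m (recNat z s m)
        recNat :: a -> (Nat -> a -> a) -> Nat -> a
        recNat z _ Zero     = z
        recNat z s (Succ m) = s m (recNat z s m)

        -- Example: addition defined from the recursor alone.
        add :: Nat -> Nat -> Nat
        add m = recNat m (\_ r -> Succ r)

    Here add m n unfolds to n applications of Succ to m, which is exactly the computational behaviour a primitive recursor provides.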