
    A Mechanised Proof of the Time Invariance Thesis for the Weak Call-By-Value λ-Calculus

    The weak call-by-value λ-calculus L and Turing machines can simulate each other with a polynomial overhead in time. This time invariance thesis for L, where the number of β-reductions of a computation is taken as its time complexity, is the culmination of a 25-year line of research, combining work by Blelloch, Greiner, Dal Lago, Martini, Accattoli, Forster, Kunze, Roth, and Smolka. The present paper presents a mechanised proof of the time invariance thesis for L, constituting the first mechanised equivalence proof between two standard models of computation covering time complexity. The mechanisation builds on an existing framework for the extraction of Coq functions to L and contributes a novel Hoare logic framework for the verification of Turing machines. The mechanised proof of the time invariance thesis establishes L as a model for future developments of mechanised computational complexity theory regarding time. It can also be seen as a non-trivial but elementary case study of time-complexity-preserving translations between a functional language and a sequential machine model. As a by-product, we obtain a mechanised many-one equivalence proof of the halting problems for L and Turing machines, which we contribute to the Coq Library of Undecidability Proofs.
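    As a rough illustration of the objects involved, the following Coq sketch gives a de Bruijn term syntax for L and a weak call-by-value step relation in which only abstractions are values and reduction never goes under binders; counting these steps is the time measure of the invariance thesis. The names (term, step, subst) and the treatment of substitution as a parameter are illustrative assumptions, not the definitions of the mechanisation itself.

```coq
(* Minimal sketch, not the paper's actual development: de Bruijn syntax
   for L and its weak call-by-value reduction. *)
Inductive term : Type :=
| var (n : nat)        (* de Bruijn variable *)
| app (s t : term)     (* application *)
| lam (s : term).      (* abstraction; the only values *)

(* Capture-avoiding substitution for index 0 is assumed as a parameter
   here; a real development defines it recursively on terms. *)
Parameter subst : term -> term -> term.

(* Weak call-by-value reduction: a beta step fires only when the argument
   is an abstraction, and reduction never happens under a binder. *)
Inductive step : term -> term -> Prop :=
| step_beta s t    : step (app (lam s) (lam t)) (subst s (lam t))
| step_appL s s' t : step s s' -> step (app s t) (app s' t)
| step_appR s t t' : step t t' -> step (app s t) (app s t').
```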

    Applying Formal Methods to Networking: Theory, Techniques and Applications

    Despite its great importance, modern network infrastructure is remarkable for the lack of rigor in its engineering. The Internet, which began as a research experiment, was never designed to handle the users and applications it hosts today. The lack of formalization of the Internet architecture meant limited abstractions and modularity, especially for the control and management planes, so that every new need required a new protocol built from scratch. This led to an unwieldy, ossified Internet architecture resistant to any attempts at formal verification, and an Internet culture where expediency and pragmatism are favored over formal correctness. Fortunately, recent work in the space of clean-slate Internet design---especially the software defined networking (SDN) paradigm---offers the Internet community another chance to develop the right kind of architecture and abstractions. This has also led to a great resurgence of interest in applying formal methods to the specification, verification, and synthesis of networking protocols and applications. In this paper, we present a self-contained tutorial of the formidable amount of work that has been done in formal methods, and present a survey of its applications to networking. Comment: 30 pages, submitted to IEEE Communications Surveys and Tutorials

    Computability in constructive type theory

    We give a formalised and machine-checked account of computability theory in the Calculus of Inductive Constructions (CIC), the constructive type theory underlying the Coq proof assistant. We first develop synthetic computability theory, pioneered by Richman, Bridges, and Bauer, where one treats all functions as computable, eliminating the need for a model of computation. We assume a novel parametric axiom for synthetic computability and give proofs of results like Rice's theorem, the Myhill isomorphism theorem, and the existence of Post's simple and hypersimple predicates, relying on no other axioms such as Markov's principle or choice axioms. As a second step, we introduce models of computation. We give a concise overview of definitions of various standard models and contribute machine-checked simulation proofs, which constitute a non-trivial engineering effort. We identify a notion of synthetic undecidability relative to a fixed halting problem, allowing axiom-free machine-checked proofs of undecidability. We contribute such undecidability proofs for the historical foundational problems of computability theory; these require the identification of invariants left out in the literature and now form the basis of the Coq Library of Undecidability Proofs. We then identify the weak call-by-value λ-calculus L as a sweet spot for programming in a model of computation. We introduce a certifying extraction framework and analyse an axiom stating that every function of type ℕ → ℕ is L-computable.
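    As a rough illustration of the synthetic approach, the following Coq sketch defines decidability and many-one reducibility directly on predicates, without referring to a model of computation, and proves that decidability transports backwards along reductions. The names (decidable, reduces) are assumptions for illustration and are simplified with respect to the thesis.

```coq
(* Minimal sketch of synthetic decidability and many-one reducibility;
   predicates play the role of decision problems. *)
Definition decidable {X} (p : X -> Prop) : Prop :=
  exists f : X -> bool, forall x, p x <-> f x = true.

Definition reduces {X Y} (p : X -> Prop) (q : Y -> Prop) : Prop :=
  exists f : X -> Y, forall x, p x <-> q (f x).

(* Decidability transports backwards along a reduction, so a reduction
   from an undecidable p establishes undecidability of q. *)
Lemma reduces_decidable {X Y} (p : X -> Prop) (q : Y -> Prop) :
  reduces p q -> decidable q -> decidable p.
Proof.
  intros [f Hf] [d Hd].
  unfold decidable. exists (fun x => d (f x)).
  intros x. split.
  - intros Hp. apply (proj1 (Hd (f x))). apply (proj1 (Hf x)). exact Hp.
  - intros Hb. apply (proj2 (Hf x)). apply (proj2 (Hd (f x))). exact Hb.
Qed.
```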

    Mechanising Complexity Theory: The Cook-Levin Theorem in Coq


    Constructive Many-One Reduction from the Halting Problem to Semi-Unification

    Get PDF
    Semi-unification is the combination of first-order unification and first-order matching. The undecidability of semi-unification was proven by Kfoury, Tiuryn, and Urzyczyn in the 1990s by Turing reduction from Turing machine immortality (the existence of a diverging configuration). The particular Turing reduction is intricate, uses non-computational principles, and involves various intermediate models of computation. The present work gives a constructive many-one reduction from the Turing machine halting problem to semi-unification. This establishes RE-completeness of semi-unification under many-one reductions. The computability of the reduction function and the constructivity and correctness of the argument are witnessed by an axiom-free mechanization in the Coq proof assistant. Arguably, this serves as comprehensive, precise, and surveyable evidence for the result at hand. The mechanization is incorporated into the existing, well-maintained Coq Library of Undecidability Proofs. Notably, a variant of Hooper's argument for the undecidability of Turing machine immortality is part of the mechanization. Comment: CSL 2022, LMCS special issue
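    To make the problem statement concrete, the following Coq sketch fixes first-order terms with a single binary constructor and states when a list of inequalities is semi-unifiable: one substitution phi is applied to both sides, and each inequality additionally gets its own matching substitution psi. The encoding (term, arr, solvable) is an illustrative assumption and differs from the representation used in the Coq Library of Undecidability Proofs.

```coq
(* Minimal sketch of semi-unification instances and their solvability. *)
Require Import List.

Inductive term : Type :=
| var (x : nat)       (* first-order variable *)
| arr (s t : term).   (* a single binary function symbol *)

Definition valuation := nat -> term.

Fixpoint apply_sub (phi : valuation) (s : term) : term :=
  match s with
  | var x => phi x
  | arr s1 s2 => arr (apply_sub phi s1) (apply_sub phi s2)
  end.

(* An instance is a list of inequalities (s, t); it is solvable if some
   phi and, per inequality, some psi satisfy psi (phi s) = phi t. *)
Definition solvable (instance : list (term * term)) : Prop :=
  exists phi : valuation,
    forall s t, In (s, t) instance ->
      exists psi : valuation,
        apply_sub psi (apply_sub phi s) = apply_sub phi t.
```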