
    A perspective in theoretical computer science: Commemorative volume for Gift Siromoney

    No full text
    xviii, 438 p. ; 26 cm

    On Induction, Coinduction and Equality in Martin-Löf and Homotopy Type Theory

    Get PDF
    Martin-Löf Type Theory, having put computation at the center of logical reasoning, has been shown to be an effective foundation for proof assistants, with applications both in computer science and constructive mathematics. One ambition though is for MLTT to also double as a practical general purpose programming language. Datatypes in type theory come with an induction or coinduction principle which gives a precise and concise specification of their interface. However, such principles can interfere with how we would like to express our programs. In this thesis, we investigate more flexible alternatives to direct uses of the (co)induction principles.

    As a first contribution, we consider the n-truncation of a type in Homotopy Type Theory. We derive in HoTT an eliminator into (n+1)-truncated types instead of n-truncated ones, assuming extra conditions on the underlying function.

    As a second contribution, we improve on type-based criteria for termination and productivity. By augmenting the types with well-foundedness information, such criteria allow function definitions in a style closer to general recursion. We consider two criteria: guarded types, and sized types.

    Guarded types introduce a modality "later" to guard the availability of recursive calls provided by a general fixed-point combinator. In Guarded Cubical Type Theory we equip the fixed-point combinator with a propositional equality to its one-step unfolding, instead of a definitional equality that would break normalization. The notion of path from Cubical Type Theory allows us to do so without losing canonicity or decidability of conversion.

    Sized types, on the other hand, explicitly index datatypes with size bounds on the height or depth of their elements. The sizes however can get in the way of the reasoning principles we expect. Our approach is to introduce new quantifiers for "irrelevant" size quantification. We present a type theory with parametric quantifiers where irrelevance arises as a "free theorem". We also develop a conversion checking algorithm for a more specific theory where the new quantifiers are restricted to sizes.

    Finally, our third contribution is about the operational semantics of type theory. For the extensions above we would like to devise a practical conversion checking algorithm suitable for integration into a proof assistant. We formalized the correctness of such an algorithm for a small but challenging core calculus, proving that conversion is decidable. We expect this development to form a good basis to verify more complex theories.

    The ideas discussed in this thesis are already influencing the development of Agda, a proof assistant based on type theory.
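
    As a concrete flavour of the (co)induction principles and productivity checks discussed above, the following minimal Coq sketch (illustrative only, not taken from the thesis, and relying on Coq's syntactic guardedness check rather than guarded or sized types) defines a coinductive stream type and a corecursive function that is accepted because its recursive call sits directly under a constructor:

        (* Streams as a coinductive datatype. *)
        CoInductive Stream (A : Type) : Type :=
        | Cons : A -> Stream A -> Stream A.

        Arguments Cons {A} _ _.

        (* Accepted: the corecursive call is guarded by the constructor Cons,
           so the definition is productive. *)
        CoFixpoint repeat {A : Type} (a : A) : Stream A :=
          Cons a (repeat a).

        (* Observations by pattern matching. *)
        Definition hd {A : Type} (s : Stream A) : A :=
          match s with Cons a _ => a end.
        Definition tl {A : Type} (s : Stream A) : Stream A :=
          match s with Cons _ s' => s' end.

    Type-based criteria such as guarded types and sized types aim to accept more definitions than this purely syntactic check, while preserving normalization and decidability of conversion.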

    Teaching Foundations of Computation and Deduction Through Literate Functional Programming and Type Theory Formalization

    Get PDF
    We describe experiments in teaching fundamental notions of informatics: mathematical structures for formal concepts, and effective algorithms to manipulate them. The major themes of lambda-calculus and type theory served as guides for the effective implementation of functional programming languages and higher-order proof assistants, suitable for turning the theoretical material into effective tools that represent the concepts constructively and formally certify the proofs of their properties. Progressively, a literate programming and proving style replaced informal mathematics in the presentation of the material as executable course notes. The talk will recount the various stages of (in)completion of the corresponding set of notes over the years, and tell how their elaboration proved essential to the discovery of fundamental results.
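
    To give a flavour of the literate, executable course notes described above, a fragment of this kind (a hypothetical example, not taken from the actual notes) can state a piece of the theory and have Coq check it on the spot:

        (* Simple types and their interpretation as Coq types:
           the theoretical definition doubles as executable course material. *)
        Inductive ty : Type :=
        | Base  : ty
        | Arrow : ty -> ty -> ty.

        Fixpoint sem (t : ty) : Type :=
          match t with
          | Base      => nat
          | Arrow a b => sem a -> sem b
          end.

        (* The K combinator at a concrete simple type, checked by Coq itself. *)
        Definition K : sem (Arrow Base (Arrow Base Base)) :=
          fun x _ => x.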

    Design of a proof-checking kernel for the λΠ-calculus modulo

    No full text
    In recent years, the emergence of feature-rich and mature interactive proof assistants has enabled large formalization efforts of high-profile conjectures and results previously established only by pen and paper. A medley of incompatible and philosophically diverging logics is at the core of all these proof assistants. Cousineau and Dowek (2007) have proposed the λΠ-calculus modulo as a universal target framework for other front-end proof languages and environments. We explain in this thesis how this particularly simple formalism allows for a small, modular and efficient proof checker upon which the consistency of entire systems can be made to rely.

    Proofs increasingly rely on computation, both in the large, as exemplified by the proof of the four colour theorem by Gonthier (2007), and in the small, following the SSReflect methodology and its supporting tools. Encoding proofs from other systems in the λΠ-calculus modulo bakes yet more computation into the proof terms. We show how to make the proof checking problem manageable by turning entire proof terms into functional programs and compiling them in one go using off-the-shelf compilers for standard programming languages. We use untyped normalization by evaluation (NbE) as an enabling technology and show how to optimize previous instances of it found in the literature.

    Through a single change to the interpretation of proof terms, we arrive at a representation of proof terms using higher-order abstract syntax (HOAS), allowing for a proof checking algorithm devoid of any explicit typing context for all Pure Type Systems (PTS). We observe that this novel algorithm is a generalization to dependent types of a type checking algorithm found in the HOL proof assistants, enabling on-the-fly checking of proofs. We thus arrive at a purely functional system with no explicit state, where all proofs are checked by construction. We formally verify in Coq the correspondence between the type system on higher-order terms underlying this algorithm and the standard typing rules for PTS.

    This line of work can be seen as connecting two historic strands of proof assistants: LCF and its descendants, where proofs of untyped or simply typed formulae are checked by construction, versus Automath and its descendants, where proofs of dependently typed terms are checked a posteriori. The algorithms presented in this thesis are at the core of a new proof checker called Dedukti, and have in some cases been transferred to the more mature Coq platform. In joint work with Dénès, we show how to extend the untyped NbE algorithm to the syntax and reduction rules of the Calculus of Inductive Constructions (CIC). In joint work with Burel, we generalize previous work by Cousineau and Dowek (2007) on the embedding into the λΠ-calculus modulo of a large class of PTS to inductive types, pattern matching and fixpoint operators.
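
    To make the HOAS idea mentioned above concrete, here is a minimal sketch (not Dedukti's actual implementation) written in Coq, using the parametric variant of HOAS that Coq's positivity checker accepts; object-level binders are meta-level functions, so the representation carries no variable names and needs no explicit typing context:

        (* Lambda terms in parametric higher-order abstract syntax (PHOAS). *)
        Inductive term (V : Type) : Type :=
        | Var : V -> term V
        | App : term V -> term V -> term V
        | Lam : (V -> term V) -> term V.

        Arguments Var {V} _.
        Arguments App {V} _ _.
        Arguments Lam {V} _.

        (* The identity function, with binding delegated to the meta level. *)
        Definition identity {V : Type} : term V := Lam (fun x => Var x).

        (* A traversal of closed terms, obtained by instantiating V with unit. *)
        Fixpoint size (t : term unit) : nat :=
          match t with
          | Var _   => 1
          | App t u => 1 + size t + size u
          | Lam f   => 1 + size (f tt)
          end.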

    The Coq Proof Assistant : Reference Manual : Version 7.2

    Get PDF
    Coq is a proof assistant based on a higher-order logic. Coq allows one to state mathematical assertions and to mechanically check proofs of these assertions. It helps to find formal proofs, and allows the extraction of a certified program from the constructive proof of its formal specification. This document is the reference manual for version V7.2 of Coq, which is available from http://coq.inria.fr
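
    As a small, hedged illustration of the extraction mechanism mentioned in the abstract, the following sketch uses present-day Coq commands (the concrete syntax of version V7.2 differs in detail) to turn a constructive definition into an OCaml program:

        (* Load the extraction plugin and choose the target language;
           these commands follow current Coq releases, not V7.2. *)
        Require Import Extraction.
        Extraction Language OCaml.

        (* A constructive definition... *)
        Fixpoint double (n : nat) : nat :=
          match n with
          | O   => O
          | S m => S (S (double m))
          end.

        (* ...and the OCaml code extracted from it, printed together
           with the definitions it depends on. *)
        Recursive Extraction double.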

    Some Lambda Calculus and Type Theory Formalized

    Get PDF
    "This paper is about our hobby." That is the first sentence of [MP93], the first report on our formal development of lambda calculus and type theory, written in autumn 1992. We have continued to pursue this hobby on and off ever since, and have developed a substantial body of formal knowledge, including Church-Rosser and standardizationtheorems for beta reduction, and the basic theory ofPure Type Systems (PTS) leading to the strengthening theorem and type checking algorithms for PTS. Some of this work is reported in [MP93, vBJMP94, Pol94b, Pol95]. In the present paper we survey this work, including some new proofs, and point out what we feel has been learned about the general issues of formalizing mathematics. On the technical side, we describe an abstract, and simplified, proof of standardization for beta reduction, not previously published, that doesnot mention redex positions or residuals. On the general issues, we emphasize the search for formal definitions that are convenient for formal proof and convincingly represent the intended informal concepts. The LEGO Proof Development System [LP92] was used to check the work in an implementation of the Extended Calculus of Constructions(ECC) with inductive types [Luo94]. LEGO is a refinement styleproof checker, publicly available by ftp and WWW, with a User's Manual [LP92] and a large collection of examples. Section 1.3 contains information on accessing the formal development described in this paper. Other interesting examples formalized in LEGO include program specification and data refinement [Luo91], strong normalization of System F [Alt93], synthetic domain theory [Reu95, Reu96], and operational semantics for imperative programs [Sch97]