23 research outputs found

    Elaboration in Dependent Type Theory

    To be usable in practice, interactive theorem provers need to provide convenient and efficient means of writing expressions, definitions, and proofs. This involves inferring information that is often left implicit in an ordinary mathematical text, and resolving ambiguities in mathematical expressions. We refer to the process of passing from a quasi-formal and partially-specified expression to a completely precise formal one as elaboration. We describe an elaboration algorithm for dependent type theory that has been implemented in the Lean theorem prover. Lean's elaborator supports higher-order unification, type class inference, ad hoc overloading, insertion of coercions, the use of tactics, and the computational reduction of terms. The interactions between these components are subtle and complex, and the elaboration algorithm has been carefully designed to balance efficiency and usability. We describe the central design goals and the means by which they are achieved.
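
    As a rough illustration of what an elaborator must do (a minimal sketch of the general shape, not Lean's actual algorithm; the tiny type language, the single Nat-to-Int coercion, and all names are invented here), the following toy checker fills metavariables by first-order unification and falls back to inserting a coercion when the inferred and expected types disagree:

        -- Toy elaboration sketch: fill holes by unification, or insert a coercion
        -- when the inferred and expected types disagree. No occurs check; the
        -- Nat-to-Int coercion is the only one this miniature language knows.
        import qualified Data.Map as M

        data Ty = TNat | TInt | TArr Ty Ty | TMeta Int deriving (Eq, Show)
        data Tm = Var String | App Tm Tm | LitNat Integer | Coerce Tm | Hole Int
          deriving Show

        type Subst = M.Map Int Ty

        -- First-order unification, instantiating metavariables on either side.
        unify :: Subst -> Ty -> Ty -> Maybe Subst
        unify s a b = go (walk a) (walk b)
          where
            walk (TMeta m) | Just t <- M.lookup m s = walk t
            walk t = t
            go t1 t2 | t1 == t2 = Just s
            go (TMeta m) t = Just (M.insert m t s)
            go t (TMeta m) = Just (M.insert m t s)
            go (TArr a1 b1) (TArr a2 b2) = do s' <- unify s a1 a2
                                              unify s' b1 b2
            go _ _ = Nothing

        -- Check a term whose type was inferred against an expected type:
        -- solve by unification if possible, otherwise try a coercion.
        elab :: Subst -> Tm -> Ty -> Ty -> Maybe (Tm, Subst)
        elab s tm inferred expected =
          case unify s inferred expected of
            Just s' -> Just (tm, s')
            Nothing
              | inferred == TNat && expected == TInt -> Just (Coerce tm, s)
              | otherwise -> Nothing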

    Generalized Universe Hierarchies and First-Class Universe Levels

    In type theories, universe hierarchies are commonly used to increase the expressive power of the theory while avoiding inconsistencies arising from size issues. There are numerous ways to specify universe hierarchies, and theories may differ in details of cumulativity, choice of universe levels, specification of type formers and eliminators, and available internal operations on levels. In the current work, we aim to provide a framework which covers a large part of the design space. First, we develop syntax and semantics for cumulative universe hierarchies, where levels may come from any set equipped with a transitive well-founded ordering. In the semantics, we show that induction-recursion can be used to model transfinite hierarchies, and also to support lifting operations on type codes which strictly preserve type formers. Then, we consider a setup where universe levels are first-class types and subject to arbitrary internal reasoning. This generalizes the bounded polymorphism features of Coq and at the same time the internal level computations in Agda.
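
    To make part of this design space concrete (a minimal sketch under invented names, not the paper's induction-recursion semantics), one can picture levels as any set with a transitive well-founded order and least upper bounds, and type codes whose well-formedness at a level only constrains the universe codes occurring inside them:

        -- Toy universe levels: finite levels plus one transfinite level omega.
        data Level = Fin Integer | Omega deriving (Eq, Show)

        -- A transitive, well-founded strict order on levels.
        ltLevel :: Level -> Level -> Bool
        ltLevel (Fin m) (Fin n) = m < n
        ltLevel (Fin _) Omega   = True
        ltLevel _       _       = False

        -- Least upper bound of two levels (in a full system this would give,
        -- e.g., the level of a Pi from the levels of its domain and codomain).
        lub :: Level -> Level -> Level
        lub (Fin m) (Fin n) = Fin (max m n)
        lub _       _       = Omega

        -- Type codes: a universe at a level, a base type, and a Pi-like former.
        data Code = U Level | Nat | Pi Code Code deriving (Eq, Show)

        -- Well-formedness of a code at an ambient level: a universe code must sit
        -- strictly below it, while Nat and Pi impose no level constraints of their
        -- own. Cumulativity is then a meta-property: if ltLevel l l' and wfAt l c,
        -- then wfAt l' c, by transitivity of the order.
        wfAt :: Level -> Code -> Bool
        wfAt l (U j)    = ltLevel j l
        wfAt l (Pi a b) = wfAt l a && wfAt l b
        wfAt _ Nat      = True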

    A Type Checker for a Logical Framework with Union and Intersection Types

    We present the syntax, semantics, typing, subtyping, unification, refinement, and REPL of Bull, a prototype theorem prover based on the ∆-Framework, i.e. a fully-typed Logical Framework à la Edinburgh LF decorated with union and intersection types, as described in previous papers by the authors. Bull also implements a subtyping algorithm for the Type Theory Ξ of Barbanera-Dezani-de'Liguoro. Bull has a command-line interface where the user can declare axioms and terms, perform computations, and use some basic terminal-style features such as error pretty-printing, subexpression highlighting, and file loading. Moreover, it can typecheck a proof or normalize it. These terms can be incomplete, so the typechecking algorithm uses unification to try to construct the missing subterms. Bull uses the syntax of Berardi's Pure Type Systems to improve the compactness and modularity of the kernel. Abstract and concrete syntax are mostly aligned and similar to the concrete syntax of Coq. Bull uses a higher-order unification algorithm for terms, while typechecking and partial type inference are performed by a bidirectional refinement algorithm similar to the one found in Matita and Beluga. The refinement can be split into two parts: the essence refinement and the typing refinement. Binders are implemented using de Bruijn indices. We have defined a concrete language syntax that allows the user to write ∆-terms, defined the reduction rules and an evaluator, and implemented from scratch a refiner which performs partial typechecking and type reconstruction. We have tested Bull on classical examples from the intersection and union type literature, such as those formalized by Pfenning with his Refinement Types in LF and by Pierce. We hope that this line of research will be useful for experimenting, in a proof-theoretical setting, with forms of polymorphism alternative to Girard's parametric one.
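
    For a flavour of the subtyping such a system needs (a naive sketch with invented constructor names; it ignores the distributivity laws of the full theory Ξ and is not the algorithm Bull implements), intersection and union types decompose as follows:

        -- Naive structural subtyping with arrows, intersections, and unions.
        -- Right-hand intersections and left-hand unions are decomposed first;
        -- distributivity laws are deliberately omitted, so this check is
        -- incomplete for the full theory.
        data Ty = Atom String | Arr Ty Ty | And Ty Ty | Or Ty Ty
          deriving (Eq, Show)

        sub :: Ty -> Ty -> Bool
        sub s (And t1 t2) = sub s t1 && sub s t2        -- s <= t1 /\ t2
        sub (Or s1 s2) t  = sub s1 t && sub s2 t        -- s1 \/ s2 <= t
        sub (And s1 s2) t = sub s1 t || sub s2 t        -- commit to one component
        sub s (Or t1 t2)  = sub s t1 || sub s t2        -- commit to one disjunct
        sub (Arr s1 s2) (Arr t1 t2) = sub t1 s1 && sub s2 t2  -- contra/covariant
        sub (Atom a) (Atom b) = a == b
        sub _ _ = False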

    A consistent foundation for Isabelle/HOL

    The interactive theorem prover Isabelle/HOL is based on the well-understood higher-order logic (HOL), which is widely believed to be consistent (and provably consistent in set theory by a standard semantic argument). However, Isabelle/HOL brings its own personal touch to HOL: overloaded constant definitions, used to provide the users with Haskell-like type classes. These features are a delight for the users, but unfortunately are not easy to get right as an extension of HOL—they have a history of inconsistent behavior. It has been an open question under which criteria overloaded constant definitions and type definitions can be combined while still guaranteeing consistency. This paper presents a solution to this problem: non-overlapping definitions and termination of the definition-dependency relation (tracked not only through constants but also through types) ensure relative consistency of Isabelle/HOL.
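
    The termination half of that criterion is, at its core, a well-foundedness check on a dependency graph recording, for each defined constant instance or type, everything its definition mentions. A minimal sketch of that graph-theoretic core (with invented item names; this is not Isabelle's actual checker):

        -- Toy check of the "termination" half of the criterion: build a dependency
        -- graph from definitions (an edge from the defined item to every constant
        -- or type its right-hand side mentions) and reject any cycle.
        import qualified Data.Map as M
        import qualified Data.Set as S

        type Item = String                       -- a constant instance or a type name
        type DefGraph = M.Map Item [Item]        -- defined item -> items it depends on

        -- True iff the dependency relation is terminating, i.e. the graph is acyclic.
        terminating :: DefGraph -> Bool
        terminating g = all (acyclicFrom S.empty) (M.keys g)
          where
            acyclicFrom onPath x
              | x `S.member` onPath = False                       -- found a cycle
              | otherwise =
                  all (acyclicFrom (S.insert x onPath)) (M.findWithDefault [] x g)

        -- Example: a constant 'c' defined via a type 'tau' whose definition mentions
        -- 'c' again -- the kind of circularity the criterion rules out.
        bad :: DefGraph
        bad = M.fromList [("c", ["tau"]), ("tau", ["c"])]
        -- terminating bad == False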

    Practical Heterogeneous Unification for Dependent Type Checking

    Dependent types can specify in detail which inputs to a program are allowed, and how the properties of its output depend on the inputs. A program called the type checker assesses whether a program has a given type, thus detecting situations where the implementation of a program potentially differs from its intended behaviour. When using dependent types, the inputs to a program often occur in the types of other inputs or in the type of the output. The user may omit some of these redundant inputs when calling the program, expecting the type checker to infer those subterms automatically. Some type checkers restrict the inference of missing subterms to those cases where there is a provably unique solution. This makes the process more predictable, but also limits the situations in which the omitted terms can be inferred; especially since whether a unique solution exists is in general undecidable. This restriction can be made less limiting by giving the type checker flexibility regarding the order in which the missing subterms are inferred. The type checker can then use the information gained by filling in any one subterm in order to infer others, until the whole program has been type-checked. However, this flexibility may in some cases lead to ill-typed subterms being inferred, breaking internal invariants of the type checker and causing it to crash or loop. The type checker could mitigate this by consistently rechecking the type of each inferred subterm, but this might incur a performance penalty. An approach by Gundry and McBride (2012) called twin types has the potential to afford the desired flexibility while preserving well-typedness invariants. However, this method had not yet been tested in a practical setting. In this thesis we streamline the method of twin types in order to ease its practical implementation, justify the correctness of our modifications, and then implement the result in an established dependently-typed language called Agda. We show that our implementation resolves certain existing bugs in Agda while still allowing a wide range of examples to be type-checked, and achieves this without heavily impacting performance.
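
    A data-level sketch of the idea (names and simplifications are mine, in the spirit of Gundry and McBride's twin types rather than Agda's internals): each side of a postponed constraint carries its own type, and a twin binding gives one variable two types that are only merged once those types are known to be equal.

        -- Heterogeneous constraints and twin bindings, as plain data.
        data Tm = Var String | App Tm Tm | Lam String Tm | MetaV String
          deriving (Eq, Show)
        type Ty = Tm   -- dependent types: types are just terms

        -- Each side of a constraint keeps its own type, so the two terms being
        -- equated need not be known to have the same type yet ("heterogeneous").
        data Constraint = Unify { lhs :: Tm, lhsTy :: Ty, rhs :: Tm, rhsTy :: Ty }
          deriving Show

        -- A twin binds one variable at two types; occurrences refer to the left
        -- or right type until the two are unified, after which the twin collapses
        -- to an ordinary binding.
        data Binding = Ordinary String Ty | Twin String Ty Ty
          deriving Show

        -- Collapsing a twin is only sound once its two types are definitionally
        -- equal; here definitional equality is approximated by syntactic equality.
        collapse :: Binding -> Maybe Binding
        collapse (Twin x t1 t2) | t1 == t2 = Just (Ordinary x t1)
        collapse b@(Ordinary _ _)          = Just b
        collapse _                         = Nothing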

    Erasure in dependently typed programming

    It is important to reduce the cost of correctness in programming. Dependent types and related techniques, such as type-driven programming, offer ways to do so. Some parts of dependently typed programs constitute evidence of their type correctness and, once checked, are unnecessary for execution. These parts can easily become asymptotically larger than the remaining runtime-useful computation, which can cause linear-time algorithms to run in exponential time, or worse. It would be unacceptable, and contradict our goal of reducing the cost of correctness, to make programs run slower by only describing them more precisely. Current systems cannot erase such computation satisfactorily. By modelling erasure indirectly through type universes or irrelevance, they impose the limitations of those mechanisms on erasure. Some useless computation then cannot be erased and idiomatic programs remain asymptotically sub-optimal. This dissertation explains why we need erasure, that it is different from other concepts like irrelevance, and proposes two ways of erasing non-computational data. One is an untyped flow-based useless variable elimination, adapted for dependently typed languages, currently implemented in the Idris 1 compiler. The other is the main contribution of the dissertation: a dependently typed core calculus with erasure annotations, full dependent pattern matching, and an algorithm that infers erasure annotations from unannotated (or partially annotated) programs. I show that erasure in well-typed programs is sound in that it commutes with single-step reduction. Assuming the Church-Rosser property of reduction, I show that properties such as Subject Reduction hold, which extends the soundness result to multi-step reduction. I also show that the presented erasure inference is sound and complete with respect to the typing rules; that this approach can be extended with various forms of erasure polymorphism; that it works well with monadic I/O and foreign functions; and that it is effective in that it not only removes the runtime overhead caused by dependent typing in the presented examples, but can also shorten compilation times. "This work was supported by the University of St Andrews (School of Computer Science)." -- Acknowledgement
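
    The flow-based flavour of such an analysis can be pictured as a small fixpoint computation (a toy sketch with a made-up program representation, not the dissertation's calculus): a parameter is kept if it is used directly at runtime or flows into a kept parameter of some callee, and everything else is erased.

        -- Toy erasure-annotation inference by fixpoint iteration.
        import qualified Data.Map as M

        data Arg = Keep | Erase deriving (Eq, Show)

        -- For each function: for each of its parameters, a list of "uses":
        -- either a direct runtime use, or flowing into parameter j of function g.
        data Use = Direct | FlowsTo String Int deriving (Eq, Show)
        type Program = M.Map String [[Use]]     -- function -> per-parameter uses

        annotations :: Program -> M.Map String [Arg]
        annotations prog = go (M.map (map (const Erase)) prog)  -- start: erase everything
          where
            go ann
              | ann' == ann = ann                               -- fixpoint reached
              | otherwise   = go ann'
              where ann' = M.mapWithKey (\_f uses -> map (kept ann) uses) prog
            kept ann uses
              | any (needed ann) uses = Keep
              | otherwise             = Erase
            needed _   Direct        = True
            needed ann (FlowsTo g j) =
              case M.lookup g ann of
                Just args | j < length args -> args !! j == Keep
                _                           -> True             -- unknown callee: conservative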

    HERMIT: Mechanized Reasoning during Compilation in the Glasgow Haskell Compiler

    It is difficult to write programs which are both correct and fast. A promising approach, functional programming, is based on the idea of using pure, mathematical functions to construct programs. With effort, it is possible to establish a connection between a specification written in a functional language, which has been proven correct, and a fast implementation, via program transformation. When practiced in the functional programming community, this style of reasoning is still typically performed by hand, by either modifying the source code or using pen-and-paper. Unfortunately, performing such semi-formal reasoning by directly modifying the source code often obfuscates the program, and pen-and-paper reasoning becomes outdated as the program changes over time. Even so, this semi-formal reasoning prevails because formal reasoning is time-consuming, and requires considerable expertise. Formal reasoning tools often only work for a subset of the target language, or require programs to be implemented in a custom language for reasoning. This dissertation investigates a solution, called HERMIT, which mechanizes reasoning during compilation. HERMIT can be used to prove properties about programs written in the Haskell functional programming language, or transform them to improve their performance. Reasoning in HERMIT proceeds in a style familiar to practitioners of pen-and-paper reasoning, and mechanization allows these techniques to be applied to real-world programs with greater confidence. HERMIT can also re-check recorded reasoning steps on subsequent compilations, enforcing a connection with the program as the program is developed. HERMIT is the first system capable of directly reasoning about the full Haskell language. The design and implementation of HERMIT, motivated both by typical reasoning tasks and HERMIT's place in the Haskell ecosystem, is presented in detail. Three case studies investigate HERMIT's capability to reason in practice. These case studies demonstrate that semi-formal reasoning with HERMIT lowers the barrier to writing programs which are both correct and fast.
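
    The kind of property and hand calculation being mechanized is ordinary equational reasoning about Haskell programs. A small example of that flavour, written here as plain Haskell with the pen-and-paper steps as comments (this is only an illustration of the style of reasoning, not a HERMIT script):

        -- The map-fusion law, a typical target of both optimisation and proof.
        import Test.QuickCheck (quickCheck)

        mapFusion :: (Int -> Int) -> (Int -> Int) -> [Int] -> Bool
        mapFusion f g xs = map f (map g xs) == map (f . g) xs

        -- Pen-and-paper proof by induction on xs:
        --   []     : map f (map g []) = map f [] = [] = map (f . g) []
        --   (y:ys) : map f (map g (y:ys)) = f (g y) : map f (map g ys)
        --            = (f . g) y : map (f . g) ys      -- induction hypothesis
        --            = map (f . g) (y:ys)

        -- A quick sanity check of the law on random lists for one instantiation.
        main :: IO ()
        main = quickCheck (\xs -> mapFusion (+1) (*2) xs)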

    A machine-checked constructive metatheory of computation tree logic

    This thesis presents a machine-checked constructive metatheory of computation tree logic (CTL) and its sublogics K and K* based on results from the literature. We consider models, Hilbert systems, and history-based Gentzen systems and show that for every logic and every formula s the following statements are decidable and equivalent: s is true in all models, s is provable in the Hilbert system, and s is provable in the Gentzen system. We base our proofs on pruning systems constructing finite models for satisfiable formulas and abstract refutations for unsatisfiable formulas. The pruning systems are devised such that abstract refutations can be translated to derivations in the Hilbert system and the Gentzen system, thus establishing completeness of both systems with a single model construction. All results of this thesis are formalized and machine-checked with the Coq interactive theorem prover. Given the level of detail involved and the informal presentation in much of the original work, the gap between the original paper proofs and constructive machine-checkable proofs is considerable. The mathematical proofs presented in this thesis provide for elegant formalizations and often differ significantly from the proofs in the literature.
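
    For the semantic side of the equivalence, deciding truth in a fixed finite model is the easy part; the following miniature model checker for a CTL fragment uses the standard fixpoint characterisations of EU and AF (an illustration only, with invented names; it is not the thesis's pruning systems and says nothing about the Hilbert or Gentzen calculi):

        -- A tiny CTL model checker over finite Kripke structures.
        import qualified Data.Set as S

        data Form = P String | Neg Form | And Form Form
                  | EX Form | EU Form Form | AF Form
          deriving (Eq, Show)

        data Model = Model
          { states :: [Int]
          , succs  :: Int -> [Int]        -- total transition relation
          , label  :: Int -> [String]     -- atomic propositions true at a state
          }

        -- The set of states satisfying a formula, computed bottom-up.
        sat :: Model -> Form -> S.Set Int
        sat m = go
          where
            allStates = S.fromList (states m)
            preE x = S.fromList [s | s <- states m, any (`S.member` x) (succs m s)]
            preA x = S.fromList [s | s <- states m, all (`S.member` x) (succs m s)]
            lfp f = step S.empty                       -- least fixpoint by iteration
              where step x = let x' = f x in if x' == x then x else step x'
            go (P a)     = S.fromList [s | s <- states m, a `elem` label m s]
            go (Neg f)   = allStates `S.difference` go f
            go (And f g) = go f `S.intersection` go g
            go (EX f)    = preE (go f)
            go (EU f g)  = lfp (\x -> go g `S.union` (go f `S.intersection` preE x))
            go (AF f)    = lfp (\x -> go f `S.union` preA x)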