
    A Categorical Normalization Proof for the Modal Lambda-Calculus

    We investigate a simply typed modal λ-calculus, λ^{→□}, due to Pfenning, Wong and Davies, in which well-typed terms are defined with respect to a context stack that captures the possible-world semantics syntactically. It provides a logical foundation for multi-staged meta-programming. Our main contribution in this paper is a normalization by evaluation (NbE) algorithm for λ^{→□}, which we prove sound and complete. The NbE algorithm is a moderate extension of the standard presheaf model of the simply typed λ-calculus. However, central to the model construction and the NbE algorithm is the observation of Kripke-style substitutions on context stacks, which brings together two previously separate concepts: structural modal transformations on context stacks and substitutions for individual assumptions. Moreover, Kripke-style substitutions allow us to give a formulation of contextual types, which can represent open code in a meta-programming setting. Our work lays the groundwork for extending the logical foundation of Pfenning, Wong, and Davies towards a practical, dependently typed foundation for meta-programming.
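    As a purely illustrative aside, the sketch below shows normalization by evaluation for the plain simply typed fragment only; it does not model the box modality, context stacks, or Kripke-style substitutions discussed in the abstract, and all names (Tm, Val, eval, quote, nbe) are invented for the sketch rather than taken from the paper.

```haskell
-- Minimal NbE sketch for the simply typed fragment (no modality).
module NbESketch where

data Tm                          -- de Bruijn syntax
  = Var Int
  | Lam Tm
  | App Tm Tm
  deriving Show

data Val                         -- semantic values
  = VLam (Val -> Val)
  | VNe Ne

data Ne                          -- neutral values, blocked on a variable
  = NVar Int                     -- de Bruijn *level*
  | NApp Ne Val

-- Evaluate a term in an environment of values.
eval :: [Val] -> Tm -> Val
eval env (Var i)   = env !! i
eval env (Lam b)   = VLam (\v -> eval (v : env) b)
eval env (App t u) = apply (eval env t) (eval env u)

apply :: Val -> Val -> Val
apply (VLam f) v = f v
apply (VNe n)  v = VNe (NApp n v)

-- Read a value back into a normal form; the Int counts binders passed so far
-- and is used to convert de Bruijn levels back to indices.
quote :: Int -> Val -> Tm
quote k (VLam f) = Lam (quote (k + 1) (f (VNe (NVar k))))
quote k (VNe n)  = quoteNe k n

quoteNe :: Int -> Ne -> Tm
quoteNe k (NVar l)   = Var (k - l - 1)
quoteNe k (NApp n v) = App (quoteNe k n) (quote k v)

-- Normalize a closed term.
nbe :: Tm -> Tm
nbe = quote 0 . eval []
```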

    Contrasting Compile-Time Meta-Programming in Metalua and Converge

    Powerful, safe macro systems allow programs to be constructed programmatically by the user at compile time. Such systems have traditionally been largely confined to LISP-like languages and their successors. In this paper we describe and compare two modern, dynamically typed languages, Converge and Metalua, both of which have macro-like systems. We show how, in different ways, they build upon traditional macro systems to explore new ways of constructing programs.
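    Neither Metalua nor Converge syntax is shown here; as a rough analogue only, the sketch below uses Haskell's Template Haskell quotations and splices to construct code programmatically at compile time. The `power` generator is a standard textbook example, not taken from the paper.

```haskell
{-# LANGUAGE TemplateHaskell #-}
-- Compile-time code construction via quotations [| ... |] and splices $( ... ).
module PowerGen where

import Language.Haskell.TH (Exp, Q)

-- power n builds the expression \x -> x * x * ... * x (n factors) at compile time.
power :: Int -> Q Exp
power 0 = [| const (1 :: Int) |]
power n = [| \x -> x * $(power (n - 1)) x |]

-- At a use site in another module (GHC's stage restriction requires this),
-- $(power 3) 2 evaluates to 8, with the multiplication chain built before
-- the program runs.
```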

    Refinement Types for Logical Frameworks and Their Interpretation as Proof Irrelevance

    Refinement types sharpen systems of simple and dependent types by offering expressive means to more precisely classify well-typed terms. We present a system of refinement types for LF in the style of recent formulations where only canonical forms are well-typed. Both the usual LF rules and the rules for type refinements are bidirectional, leading to a straightforward proof of decidability of typechecking even in the presence of intersection types. Because we insist on canonical forms, structural rules for subtyping can now be derived rather than being assumed as primitive. We illustrate the expressive power of our system with examples and validate its design by demonstrating a precise correspondence with traditional presentations of subtyping. Proof irrelevance provides a mechanism for selectively hiding the identities of terms in type theories. We show that LF refinement types can be interpreted as predicates using proof irrelevance, establishing a uniform relationship between two previously studied concepts in type theory. The interpretation and its correctness proof are surprisingly complex, lending support to the claim that refinement types are a fundamental construct rather than just a convenient surface syntax for certain uses of proof irrelevance.
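    As a loose illustration only (not the paper's LF system, and without intersection types), the sketch below shows the bidirectional pattern the abstract describes: constructors synthesize their most precise refinement, and checking falls back to a subsorting test. All sort and function names are invented.

```haskell
-- Toy refinement sorts over a single base type of naturals:
-- Zero <= Nat and Pos <= Nat. Not the paper's LF refinement system.
module RefineSketch where

data Sort = Zero | Pos | Nat
  deriving (Eq, Show)

data Tm = Z | S Tm | Ann Tm Sort   -- zero, successor, sort annotation
  deriving Show

-- Subsorting: every sort refines Nat, and each sort refines itself.
subsort :: Sort -> Sort -> Bool
subsort s t = s == t || t == Nat

-- Synthesis: canonical constructors synthesize their most precise sort.
infer :: Tm -> Maybe Sort
infer Z         = Just Zero
infer (S n)     = fmap (const Pos) (infer n)  -- successor of a well-sorted term is positive
infer (Ann t s) = if check t s then Just s else Nothing

-- Checking: synthesize, then compare against the expected sort via subsorting.
check :: Tm -> Sort -> Bool
check t s = maybe False (`subsort` s) (infer t)
```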

    Explicit Substitutions for Contextual Type Theory

    In this paper, we present an explicit substitution calculus which distinguishes between ordinary bound variables and meta-variables. Its typing discipline is derived from contextual modal type theory. We first present a dependently typed lambda calculus with explicit substitutions for ordinary variables and explicit meta-substitutions for meta-variables. We then present a weak head normalization procedure which performs both kinds of substitution lazily and in a single pass, thereby combining the substitution walks for the two different classes of variables. Finally, we describe a bidirectional type checking algorithm which uses weak head normalization, and we prove its soundness.
    Comment: In Proceedings LFMTP 2010, arXiv:1009.218
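    The sketch below is only a rough, untyped illustration of the general idea of explicit substitutions with lazy weak head normalization; it has ordinary variables only (no meta-variables or meta-substitutions, and no dependent types), and all names are invented rather than taken from the paper's calculus.

```haskell
-- Untyped de Bruijn terms with an explicit (suspended) substitution node,
-- and a weak head normalizer that pushes substitutions lazily.
module ExplicitSubstSketch where

data Tm
  = Var Int        -- de Bruijn index
  | Lam Tm
  | App Tm Tm
  | Clo Tm Sub     -- term under a suspended substitution
  deriving Show

data Sub
  = Id             -- identity substitution
  | Cons Tm Sub    -- index 0 maps to the term; index n+1 maps to index n of the tail
  deriving Show

lookupSub :: Int -> Sub -> Tm
lookupSub n Id         = Var n
lookupSub 0 (Cons t _) = t
lookupSub n (Cons _ s) = lookupSub (n - 1) s

-- Compose substitutions: apply the first, then the second.
compSub :: Sub -> Sub -> Sub
compSub Id s          = s
compSub (Cons t s1) s = Cons (Clo t s) (compSub s1 s)

-- Weak head normalization: substitutions are pushed only far enough to expose
-- the head constructor; nothing is done under binders.
whnf :: Tm -> Tm
whnf (App t u) =
  case whnf t of
    Lam b         -> whnf (Clo b (Cons u Id))   -- beta: suspend [u/0] on the body
    Clo (Lam b) s -> whnf (Clo b (Cons u s))    -- beta under a suspended substitution
    t'            -> App t' u
whnf (Clo t s) =
  case t of
    Var n    -> whnf (lookupSub n s)
    App a b  -> whnf (App (Clo a s) (Clo b s))
    Clo a s' -> whnf (Clo a (compSub s' s))
    Lam _    -> Clo t s                          -- already a weak head normal form
whnf t = t
```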

    On Irrelevance and Algorithmic Equality in Predicative Type Theory

    Dependently typed programs contain an excessive amount of static terms that are necessary to please the type checker but irrelevant for computation. To separate static and dynamic code, several static analyses and type systems have been put forward. We consider Pfenning's type theory with irrelevant quantification, which is compatible with a type-based notion of equality that respects eta-laws. We extend Pfenning's theory to universes and large eliminations and develop its meta-theory. Subject reduction, normalization and consistency are obtained by a Kripke model over the typed equality judgement. Finally, a type-directed equality algorithm is described whose completeness is proven by a second Kripke model.
    Comment: 36 pages; supersedes the FoSSaCS 2011 paper of the first author, titled "Irrelevance in Type Theory with a Heterogeneous Equality Judgement".
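    As a small, purely illustrative sketch (untyped, with none of the universes, large eliminations, or Kripke-model machinery of the paper), the algorithmic equality below simply skips comparison of arguments marked irrelevant, mirroring the idea that irrelevant positions do not affect computation. All names are invented.

```haskell
-- Toy algorithmic equality that ignores the contents of irrelevant arguments.
module IrrelevanceSketch where

data Relevance = Relevant | Irrelevant
  deriving (Eq, Show)

data Tm
  = Var Int
  | Lam Relevance Tm
  | App Relevance Tm Tm
  deriving Show

-- Structural equality, except that arguments at irrelevant positions are
-- accepted as equal without being compared.
algEq :: Tm -> Tm -> Bool
algEq (Var i)     (Var j)        = i == j
algEq (Lam r b)   (Lam r' b')    = r == r' && algEq b b'
algEq (App r f a) (App r' f' a') =
  r == r' && algEq f f' &&
  (r == Irrelevant || algEq a a')   -- skip comparison at irrelevant positions
algEq _ _ = False
```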