
    Hereditary Substitutions for Simple Types, Formalized

    We analyze a normalization function for the simply typed lambda-calculus based on hereditary substitutions, a technique developed by Pfenning et al. The normalizer is implemented in Agda, a total language where all programs terminate. It requires no termination proof since it is structurally recursive, which is recognized by Agda's termination checker. Using Agda as an interactive theorem prover, we establish that our normalization function precisely identifies beta-eta-equivalent terms and hence can be used to decide beta-eta-equality. An interesting feature of this approach is that it is clear from the construction that beta-eta-equality is primitive recursive.
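
    As a concrete illustration of the technique (not the paper's Agda code), the following Haskell sketch implements hereditary substitution for the simply typed lambda-calculus; the names are illustrative, variables are plain strings, and bound variables are assumed distinct from the free variables of the substituted term:

        -- Simple types, and beta-normal forms: a neutral term is a variable
        -- applied to a spine of normal arguments.
        data Ty = Base | Ty :=> Ty deriving (Eq, Show)
        data Nf = Lam String Nf | Ne String [Nf] deriving (Eq, Show)

        -- subst x s a t: replace x (assumed to have type a) by the normal form s
        -- throughout the normal form t, re-normalizing any redexes this creates.
        -- Termination is by the type a, which shrinks at every hereditary call.
        subst :: String -> Nf -> Ty -> Nf -> Nf
        subst x s a (Lam y b)
          | y == x    = Lam y b                            -- x is shadowed
          | otherwise = Lam y (subst x s a b)
        subst x s a (Ne y ts)
          | y == x    = napps s a (map (subst x s a) ts)   -- hereditary case
          | otherwise = Ne y (map (subst x s a) ts)

        -- Apply a normal form of the given type to a list of normal arguments,
        -- contracting redexes as they appear.
        napps :: Nf -> Ty -> [Nf] -> Nf
        napps t         _         []       = t
        napps (Lam y b) (a :=> c) (u : us) = napps (subst y u a b) c us
        napps (Ne y ts) _         us       = Ne y (ts ++ us)
        napps t         _         _        = t   -- unreachable for well-typed input

    For example, subst "f" (Lam "z" (Ne "z" [])) (Base :=> Base) (Ne "f" [Ne "c" []]) contracts the redex created on the fly and returns Ne "c" [], which is already normal.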

    Normalization by Evaluation in the Delay Monad: A Case Study for Coinduction via Copatterns and Sized Types

    In this paper, we present an Agda formalization of a normalizer for simply-typed lambda terms. The normalizer consists of two coinductively defined functions in the delay monad: one is a standard evaluator of lambda terms to closures, the other a type-directed reifier from values to eta-long beta-normal forms. Their composition, normalization-by-evaluation, is shown to be a total function a posteriori, using a standard logical-relations argument. The successful formalization serves as a proof-of-concept for coinductive programming and reasoning using sized types and copatterns, a new and presently experimental feature of Agda. Comment: In Proceedings MSFP 2014, arXiv:1406.153
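
    As a hedged illustration of the two ingredients, here is a small untyped Haskell sketch (de Bruijn indices, closed terms assumed); the paper's version is simply typed and written in coinductive Agda with sized types and copatterns, with totality established separately by a logical relation:

        -- The Delay monad: a value available now, or after one more observable step.
        data Delay a = Now a | Later (Delay a)

        instance Functor Delay where
          fmap f (Now a)   = Now (f a)
          fmap f (Later d) = Later (fmap f d)

        instance Applicative Delay where
          pure = Now
          Now f   <*> d = fmap f d
          Later f <*> d = Later (f <*> d)

        instance Monad Delay where
          Now a   >>= k = k a
          Later d >>= k = Later (d >>= k)

        data Tm  = Var Int | Lam Tm | App Tm Tm   -- de Bruijn syntax
        data Val = Clo Tm [Val]                   -- closures over an environment

        -- The evaluator lives in Delay; the guarding Later in apply keeps the
        -- definition productive even when evaluation diverges.
        eval :: Tm -> [Val] -> Delay Val
        eval (Var i)   env = Now (env !! i)
        eval (Lam t)   env = Now (Clo t env)
        eval (App t u) env = do
          f <- eval t env
          v <- eval u env
          apply f v

        apply :: Val -> Val -> Delay Val
        apply (Clo t env) v = Later (eval t (v : env))

    The paper's second component, the type-directed reifier back to eta-long beta-normal forms, follows the same pattern, reading values back at each type before the two are composed into the normalizer.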

    New Equations for Neutral Terms: A Sound and Complete Decision Procedure, Formalized

    The definitional equality of an intensional type theory is its test of type compatibility. Today's systems rely on ordinary evaluation semantics to compare expressions in types, frustrating users with type errors arising when evaluation fails to identify two `obviously' equal terms. If only the machine could decide a richer theory! We propose a way to decide theories which supplement evaluation with `ν-rules', rearranging the neutral parts of normal forms, and report a successful initial experiment. We study a simple lambda-calculus with primitive fold, map and append operations on lists and develop in Agda a sound and complete decision procedure for an equational theory enriched with monoid, functor and fusion laws.
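
    To give the flavour of the added laws, here they are written as ordinary Haskell properties (an illustration only; in the paper they are ν-rules of the definitional equality, and the point is that they hold even when xs, ys and f are neutral, e.g. variables):

        -- Monoid laws for append.
        unitLaws :: Eq a => [a] -> Bool
        unitLaws xs = ([] ++ xs == xs) && (xs ++ [] == xs)

        assocLaw :: Eq a => [a] -> [a] -> [a] -> Bool
        assocLaw xs ys zs = (xs ++ ys) ++ zs == xs ++ (ys ++ zs)

        -- map distributes over append.
        mapAppend :: Eq b => (a -> b) -> [a] -> [a] -> Bool
        mapAppend f xs ys = map f (xs ++ ys) == map f xs ++ map f ys

        -- Consecutive maps fuse into one.
        mapFusion :: Eq c => (b -> c) -> (a -> b) -> [a] -> Bool
        mapFusion g f xs = map (g . f) xs == map g (map f xs)

    An evaluation-only checker identifies none of these when xs is a variable; the ν-rules rearrange such neutral terms so that the decision procedure does.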

    The language of Stratified Sets is confluent and strongly normalising

    We study the properties of the language of Stratified Sets (first-order logic with ∈ and a stratification condition) as used in TST, TZT, and (with stratifiability instead of stratification) in Quine's NF. We find that the syntax forms a nominal algebra for substitution and that stratification and stratifiability imply confluence and strong normalisation under rewrites corresponding naturally to β-conversion. Comment: arXiv admin note: text overlap with arXiv:1406.406
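
    For readers unfamiliar with the terminology, the standard stratification condition can be stated as follows (a paraphrase of the usual TST-style definition, not a quotation from the paper): a formula is stratified if there is an assignment σ of integer levels to its variables such that

        \[
          \sigma(y) = \sigma(x) + 1 \quad\text{for each atomic subformula } x \in y,
          \qquad
          \sigma(x) = \sigma(y) \quad\text{for each atomic subformula } x = y.
        \]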

    Everybody's got to be somewhere

    The key to any nameless representation of syntax is how it indicates the variables we choose to use and thus, implicitly, those we discard. Standard de Bruijn representations delay discarding maximally till the leaves of terms where one is chosen from the variables in scope at the expense of the rest. Consequently, introducing new but unused variables requires term traversal. This paper introduces a nameless 'co-de-Bruijn' representation which makes the opposite canonical choice, delaying discarding minimally, as near as possible to the root. It is literate Agda: dependent types make it a practical joy to express and be driven by strong intrinsic invariants which ensure that scope is aggressively whittled down to just the support of each subterm, in which every remaining variable occurs somewhere. The construction is generic, delivering a universe of syntaxes with higher-order metavariables, for which the appropriate notion of substitution is hereditary. The implementation of simultaneous substitution exploits tight scope control to avoid busywork and shift terms without traversal. Surprisingly, it is also intrinsically terminating, by structural recursion alone.
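
    The following rough Haskell sketch (the paper itself is literate Agda with intrinsically scoped, datatype-generic syntax) shows only the central gadget: a thinning, i.e. an order-preserving embedding recording which variables of the ambient scope a subterm actually uses. All names here are illustrative.

        -- A thinning from a source scope into a target scope, read from the
        -- outermost variable of the target inwards.
        data Thinning
          = Done            -- both scopes empty
          | Keep Thinning   -- this target variable is in the support
          | Drop Thinning   -- this target variable is not used by the source
          deriving Show

        -- comp th ph composes th : Gamma <= Delta with ph : Delta <= Theta
        -- into a thinning Gamma <= Theta.
        comp :: Thinning -> Thinning -> Thinning
        comp th        (Drop ph) = Drop (comp th ph)
        comp (Keep th) (Keep ph) = Keep (comp th ph)
        comp (Drop th) (Keep ph) = Drop (comp th ph)
        comp Done      Done      = Done
        comp _         _         = error "scope mismatch (ruled out by indexing in Agda)"

        -- The thinning that discards a scope of n variables outright; weakening a
        -- closed term into any scope is then just a matter of pairing it with this
        -- thinning, with no traversal of the term.
        none :: Int -> Thinning
        none 0 = Done
        none n = Drop (none (n - 1))

    In the co-de-Bruijn representation every term carries such a thinning at its root, and a binary node additionally records how the supports of its two subterms cover the shared scope.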

    A Theory of Higher-Order Subtyping with Type Intervals (Extended Version)

    The calculus of Dependent Object Types (DOT) has enabled a more principled and robust implementation of Scala, but its support for type-level computation has proven insufficient. As a remedy, we propose F^ω_{..}, a rigorous theoretical foundation for Scala's higher-kinded types. F^ω_{..} extends F^ω_{<:} with interval kinds, which afford a unified treatment of important type- and kind-level abstraction mechanisms found in Scala, such as bounded quantification, bounded operator abstractions, translucent type definitions and first-class subtyping constraints. The result is a flexible and general theory of higher-order subtyping. We prove type and kind safety of F^ω_{..}, as well as weak normalization of types and undecidability of subtyping. All our proofs are mechanized in Agda using a fully syntactic approach based on hereditary substitution. Comment: 73 pages; to be presented at the 26th ACM SIGPLAN International Conference on Functional Programming (ICFP 2021), 22-27 August 2021
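
    To make the "unified treatment" concrete, here is a hedged sketch of how these constructs can be expressed with an interval kind A..B, the kind of types bounded below by A and above by B (the encodings paraphrase the idea of the paper rather than quoting it; ⊥ and ⊤ denote the extreme types):

        \[
          \forall X \mathbin{<:} A.\;B \;\;\approx\;\; \forall X :: \bot..A.\;B
          \qquad \text{(bounded quantification)}
        \]
        \[
          \lambda X \mathbin{<:} A.\;B \;\;\approx\;\; \lambda X :: \bot..A.\;B
          \qquad \text{(bounded operator abstraction)}
        \]
        \[
          X = A \;\;\approx\;\; X :: A..A
          \qquad \text{(translucent type definition via a singleton interval)}
        \]
        \[
          A \mathrel{<:} B \text{ as a first-class constraint} \;\;\approx\;\; \text{an assumption } X :: A..B \text{ for a fresh } X
        \]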

    Higher-Order Subtyping with Type Intervals

    Modern, statically typed programming languages provide various abstraction facilities at both the term- and type-level. Common abstraction mechanisms for types include parametric polymorphism -- a hallmark of functional languages -- and subtyping -- which is pervasive in object-oriented languages. Additionally, both kinds of languages may allow parametrized (or generic) datatype definitions in modules or classes. When several of these features are present in the same language, new and more expressive combinations arise, such as (1) bounded quantification, (2) bounded operator abstractions and (3) translucent type definitions. An example of such a language is Scala, which features all three of the aforementioned type-level constructs. This increases the expressivity of the language, but also the complexity of its type system.

    From a theoretical point of view, the various abstraction mechanisms have been studied through different extensions of Girard's higher-order polymorphic lambda-calculus F-omega. Higher-order subtyping and bounded polymorphism (1 and 2) have been formalized in F-omega-sub and its many variants; type definitions of various degrees of opacity (3) have been formalized through extensions of F-omega with singleton types.

    In this dissertation, I propose type intervals as a unifying concept for expressing (1--3) and other related constructs. In particular, I develop an extension of F-omega with interval kinds as a formal theory of higher-order subtyping with type intervals, and show how the familiar concepts of higher-order bounded quantification, bounded operator abstraction and singleton kinds can all be encoded in a semantics-preserving way using interval kinds. Going beyond the status quo, the theory is expressive enough to also cover less familiar constructs, such as lower-bounded operator abstractions and first-class, higher-order inequality constraints.

    I establish basic metatheoretic properties of the theory: I prove that subject reduction holds for well-kinded types w.r.t. full beta-reduction, that types and kinds are weakly normalizing, and that the theory is type safe w.r.t. its call-by-value operational reduction semantics. Key to this metatheoretic development is the use of hereditary substitution and the definition of an equivalent, canonical presentation of subtyping, which involves only normal types and kinds. The resulting metatheory is entirely syntactic, i.e. does not involve any model constructions, and has been fully mechanized in Agda.

    The extension of F-omega with interval kinds constitutes a stepping stone to the development of a higher-order version of the calculus of Dependent Object Types (DOT) -- the theoretical foundation of Scala's type system. In the last part of this dissertation, I briefly sketch a possible extension of the theory toward this goal and discuss some of the challenges involved in adapting the existing metatheory to that extension.
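
    As a rough guide to how interval kinds drive the subtyping developed here (a paraphrase of the general idea, not the dissertation's exact rules; subtyping is written ≤), a type inhabits an interval kind exactly when it lies between the bounds, and widening an interval is contravariant in the lower bound and covariant in the upper bound:

        \[
          \frac{\Gamma \vdash B_1 \leq A \qquad \Gamma \vdash A \leq B_2}
               {\Gamma \vdash A :: B_1 .. B_2}
          \qquad\qquad
          \frac{\Gamma \vdash A_2 \leq A_1 \qquad \Gamma \vdash B_1 \leq B_2}
               {\Gamma \vdash A_1 .. B_1 \leq A_2 .. B_2}
        \]

    The canonical presentation mentioned above restates such rules over normal types and kinds only, which is where hereditary substitution earns its keep.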