    The exp-log normal form of types

    Lambda calculi with algebraic data types lie at the core of functional programming languages and proof assistants, but they conceal at least two fundamental theoretical problems already in the presence of the simplest non-trivial data type, the sum type. First, we do not know of an explicit and implemented algorithm for deciding the beta-eta-equality of terms, and this in spite of the first decidability results having been proven two decades ago. Second, it is not clear how to decide when two types are essentially the same, i.e. isomorphic, in spite of the meta-theoretic results on decidability of the isomorphism. In this paper, we present the exp-log normal form of types, derived from the representation of exponential polynomials via the unary exponential and logarithmic functions, to which any type built from arrows, products, and sums can be isomorphically mapped. The type normal form can be used as a simple heuristic for deciding type isomorphism, thanks to the fact that it is a systematic application of the high-school identities. We then show that the type normal form allows one to reduce the standard beta-eta equational theory of the lambda calculus to a specialized version of itself, while preserving the completeness of equality on terms. We end by describing an alternative representation of normal terms of the lambda calculus with sums, together with a Coq-implemented converter into/from our new term calculus. The difference from the only other previously implemented heuristic for deciding interesting instances of eta-equality, by Balat, Di Cosmo, and Fiore, is that we exploit the type information of terms substantially, and this often allows us to obtain a canonical representation of terms without performing sophisticated term analyses.
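
    To make the high-school-identities heuristic concrete, here is a minimal illustrative sketch (not the paper's algorithm; all names are ours) that rewrites types built from arrows, products, and sums using the standard isomorphisms and compares the results syntactically:

```haskell
-- High-school identities used as left-to-right rewrite rules:
--   a -> (b * c)  ~  (a -> b) * (a -> c)
--   (a + b) -> c  ~  (a -> c) * (b -> c)
--   (a * b) -> c  ~  a -> (b -> c)
data Ty = Base String | Arr Ty Ty | Prod Ty Ty | Sum Ty Ty
  deriving (Eq, Show)

norm :: Ty -> Ty
norm (Arr a b) = case (norm a, norm b) of
  (a', Prod b1 b2) -> Prod (norm (Arr a' b1)) (norm (Arr a' b2))
  (Sum a1 a2, b')  -> Prod (norm (Arr a1 b')) (norm (Arr a2 b'))
  (Prod a1 a2, b') -> norm (Arr a1 (Arr a2 b'))
  (a', b')         -> Arr a' b'
norm (Prod a b) = Prod (norm a) (norm b)
norm (Sum a b)  = Sum (norm a) (norm b)
norm t          = t

-- Heuristic only: a full decision procedure would also compare product
-- components modulo associativity and commutativity.
isoHeuristic :: Ty -> Ty -> Bool
isoHeuristic s t = norm s == norm t
```

    For instance, isoHeuristic (Arr (Sum a b) c) (Prod (Arr a c) (Arr b c)) evaluates to True for any base types a, b, c.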

    Accumulating bindings

    We give a Haskell implementation of Filinski’s normalisation by evaluation algorithm for the computational lambda-calculus with sums. Taking advantage of extensions to the GHC compiler, our implementation represents object language types as Haskell types and ensures that type errors are detected statically. Following Filinski, the implementation is parameterised over a residualising monad. The standard residualising monad for sums is a continuation monad. Defunctionalising the uses of the continuation monad, we present the binding tree monad as an alternative.
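
    As a hedged guess at the general shape of such a monad (the constructor names are ours, and the paper's version may differ in detail): instead of residualising through continuations, accumulated bindings and residual case splits form a tree whose leaves carry the values still being computed.

```haskell
-- A tree of accumulated bindings; v stands for residual object-language
-- terms, a for semantic values. Hypothetical sketch, not the paper's code.
data BindingTree v a
  = Leaf a                                        -- a pure result
  | LetIn String v (BindingTree v a)              -- an accumulated binding
  | CaseOf v (BindingTree v a) (BindingTree v a)  -- residual case split on v

instance Functor (BindingTree v) where
  fmap f (Leaf a)       = Leaf (f a)
  fmap f (LetIn x v t)  = LetIn x v (fmap f t)
  fmap f (CaseOf v l r) = CaseOf v (fmap f l) (fmap f r)

instance Applicative (BindingTree v) where
  pure = Leaf
  mf <*> ma = mf >>= \f -> fmap f ma

instance Monad (BindingTree v) where
  Leaf a       >>= k = k a
  LetIn x v t  >>= k = LetIn x v (t >>= k)              -- bind goes under
  CaseOf v l r >>= k = CaseOf v (l >>= k) (r >>= k)     -- accumulated nodes
```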

    Normalisation by Evaluation in the Compilation of Typed Functional Programming Languages

    This thesis presents a critical analysis of normalisation by evaluation as a technique for speeding up compilation of typed functional programming languages. Our investigation focuses on the SML.NET compiler and its typed intermediate language MIL. We implement and measure the performance of normalisation by evaluation for MIL across a range of benchmarks. Taking a different approach, we also implement and measure the performance of a graph-based shrinking reductions algorithm for SML.NET.

    MIL is based on Moggi’s computational metalanguage. As a stepping stone to normalisation by evaluation, we investigate strong normalisation of the computational metalanguage by introducing an extension of Girard-Tait reducibility. Inspired by previous work on local state and parametric polymorphism, we define reducibility for continuations and, more generally, reducibility for frame stacks. First we prove strong normalisation for the computational metalanguage. Then we extend that proof to include features of MIL such as sums and exceptions.

    Taking an incremental approach, we construct a collection of increasingly sophisticated normalisation by evaluation algorithms, culminating in a range of normalisation algorithms for MIL. Congruence rules and alpha-rules are captured by a compositional parameterised semantics. Defunctionalisation is used to eliminate eta-rules. Normalisation by evaluation for the computational metalanguage is introduced using a monadic semantics. Variants in which the monadic effects are made explicit, using either state or control operators, are also considered. Previous implementations of normalisation by evaluation with sums have relied on continuation-passing style or control operators. We present a new algorithm which instead uses a single reference cell and a zipper structure. This suggests a possible alternative way of implementing Filinski’s monadic reflection operations.

    In order to obtain benchmark results without having to take into account all of the features of MIL, we implement two different techniques for eliding language constructs. The first is not semantics-preserving, but is effective for assessing the efficiency of normalisation by evaluation algorithms. The second is semantics-preserving, but less flexible. In common with many intermediate languages, but unlike the computational metalanguage, MIL requires all non-atomic values to be named. We use either control operators or state to ensure each non-atomic value is named.

    We assess our normalisation by evaluation algorithms by comparing them with a spectrum of progressively more optimised, rewriting-based normalisation algorithms. The SML.NET front-end is used to generate MIL code from ML programs, including the SML.NET compiler itself. Each algorithm is then applied to the generated MIL code. Normalisation by evaluation always performs faster than the most naïve algorithms, often by orders of magnitude. Some of the algorithms are slightly faster than normalisation by evaluation. Closer inspection reveals that these algorithms are in fact defunctionalised versions of normalisation by evaluation algorithms.

    Our normalisation by evaluation algorithms perform unrestricted inlining of functions. Unrestricted inlining can lead to a super-exponential blow-up in the size of target code with respect to the source. Furthermore, the worst-case complexity of compilation with unrestricted inlining is non-elementary in the size of the source code. SML.NET alleviates both problems by using a restricted form of normalisation based on Appel and Jim’s shrinking reductions. The original algorithm is quadratic in the worst case. Using a graph-based representation for terms we implement a compositional linear algorithm. This speeds up the time taken to perform shrinking reductions by up to a factor of fourteen, which leads to an improvement of up to forty percent in total compile time.
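
    Stripped of MIL's features, the basic shape of a normalisation by evaluation algorithm is small. The following is a minimal untyped sketch (names are ours; it terminates only on strongly normalising input): terms are evaluated into host-language values, and a read-back phase turns those values into beta-normal terms.

```haskell
-- De Bruijn indices in Term; de Bruijn levels name fresh variables
-- generated during read-back.
data Term    = Var Int | Lam Term | App Term Term deriving Show
data Sem     = SLam (Sem -> Sem) | SNe Neutral
data Neutral = NVar Int | NApp Neutral Sem

eval :: [Sem] -> Term -> Sem
eval env (Var i)   = env !! i
eval env (Lam t)   = SLam (\v -> eval (v : env) t)
eval env (App t u) = case eval env t of
  SLam f -> f (eval env u)             -- beta-reduction happens in the host
  SNe n  -> SNe (NApp n (eval env u))  -- stuck term: grow a neutral

-- Read back a semantic value as a normal term; the Int counts the binders
-- passed so far, used to invent fresh variables.
reify :: Int -> Sem -> Term
reify k (SLam f) = Lam (reify (k + 1) (f (SNe (NVar k))))
reify k (SNe n)  = reifyNe k n

reifyNe :: Int -> Neutral -> Term
reifyNe k (NVar l)   = Var (k - l - 1)  -- convert level back to index
reifyNe k (NApp n v) = App (reifyNe k n) (reify k v)

normalise :: Term -> Term
normalise = reify 0 . eval []
```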

    Multi-Focusing on Extensional Rewriting with Sums

    We propose a logical justification for Lindley's rewriting-based equivalence procedure for simply-typed lambda-terms with sums [Lin07]. It relies on maximally multi-focused proofs, a notion of canonical derivations introduced for linear logic. Lindley's rewriting closely corresponds to preemptive rewriting [CMS08], a technical device used in the meta-theory of maximal multi-focusing.

    Polarised Intermediate Representation of Lambda Calculus with Sums

    The theory of the λ-calculus with extensional sums is more complex than with only pairs and functions. We propose an untyped representation (an intermediate calculus) for the λ-calculus with sums, based on the following principles: 1) computation is described as the reduction of pairs of an expression and a context, where the context must be represented inside-out; 2) operations are represented abstractly by their transition rule; 3) positive and negative expressions are respectively eager and lazy, this polarity being an approximation of the type. We offer an introduction from the ground up to our approach, and we review the benefits. A structure of alternating phases naturally emerges through the study of normal forms, offering a reconstruction of focusing. Under a further purity assumption, we obtain maximal multi-focusing. As an application, we deduce a syntax-directed algorithm to decide the equivalence of normal forms in the simply-typed λ-calculus with sums, and justify it with our intermediate calculus.
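
    A rough flavour of principle 1), as a hedged sketch for the plain lambda-calculus only (the paper's calculus is richer, and all names here are ours): the machine state is an expression paired with its context, and the context is stored inside-out as a stack of frames.

```haskell
data Expr = Var String | Lam String Expr | App Expr Expr

data Frame = AppArg Expr  -- "we are the function part of an application"
type Context = [Frame]    -- the surrounding context, innermost frame first

-- One step on an <expression, context> pair, call-by-name.
step :: (Expr, Context) -> Maybe (Expr, Context)
step (App f e, k)            = Just (f, AppArg e : k)  -- refocus on f
step (Lam x b, AppArg e : k) = Just (subst x e b, k)   -- beta: pop a frame
step _                       = Nothing                 -- machine is stuck

-- Naive substitution, assuming no variable capture (sketch only).
subst :: String -> Expr -> Expr -> Expr
subst x e (Var y) | x == y    = e
                  | otherwise = Var y
subst x e (App f a) = App (subst x e f) (subst x e a)
subst x e (Lam y b) | x == y    = Lam y b
                    | otherwise = Lam y (subst x e b)
```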

    Normalization by Evaluation for Call-by-Push-Value and Polarized Lambda-Calculus

    We observe that normalization by evaluation for simply-typed lambda-calculus with weak coproducts can be carried out in a weak bi-cartesian closed category of presheaves equipped with a monad that allows us to perform case distinction on neutral terms of sum type. The placement of the monad influences the normal forms we obtain: for instance, placing the monad on coproducts gives us eta-long beta-pi normal forms, where pi refers to the permutation of case distinctions out of elimination positions. We further observe that placing the monad on every coproduct is rather wasteful, and an optimal placement of the monad can be determined by considering polarized simple types inspired by focalization. Polarization classifies types into positive and negative, and it is sufficient to place the monad at the embedding of positive types into negative ones. We consider two calculi based on polarized types: pure call-by-push-value (CBPV) and polarized lambda-calculus, the natural deduction calculus corresponding to focalized sequent calculus. For these two calculi, we present algorithms for normalization by evaluation. We further discuss different implementations of the monad and their relation to existing normalization proofs for lambda-calculus with sums. Our developments have been partially formalized in the Agda proof assistant.
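
    As a hedged illustration of the polarity split (the constructor names are ours, not the paper's): positive and negative types mention each other only through two shift constructors, and the single point where a positive type is embedded into a negative one is where the monad is placed.

```haskell
-- Polarized simple types in the style described above (hypothetical names).
data Pos = PSum Pos Pos   -- sums are positive
         | PDown Neg      -- shift: a negative type used positively
data Neg = NArr Pos Neg   -- functions are negative
         | NProd Neg Neg  -- products are negative
         | NUp Pos        -- shift: positive embedded into negative; this is
                          -- where the monad for case distinction would sit
```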

    Modular Normalization with Types

    With the increasing use of software in today’s digital world, software is becoming more and more complex and the cost of developing and maintaining software has skyrocketed. It has become pressing to develop software using effective tools that reduce this cost. Programming language research aims to develop such tools using mathematically rigorous foundations. A recurring and central concept in programming language research is normalization: the process of transforming a complex expression in a language to a canonical form while preserving its meaning. Normalization has compelling benefits in theory and practice, but is extremely difficult to achieve. Several program transformations that are used to optimise programs, prove properties of languages and check program equivalence, for instance, are after all instances of normalization, but they are seldom viewed as such.

    Viewed through the lens of current methods, normalization lacks the ability to be broken into sub-problems and solved independently, i.e., lacks modularity. To make matters worse, such methods rely excessively on the syntax of the language, making the resulting normalization algorithms brittle and sensitive to changes in the syntax. When the syntax of the language evolves due to modification or extension, as it almost always does in practice, the normalization algorithm may need to be revisited entirely. To circumvent these problems, normalization is currently either abandoned entirely or concrete instances of normalization are achieved using ad hoc means specific to a particular language. Continuing this trend in programming language research poses the risk of building on a weak foundation where languages either lack fundamental properties that follow from normalization or several concrete instances end up repeated in an ad hoc manner that lacks reusability.

    This thesis advocates for the use of type-directed Normalization by Evaluation (NbE) to develop normalization algorithms. NbE is a technique that provides an opportunity for a modular implementation of normalization algorithms by allowing us to disentangle the syntax of a language from its semantics. Types further this opportunity by allowing us to dissect a language into isolated fragments, such as functions and products, with an individual specification of syntax and semantics. To illustrate type-directed NbE in context, we develop NbE algorithms and show their applicability for typed programming language calculi in three different domains (modal types, static information-flow control and categorical combinators) and for a family of embedded domain-specific languages in Haskell.
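
    To make the type-directedness concrete, here is a hedged sketch (not the thesis's code; names are ours) in which reification and reflection recurse on the type, so each type former contributes its own clauses independently. Neutrals are depth-indexed terms so that fresh variables can be generated during read-back; note that the function clause eta-expands by construction.

```haskell
data Ty   = TBase | TArr Ty Ty | TProd Ty Ty
data Term = Var Int | Lam Term | App Term Term
          | Pair Term Term | Fst Term | Snd Term
data Sem  = SNe (Int -> Term)   -- neutral, indexed by binder depth
          | SFun (Sem -> Sem)
          | SPair Sem Sem

-- Each type former has its own clauses: extending the language with a new
-- fragment means adding clauses, not rewriting the whole algorithm.
reify :: Int -> Ty -> Sem -> Term
reify k TBase       (SNe t)     = t k
reify k (TArr a b)  (SFun f)    =
  Lam (reify (k + 1) b (f (reflect a (\j -> Var (j - k - 1)))))
reify k (TProd a b) (SPair x y) = Pair (reify k a x) (reify k b y)
reify _ _ _                     = error "ill-typed semantic value"

reflect :: Ty -> (Int -> Term) -> Sem
reflect TBase       t = SNe t
reflect (TArr a b)  t = SFun (\v -> reflect b (\j -> App (t j) (reify j a v)))
reflect (TProd a b) t = SPair (reflect a (Fst . t)) (reflect b (Snd . t))
```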

    Decidability for Non-Standard Conversions in Typed Lambda-Calculi

    This thesis studies the decidability of conversions in typed lambda-calculi, along with the algorithms that establish this decidability. Our study considers conversions that go beyond the traditional beta, eta, or permutative conversions (also called commutative conversions). To decide these conversions, two classes of algorithms compete. In the algorithms based on rewriting, the goal is to decompose and orient the conversion so as to obtain a convergent system; these algorithms then boil down to rewriting terms until they reach an irreducible form. In the "reduction-free" algorithms, the conversion is decided recursively via a detour through a meta-language. Throughout this thesis, we strive to explain the latter by means of the former.
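
    As a hedged miniature of the first, rewriting-based class, restricted to plain beta-conversion (standard de Bruijn machinery, names ours): orient the conversion into a reduction, rewrite both sides to irreducible form, and compare syntactically.

```haskell
data Term = Var Int | Lam Term | App Term Term deriving Eq

-- Shift free variables at or above cutoff c by d (standard de Bruijn).
shift :: Int -> Int -> Term -> Term
shift d c (Var i) | i >= c    = Var (i + d)
                  | otherwise = Var i
shift d c (Lam t)   = Lam (shift d (c + 1) t)
shift d c (App t u) = App (shift d c t) (shift d c u)

-- Substitute u for variable j in t.
subst :: Int -> Term -> Term -> Term
subst j u (Var i) | i == j    = u
                  | otherwise = Var i
subst j u (Lam t)   = Lam (subst (j + 1) (shift 1 0 u) t)
subst j u (App t v) = App (subst j u t) (subst j u v)

-- One leftmost-outermost beta step, if any redex exists.
step :: Term -> Maybe Term
step (App (Lam t) u) = Just (shift (-1) 0 (subst 0 (shift 1 0 u) t))
step (App t u) = case step t of
  Just t' -> Just (App t' u)
  Nothing -> App t <$> step u
step (Lam t) = Lam <$> step t
step _ = Nothing

-- Decide convertibility by rewriting to irreducible form and comparing;
-- terminates only on strongly normalising terms.
convertible :: Term -> Term -> Bool
convertible s t = nf s == nf t
  where nf x = maybe x nf (step x)
```

    A reduction-free algorithm for the same problem would instead evaluate both terms into a meta-language (as in normalisation by evaluation) and read the results back, never rewriting the syntax step by step.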

    Be My Guest: Normalizing and Compiling Programs using a Host Language

    In programming language research, normalization is a process of fundamental importance to the theory of computing and reasoning about programs. In practice, on the other hand, compilation is a process that transforms programs in a language to machine code, and thus makes the programming language a usable one. In this thesis, we investigate means of normalizing and compiling programs in a language using another language as the "host". Leveraging a host to work with programs of a "guest" language enables reuse of the host's features that would otherwise be strenuous to develop. The specific tools of interest are Normalization by Evaluation and Embedded Domain-Specific Languages, both of which rely on a host language for their purposes. These tools are applied to solve problems in three different domains: to show that exponentials (or closures) can be eliminated from a categorical combinatory calculus, to propose a new proof technique based on normalization for showing noninterference, and to enable the programming of resource-constrained IoT devices from Haskell.
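
    A hedged toy example of the guest/host idea (the names are ours): a tiny embedded language whose binders are host-language functions, so the host's variables, scoping, and substitution are reused for free.

```haskell
-- Guest language embedded in Haskell via higher-order abstract syntax.
data Expr
  = Lit Int
  | Add Expr Expr
  | Fun (Expr -> Expr)  -- guest functions are host functions
  | App Expr Expr

-- Evaluation reuses host application for guest application.
eval :: Expr -> Expr
eval (Add a b) = case (eval a, eval b) of
  (Lit x, Lit y) -> Lit (x + y)
  (a', b')       -> Add a' b'
eval (App f a) = case eval f of
  Fun g -> eval (g (eval a))
  f'    -> App f' (eval a)
eval e = e

-- e.g. eval (App (Fun (\x -> Add x (Lit 1))) (Lit 41)) reduces to Lit 42
```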