
    A Rational Deconstruction of Landin's SECD Machine with the J Operator

    Landin's SECD machine was the first abstract machine for applicative expressions, i.e., functional programs. Landin's J operator was the first control operator for functional languages, and was specified by an extension of the SECD machine. We present a family of evaluation functions corresponding to this extension of the SECD machine, using a series of elementary transformations (chiefly, transformation into continuation-passing style (CPS) and defunctionalization) and their left inverses (transformation into direct style and refunctionalization). To this end, we modernize the SECD machine into a bisimilar one that operates in lockstep with the original one but that (1) does not use a data stack and (2) uses the caller-save rather than the callee-save convention for environments. We also identify that the dump component of the SECD machine is managed in a callee-save way. The caller-save counterpart of the modernized SECD machine precisely corresponds to Thielecke's double-barrelled continuations and to Felleisen's encoding of J in terms of call/cc. We then variously characterize the J operator in terms of CPS and in terms of delimited-control operators in the CPS hierarchy. As a byproduct, we also present several reduction semantics for applicative expressions with the J operator, based on Curien's original calculus of explicit substitutions. These reduction semantics mechanically correspond to the modernized versions of the SECD machine and, to the best of our knowledge, provide the first syntactic theories of applicative expressions with the J operator.
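
    The two elementary transformations named above are standard and can be seen on a self-contained toy example. The sketch below is mine, not the paper's SECD material: a tree-flattening function is first transformed into continuation-passing style, then defunctionalized, so that the higher-order continuations become a first-order data type with a dedicated apply function. All names (flatten, cont, etc.) are invented for this illustration.

```ocaml
type tree = Leaf of int | Node of tree * tree

(* Direct style. *)
let rec flatten t =
  match t with
  | Leaf n -> [n]
  | Node (l, r) -> flatten l @ flatten r

(* 1. Transformation into continuation-passing style (CPS):
      every call is a tail call and takes an explicit continuation. *)
let rec flatten_cps t k =
  match t with
  | Leaf n -> k [n]
  | Node (l, r) ->
      flatten_cps l (fun ls ->
        flatten_cps r (fun rs -> k (ls @ rs)))

(* 2. Defunctionalization: each lambda used as a continuation becomes a
      constructor of a first-order data type, and applying a continuation
      becomes a call to a dedicated apply function. *)
type cont =
  | Id                          (* the initial continuation *)
  | Right of tree * cont        (* right subtree still to flatten *)
  | Append of int list * cont   (* left result waiting to be appended *)

let rec flatten_def t k =
  match t with
  | Leaf n -> apply k [n]
  | Node (l, r) -> flatten_def l (Right (r, k))

and apply k v =
  match k with
  | Id -> v
  | Right (r, k') -> flatten_def r (Append (v, k'))
  | Append (ls, k') -> apply k' (ls @ v)

let () =
  let t = Node (Leaf 1, Node (Leaf 2, Leaf 3)) in
  assert (flatten t = [1; 2; 3]);
  assert (flatten_cps t (fun v -> v) = [1; 2; 3]);
  assert (flatten_def t Id = [1; 2; 3])
```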

    Typeful Normalization by Evaluation

    We present the first typeful implementation of Normalization by Evaluation for the simply typed lambda-calculus with sums and control operators: we guarantee type preservation and eta-long, beta-normal forms (modulo commuting conversions) using only Generalized Algebraic Data Types in a general-purpose programming language, here OCaml; and we account for sums and control operators with Continuation-Passing Style. First, we implement the standard NbE algorithm for the implicational fragment in a typeful way that is correct by construction. We then derive its call-by-value continuation-passing counterpart, which maps a lambda-term with sums and call/cc into a CPS term in normal form, expressed in a typed dedicated syntax. Beyond showcasing the expressive power of GADTs, we emphasize that type inference gives a smooth way to re-derive the encodings of the syntax and typing of normal forms in Continuation-Passing Style.
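
    The reify/reflect core of a GADT-based NbE implementation can be sketched as follows. This is a minimal rendition of the standard algorithm for the implicational fragment, not the paper's code: object types index the semantic values, so OCaml's type checker enforces type preservation, but, unlike in the paper, the residual syntax is left untyped for brevity. All identifiers (tp, sem, reify, reflect, etc.) are invented for the sketch.

```ocaml
type base  (* abstract object-level base type *)

(* Object types, reflected as a GADT of type representations. *)
type _ tp =
  | Base : base tp
  | Arr : 'a tp * 'b tp -> ('a -> 'b) tp

(* Residual terms (untyped here; the paper keeps these typed too). *)
type term =
  | Var of string
  | Lam of string * term
  | App of term * term

(* Semantic domain: base type as residual terms, arrows as OCaml functions. *)
type _ sem =
  | SBase : term -> base sem
  | SFun : ('a sem -> 'b sem) -> ('a -> 'b) sem

let gensym =
  let r = ref 0 in
  fun () -> incr r; "x" ^ string_of_int !r

(* reify: semantic value to eta-long residual normal form;
   reflect: residual neutral term to semantic value. *)
let rec reify : type a. a tp -> a sem -> term = fun tp v ->
  match tp, v with
  | Base, SBase t -> t
  | Arr (a, b), SFun f ->
      let x = gensym () in
      Lam (x, reify b (f (reflect a (Var x))))

and reflect : type a. a tp -> term -> a sem = fun tp t ->
  match tp with
  | Base -> SBase t
  | Arr (a, b) -> SFun (fun v -> reflect b (App (t, reify a v)))

(* The identity at type base -> base normalizes to its eta-long form. *)
let () =
  assert (reify (Arr (Base, Base)) (SFun (fun x -> x))
          = Lam ("x1", Var "x1"))
```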

    A modular structural operational semantics for delimited continuations

    It has been an open question whether the Modular Structural Operational Semantics framework can express the dynamic semantics of call/cc. This paper shows that it can, and furthermore demonstrates that it can express the more general delimited control operators control and shift.
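
    For readers unfamiliar with the operators, the following sketch pins down the dynamic behaviour of shift and reset that the paper captures modularly. It is a plain CPS definitional interpreter of mine, not MSOS rules, and every name in it (exp, value, eval, etc.) is invented for the illustration.

```ocaml
type exp =
  | Int of int
  | Add of exp * exp
  | Var of string
  | Lam of string * exp
  | App of exp * exp
  | Shift of string * exp   (* binds the current delimited continuation *)
  | Reset of exp            (* delimits the continuation *)

type value =
  | VInt of int
  | VFun of (value -> (value -> value) -> value)

let as_int = function VInt n -> n | _ -> failwith "int expected"
let as_fun = function VFun f -> f | _ -> failwith "function expected"

let rec eval env e k =
  match e with
  | Int n -> k (VInt n)
  | Add (a, b) ->
      eval env a (fun u ->
        eval env b (fun v -> k (VInt (as_int u + as_int v))))
  | Var x -> k (List.assoc x env)
  | Lam (x, b) -> k (VFun (fun v k' -> eval ((x, v) :: env) b k'))
  | App (f, a) ->
      eval env f (fun fv -> eval env a (fun av -> as_fun fv av k))
  | Reset b ->
      (* run the body with the identity continuation, i.e. delimited *)
      k (eval env b (fun v -> v))
  | Shift (x, b) ->
      (* reify the current delimited continuation k as a function value;
         invoking it runs k and returns to the caller's continuation *)
      let kv = VFun (fun v k' -> k' (k v)) in
      eval ((x, kv) :: env) b (fun v -> v)

(* reset (1 + shift k. k (k 10)) evaluates to 1 + (1 + 10) = 12 *)
let () =
  let ex =
    Reset (Add (Int 1,
                Shift ("k", App (Var "k", App (Var "k", Int 10))))) in
  assert (eval [] ex (fun v -> v) = VInt 12)
```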

    Programming with Algebraic Effects and Handlers

    Eff is a programming language based on the algebraic approach to computational effects, in which effects are viewed as algebraic operations and effect handlers as homomorphisms from free algebras. Eff supports first-class effects and handlers through which we may easily define new computational effects, seamlessly combine existing ones, and handle them in novel ways. We give a denotational semantics of eff and discuss a prototype implementation based on it. Through examples we demonstrate how the standard effects are treated in eff, and how eff supports programming techniques that use various forms of delimited continuations, such as backtracking, breadth-first search, selection functionals, cooperative multi-threading, and others.
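
    The algebraic view can be miniaturized as follows. This sketch is not eff's implementation: it encodes computations over a one-operation signature as a free monad in OCaml, with handlers as folds over the free algebra, and interprets a nondeterministic choice operation in two different ways (backtracking vs. committing). All names are invented for the sketch.

```ocaml
(* Computations over a signature with one operation, Choose, whose
   argument is the continuation awaiting the operation's boolean result. *)
type 'a comp =
  | Return of 'a
  | Choose of (bool -> 'a comp)

let return x = Return x

let rec bind m f =
  match m with
  | Return x -> f x
  | Choose k -> Choose (fun b -> bind (k b) f)

let ( let* ) = bind

let choose : bool comp = Choose return

(* Handler 1: collect the results of all branches (backtracking). *)
let rec all_results = function
  | Return x -> [x]
  | Choose k -> all_results (k true) @ all_results (k false)

(* Handler 2: commit to the first branch. *)
let rec first_result = function
  | Return x -> x
  | Choose k -> first_result (k true)

let pairs =
  let* b1 = choose in
  let* b2 = choose in
  return (b1, b2)

let () =
  assert (all_results pairs
          = [ (true, true); (true, false); (false, true); (false, false) ]);
  assert (first_result pairs = (true, true))
```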

    Specific "scientific" data structures, and their processing

    Programming physicists use, as all programmers, arrays, lists, tuples, records, etc., and this requires some change in their thought patterns while converting their formulae into some code, since the "data structures" operated upon, while elaborating some theory and its consequences, are rather: power series and Padé approximants, differential forms and other instances of differential algebras, functionals (for the variational calculus), trajectories (solutions of differential equations), Young diagrams and Feynman graphs, etc. Such data is often used in a [semi-]numerical setting, not necessarily "symbolic", appropriate for the computer algebra packages. Modules adapted to such data may be "just libraries", but often they become specific, embedded sub-languages, typically mapped into object-oriented frameworks, with overloaded mathematical operations. Here we present a functional approach to this philosophy. We show how the usage of Haskell datatypes and - fundamental for our tutorial - the application of lazy evaluation make it possible to operate upon such data (in particular: the "infinite" sequences) in a natural and comfortable manner.
    Comment: In Proceedings DSL 2011, arXiv:1109.032
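
    The paper's tutorial is in Haskell; as a rough OCaml analogue of one of its themes, here is a sketch of infinite power series as lazy streams of coefficients, with the classical recursive Cauchy product. The representation and names are mine, chosen for the illustration, not taken from the paper.

```ocaml
(* Infinite stream of coefficients a0, a1, a2, ... *)
type series = Cons of float * series Lazy.t

let rec add (Cons (a, s)) (Cons (b, t)) =
  Cons (a +. b, lazy (add (Lazy.force s) (Lazy.force t)))

let rec scale c (Cons (a, s)) =
  Cons (c *. a, lazy (scale c (Lazy.force s)))

(* Cauchy product: (a + x*A')(b + x*B') = a*b + x*(a*B' + A'*(b + x*B')) *)
let rec mul (Cons (a, s)) (Cons (b, t)) =
  Cons (a *. b,
        lazy (add (scale a (Lazy.force t))
                  (mul (Lazy.force s) (Cons (b, t)))))

(* 1/(1-x) = 1 + x + x^2 + ..., a legal recursive value under laziness *)
let rec ones = Cons (1.0, lazy ones)

let take n s =
  let rec go n (Cons (a, t)) acc =
    if n = 0 then List.rev acc
    else go (n - 1) (Lazy.force t) (a :: acc)
  in
  go n s []

(* (1/(1-x))^2 = 1 + 2x + 3x^2 + 4x^3 + ... *)
let () = assert (take 4 (mul ones ones) = [1.; 2.; 3.; 4.])
```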

    Maximal Sharing in the Lambda Calculus with letrec

    Increasing sharing in programs is desirable to compactify the code, and to avoid duplication of reduction work at run-time, thereby speeding up execution. We show how a maximal degree of sharing can be obtained for programs expressed as terms in the lambda calculus with letrec. We introduce a notion of "maximal compactness" for lambda-letrec-terms among all terms with the same infinite unfolding. Instead of being defined purely syntactically, this notion is based on a graph semantics. Lambda-letrec-terms are interpreted as first-order term graphs so that unfolding equivalence between terms is preserved and reflected through bisimilarity of the term graph interpretations. Compactness of the term graphs can then be compared via functional bisimulation. We describe practical and efficient methods for the following two problems: transforming a lambda-letrec-term into a maximally compact form; and deciding whether two lambda-letrec-terms are unfolding-equivalent. The transformation of a lambda-letrec-term L into maximally compact form L_0 proceeds in three steps: (i) translate L into its term graph G = [[L]]; (ii) compute the maximally shared form of G as its bisimulation collapse G_0; (iii) read back a lambda-letrec-term L_0 from the term graph G_0 with the property [[L_0]] = G_0. This guarantees that L_0 and L have the same unfolding, and that L_0 exhibits maximal sharing. The procedure for deciding whether two given lambda-letrec-terms L_1 and L_2 are unfolding-equivalent computes their term graph interpretations [[L_1]] and [[L_2]], and checks whether these term graphs are bisimilar. For illustration, we also provide a readily usable implementation.
    Comment: 18 pages, plus 19 pages appendix
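
    The pipeline hinges on interpreting terms as graphs and comparing them up to bisimilarity. As a drastically simplified sketch (plain first-order graphs with ordered successors, nothing like the paper's lambda term graphs with binders), here is the naive coinductive check of bisimilarity of two rooted graphs, the core idea behind deciding unfolding-equivalence. The representation and names are assumptions of the sketch.

```ocaml
(* A graph is an array of labelled nodes; edges are array indices. *)
type node = { label : string; succs : int list }

(* Naive bisimilarity of two roots: pairs already under consideration
   are assumed bisimilar (the coinductive step), which is sound here
   because successors are matched positionally (deterministically). *)
let bisimilar (g1 : node array) r1 (g2 : node array) r2 =
  let visited = Hashtbl.create 16 in
  let rec go i j =
    if Hashtbl.mem visited (i, j) then true
    else begin
      Hashtbl.add visited (i, j) ();
      let n1 = g1.(i) and n2 = g2.(j) in
      n1.label = n2.label
      && List.length n1.succs = List.length n2.succs
      && List.for_all2 go n1.succs n2.succs
    end
  in
  go r1 r2

(* Both graphs unfold to the same infinite term f(f(f(...))),
   even though one uses one node and the other two. *)
let () =
  let g1 = [| { label = "f"; succs = [0] } |] in
  let g2 = [| { label = "f"; succs = [1] };
              { label = "f"; succs = [0] } |] in
  assert (bisimilar g1 0 g2 0)
```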

    Sur un Exemple de Patrick Greussay

    This note was written on the occasion of the retirement of Jean-François Perrot at the Université Pierre et Marie Curie (Paris VI). In an attempt to emulate his academic spirit, we revisit an example proposed by Patrick Greussay in his doctoral thesis: how to verify in sublinear time whether a Calder mobile is well balanced. Rather than divining one solution or another, we derive a spectrum of solutions, starting from the original specification of the problem. We also prove their correctness.
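
    For concreteness, here is one naive point from such a spectrum (an OCaml rendition of mine, not the note's derivation): weighing the mobile with an exception for early exit, so that the traversal aborts at the first unbalanced bar without weighing the rest. The type and function names are invented for the sketch.

```ocaml
type mobile =
  | Obj of int                 (* an object and its weight *)
  | Bar of mobile * mobile     (* a weightless bar with two submobiles *)

exception Unbalanced

(* Weigh a mobile, aborting as soon as some bar is unbalanced. *)
let balanced m =
  let rec weigh = function
    | Obj w -> w
    | Bar (l, r) ->
        let wl = weigh l in
        let wr = weigh r in
        if wl <> wr then raise Unbalanced else wl + wr
  in
  match weigh m with
  | _ -> true
  | exception Unbalanced -> false

let () =
  assert (balanced (Bar (Obj 1, Obj 1)));
  (* aborts at the inner bar, never weighing Obj 3 *)
  assert (not (balanced (Bar (Bar (Obj 1, Obj 2), Obj 3))))
```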

    Pragmatic Aspects of Type-Directed Partial Evaluation

    Type-directed partial evaluation stems from the residualization of static values in dynamic contexts, given their type and the type of their free variables. Its algorithm coincides with the algorithm for coercing a subtype value into a supertype value, which itself coincides with Berger and Schwichtenberg's normalization algorithm for the simply typed lambda-calculus. Type-directed partial evaluation thus can be used to specialize a compiled, closed program, given its type. Since Similix, let-insertion has been a cornerstone of partial evaluators for call-by-value procedural languages with computational effects (such as divergence). It prevents the duplication of residual computations, and more generally maintains the order of dynamic side effects in the residual program. This article describes the extension of type-directed partial evaluation to insert residual let expressions. This extension requires the user to annotate arrow types with effect information. It is achieved by delimiting and abstracting control, comparably to continuation-based specialization in direct style. It enables type-directed partial evaluation of programs with effects (e.g., a definitional lambda-interpreter for an imperative language) that are in direct style. The residual programs are in A-normal form. A simple corollary yields CPS (continuation-passing style) terms instead. We illustrate both transformations with two interpreters for Paulson's Tiny language, a classical example in partial evaluation.
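
    Let-insertion itself can be illustrated in isolation. The sketch below is mine and much simpler than the article's control-based technique: residual applications are named in continuation-passing style, so each dynamic computation is bound exactly once and in order, which is what yields A-normal forms. All identifiers (insert_let, twice, etc.) are invented for the illustration.

```ocaml
type term =
  | Var of string
  | App of term * term
  | Let of string * term * term

let gensym =
  let r = ref 0 in
  fun () -> incr r; "t" ^ string_of_int !r

(* Bind a residual computation to a fresh name; the continuation builds
   the rest of the residual program around that name. *)
let insert_let (t : term) (k : term -> term) : term =
  let x = gensym () in
  Let (x, t, k (Var x))

(* A static 'twice' applied to a dynamic function: each dynamic
   application is let-named, once and in the order it occurs. *)
let twice f x k = f x (fun y -> f y k)

let residual =
  let dyn_f arg k = insert_let (App (Var "f", arg)) k in
  twice dyn_f (Var "x") (fun v -> v)

(* let t1 = f x in let t2 = f t1 in t2 *)
let () =
  assert (residual
          = Let ("t1", App (Var "f", Var "x"),
                 Let ("t2", App (Var "f", Var "t1"), Var "t2")))
```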

    A Rational Deconstruction of Landin's SECD Machine

    Landin's SECD machine was the first abstract machine for the lambda-calculus viewed as a programming language. Both theoretically as a model of computation and practically as an idealized implementation, it has set the tone for the subsequent development of abstract machines for functional programming languages. However, and even though variants of the SECD machine have been presented, derived, and invented, the precise rationale for its architecture and modus operandi has remained elusive. In this article, we deconstruct the SECD machine into a lambda-interpreter, i.e., an evaluation function, and we reconstruct lambda-interpreters into a variety of SECD-like machines. The deconstruction and reconstructions are transformational: they are based on equational reasoning and on a combination of simple program transformations, mainly closure conversion, transformation into continuation-passing style, and defunctionalization. The evaluation function underlying the SECD machine provides a precise rationale for its architecture: it is an environment-based eval-apply evaluator with a callee-save strategy for the environment, a data stack of intermediate results, and a control delimiter. Each of the components of the SECD machine (stack, environment, control, and dump) is therefore rationalized, and so are its transitions. The deconstruction and reconstruction method also applies to other abstract machines and other evaluation functions. It makes it possible to systematically extract the denotational content of an abstract machine in the form of a compositional evaluation function, and the (small-step) operational content of an evaluation function in the form of an abstract machine.
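
    For reference, here is a minimal OCaml sketch of a SECD-style machine for the pure call-by-value lambda-calculus, a common textbook rendition rather than the article's exact machine, showing the four components (stack, environment, control, dump) whose rationale the article reconstructs. Names and representation choices (de Bruijn indices, the directive type) are assumptions of the sketch.

```ocaml
type term =
  | Var of int              (* de Bruijn index *)
  | Lam of term
  | App of term * term

type value = Closure of term * value list   (* code paired with environment *)

type directive = Term of term | Apply

(* The dump saves the caller's stack, environment, and control. *)
type dump = (value list * value list * directive list) list

let rec run (s : value list) (e : value list)
            (c : directive list) (d : dump) : value =
  match c, s, d with
  | [], [v], [] -> v                                        (* final state *)
  | [], [v], (s', e', c') :: d' -> run (v :: s') e' c' d'   (* return *)
  | Term (Var n) :: c', _, _ -> run (List.nth e n :: s) e c' d
  | Term (Lam b) :: c', _, _ -> run (Closure (b, e) :: s) e c' d
  | Term (App (f, a)) :: c', _, _ ->
      run s e (Term f :: Term a :: Apply :: c') d
  | Apply :: c', v :: Closure (b, e') :: s', _ ->
      (* push the caller's state on the dump, then enter the closure *)
      run [] (v :: e') [Term b] ((s', e, c') :: d)
  | _ -> failwith "stuck"

(* (\x. x) (\y. y) evaluates to the closure for \y. y *)
let () =
  match run [] [] [Term (App (Lam (Var 0), Lam (Var 0)))] [] with
  | Closure (Var 0, []) -> ()
  | _ -> assert false
```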

    Towards Compatible and Interderivable Semantic Specifications for the Scheme Programming Language, Part I: Denotational Semantics, Natural Semantics, and Abstract Machines

    We derive two big-step abstract machines, a natural semantics, and the valuation function of a denotational semantics based on the small-step abstract machine for Core Scheme presented by Clinger at PLDI'98. Starting from a functional implementation of this small-step abstract machine, (1) we fuse its transition function with its driver loop, obtaining the functional implementation of a big-step abstract machine; (2) we adjust this big-step abstract machine so that it is in defunctionalized form, obtaining the functional implementation of a second big-step abstract machine; (3) we refunctionalize this adjusted abstract machine, obtaining the functional implementation of a natural semantics in continuation style; and (4) we closure-unconvert this natural semantics, obtaining a compositional continuation-passing evaluation function which we identify as the functional implementation of a denotational semantics in continuation style. We then compare this valuation function with that of Clinger's original denotational semantics of Scheme.
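
    Step (1), fusing a transition function with its driver loop, can be shown on a toy machine. The sketch below uses an invented arithmetic stack machine, not Clinger's Core Scheme machine: the driver that repeatedly applies the small-step transition function is fused with it into a big-step machine whose transitions proceed by direct tail calls.

```ocaml
type exp = Lit of int | Add of exp * exp

(* Small-step machine over (control, stack) configurations. *)
type instr = Eval of exp | Plus
type config = instr list * int list

let step : config -> config = function
  | Eval (Lit n) :: c, s -> (c, n :: s)
  | Eval (Add (a, b)) :: c, s -> (Eval a :: Eval b :: Plus :: c, s)
  | Plus :: c, n :: m :: s -> (c, (m + n) :: s)
  | _ -> failwith "stuck"

(* Driver loop: iterate the transition function to a final state. *)
let rec drive : config -> int = function
  | [], [n] -> n
  | cfg -> drive (step cfg)

(* Fusing drive with step inlines each transition into the loop,
   yielding the functional implementation of a big-step machine. *)
let rec run : config -> int = function
  | [], [n] -> n
  | Eval (Lit n) :: c, s -> run (c, n :: s)
  | Eval (Add (a, b)) :: c, s -> run (Eval a :: Eval b :: Plus :: c, s)
  | Plus :: c, n :: m :: s -> run (c, (m + n) :: s)
  | _ -> failwith "stuck"

let () =
  let e = Add (Lit 1, Add (Lit 2, Lit 3)) in
  assert (drive ([Eval e], []) = 6);
  assert (run ([Eval e], []) = 6)
```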