    The Best of Both Worlds: Linear Functional Programming without Compromise

    We present a linear functional calculus with both the safety guarantees expressible with linear types and the rich language of combinators and composition provided by functional programming. Unlike previous combinations of linear typing and functional programming, we compromise neither the linear side (for example, our linear values are first-class citizens of the language) nor the functional side (for example, we do not require duplicate definitions of compositions for linear and unrestricted functions). To do so, we must generalize abstraction and application to encompass both linear and unrestricted functions. We capture the typing of the generalized constructs with a novel use of qualified types. Our system maintains the metatheoretic properties of the theory of qualified types, including principal types and decidable type inference. Finally, we give a formal basis for our claims of expressiveness by showing that evaluation respects linearity and that our language is a conservative extension of existing functional calculi. Comment: extended version.
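    To make the distinction concrete, here is a minimal sketch written against GHC's LinearTypes extension. GHC's multiplicity-annotated arrows are a related but different design from the paper's qualified-types approach, so this illustrates the goal (linear and unrestricted functions side by side) rather than the paper's system itself.

        {-# LANGUAGE LinearTypes #-}
        module LinearSketch where

        -- A linear function: the argument must be consumed exactly once.
        swap :: (a, b) %1 -> (b, a)
        swap (x, y) = (y, x)

        -- An unrestricted function: the argument may be used any number of times.
        dup :: a -> (a, a)
        dup x = (x, x)

        -- Composition of linear arrows. In GHC this definition does not also
        -- cover unrestricted arrows; avoiding exactly that duplication is
        -- what the paper's qualified types are for.
        composeL :: (b %1 -> c) %1 -> (a %1 -> b) %1 -> a %1 -> c
        composeL f g x = f (g x)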

    Compositional Reasoning for Explicit Resource Management in Channel-Based Concurrency

    We define a pi-calculus variant with a costed semantics where channels are treated as resources that must be explicitly allocated before they are used and can be deallocated when no longer required. We use a substructural type system tracking permission transfer to construct coinductive proof techniques for comparing the behaviour and resource usage efficiency of concurrent processes. We establish full abstraction results between our coinductive definitions and a contextual behavioural preorder describing a notion of process efficiency with respect to its management of resources. We also justify these definitions and the respective proof techniques through numerous examples and a case study comparing two concurrent implementations of an extensible buffer. Comment: 51 pages, 7 figures.
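    As a rough illustration of the costed view (a toy Haskell model of our own devising, not the paper's calculus; the names Heap and Cost are assumptions), allocation and deallocation can be made explicit operations that each incur a unit of cost:

        module CostedChannels where

        type Chan = Int

        -- Toy cost ledger: one unit per allocation and per deallocation.
        data Cost = Cost { allocs :: Int, deallocs :: Int } deriving Show

        -- The live set models which channels currently exist.
        data Heap = Heap { nextChan :: Chan, liveChans :: [Chan] }

        emptyState :: (Heap, Cost)
        emptyState = (Heap 0 [], Cost 0 0)

        -- Allocating returns a fresh channel and charges one allocation.
        alloc :: (Heap, Cost) -> (Chan, (Heap, Cost))
        alloc (Heap n live, c) =
          (n, (Heap (n + 1) (n : live), c { allocs = allocs c + 1 }))

        -- Deallocating a live channel charges one deallocation; touching a
        -- dead channel is the kind of resource error the paper's
        -- substructural type system rules out statically.
        dealloc :: Chan -> (Heap, Cost) -> (Heap, Cost)
        dealloc ch (Heap n live, c)
          | ch `elem` live = (Heap n (filter (/= ch) live),
                              c { deallocs = deallocs c + 1 })
          | otherwise      = error "dealloc: channel is not live"

    Two implementations of the same buffer can then be compared by the Cost they accumulate, which is the intuition behind the paper's efficiency preorder.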

    Modular, higher order cardinality analysis in theory and practice

    Since the mid '80s, compiler writers for functional languages (especially lazy ones) have been writing papers about identifying and exploiting thunks and lambdas that are used only once. However, it has proved difficult to achieve both power and simplicity in practice. In this paper, we describe a new, modular analysis for a higher-order language which is both simple and effective. We prove the analysis sound with respect to a standard call-by-need semantics, and present measurements of its use in a full-scale, state-of-the-art optimising compiler. The analysis finds many single-entry thunks and one-shot lambdas and enables a number of program optimisations. This paper extends our preceding conference publication (Sergey et al., 2014, Proceedings of the 41st Annual ACM SIGPLAN-SIGACT Symposium on Principles of Programming Languages (POPL 2014), ACM, pp. 335–348) with proofs, an expanded report on the evaluation, and a detailed examination of the factors causing the loss of precision in the analysis.
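    The classic payoff, sketched below in plain Haskell, is that knowing a lambda is entered at most once lets the compiler float work inside it without losing sharing. GHC exposes a manual hint, oneShot from GHC.Exts; the paper's analysis is about inferring such facts automatically, so treat this only as an illustration of the property being inferred.

        import GHC.Exts (oneShot)

        -- 'expensive' is computed once per 'x' and shared across every call
        -- of the returned lambda; floating it inside would duplicate work
        -- unless the lambda is known to be one-shot.
        shared :: Int -> Int -> Int
        shared x = let expensive = x * x
                   in \y -> expensive + y

        -- If the lambda is one-shot, the thunk for 'expensive' buys nothing,
        -- and computing inside the lambda avoids its allocation entirely.
        -- 'oneShot' asserts that fact manually.
        floated :: Int -> Int -> Int
        floated x = oneShot (\y -> x * x + y)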

    Effective Bug Finding

    Lazy Evaluation: From natural semantics to a machine-checked compiler transformation

    In order to solve a long-standing problem with list fusion, a new compiler transformation, "Call Arity", is developed and implemented in the Haskell compiler GHC. It is formally proven to not degrade program performance; the proof is machine-checked using the interactive theorem prover Isabelle. To that end, a formalization of Launchbury's Natural Semantics for Lazy Evaluation is modelled in Isabelle, including a correctness and adequacy proof.
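    The list-fusion problem referred to above is the standard foldl-via-foldr encoding, sketched here in plain Haskell (the well-known motivating example behind Call Arity, not code from the paper):

        -- foldl written in terms of foldr so that it can participate in
        -- foldr/build list fusion:
        foldlViaFoldr :: (b -> a -> b) -> b -> [a] -> b
        foldlViaFoldr f z xs = foldr (\x k acc -> k (f acc x)) id xs z

        -- Naively this builds a chain of closures, one per list element.
        -- Call Arity establishes that each of those functions is always
        -- called with one further argument, so GHC may eta-expand the loop
        -- into an ordinary accumulating traversal with no closure chain.
        sumFused :: [Int] -> Int
        sumFused = foldlViaFoldr (+) 0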
