
    Strategic polymorphism requires just two combinators!

    In previous work, we introduced the notion of functional strategies: first-class generic functions that can traverse terms of any type while mixing uniform and type-specific behaviour. Functional strategies transpose the notion of term rewriting strategies (with coverage of traversal) to the functional programming paradigm. Meanwhile, a number of Haskell-based models and combinator suites have been proposed to support generic programming with functional strategies. In the present paper, we provide a compact and mature reconstruction of functional strategies. We capture strategic polymorphism with just two primitive combinators, without commitment to a specific functional language. We analyse the design space for implementation models of functional strategies. For completeness, we also provide an operational reference model for implementing functional strategies (in Haskell). We demonstrate the generality of our approach by reconstructing representative fragments of the Strafunski library for functional strategies. (Comment: A preliminary version of this paper was presented at IFL 2002 and included in the informal preproceedings of the workshop.)
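
    The two primitives themselves are best read in the paper, but as an illustration of the programming style, here is a small Haskell sketch using the closely related SYB library (Data.Generics from the syb package) rather than the paper's own combinators: a type-specific rewrite is mixed into a fully generic traversal, much as Strafunski's strategy combinators allow.

        {-# LANGUAGE DeriveDataTypeable #-}
        -- A sketch in the spirit of functional strategies; the combinators
        -- below are SYB's, not the paper's two primitives.
        import Data.Generics (Data, Typeable, everywhere, mkT)

        data Expr = Lit Int | Add Expr Expr | Neg Expr
          deriving (Show, Data, Typeable)

        -- Type-specific behaviour: a single rewrite rule on Expr.
        simplify :: Expr -> Expr
        simplify (Neg (Neg e)) = e
        simplify e             = e

        -- Uniform behaviour: apply the rule at every node of any term.
        simplifyAll :: Data a => a -> a
        simplifyAll = everywhere (mkT simplify)

        main :: IO ()
        main = print (simplifyAll (Add (Neg (Neg (Lit 1))) (Lit 2)))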

    Type Classes for Lightweight Substructural Types

    Linear and substructural types are powerful tools, but adding them to standard functional programming languages often means introducing extra annotations and typing machinery. We propose a lightweight substructural type system that recasts the structural rules of weakening and contraction as type classes; we demonstrate this design in a prototype language, Clamp. Clamp supports polymorphic substructural types as well as an expressive system of mutable references, yet adds little overhead to a standard Damas-Hindley-Milner type system enriched with type classes. We have established type safety for the core model and implemented a type checker with type inference in Haskell. (Comment: In Proceedings LINEARITY 2014, arXiv:1502.0441.)
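
    To make the design idea concrete, here is a hypothetical Haskell sketch (class and function names are illustrative, not Clamp's): the structural rules become ordinary type classes, and polymorphic code records its substructural requirements as constraints.

        -- Contraction as a type class: instances may be used more than once.
        class Dup a where
          dup :: a -> (a, a)

        -- Weakening as a type class: instances may be discarded unused.
        class Drop a where
          discard :: a -> ()

        instance Dup  Int where dup n     = (n, n)
        instance Drop Int where discard _ = ()

        -- An affine resource: Drop but no Dup, so it cannot be duplicated.
        newtype Token = Token String
        instance Drop Token where discard _ = ()

        -- Generic code states its structural needs in its constraints.
        twice :: Dup a => (a -> b) -> a -> (b, b)
        twice f x = let (x1, x2) = dup x in (f x1, f x2)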

    Programming with Purity Reflection: Peaceful Coexistence of Effects, Laziness, and Parallelism

    We present purity reflection, a programming language feature that enables higher-order functions to inspect the purity of their function arguments and to vary their behavior based on this information. The upshot is that operations on data structures can selectively use lazy and/or parallel evaluation while ensuring that side effects are never lost or re-ordered. The technique builds on a recent Hindley-Milner-style type and effect system based on Boolean unification, which supports both effect polymorphism and complete type inference. We illustrate that avoiding the so-called 'poisoning problem' is crucial to supporting purity reflection. We propose several new data structures that use purity reflection to switch between eager and lazy, sequential and parallel evaluation: a DelayList, which is maximally lazy but switches to eager evaluation for impure operations, and a DelayMap, which is maximally lazy in its values but also exploits eager and parallel evaluation. We implement purity reflection as an extension of the Flix programming language, together with a new effect-aware form of monomorphization that eliminates purity reflection at compile time. Finally, we evaluate the cost of this monomorphization on compilation time and code size, and find that it is minimal.
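
    Flix does this at the type-and-effect level; the following Haskell sketch merely models the idea with an explicit purity tag (a GADT), so a higher-order function can branch on whether its argument is pure. Names are illustrative, not Flix's API.

        {-# LANGUAGE GADTs #-}

        -- A function argument tagged with its (known) purity.
        data Fn a b where
          Pure   :: (a -> b)    -> Fn a b  -- side-effect free
          Impure :: (a -> IO b) -> Fn a b  -- may perform effects

        -- A map that stays lazy for pure arguments but forces impure
        -- ones eagerly and in order, so effects are never lost or
        -- re-ordered (compare the paper's DelayList).
        mapFn :: Fn a b -> [a] -> IO [b]
        mapFn (Pure f)   xs = pure (map f xs)  -- safe to defer
        mapFn (Impure f) xs = mapM f xs        -- run effects left to right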

    Forgetting Exceptions is Harmful in Language Learning

    We show that in language learning, contrary to received wisdom, keeping exceptional training instances in memory can be beneficial for generalization accuracy. We investigate this phenomenon empirically on a selection of benchmark natural language processing tasks: grapheme-to-phoneme conversion, part-of-speech tagging, prepositional-phrase attachment, and base noun phrase chunking. In a first series of experiments we combine memory-based learning with training-set editing techniques, in which instances are edited based on their typicality and class-prediction strength. Results show that editing exceptional instances (those with low typicality or low class-prediction strength) tends to harm generalization accuracy. In a second series of experiments we compare memory-based learning and decision-tree learning on the same selection of tasks, and find that decision-tree learning often performs worse than memory-based learning. Moreover, the decrease in performance can be linked to the degree of abstraction from exceptions (i.e., pruning or eagerness). We provide explanations for both results in terms of the properties of the natural language processing tasks and the learning algorithms. (Comment: 31 pages, 7 figures, 10 tables; uses 11pt, fullname, a4wide TeX styles. Pre-print version of an article to appear in Machine Learning 11:1-3, Special Issue on Natural Language Learning. Figures on page 22 slightly compressed to avoid page overload.)
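
    To make the editing criterion concrete, here is a small, hypothetical Haskell sketch of class-prediction strength (the fraction of an instance's k nearest neighbours that share its class) and the editing step whose value the paper calls into question; the distance measure and names are illustrative.

        import Data.List (sortOn)

        type Instance = ([Double], String)  -- feature vector, class label

        -- Squared Euclidean distance between feature vectors.
        dist :: [Double] -> [Double] -> Double
        dist xs ys = sum [ (x - y) ^ (2 :: Int) | (x, y) <- zip xs ys ]

        -- Class-prediction strength: fraction of the k nearest
        -- neighbours (the instance itself excluded) sharing its class.
        cps :: Int -> [Instance] -> Instance -> Double
        cps k memory inst@(fs, c) =
          let others     = [ i | i <- memory, i /= inst ]
              neighbours = take k (sortOn (dist fs . fst) others)
              same       = length [ () | (_, c') <- neighbours, c' == c ]
          in  fromIntegral same / fromIntegral (max 1 (length neighbours))

        -- Editing drops "exceptional" low-CPS instances; the paper's
        -- finding is that this tends to harm generalization accuracy.
        edit :: Double -> Int -> [Instance] -> [Instance]
        edit threshold k memory =
          [ i | i <- memory, cps k memory i >= threshold ]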

    Termination analysis based on operational semantics

    In principle, termination analysis is easy: find a well-founded partial order and prove that calls decrease with respect to this order. In practice this often requires an oracle (or a theorem prover) for determining the well-founded order, and this oracle may not be easily implementable. Our approach circumvents some of these problems by exploiting the inductive definition of algebraic data types and using pattern matching as in functional languages. We develop a termination analysis for a higher-order functional language; the analysis incorporates and extends polymorphic type inference and axiomatizes a class of well-founded partial orders for multiple-argument functions (as in Standard ML and Miranda). Semantics is given by means of operational (natural-style) semantics and soundness is proved; this involves extending the semantic universe, and we relate these extensions to the techniques of denotational semantics. For the partiality aspects of the soundness proof, it suffices to incorporate approximations to the desired fixed points; for the totality aspects, we also have to incorporate functions that are forced to terminate (in a way that might violate the monotonicity of denotational semantics).
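
    For intuition, the orders being axiomatized look like the following Haskell examples: recursion on a pattern-matched subterm (the subterm order), and a multiple-argument function justified by a lexicographic order.

        data Nat = Z | S Nat

        -- Single argument: each call is on a proper subterm of the
        -- scrutinee, so the subterm order on Nat proves termination.
        plus :: Nat -> Nat -> Nat
        plus Z     n = n
        plus (S m) n = S (plus m n)

        -- Multiple arguments: Ackermann terminates by the lexicographic
        -- order on its argument pair: either the first argument shrinks,
        -- or it stays fixed and the second shrinks.
        ack :: Nat -> Nat -> Nat
        ack Z     n     = S n
        ack (S m) Z     = ack m (S Z)
        ack (S m) (S n) = ack m (ack (S m) n)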