49 research outputs found

    Type-Inference Based Short Cut Deforestation (nearly) without Inlining

    Deforestation optimises a functional program by transforming it into another one that does not create certain intermediate data structures. In [ICFP'99] we presented a type-inference based deforestation algorithm which performs extensive inlining. However, across module boundaries only limited inlining is practically feasible. Furthermore, inlining is a non-trivial transformation which is therefore best implemented as a separate optimisation pass. To perform short cut deforestation (nearly) without inlining, Gill suggested splitting definitions into workers and wrappers and inlining only the small wrappers, which transfer the information needed for deforestation. We show that Gill's use of the function build limits deforestation and note that his reasons for using build do not apply to our approach. Hence we develop a more general worker/wrapper scheme without build. We give a type-inference based algorithm which splits definitions into workers and wrappers. Finally, we show that we can deforest more expressions with the worker/wrapper scheme than with the algorithm that relies on inlining.
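    The short cut fusion rule that Gill's worker/wrapper scheme transmits across module boundaries can be sketched in a few lines of Haskell. This is a minimal illustration of the standard foldr/build rule, not the paper's generalised scheme; the names upto and sumUpto are illustrative.

```haskell
{-# LANGUAGE RankNTypes #-}

-- build abstracts a list over its constructors; the short cut fusion
-- rule  foldr k z (build g) = g k z  then removes the intermediate list.
build :: (forall b. (a -> b -> b) -> b -> b) -> [a]
build g = g (:) []

-- A producer written via build.
upto :: Int -> [Int]
upto n = build (\cons nil ->
  let go i | i > n     = nil
           | otherwise = cons i (go (i + 1))
  in go 1)

-- A consumer written via foldr; after fusion, sumUpto runs as a loop
-- and the list [1..n] is never materialised.
sumUpto :: Int -> Int
sumUpto n = foldr (+) 0 (upto n)
```

    The paper's point is that forcing producers into the build shape restricts which expressions can be deforested, which motivates a worker/wrapper split that does not go through build.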

    First-class polymorphism for ML

    Polymorphism in ML is implicit: type variables are silently introduced and eliminated. The lack of an explicit declaration of type variables restricts the expressiveness of parameterised modules (functors). Certain polymorphic functions cannot be expressed as functors, because implicit type parameters of polymorphic functions are in one respect more powerful than formal type parameters of functors. The title suggests that this lack of expressiveness is due to a restricted ability to abstract --- polymorphism is restricted. Type variables can only be abstracted from value declarations, but not from other forms of declarations, in particular not from structure declarations. The paper shows, in the case of Standard ML, how the syntax and semantics can be modified to fill this language gap. This is not so much a question of programming language design as a contribution to a better understanding of the relationship between polymorphic functions, polymorphic types, and functors.
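    The gap between implicit polymorphism and formal type parameters can be made concrete in Haskell, which (unlike Standard ML) can give an argument a polymorphic type directly via a rank-2 type. This is a stand-in sketch, not the paper's SML extension; applyToBoth is an illustrative name.

```haskell
{-# LANGUAGE RankNTypes #-}

-- The argument f is used at two different element types in one body,
-- so it must itself be polymorphic. In Standard ML such a parameter
-- cannot be given to an ordinary function: only a functor can take a
-- formal type parameter, and even then not in this first-class way.
applyToBoth :: (forall a. [a] -> Int) -> ([Bool], [Char]) -> (Int, Int)
applyToBoth f (xs, ys) = (f xs, f ys)
```

    Passing the ordinary polymorphic function length for f type-checks precisely because f's polymorphism is declared explicitly, which is the kind of abstraction the paper argues SML should support.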

    Translatability of schemas over restricted interpretations

    We study a notion of translatability between classes of schemas when restrictions are placed on interpretations. Unlike the usual notion of translatability, ours lets the translation depend on the interpretation. We consider five classes of schemas. Three of these have been extensively studied in the literature: flow-chart schemas, flow-chart schemas with counters, and recursive schemas. The other two are defined herein; the first is a class of “maximal power” and is equivalent to similarly motivated classes of other investigators, while the second is, in some sense, a nontrivial class of “minimal power”. Our main results specify restrictions on interpretations under which one class becomes translatable into another, even though it is not translatable into it over all (unrestricted) interpretations. Additional results specify restrictions on interpretations under which one class remains untranslatable into another; the proofs of these clarify the mechanisms behind the main results. Lastly, we consider the notion of effective computability in algebraic structures in the light of the main results.

    A Linearization of the Lambda-Calculus and Consequences

    This report is more significant for the methodology it develops than for the specific technical results it establishes. What we set up is a new, enlarged framework for the study of β-reduction. There is unavoidably a profusion of new definitions, but once these are understood, the technical results are not surprising and "work as they should". Finally, we point out that the present report is unfinished in many ways. Expediency is only partly the reason, as it seems more important in a first report to sketch the broad lines of a new methodology than to examine the implications in detail. We leave some questions unanswered (e.g. Conjecture 2.21), some results proved only in outline (e.g. Lemma 4.4), and some partially proved by methods not promoted in this report (e.g. Corollary 4.6). More importantly, we do not fully characterize typability in the type-inference systems defined in Section 4 (they do not assign types to all terms), and we leave wide open possible applications of our methodology to other questions (e.g. alternative proofs of the β-SN property of typed λ-calculi).

    Acknowledgements: Joe Wells played a crucial role in the early stages of the research, by proofreading numerous handwritten drafts and correcting many (sometimes serious) mistakes in them. Although other members of the Church Project will not always recognize the source of the inspiration, many of the ideas in this report were suggested by research they have conducted in recent months and presented in the weekly seminar.

    Recursion Versus Iteration at Higher-Orders

    We extend the well-known analysis of recursion-removal in first-order program schemes to a higher-order language of finitely typed and polymorphically typed functional programs, the semantics of which is based on call-by-name parameter-passing. We introduce methods for recursion-removal, i.e. for translating higher-order recursive programs into higher-order iterative programs, and determine conditions under which this translation is possible. Just as finitely typed recursive programs are naturally classified by their orders, so are finitely typed iterative programs. This syntactic classification of recursive and iterative programs corresponds to a semantic (or computational) classification: the higher the order of programs, the more functions they can compute.

    1 Background and Motivation

    Although our analysis is entirely theoretical, as it combines methods from typed λ-calculi, from abstract recursion theory and from denotational semantics, the problems we consider have a strong practical..
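    The first-order instance of recursion-removal that the paper generalises can be shown in a few lines of Haskell: a non-tail recursive definition is translated into an accumulator-driven loop. This is only the classical base case, not the paper's higher-order translation; facRec and facIter are illustrative names.

```haskell
-- Recursive form: the pending multiplications pile up on the stack.
facRec :: Integer -> Integer
facRec 0 = 1
facRec n = n * facRec (n - 1)

-- Iterative form: the recursion is removed in favour of a tail loop
-- threading an accumulator, i.e. a while-loop in functional clothing.
facIter :: Integer -> Integer
facIter n = go n 1
  where
    go 0 acc = acc
    go k acc = go (k - 1) (k * acc)
```

    The paper's question is when such a translation remains possible once programs, and the loops replacing them, may themselves be of higher order.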

    Beta-Reduction As Unification

    In this report, we use a lean version of the usual system of intersection types. Hence, UP is also an appropriate unification problem for characterizing typability of λ-terms in this system. Quite apart from the new light it sheds on β-reduction, such an analysis turns out to have several other benefits.
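    For reference, the β-reduction step that the report recasts as a unification problem can be written out concretely for de Bruijn-indexed terms. This is a textbook substitution-based implementation, not the report's unification formulation; the names shift, subst, and betaStep are illustrative.

```haskell
-- Lambda terms with de Bruijn indices.
data Term = Var Int | Lam Term | App Term Term
  deriving (Eq, Show)

-- Shift free indices >= c by d.
shift :: Int -> Int -> Term -> Term
shift d c (Var k)   = Var (if k >= c then k + d else k)
shift d c (Lam t)   = Lam (shift d (c + 1) t)
shift d c (App f a) = App (shift d c f) (shift d c a)

-- Substitute s for index j.
subst :: Int -> Term -> Term -> Term
subst j s (Var k)
  | k == j    = s
  | otherwise = Var k
subst j s (Lam t)   = Lam (subst (j + 1) (shift 1 0 s) t)
subst j s (App f a) = App (subst j s f) (subst j s a)

-- One beta step: (\x. body) arg  reduces to  body[x := arg].
betaStep :: Term -> Maybe Term
betaStep (App (Lam body) arg) =
  Just (shift (-1) 0 (subst 0 (shift 1 0 arg) body))
betaStep _ = Nothing
```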