    An Environment for Analyzing Space Optimizations in Call-by-Need Functional Languages

    We present an implementation of the interpreter LRPi for the call-by-need calculus LRP, based on a variant of Sestoft's abstract machine Mark 1 and extended with an eager garbage collector. It is used as a tool for exact space-usage analyses, supporting our investigations into space improvements of call-by-need calculi.
    Comment: In Proceedings WPTE 2016, arXiv:1701.0023
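    To give an idea of the kind of machine involved, the following Haskell fragment sketches a Sestoft Mark 1-style abstract machine for a small call-by-need calculus with let-bound sharing. It is a minimal illustration only: the constructor names are invented here, and the eager garbage-collection step that LRPi adds is omitted.

        import qualified Data.Map as M

        type Var = String

        data Expr
          = V Var                      -- variable
          | Lam Var Expr               -- \y -> e
          | App Expr Var               -- e x   (arguments are variables, as in Mark 1)
          | Let Var Expr Expr          -- let x = e1 in e2, introduces sharing
          deriving Show

        data Frame = Arg Var | Upd Var -- pending argument / update marker
        type Heap  = M.Map Var Expr
        type State = (Heap, Expr, [Frame])

        step :: State -> Maybe State
        step (h, App e x, s)             = Just (h, e, Arg x : s)             -- unwind application
        step (h, Lam y e, Arg x : s)     = Just (h, subst y x e, s)           -- beta with a variable argument
        step (h, V x, s)
          | Just e <- M.lookup x h       = Just (M.delete x h, e, Upd x : s)  -- enter a heap binding
        step (h, v@(Lam _ _), Upd x : s) = Just (M.insert x v h, v, s)        -- update the binding with its value
        step (h, Let x e b, s)           = Just (M.insert x e h, b, s)        -- allocate (x assumed fresh)
        step _                           = Nothing                            -- a value, or a stuck state

        -- variable-for-variable substitution; a real machine renames to keep names fresh
        subst :: Var -> Var -> Expr -> Expr
        subst y x (V z)                = V (if z == y then x else z)
        subst y x (Lam z e)   | z /= y = Lam z (subst y x e)
        subst y x (App e z)            = App (subst y x e) (if z == y then x else z)
        subst y x (Let z a b) | z /= y = Let z (subst y x a) (subst y x b)
        subst _ _ e                    = e

        -- run to a final state
        run :: State -> State
        run st = maybe st run (step st)

    Exact space accounting of the kind the paper targets can be obtained by measuring the sizes of the heap and the stack at every machine step.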

    Parametric polymorphism and operational improvement

    Parametricity, in both operational and denotational forms, has long been a useful tool for reasoning about program correctness. However, there is as yet no comparable technique for reasoning about program improvement, that is, for showing that one program uses fewer resources than another. Existing theories of parametricity cannot be used to address this problem, as they are agnostic with regard to resource usage. This article fills the gap by presenting a new operational theory of parametricity that is sensitive to time costs and can be used to reason about time improvement properties. We demonstrate the applicability of our theory by showing how it can be used to prove that a number of well-known program fusion techniques are time improvements, including fixed point fusion, map fusion, and short cut fusion.
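    For readers unfamiliar with the fusion laws in question, the Haskell snippet below restates map fusion and short cut fusion in their standard textbook form; the definitions are not taken from the article, whose contribution is the time-improvement proofs themselves.

        {-# LANGUAGE RankNTypes #-}

        -- map fusion:  map f . map g  ==>  map (f . g)
        -- the left-hand side builds and traverses an intermediate list,
        -- the right-hand side traverses the input once
        mapMap, mapFused :: (b -> c) -> (a -> b) -> [a] -> [c]
        mapMap   f g = map f . map g
        mapFused f g = map (f . g)

        -- short cut fusion:  foldr k z (build g)  ==>  g k z
        build :: (forall b. (a -> b -> b) -> b -> b) -> [a]
        build g = g (:) []

        sumUpTo :: Int -> Int
        sumUpTo n = foldr (+) 0 (build (\c nil -> foldr c nil [1 .. n]))
        -- after fusion this is equivalent to foldr (+) 0 [1 .. n],
        -- with no intermediate list constructed

        main :: IO ()
        main = do
          print (mapMap (+ 1) (* 2) [1, 2, 3] == mapFused (+ 1) (* 2) [1, 2, 3])
          print (sumUpTo 10)   -- 55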

    On generic context lemmas for lambda calculi with sharing

    This paper proves several generic variants of context lemmas and thus contributes to the toolbox for developing observational semantics based on a reduction semantics for a language. The context lemmas are provided for may-convergence as well as two variants of must-convergence, and for a wide class of extended lambda calculi that satisfy certain abstract conditions. The calculi must have a form of node sharing; e.g., plain beta reduction is not permitted. There are two variants: weakly sharing calculi, where beta reduction is only permitted for arguments that are variables, and strongly sharing calculi, which roughly correspond to call-by-need calculi, where beta reduction is completely replaced by a sharing variant. The calculi must obey three abstract assumptions, which are in general easily recognizable from the syntax and the reduction rules. The generic context lemmas have as instances several context lemmas already proved in the literature for specific lambda calculi with sharing. Their scope comprises not only call-by-need calculi but also call-by-value calculi with a form of built-in sharing. Investigations into other, new variants of extended lambda calculi with sharing, where the language, the reduction rules, and/or the strategy vary, will be simplified by our result, since specific context lemmas are immediately derivable from the generic context lemma, provided our abstract conditions are met.
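    The difference between the two sharing disciplines can be made concrete with a small sketch (illustrative Haskell, not the paper's notation): weakly sharing calculi restrict beta reduction to variable arguments, while strongly sharing calculi replace it by a rule that introduces a let binding instead of copying the argument.

        type Name = String

        data Term
          = Var Name
          | Lam Name Term
          | App Term Term
          | Let Name Term Term        -- let x = e1 in e2, the shared binding
          deriving Show

        -- capture-avoidance is ignored here; assume all bound names are distinct
        substVar :: Name -> Name -> Term -> Term
        substVar y x t = case t of
          Var z     -> Var (if z == y then x else z)
          Lam z e   -> if z == y then t else Lam z (substVar y x e)
          App a b   -> App (substVar y x a) (substVar y x b)
          Let z a b -> if z == y then t else Let z (substVar y x a) (substVar y x b)

        -- weakly sharing: beta fires only when the argument is already a variable
        betaVar :: Term -> Maybe Term
        betaVar (App (Lam y body) (Var x)) = Just (substVar y x body)
        betaVar _                          = Nothing

        -- strongly sharing (call-by-need style): beta never copies the argument,
        -- it creates a let binding instead, so the argument is evaluated at most once
        betaShare :: Term -> Maybe Term
        betaShare (App (Lam y body) arg) = Just (Let y arg body)
        betaShare _                      = Nothing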

    Phobos: A front-end approach to extensible compilers (long version)

    This paper describes a practical approach to implementing certain kinds of domain-specific languages with extensible compilers. Given a compiler with one or more front-end languages, we introduce the idea of a "generic" front-end that allows the syntactic and semantic specification of domain-specific languages. Phobos, our generic front-end, offers modular language specification, allowing the programmer to define new syntax and semantics incrementally.
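    The following Haskell sketch illustrates the general idea of such a generic front-end under invented names: each surface language supplies a parser and a translation into the compiler's shared intermediate representation. It is a hypothetical rendering of the concept, not Phobos' actual interface.

        -- a toy shared intermediate representation
        data IR
          = Lit Int
          | Add IR IR
          deriving Show

        class FrontEnd lang where
          parseSrc  :: String -> Maybe lang   -- concrete syntax of the new language
          translate :: lang -> IR             -- its semantics, given by translation to IR

        -- a toy "sum list" language: "1 2 3" means 1 + 2 + 3
        newtype SumList = SumList [Int]

        instance FrontEnd SumList where
          parseSrc s = SumList <$> traverse readInt (words s)
            where readInt w = case reads w of [(n, "")] -> Just n; _ -> Nothing
          translate (SumList ns) = foldr (Add . Lit) (Lit 0) ns

        compileWith :: FrontEnd lang => (String -> Maybe lang) -> String -> Maybe IR
        compileWith parser = fmap translate . parser
        -- e.g. compileWith (parseSrc :: String -> Maybe SumList) "1 2 3"
        --      ==> Just (Add (Lit 1) (Add (Lit 2) (Add (Lit 3) (Lit 0))))

    In this rendering, a new language is added by writing one more instance, without touching the rest of the compiler, which mirrors the incremental specification the abstract describes.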

    Formally Specifying and Proving Operational Aspects of Forensic Lucid in Isabelle

    The Forensic Lucid intensional programming language has been proposed for intensional cyberforensic analysis. In large part, the language is based on various predecessor and codecessor Lucid dialects, bound together by the higher-order intensional logic (HOIL) behind them. This work formally specifies the operational aspects of the Forensic Lucid language and compiles a theory of its constructs using Isabelle, a proof assistant system.
    Comment: 23 pages, 3 listings, 3 figures, 1 table, 1 appendix with theorems, pp. 76--98. TPHOLs 2008 Emerging Trends Proceedings, August 18-21, Montreal, Canada. Editors: Otmane Ait Mohamed, Cesar Munoz, and Sofiene Tahar. The individual paper's PDF is at http://users.encs.concordia.ca/~tphols08/TPHOLs2008/ET/76-98.pd

    Lazy unification with inductive simplification

    Unification in the presence of an equational theory is an important problem in theorem proving and in the integration of functional and logic programming languages. This paper presents an improvement of previously proposed lazy unification methods by incorporating simplification with inductive axioms into the unification process. Inductive simplification reduces the search space, so that in some cases infinite search spaces are reduced to finite ones. Consequently, more efficient unification algorithms can be achieved. We prove soundness and completeness of our method for equational theories represented by ground confluent and terminating rewrite systems.
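    As a concrete and purely illustrative example of how inductive simplification cuts the search space, the Haskell sketch below normalizes terms with a terminating rewrite system that also contains an inductively proved equation before any unification is attempted. The term type and the Peano-arithmetic rules are assumptions made for this sketch, and the lazy (narrowing-based) unification procedure itself is not shown.

        data Term = Var String | Fun String [Term]
          deriving (Eq, Show)

        -- base rules:       add(0,y) -> y        add(s(x),y) -> s(add(x,y))
        -- inductive axiom:  add(x,0) -> x        (proved by induction, added for simplification)
        rewrite :: Term -> Maybe Term
        rewrite (Fun "add" [Fun "0" [], y])  = Just y
        rewrite (Fun "add" [Fun "s" [x], y]) = Just (Fun "s" [Fun "add" [x, y]])
        rewrite (Fun "add" [x, Fun "0" []])  = Just x
        rewrite _                            = Nothing

        -- normalize: apply a rule anywhere in the term until none applies;
        -- this loop terminates because the rewrite system terminates
        normalize :: Term -> Term
        normalize t = maybe t normalize (step t)
          where
            step u@(Fun f args) = case rewrite u of
              Just u' -> Just u'
              Nothing -> Fun f <$> stepArgs args
            step _ = Nothing
            stepArgs []       = Nothing
            stepArgs (a : as) = case step a of
              Just a' -> Just (a' : as)
              Nothing -> (a :) <$> stepArgs as

        -- e.g. normalize (Fun "add" [Var "x", Fun "0" []])  ==>  Var "x",
        -- so unifying it with Var "x" succeeds immediately instead of starting
        -- an unbounded search with the recursive add rules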