17 research outputs found

    Compilation by Transformation in Non-Strict Functional Languages

    In this thesis we present and analyse a set of automatic source-to-source program transformations that are suitable for incorporation in optimising compilers for lazy functional languages.

    Common Subexpression Elimination in a Lazy Functional Language

    Common subexpression elimination is a well-known compiler optimisation that saves time by avoiding the repetition of the same computation. To our knowledge it has not yet been applied to lazy functional programming languages, although these languages offer several advantages. First, their referential transparency makes the identification of common subexpressions very simple. Second, more common subexpressions can be recognised, because they can be of arbitrary type, whereas standard common subexpression elimination only shares primitive values. However, because lazy functional languages decouple program structure from data space allocation and control flow, analysing the transformation's effects and deciding under which conditions eliminating a common subexpression is beneficial proves to be quite difficult. We developed and implemented the transformation for the language Haskell by extending the Glasgow Haskell Compiler and measured its effectiveness on real-world programs.
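
    The following minimal Haskell sketch (a hypothetical example, not one of the paper's benchmarks) illustrates the kind of rewrite described above: a repeated subexpression of list type is bound once with let, something standard CSE restricted to primitive values cannot express.

        -- Before: the list 'map (^ 2) xs' is computed twice.
        norms :: [Double] -> (Double, Double)
        norms xs = (sum (map (^ 2) xs), maximum (map (^ 2) xs))

        -- After common subexpression elimination the repeated subexpression is
        -- let-bound once; because it has list type, this sharing goes beyond
        -- standard CSE on primitive values.
        norms' :: [Double] -> (Double, Double)
        norms' xs = let sq = map (^ 2) xs   -- shared, evaluated at most once
                    in (sum sq, maximum sq)

    Note that the shared list now stays live until both consumers have finished with it, which is exactly the kind of space-versus-time trade-off that makes deciding profitability difficult in a lazy language.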

    Maximal Sharing in the Lambda Calculus with letrec

    Increasing sharing in programs is desirable to compactify the code and to avoid duplication of reduction work at run-time, thereby speeding up execution. We show how a maximal degree of sharing can be obtained for programs expressed as terms in the lambda calculus with letrec. We introduce a notion of 'maximal compactness' for lambda-letrec-terms among all terms with the same infinite unfolding. Instead of being defined purely syntactically, this notion is based on a graph semantics: lambda-letrec-terms are interpreted as first-order term graphs so that unfolding equivalence between terms is preserved and reflected through bisimilarity of the term graph interpretations. Compactness of the term graphs can then be compared via functional bisimulation. We describe practical and efficient methods for the following two problems: transforming a lambda-letrec-term into a maximally compact form, and deciding whether two lambda-letrec-terms are unfolding-equivalent. The transformation of a lambda-letrec-term L into maximally compact form L_0 proceeds in three steps: (i) translate L into its term graph G = [[L]]; (ii) compute the maximally shared form of G as its bisimulation collapse G_0; (iii) read back a lambda-letrec-term L_0 from the term graph G_0 with the property [[L_0]] = G_0. This guarantees that L_0 and L have the same unfolding, and that L_0 exhibits maximal sharing. The procedure for deciding whether two given lambda-letrec-terms L_1 and L_2 are unfolding-equivalent computes their term graph interpretations [[L_1]] and [[L_2]], and checks whether these term graphs are bisimilar. For illustration, we also provide a readily usable implementation. (18 pages, plus a 19-page appendix)
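
    Since Haskell's let is itself a letrec, unfolding equivalence and maximal compactness can be illustrated directly with a toy pair of terms (hypothetical, not taken from the paper):

        -- Two letrec-style terms with the same infinite unfolding 1 : 1 : 1 : ...
        onesA :: [Int]
        onesA = let xs = 1 : xs in xs        -- one-node cycle in the term graph

        onesB :: [Int]
        onesB = let xs = 1 : 1 : xs in xs    -- two-node cycle, same unfolding

        -- onesA and onesB are unfolding-equivalent (their term graphs are
        -- bisimilar), but onesA is the maximally compact representative:
        -- the bisimulation collapse of onesB's graph is onesA's graph.

        main :: IO ()
        main = print (take 5 onesA == take 5 onesB)   -- True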

    Systematic Intermediate Sequence Removal for Reduced Memory Accesses

    Modern software applications are growing in complexity and make very intensive use of data. A wide variety of data structures is therefore employed to facilitate the storage of, and access to, these vast amounts of computed information. Additionally, the need for reliable software design and the development of large applications following the object-oriented paradigm increase the number of dynamic buffers and of redundant accesses to the data stored in these buffers. In this paper, we propose a systematic design-optimisation methodology to remove these intermediate dynamic buffers, thereby reducing the memory accesses of the targeted applications without altering the input-output behaviour of the algorithms. The reduction focuses on sequences and is especially relevant for embedded systems, which have limited on-chip communication bandwidth and whose memory subsystem accounts for a large share of the energy budget, since every memory access costs energy. The effectiveness of the proposed methodology is assessed on a 3D-reconstruction multimedia application and shows a significant reduction in memory accesses. The general trends of the improvement and the scalability of our approach are further supported by a parameterized benchmark set.
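
    The paper targets dynamic buffers in object-oriented code; as a hedged analogue in the functional setting of the surrounding entries, the sketch below shows the same idea of removing an intermediate sequence by fusing its producer with its consumer (the names are illustrative only):

        -- Before: the intermediate list 'scaled' is materialised, written once
        -- and read once, much like a temporary buffer between two loops.
        energyBefore :: [Double] -> Double
        energyBefore samples =
          let scaled = map (* 2.0) samples          -- intermediate sequence
          in  sum (map (^ (2 :: Int)) scaled)

        -- After: producer and consumer are fused into a single traversal,
        -- so no intermediate sequence (and no extra memory traffic) remains.
        energyAfter :: [Double] -> Double
        energyAfter = foldr (\x acc -> (2.0 * x) ^ (2 :: Int) + acc) 0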

    A Graph-Based Higher-Order Intermediate Representation

    Many modern programming languages support both imperative and functional idioms. However, state-of-the-art imperative intermediate representations (IRs) cannot natively represent crucial functional concepts such as higher-order functions. Functional IRs, on the other hand, employ explicit scope nesting, which is cumbersome to maintain across certain transformations. In this paper we present Thorin: a higher-order, functional IR based on continuation-passing style that abandons explicit scope nesting in favor of a dependency graph. This makes Thorin an attractive IR for imperative as well as functional languages. Furthermore, we present a novel program transformation to eliminate the overhead caused by higher-order functions. The main component of this transformation is lambda mangling, an important transformation primitive in Thorin. We demonstrate that lambda mangling subsumes many classic program transformations such as tail-recursion elimination, loop unrolling, and (partial) inlining. In our experiments we show that higher-order programs translated with Thorin are consistently as fast as C programs.
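
    Thorin operates on a CPS-based graph IR, so the following Haskell fragment is only a hedged, source-level analogue of the overhead that lambda mangling removes: a higher-order loop is specialised for a known functional argument, after which no indirect call remains.

        -- Generic higher-order loop: calls an unknown function in its body.
        applyN :: Int -> (a -> a) -> a -> a
        applyN 0 _ x = x
        applyN n f x = applyN (n - 1) f (f x)

        -- After specialising for the known argument (+ 1), the indirect call
        -- disappears and the loop body is first order.
        applyNSucc :: Int -> Int -> Int
        applyNSucc 0 x = x
        applyNSucc n x = applyNSucc (n - 1) (x + 1)

        main :: IO ()
        main = print (applyN 10 (+ 1) 0, applyNSucc 10 0)   -- (10,10)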

    A complete proof of the safety of Nöcker's strictness analysis

    This paper proves the correctness of Nöcker's method of strictness analysis, implemented in the Clean compiler, which is an effective way of performing strictness analysis in lazy functional languages based on their operational semantics. We improve upon the work of Clark, Hankin and Hunt on the correctness of the abstract reduction rules. Our method fully covers the cycle detection rules, which are the main strength of Nöcker's strictness analysis. Our algorithm SAL is a reformulation of Nöcker's strictness analysis algorithm in a higher-order call-by-need lambda-calculus with case, constructors, letrec, and seq, extended by set constants such as Top or Inf, denoting sets of expressions. It is also possible to define new set constants by recursive equations with a greatest-fixpoint semantics. The operational semantics is a small-step semantics. Equality of expressions is defined by a contextual semantics that observes termination of expressions. Essentially, SAL is a non-termination checker. The proof of its correctness, and hence of Nöcker's strictness analysis, is based mainly on an exact analysis of the lengths of normal-order reduction sequences; the main measure is the number of 'essential' reductions in a normal-order reduction sequence. Our tools and results provide new insights into call-by-need lambda-calculi, the role of sharing in functional programming languages, and strictness analysis in general. The correctness result provides a foundation for Nöcker's strictness analysis in Clean, and also for its use in Haskell.
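
    The sketch below (ordinary Haskell, not the SAL algorithm itself) shows the payoff of a strictness analysis: once a function is known to be strict in an argument, the compiler may force that argument early, written here explicitly with seq, without changing the observable result.

        -- sumAcc is strict in its accumulator: whenever the whole call
        -- terminates, 'acc' is eventually demanded.
        sumAcc :: Int -> [Int] -> Int
        sumAcc acc []       = acc
        sumAcc acc (x : xs) = sumAcc (acc + x) xs

        -- Knowing that, forcing the accumulator on each step is safe and
        -- prevents a chain of ((0 + x1) + x2) + ... thunks from building up.
        sumAccStrict :: Int -> [Int] -> Int
        sumAccStrict acc []       = acc
        sumAccStrict acc (x : xs) = let acc' = acc + x
                                    in acc' `seq` sumAccStrict acc' xs

        main :: IO ()
        main = print (sumAcc 0 [1 .. 1000], sumAccStrict 0 [1 .. 1000])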

    Implementation of Input/Output in a Haskell Compiler Using a Non-Deterministic Semantics

    Functional programming languages exhibit many of the properties needed for modern software development: reliability, modularisation, reusability, and verifiability. Embedding side effects, however, turned out to be hard to reconcile with these languages. After a number of more or less failed approaches, the monadic approach has by now become established for the non-strict functional language Haskell: side effects are packaged cleverly, so that at least the IO-laden parts of a Haskell program resemble the classical imperative programming style. S. Peyton Jones puts this succinctly in [Pey01, page 3] when he writes "...Haskell is the world's finest imperative programming language." On the one hand this is advantageous, because classical programming techniques can be applied; on the other hand it also means a return to well-known problems of those languages: the program code becomes harder to understand, code reusability deteriorates, and changes to the program are laborious. Moreover, monadic programming sometimes appears cumbersome. In the course of its development, therefore, a construct that leads from monadic IO to direct IO was built into almost every implementation of Haskell. Since this did not seem compatible with the established approach, it is labelled "unsafe"; the corresponding function is called "unsafePerformIO". Numerous applications use this function, and in some cases its use seems indispensable, if only for reasons of efficiency. However, the question of when it may be used, i.e. applied in such a way that it is not "unsafe", has only been answered insufficiently. The quotation in Figure 1.1 gives little insight into correct usage, particularly as discussions keep arising about whether the use of unsafePerformIO is correct in this or that special case.
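
    A minimal sketch of the contrast the thesis discusses, assuming GHC's standard libraries: ordinary monadic IO next to the widely used, and much debated, unsafePerformIO idiom of a top-level mutable reference. Whether and when such uses are actually safe is precisely the open question described above.

        import Data.IORef (IORef, newIORef, readIORef, modifyIORef')
        import System.IO.Unsafe (unsafePerformIO)

        -- Monadic IO: the side effect is visible in the type.
        greet :: String -> IO ()
        greet name = putStrLn ("Hello, " ++ name)

        -- A common unsafePerformIO idiom: a top-level mutable counter.
        -- The NOINLINE pragma is the usual folklore precaution against the
        -- reference being duplicated by the compiler.
        counter :: IORef Int
        counter = unsafePerformIO (newIORef 0)
        {-# NOINLINE counter #-}

        main :: IO ()
        main = do
          greet "world"
          modifyIORef' counter (+ 1)
          readIORef counter >>= print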

    HERMIT: Mechanized Reasoning during Compilation in the Glasgow Haskell Compiler

    It is difficult to write programs which are both correct and fast. A promising approach, functional programming, is based on the idea of using pure, mathematical functions to construct programs. With effort, it is possible to establish a connection between a specification written in a functional language, which has been proven correct, and a fast implementation, via program transformation. When practiced in the functional programming community, this style of reasoning is still typically performed by hand, by either modifying the source code or using pen-and-paper. Unfortunately, performing such semi-formal reasoning by directly modifying the source code often obfuscates the program, and pen-and-paper reasoning becomes outdated as the program changes over time. Even so, this semi-formal reasoning prevails because formal reasoning is time-consuming and requires considerable expertise. Formal reasoning tools often only work for a subset of the target language, or require programs to be implemented in a custom language for reasoning. This dissertation investigates a solution, called HERMIT, which mechanizes reasoning during compilation. HERMIT can be used to prove properties about programs written in the Haskell functional programming language, or to transform them to improve their performance. Reasoning in HERMIT proceeds in a style familiar to practitioners of pen-and-paper reasoning, and mechanization allows these techniques to be applied to real-world programs with greater confidence. HERMIT can also re-check recorded reasoning steps on subsequent compilations, enforcing a connection with the program as the program is developed. HERMIT is the first system capable of directly reasoning about the full Haskell language. The design and implementation of HERMIT, motivated both by typical reasoning tasks and by HERMIT's place in the Haskell ecosystem, is presented in detail. Three case studies investigate HERMIT's capability to reason in practice. These case studies demonstrate that semi-formal reasoning with HERMIT lowers the barrier to writing programs which are both correct and fast.
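
    The following is a hedged illustration of the kind of pen-and-paper reasoning HERMIT mechanizes, written as plain Haskell with the calculation recorded in comments rather than in HERMIT's own script language: a clear but quadratic specification of reverse is turned into a linear accumulator version by equational reasoning.

        -- Specification: clear but quadratic.
        reverseSpec :: [a] -> [a]
        reverseSpec []       = []
        reverseSpec (x : xs) = reverseSpec xs ++ [x]

        -- Derived implementation: introduce a generalised worker
        --   rev xs acc = reverseSpec xs ++ acc
        -- and calculate its clauses:
        --   rev []       acc = reverseSpec [] ++ acc        = acc
        --   rev (x : xs) acc = (reverseSpec xs ++ [x]) ++ acc
        --                    = reverseSpec xs ++ (x : acc)  = rev xs (x : acc)
        reverseFast :: [a] -> [a]
        reverseFast xs = rev xs []
          where
            rev []       acc = acc
            rev (y : ys) acc = rev ys (y : acc)

        main :: IO ()
        main = print (reverseSpec [1 .. 10 :: Int] == reverseFast [1 .. 10])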