
    Modular, higher order cardinality analysis in theory and practice

    Since the mid '80s, compiler writers for functional languages (especially lazy ones) have been writing papers about identifying and exploiting thunks and lambdas that are used only once. However, it has proved difficult to achieve both power and simplicity in practice. In this paper, we describe a new, modular analysis for a higher order language, which is both simple and effective. We prove the analysis sound with respect to a standard call-by-need semantics, and present measurements of its use in a full-scale, state-of-the-art optimising compiler. The analysis finds many single-entry thunks and one-shot lambdas and enables a number of program optimisations. This paper extends our preceding conference publication (Sergey et al. 2014, Proceedings of the 41st Annual ACM SIGPLAN-SIGACT Symposium on Principles of Programming Languages (POPL 2014), ACM, pp. 335–348) with proofs, an expanded report on the evaluation, and a detailed examination of the factors causing the loss of precision in the analysis.
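
    For readers unfamiliar with the terminology, the following Haskell sketch (illustrative only, not code from the paper; the names g, f, h and expensive are invented) shows what a one-shot lambda and a single-entry thunk look like and why knowing their cardinality helps the compiler.

        -- Illustrative sketch only (not code from the paper); `g`, `f`, `h`
        -- and `expensive` are invented names.

        -- The lambda passed to `g` is "one-shot": `g` applies its argument at
        -- most once, so work may safely be moved inside the lambda without
        -- risk of being repeated.
        g :: (Int -> Int) -> Int
        g k = k 42

        f :: Int -> Int
        f x = g (\y -> expensive x + y)   -- `expensive x` is computed at most once

        -- A "single-entry" thunk is a delayed computation demanded at most
        -- once, so the run-time system need not update (memoise) it after
        -- evaluation.
        h :: Int -> (Int, Int)
        h x = let t = expensive x         -- `t` is entered at most once here
              in (t, x + 1)

        expensive :: Int -> Int
        expensive n = sum [1 .. n]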

    Structure and Properties of Traces for Functional Programs

    The tracer Hat records in a detailed trace the computation of a program written in the lazy functional language Haskell. The trace can then be viewed in various ways to support program comprehension and debugging. The trace was named the augmented redex trail. Its structure was inspired by standard graph rewriting implementations of functional languages. Here we describe a model of the trace that captures its essential properties and allows formal reasoning. The trace is a graph constructed by graph rewriting but goes beyond simple term graphs. Although the trace is a graph whose structure is independent of any rewriting strategy, we define the trace inductively, thus giving us a powerful method for proving its properties.
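
    As a very rough illustration (not Hat's actual data structures; the constructors and field names below are invented), a trace in this spirit can be modelled as a graph whose nodes record how each value was built and which reduction created it:

        -- Hypothetical miniature of a redex-trail-style trace graph.
        type NodeId = Int

        data TraceNode
          = TApp NodeId [NodeId] (Maybe NodeId)   -- function node, argument nodes, parent redex
          | TCon String [NodeId] (Maybe NodeId)   -- constructor name, field nodes, parent redex
          | TInd NodeId (Maybe NodeId)            -- a redex linked to its result, parent redex

        type Trace = [(NodeId, TraceNode)]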

    Specialising Parsers for Queries

    Many software systems consist of data processing components that analyse large datasets to gather information and learn from them. Often, only part of the data is relevant for analysis. Data processing systems contain an initial preprocessing step that filters out the unwanted information. While efficient data analysis techniques and methodologies are accessible to non-expert programmers, data preprocessing seems to be forgotten, or worse, ignored. This is despite the real performance gains that efficient preprocessing can deliver. Implementations of the data preprocessing step traditionally have to trade modularity for performance: to achieve the former, one separates the parsing of raw data from the filtering of it, which leads to slow programs because intermediate objects are created during execution. The efficient version is a low-level implementation that interleaves parsing and querying. In this dissertation we demonstrate a principled and practical technique to convert the modular, maintainable program into its interleaved efficient counterpart. Key to achieving this objective is the removal, or deforestation, of intermediate objects in a program execution. We first show that by encoding data types using Böhm-Berarducci encodings (often referred to as Church encodings), and combining these with partial evaluation of function composition, we achieve deforestation. This allows us to implement optimisations themselves as libraries, with minimal dependence on an underlying optimising compiler. Next we illustrate the applicability of this approach to parsing and preprocessing queries. The approach is general enough to cover top-down and bottom-up parsing techniques, and deforestation of pipelines of operations on lists and streams. We finally present a set of transformation rules that, for a parser on a nested data format and a query on that structure, produce a parser specialised for the query. As a result we preserve the modularity of writing parsers and queries separately while also minimising resource usage. These transformation rules combine deforested implementations of both libraries to yield an efficient, interleaved result.
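
    To make the key idea concrete, here is a small Haskell sketch (not the dissertation's library) of a Böhm-Berarducci-style list encoding: because a list is represented by its fold, composing a producer with a consumer fuses into a single loop with no intermediate list.

        {-# LANGUAGE RankNTypes #-}

        -- Böhm-Berarducci (Church-style) encoding of lists: a list is its own fold.
        newtype ListB a = ListB { foldList :: forall r. (a -> r -> r) -> r -> r }

        nil :: ListB a
        nil = ListB (\_ z -> z)

        cons :: a -> ListB a -> ListB a
        cons x xs = ListB (\c z -> c x (foldList xs c z))

        -- Producer: an encoded enumeration; no concrete list is built.
        enumFromToB :: Int -> Int -> ListB Int
        enumFromToB lo hi = ListB (\c z ->
          let loop i | i > hi    = z
                     | otherwise = c i (loop (i + 1))
          in loop lo)

        -- Transformer and consumer expressed directly on the encoding.
        mapB :: (a -> b) -> ListB a -> ListB b
        mapB f xs = ListB (\c z -> foldList xs (c . f) z)

        sumB :: ListB Int -> Int
        sumB xs = foldList xs (+) 0

        -- The pipeline runs as one loop: the intermediate list is deforested.
        example :: Int -> Int
        example n = sumB (mapB (\x -> x * x) (enumFromToB 1 n))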

    Lazy Evaluation: From natural semantics to a machine-checked compiler transformation

    In order to solve a long-standing problem with list fusion, a new compiler transformation, 'Call Arity', is developed and implemented in the Haskell compiler GHC. It is formally proven not to degrade program performance; the proof is machine-checked using the interactive theorem prover Isabelle. To that end, a formalization of Launchbury's Natural Semantics for Lazy Evaluation is modelled in Isabelle, including a correctness and adequacy proof.
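
    A hedged sketch of the folklore example behind this list-fusion problem (illustrative only, not code from the thesis): when a left fold such as sum is written via foldr so that it can fuse, the result threads its accumulator through an extra function argument, and without an arity/cardinality analysis the compiler tends to allocate a chain of partial applications instead of a tight loop.

        -- A fusible sum: written with foldr so it can take part in foldr/build
        -- fusion; the fold builds a function that threads the accumulator.
        sumFused :: [Int] -> Int
        sumFused xs = foldr (\x k acc -> k (acc + x)) id xs 0

        -- Call Arity's job, informally: establish that the inner continuations
        -- are called with the extra `acc` argument exactly once, so they can be
        -- eta-expanded and compiled to something like the hand-written loop:
        sumLoop :: [Int] -> Int
        sumLoop = go 0
          where
            go acc []       = acc
            go acc (x : xs) = go (acc + x) xs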

    Practical implementation of a dependently typed functional programming language

    Types express a program's meaning, and checking types ensures that a program has the intended meaning. In a dependently typed programming language types are predicated on values, leading to the possibility of expressing invariants of a program's behaviour in its type. Dependent types allow us to give more detailed meanings to programs, and hence be more confident of their correctness. This thesis considers the practical implementation of a dependently typed programming language, using the Epigram notation defined by McBride and McKinna. Epigram is a high-level notation for dependently typed functional programming elaborating to a core type theory based on Luo's UTT, using Dybjer's inductive families and elimination rules to implement pattern matching. This gives us a rich framework for reasoning about programs. However, a naive implementation introduces several run-time overheads since the type system blurs the distinction between types and values; these overheads include the duplication of values, and the storage of redundant information and explicit proofs. A practical implementation of any programming language should be as efficient as possible; in this thesis we see how the apparent efficiency problems of dependently typed programming can be overcome and that in many cases the richer type information allows us to apply optimisations which are not directly available in traditional languages. I introduce three storage optimisations on inductive families; forcing, detagging and collapsing. I further introduce a compilation scheme from the core type theory to G-machine code, including a pattern matching compiler for elimination rules and a compilation scheme for efficient run-time implementation of Peano's natural numbers. We also see some low level optimisations for removal of identity functions, unused arguments and impossible case branches. As a result, we see that a dependent type theory is an effective base on which to build a feasible programming language.
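
    As a rough analogy in Haskell (the thesis works in Epigram's core type theory, not Haskell; the types below are only illustrative), length-indexed vectors show the intuition behind these optimisations: information already present in the type index need not be stored or checked again at run time.

        {-# LANGUAGE DataKinds, GADTs, KindSignatures #-}

        data Nat = Z | S Nat

        -- A length-indexed vector: the length is part of the type.
        data Vec (n :: Nat) a where
          VNil  :: Vec 'Z a
          VCons :: a -> Vec n a -> Vec ('S n) a

        -- No run-time check is needed: the index already rules out the empty
        -- case, so the impossible branch can be erased, in the spirit of the
        -- forcing/detagging/collapsing optimisations described above.
        vhead :: Vec ('S n) a -> a
        vhead (VCons x _) = x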

    Hardware Acceleration Using Functional Languages

    The aim of this thesis is to research how the functional paradigm can be used for hardware acceleration, with an emphasis on data-parallel tasks. The level of abstraction of the traditional hardware description languages, such as VHDL or Verilog, is no longer sufficient. High-level languages from the domains of software development and modeling, such as C/C++, SystemC or MATLAB, are experiencing a boom for hardware description on the algorithmic or behavioral level. Functional languages are not so commonly used, but they outperform imperative languages in verifiability, the ability to capture inherent parallelism, and the compactness of code. Data-parallel tasks are often accelerated on FPGAs, GPUs and multicore processors. In this thesis, we use a library for general-purpose GPU programs called Accelerate and extend it to produce VHDL. Accelerate is a domain-specific language embedded into Haskell with a backend for the NVIDIA CUDA platform. We use the language and its frontend, and create a new backend for high-level synthesis of circuits in VHDL.
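
    For context, this is the standard dot-product example often used to introduce Accelerate's embedded array language (shown only to illustrate the front end; the VHDL backend developed in the thesis is not shown here).

        import qualified Data.Array.Accelerate as A

        -- Dot product in Accelerate. The same `Acc` program can, in principle,
        -- be handed to different backends (the CUDA backend, or the VHDL
        -- backend this thesis adds).
        dotp :: A.Acc (A.Vector Float) -> A.Acc (A.Vector Float) -> A.Acc (A.Scalar Float)
        dotp xs ys = A.fold (+) 0 (A.zipWith (*) xs ys)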

    Deforestation for higher-order functional programs

    Functional programming languages are an ideal medium for program optimisations based on source-to-source transformation techniques. Referential transparency affords opportunities for a wide range of correctness-preserving transformations leading to potent optimisation strategies. This thesis builds on deforestation, a program transformation technique due to Wadler that removes intermediate data structures from first-order functional programs. Our contribution is to reformulate deforestation for higher-order functional programming languages, and to show that the resulting algorithm terminates given certain syntactic and typing constraints on the input. These constraints are entirely reasonable; indeed, it is possible to translate any typed program into the required syntactic form. We show how this translation can be performed automatically and optimally. The higher-order deforestation algorithm is transparent. That is, it is possible to determine by examination of the source program where the optimisation will be applicable. We also investigate the relationship of deforestation to cut-elimination, the normalisation property for the logic of sequent calculus. By combining a cut-elimination algorithm and first-order deforestation, we derive an improved higher-order deforestation algorithm. The higher-order deforestation algorithm has been implemented in the Glasgow Haskell Compiler. We describe how deforestation fits into the framework of Haskell, and design a model for the implementation that allows automatic list removal, with additional deforestation being performed on the basis of programmer-supplied annotations. Results from applying the deforestation implementation to several example Haskell programs are given.
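
    The canonical kind of example deforestation targets, shown here as a hedged, hand-derived illustration rather than the thesis algorithm's actual output:

        -- Before: the intermediate list produced by `map` is allocated and then
        -- immediately consumed by `sum`.
        sumSquares :: Int -> Int
        sumSquares n = sum (map (\x -> x * x) [1 .. n])

        -- After deforestation: a single recursive loop, no intermediate list.
        sumSquares' :: Int -> Int
        sumSquares' n = go 1
          where
            go i | i > n     = 0
                 | otherwise = i * i + go (i + 1)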

    The productivity of polymorphic stream equations and the composition of circular traversals

    This thesis has two independent parts concerned with different aspects of laziness in functional programs. The first part is a theoretical study of productivity for very restricted stream programs. In the second part we define a programming abstraction over a recursive pattern for defining circular traversals modularly. Productivity is in general undecidable. By restricting ourselves to mutually recursive polymorphic stream equations having only three basic operations, namely "head", "tail", and "cons", we aim to prove interesting properties about productivity. Still undecidable for this restricted class of programs, productivity of polymorphic stream functions is equivalent to the totality of their indexing functions, which characterise their behaviour in terms of operations on indices. We prove that our equations generate all possible polymorphic stream functions, and therefore their indexing functions are all the computable functions, whose totality problem is indeed undecidable. We then further restrict our language by reducing the number of equations and parameters, but despite those constraints the equations retain their expressiveness. In the end we establish that the productivity of even two non-mutually recursive equations on unary stream functions is undecidable, with complexity Π⁰₂. However, the productivity of a single unary equation is decidable. Circular traversals were used in the eighties as an optimisation to combine multiple traversals into a single traversal. In particular they provide more opportunities for applying deforestation techniques, since an intermediate data structure can only be eliminated if it is consumed only once. Another use of circular programs is in the implementation of attribute grammars in lazy functional languages. There is a systematic transformation to define a circular traversal equivalent to multiple traversals. Programming with this technique is not modular since the individual traversals are merged together. Some tools exist to transform programs automatically, and attribute grammars have been suggested as a way to describe the circular traversals modularly. Going to the root of the problem, we identify a recursive pattern that allows us to define circular programs modularly in a functional style. We give two successive implementations; the first one is based on algebras and has limited scope: not all circular traversals can be defined this way. We show that the recursive scheme underlying attribute grammar computation rules is essential to combine circular programs. We implement a generic recursive operation on a novel attribute grammar abstraction, using containers as a parametric generic representation of recursive datatypes. The abstraction makes attribute grammars first-class objects. Such a strongly typed implementation is novel and makes it possible to implement a high-level embedded language for defining attribute grammars, with many interesting new features promoting modularity.
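
    The classic example of a circular traversal is Bird's "repmin", which replaces every leaf of a tree by the tree's minimum in a single pass; it is included here as a hedged illustration of the style of program the abstraction targets, not as code from the thesis.

        data Tree = Leaf Int | Fork Tree Tree

        -- One traversal both computes the minimum and rebuilds the tree with
        -- that (not-yet-known) minimum at every leaf. Laziness ties the knot:
        -- `m` is consumed before it has been fully computed.
        repMin :: Tree -> Tree
        repMin t = t'
          where
            (m, t') = go t
            go (Leaf n)   = (n, Leaf m)
            go (Fork l r) = let (ml, l') = go l
                                (mr, r') = go r
                            in (min ml mr, Fork l' r')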

    A parallel functional language compiler for message-passing multicomputers

    The research presented in this thesis is about the design and implementation of Naira, a parallel, parallelising compiler for a rich, purely functional programming language. The source language of the compiler is a subset of Haskell 1.2. The front end of Naira is written entirely in the Haskell subset being compiled. Naira has been successfully parallelised and it is the largest successfully parallelised Haskell program, having achieved good absolute speedups on a network of SUN workstations. Having the same basic structure as other production compilers of functional languages, Naira's parallelisation technology should carry forward to other functional language compilers. The back end of Naira is written in C and generates parallel code in the C language which is envisioned to be run on distributed-memory machines. The code generator is based on a novel compilation scheme specified using a restricted form of Milner's π-calculus which achieves asynchronous communication. We present the first working implementation of this scheme on distributed-memory message-passing multicomputers with split-phase transactions. Simulated assessment of the generated parallel code indicates good parallel behaviour. Parallelism is introduced using explicit, advisory user annotations in the source program, and there are two major aspects of the use of annotations in the compiler. First, the front end of the compiler is parallelised so as to improve its efficiency at compilation time when it is compiling input programs. Secondly, the input programs to the compiler can themselves contain annotations based on which the compiler generates the multi-threaded parallel code. These, therefore, make Naira, unusually and uniquely, both a parallel and a parallelising compiler. We adopt a medium-grained approach to granularity where function applications form the unit of parallelism and load distribution. We have experimented with two different task distribution strategies, deterministic and random, and have also experimented with thread-based and quantum-based scheduling policies. Our experiments show that there is little efficiency difference for regular programs but the quantum-based scheduler is the best in programs with irregular parallelism. The compiler has been successfully built, parallelised and assessed using both idealised and realistic measurement tools: we obtained significant compilation speed-ups on a variety of simulated parallel architectures. The simulated results are supported by the best results obtained on real hardware for such a large program: we measured an absolute speedup of 2.5 on a network of 5 SUN workstations. The compiler has also been shown to have good parallelising potential, based on popular test programs. Results of assessing Naira's generated unoptimised parallel code are comparable to those produced by other successful parallel implementation projects.
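
    Naira's own annotation syntax is not reproduced here; as a loose analogy only, advisory parallelism annotations in the Haskell world look like GHC's par/pseq combinators from the parallel package, where an annotation marks a function application as a candidate for parallel evaluation without changing the program's meaning.

        import Control.Parallel (par, pseq)

        -- Loose analogy (GHC, not Naira): `par` sparks `fx` for possible
        -- parallel evaluation while `fy` is evaluated; the runtime may ignore
        -- the spark, so the annotation is purely advisory.
        parPair :: (a -> b) -> a -> a -> (b, b)
        parPair f x y = fx `par` (fy `pseq` (fx, fy))
          where
            fx = f x
            fy = f y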