
    Down with variables

    Techn. Report DI-PURe-05.06.01. The subject of this paper is point-free functional programming in Haskell. By this we mean writing programs using categorically-inspired combinators, algebraic data types defined as fixed points of functors, and implicit recursion through the use of type-parameterized recursion patterns. This style of programming is appropriate for program calculation (reasoning about programs equationally), but difficult to use in practice: most programmers mix the above elements with explicit recursion and explicit manipulation of arguments. In this paper we present a mechanism that allows programmers to convert classic pointwise code into point-free style, and a Haskell library that enables the direct execution of the resulting code. Together, they make it possible to use point-free either as a direct programming style or as a domain into which programs can be transformed before being subjected to further manipulation.
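
    To make the contrast concrete, here is a minimal Haskell sketch of the two styles the abstract describes (an illustrative example, not code from the paper or its library):

        -- Pointwise: the argument list is named and manipulated explicitly.
        sumSquaresPW :: [Int] -> Int
        sumSquaresPW xs = sum (map (\x -> x * x) xs)

        -- Point-free: the same function expressed as a combination of
        -- simpler functions, without ever mentioning its argument.
        sumSquaresPF :: [Int] -> Int
        sumSquaresPF = sum . map (^ 2)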

    Hfusion : a fusion tool based on acid rain plus extensions

    When constructing programs, it is common practice to compose algorithms that solve simpler problems in order to solve a more complex one. This principle adapts so well to software development because it provides a structure with which to understand, design, reuse and test programs. In functional languages, algorithms are usually connected through intermediate data structures, which carry the data from one algorithm to another. These data structures impose a load on the algorithms, which must allocate, traverse and deallocate them. To alleviate this inefficiency, automatic program transformations have been studied which produce equivalent programs that make less use of intermediate data structures. We present a set of automatic program transformation techniques based on algebraic laws known as Acid Rain. These techniques allow the removal of intermediate data structures in programs containing primitive recursive functions, mutually recursive functions and functions with multiple recursive arguments. We also provide an experimental implementation of our techniques, which allows them to be applied to user-supplied programs.
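
    The following minimal Haskell sketch shows the kind of transformation such fusion techniques automate (an illustration of deforestation in general, not HFusion's actual output):

        -- Before fusion: map builds an intermediate list that sum then consumes.
        sumDoubles :: [Int] -> Int
        sumDoubles = sum . map (* 2)

        -- After fusion: a single traversal; no intermediate list is allocated.
        sumDoubles' :: [Int] -> Int
        sumDoubles' = foldr (\x acc -> 2 * x + acc) 0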

    Point-free program calculation

    Doctoral thesis in Informatics, in the field of Foundations of Computing. Due to referential transparency, functional programming is particularly appropriate for equational reasoning. In this thesis we reason about functional programs by calculation, using a program calculus built upon two basic ingredients. The first is a set of recursion patterns that allow us to define recursive functions implicitly. These are encoded as higher-order operators that encapsulate typical forms of recursion, such as the well-known foldr operator on lists. The second is a point-free style of programming in which programs are expressed as combinations of simpler functions, without ever mentioning their arguments. The basic combinators are derived from standard categorical constructions, and are characterized by a rich set of equational laws. In order to be able to apply this calculational methodology to real lazy functional programming languages, a concrete category of partial functions and elements is used. While recursion patterns are already well accepted and a lot of research has been carried out on this topic, the same cannot be said about point-free programming. This thesis addresses precisely this component of the calculus. One of the contributions is a mechanism to translate classic pointwise code into the point-free style. This mechanism can be applied to a λ-calculus rich enough to represent the core functionality of a real functional programming language. A library that enables programming in a pure point-free style within Haskell is also presented. This library is useful for understanding the expressions resulting from the above translation, since it allows their direct execution and, where applicable, the graphical visualization of recursion trees. Another contribution of the thesis is a framework for performing point-free calculations with higher-order functions. This framework is based on the internalization of some basic combinators, and considerably shortens calculations in this setting. In order to assess the viability of mechanizing calculations, several issues are discussed concerning the decidability of equality and the implementation of fusion laws.
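
    For readers unfamiliar with the style, the basic point-free combinators mentioned above have standard Haskell counterparts in Control.Arrow; the sketch below is illustrative only and is not the library developed in the thesis:

        import Control.Arrow ((&&&), (***), (|||))

        -- split (fan-out): apply two functions to the same argument.
        average :: [Double] -> Double
        average = uncurry (/) . (sum &&& (fromIntegral . length))

        -- product: apply two functions componentwise to a pair.
        scaleAndNegate :: (Double, Double) -> (Double, Double)
        scaleAndNegate = (* 2) *** negate

        -- either (case analysis on a sum type), written without arguments.
        describe :: Either Int String -> String
        describe = show ||| id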

    Automatically deriving cost models for structured parallel processes using hylomorphisms

    This work has been partially supported by the EU Horizon 2020 grant “RePhrase: Refactoring Parallel Heterogeneous Resource-Aware Applications - a Software Engineering Approach” (ICT-644235), by COST Action IC1202 (TACLe), supported by COST (European Cooperation in Science and Technology), and by EPSRC grant EP/M027317/1 “C33: Scalable & Verified Shared Memory via Consistency-directed Cache Coherence”. Structured parallelism using nested algorithmic skeletons can greatly ease the task of writing parallel software, since common but hard-to-debug problems such as race conditions are eliminated by design. However, choosing the best combination of algorithmic skeletons to yield good parallel speedups for a specific program on a specific parallel architecture is still a difficult problem. This paper uses the unifying notion of hylomorphisms, a general recursion pattern, to make it possible to reason about both the functional correctness properties and the extra-functional timing properties of structured parallel programs. We have previously used hylomorphisms to provide a denotational semantics for skeletons, and proved that a given parallel structure for a program satisfies functional correctness. This paper expands on this theme, providing a simple operational semantics for algorithmic skeletons and a cost semantics that can be automatically derived from that operational semantics. We prove that both semantics are sound with respect to our previously defined denotational semantics. This means that we can now automatically and statically choose a provably optimal parallel structure for a given program with respect to a cost model for a (class of) parallel architecture. By deriving an automatic amortised analysis from our cost model, we can also accurately predict parallel runtimes and speedups.
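
    Hylomorphisms, the recursion pattern underlying this line of work, have a standard textbook rendering in Haskell; the sketch below (specialised to lists, and not taken from the paper) shows the idea of unfolding an intermediate structure and immediately folding it away:

        -- A hylomorphism unfolds an intermediate structure with g and
        -- immediately folds it away with f; here specialised to lists.
        hyloList :: (b -> Maybe (a, b)) -> (a -> c -> c) -> c -> b -> c
        hyloList g f e seed = case g seed of
          Nothing        -> e
          Just (a, rest) -> f a (hyloList g f e rest)

        -- factorial as a hylomorphism: unfold [n, n-1 .. 1], fold with (*).
        factorial :: Integer -> Integer
        factorial = hyloList coalg (*) 1
          where coalg 0 = Nothing
                coalg n = Just (n, n - 1)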

    Structured arrows : a type-based framework for structured parallelism

    This thesis deals with the important problem of parallelising sequential code. Despite the importance of parallelism in modern computing, writing parallel software still relies on many low-level and often error-prone approaches. These low-level approaches can lead to serious execution problems such as deadlocks and race conditions. Due to the non-deterministic behaviour of most parallel programs, testing parallel software can be both tedious and time-consuming. A way of providing guarantees of correctness for parallel programs would therefore provide significant benefit. Moreover, even if we ignore the problem of correctness, achieving good speedups is not straightforward, since this generally involves rewriting a program to consider a (possibly large) number of alternative parallelisations. This thesis argues that new languages and frameworks are needed. These languages and frameworks must not only support high-level parallel programming constructs, but must also provide predictable cost models for these parallel constructs. Moreover, they need to be built around solid, well-understood theories that ensure that: (a) changes to the source code will not change the functional behaviour of a program, and (b) the speedup obtained by making the necessary changes is predictable. Algorithmic skeletons are parametric implementations of common patterns of parallelism that provide good abstractions for creating new high-level languages, and also support frameworks for parallel computing that satisfy the correctness and predictability requirements set out above. This thesis presents a new type-based framework, based on the connection between structured parallelism and structured patterns of recursion, that provides parallel structures as type abstractions that can be used to statically parallelise a program. Specifically, this thesis exploits hylomorphisms as a single, unifying construct to represent the functional behaviour of parallel programs, and to perform correct code rewritings between alternative parallel implementations, represented as algorithmic skeletons. This thesis also defines a mechanism for deriving cost models for parallel constructs from a queue-based operational semantics. In this way, we can provide strong static guarantees about the correctness of a parallel program, while simultaneously achieving predictable speedups. “This work was supported by the University of St Andrews (School of Computer Science); by the EU FP7 grant “ParaPhrase: Parallel Patterns Adaptive Heterogeneous Multicore Systems” (n. 288570); by the EU H2020 grant “RePhrase: Refactoring Parallel Heterogeneous Resource-Aware Applications - a Software Engineering Approach” (ICT-644235), by COST Action IC1202 (TACLe), supported by COST (European Cooperation in Science and Technology); and by EPSRC grant “Discovery: Pattern Discovery and Program Shaping for Manycore Systems” (EP/P020631/1)” -- Acknowledgement
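
    As a small illustration of a parallel pattern acting as a drop-in replacement for its sequential counterpart (the general idea behind algorithmic skeletons, not the framework defined in the thesis), using the standard Control.Parallel.Strategies library:

        import Control.Parallel.Strategies (parMap, rdeepseq)

        -- Sequential pattern: a plain map, consumed by sum.
        sumOfWork :: [Int] -> Int
        sumOfWork = sum . map expensive

        -- Parallel skeleton (task farm): the same functional behaviour, but the
        -- mapped computations are evaluated in parallel.
        sumOfWorkPar :: [Int] -> Int
        sumOfWorkPar = sum . parMap rdeepseq expensive

        -- A deliberately costly placeholder workload (hypothetical example).
        expensive :: Int -> Int
        expensive n = length (show (product [1 .. toInteger n]))

    The two definitions compute the same result; only the evaluation strategy differs, which is exactly the kind of behaviour-preserving rewrite the thesis reasons about.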

    Point-free program transformation

    Functional programs are particularly well suited to formal manipulation by equational reasoning. In particular, it is straightforward to use calculational methods for program transformation. Well-known transformation techniques, like tupling or the introduction of accumulating parameters, can be implemented using calculation through the use of the fusion (or promotion) strategy. In this paper we revisit this transformation method, but, unlike most of the previous work on this subject, we adhere to a pure point-free calculus that emphasizes the advantages of equational reasoning. We focus on the accumulation strategy initially proposed by Bird, where the transformed programs are seen as higher-order folds calculated systematically from a specification. The machinery of the calculus is expanded with higher-order point-free operators that simplify the calculations. A substantial number of examples (both classic and new) are fully developed, and we introduce several shortcut optimization rules that capture typical transformation patterns. Presidência do Conselho de Ministros - POSI/ICHS/44304/2002.
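
    In its familiar pointwise form, the accumulation strategy discussed above turns a quadratic definition into a linear one by introducing an accumulating parameter; the Haskell sketch below is a standard textbook example, whereas the paper carries out the corresponding derivation point-free:

        -- Naive reverse: quadratic, because (++) retraverses the accumulated result.
        reverseNaive :: [a] -> [a]
        reverseNaive = foldr (\x ys -> ys ++ [x]) []

        -- After introducing an accumulating parameter, reverse becomes a
        -- higher-order fold and runs in linear time.
        reverseAcc :: [a] -> [a]
        reverseAcc xs = foldr (\x k acc -> k (x : acc)) id xs []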

    Pattern discovery for parallelism in functional languages

    No longer the preserve of specialist hardware, parallel devices are now ubiquitous. Pattern-based approaches to parallelism, such as algorithmic skeletons, simplify traditional low-level approaches by presenting composable high-level patterns of parallelism to the programmer. This allows optimal parallel configurations to be derived automatically, and facilitates the use of different parallel architectures. Moreover, parallel patterns can be swapped in for sequential recursion schemes, simplifying their introduction. Unfortunately, there is no guarantee that recursion schemes are present in all functional programs. Automatic pattern discovery techniques can be used to discover recursion schemes, but current approaches are limited both in the range of functions they can analyse and in the range of patterns they can discover. In this thesis, we present an approach based on program slicing techniques that facilitates the analysis of a wider range of explicitly recursive functions. We then present an approach using anti-unification that expands the range of discoverable patterns. In particular, this approach is user-extensible; i.e. patterns developed by the programmer can be discovered without significant effort. We present prototype implementations of both approaches, and evaluate them on a range of examples, including five parallel benchmarks and functions from the Haskell Prelude. We achieve maximum speedups of 32.93x on our 28-core hyperthreaded experimental machine for our parallel benchmarks, demonstrating that our approaches can discover patterns that produce good parallel speedups. Together, the approaches presented in this thesis enable the discovery of more loci of potential parallelism in pure functional programs than is currently possible. This leads to more possibilities for parallelism, and so more possibilities to take advantage of the potential performance gains that heterogeneous parallel systems offer.
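
    The kind of rewrite such discovery tools aim to find automatically can be illustrated with a hand-written Haskell example (not tool output):

        -- Explicitly recursive function, as a programmer might write it.
        total :: [Int] -> Int
        total []       = 0
        total (x : xs) = x + total xs

        -- The same function with its recursion scheme made explicit; once in
        -- this form it can be swapped for a parallel pattern such as a reduce.
        total' :: [Int] -> Int
        total' = foldr (+) 0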