5 research outputs found

    Multiple intermediate structure deforestation by shortcut fusion

    Shortcut fusion is a well-known optimization technique for functional programs. Its aim is to transform multi-pass algorithms into single-pass ones, achieving deforestation of the intermediate structures that multi-pass algorithms need to construct. Shortcut fusion has already been extended in several ways: it can be applied to monadic programs, preserving their global effects, and it can also be used to obtain circular and higher-order programs. The techniques proposed so far, however, only consider programs defined as the composition of a single producer with a single consumer. In this paper, we analyse shortcut fusion laws that deal with programs consisting of an arbitrary number of function compositions. We would like to thank the anonymous reviewers for their detailed and helpful comments. This work was partially funded by the ERDF - European Regional Development Fund through the COMPETE Programme (operational programme for competitiveness) and by national funds through the FCT - Fundação para a Ciência e a Tecnologia (Portuguese Foundation for Science and Technology) within projects FCOMP-01-0124-FEDER-020532 and FCOMP-01-0124-FEDER-022701.
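    As a rough illustration of the idea (our own sketch, not an example from the paper, which targets chains of arbitrary length), the block below shows the classic foldr/build form of shortcut fusion applied by hand to a pipeline with two intermediate lists; the names mapB, filterB and pipeline are ours.

```haskell
{-# LANGUAGE RankNTypes #-}

-- build abstracts a list producer over the list constructors (:) and [].
build :: (forall b. (a -> b -> b) -> b -> b) -> [a]
build g = g (:) []

-- Producers written in build form.
mapB :: (a -> c) -> [a] -> [c]
mapB f xs = build (\cons nil -> foldr (\x ys -> cons (f x) ys) nil xs)

filterB :: (a -> Bool) -> [a] -> [a]
filterB p xs = build (\cons nil -> foldr (\x ys -> if p x then cons x ys else ys) nil xs)

-- Multi-pass pipeline: two intermediate lists are built and immediately consumed.
pipeline :: [Int] -> Int
pipeline = sum . mapB (* 2) . filterB even

-- Applying the rule  foldr k z (build g) = g k z  twice leaves a single
-- traversal with no intermediate lists.
pipelineFused :: [Int] -> Int
pipelineFused = foldr (\x acc -> if even x then 2 * x + acc else acc) 0

main :: IO ()
main = print (pipeline [1 .. 10], pipelineFused [1 .. 10])  -- (60,60)
```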

    Shortcut fusion rules for the derivation of circular and higher-order programs

    Functional programs often combine separate parts using intermediate data structures to communicate results. Programs defined in this way are modular and easier to understand and maintain, but they suffer from inefficiencies due to the generation of those gluing data structures. To eliminate such redundant data structures, several program transformation techniques have been proposed. One of them is shortcut fusion, which has been studied in the context of both pure and monadic functional programs. In this paper, we study several extensions of shortcut fusion that derive, alternatively, circular or higher-order programs. These extensions are provided both for effect-free programs and for monadic ones. Our work results in a set of generic calculation rules that are widely applicable and whose correctness is formally established. Funded by the Fundação para a Ciência e a Tecnologia.
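    To give a flavour of what such derivations produce, here is a minimal, self-contained Haskell sketch based on the classic repmin example; it is illustrative only and is not taken from the paper, whose rules also cover monadic and higher-order variants.

```haskell
data Tree = Leaf Int | Fork Tree Tree
  deriving Show

-- Modular, two-traversal version: compute the minimum, then rebuild the tree.
repminStrict :: Tree -> Tree
repminStrict t = replace t (tmin t)
  where
    tmin (Leaf n)      = n
    tmin (Fork l r)    = min (tmin l) (tmin r)
    replace (Leaf _)   m = Leaf m
    replace (Fork l r) m = Fork (replace l m) (replace r m)

-- Circular version: one traversal returns the rebuilt tree together with the
-- minimum, and the minimum is fed back into that same call; a lazy evaluator
-- finds the right evaluation order.
repminCircular :: Tree -> Tree
repminCircular t = newTree
  where
    (newTree, theMin) = go t theMin
    go (Leaf n)   m = (Leaf m, n)
    go (Fork l r) m = (Fork l' r', min ml mr)
      where
        (l', ml) = go l m
        (r', mr) = go r m

main :: IO ()
main = print (repminCircular (Fork (Leaf 3) (Fork (Leaf 1) (Leaf 2))))
-- Fork (Leaf 1) (Fork (Leaf 1) (Leaf 1))
```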

    Design, implementation and calculation of circular programs

    Doctoral thesis in Informatics (branch of knowledge: Foundations of Computing).

    Circular programming is a powerful technique to express multiple-traversal algorithms as a single traversal function in a lazy setting. Such a (virtual) circular program may contain circular definitions, that is, arguments of function calls that are also results of those same calls. Although circular definitions always induce non-termination under a strict evaluation mechanism, they can sometimes be evaluated directly using a lazy evaluation strategy: the lazy engine is able to compute the right evaluation order, if such an order exists. Indeed, using this style of circular programming, the programmer does not have to concern him/herself with defining and scheduling the different traversal functions, since a single (traversal) function has to be defined. Moreover, because there is a single traversal function, the programmer does not have to define intermediate gluing data structures to convey values computed in one traversal and needed in later ones either. In this Thesis, we present our studies on the design, implementation and calculation of circular programs. We start by developing techniques to transform circular programs into strict ones. Then, we introduce calculation rules to obtain circular programs from strict equivalents, both in the context of pure and of monadic programming. Because we use calculation techniques, we guarantee that the resulting circular programs are equivalent to the strict ones we start with. In this Thesis, we also perform a series of benchmarks comparing the running performance of circular programs with that of the programs we are able to derive from them.

    [Portuguese abstract, translated:] The use of circular programs is a powerful technique that, in a lazy paradigm, allows solutions performing multiple traversals over one or more data structures to be implemented as a program that performs a single traversal over a single data structure. A (virtually) circular program may contain circular definitions, that is, function calls in which an argument of the call is, at the same time, a result of that same call. Although such definitions induce non-termination in a strict paradigm, in a lazy paradigm they can be evaluated directly using a strategy based on lazy evaluation: the lazy machine is able to determine the correct scheduling of the computations, if one exists. Indeed, using this programming method, the programmer does not have to define or schedule the different traversals, since only one function needs to be implemented. Moreover, because there is only one function, and because that function performs a single traversal, the programmer is also not forced to define intermediate data structures to glue the different traversals together. This Thesis presents our studies on the design, implementation and calculation of circular programs. We start by developing techniques to transform circular programs into strict ones. We then present calculation rules that allow circular programs to be obtained from equivalent strict ones, both in the context of pure functions and in the context of monadic functions. Since, in this work, we use program calculation techniques, the correctness of the proposed transformation can be guaranteed. Finally, we carry out a set of benchmarks comparing the performance of circular programs with that of the programs we derive from them.

    Fundação para a Ciência e Tecnologia (FCT), grant No. SFRH/BD/19186/200
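    As a small illustration of the circular-to-strict direction studied in the thesis, the sketch below shows one standard way to remove the circular dependency from repmin: the fed-back value becomes a function argument. The example and the names (repminHO, rebuild, theMin) are our own, not the thesis's.

```haskell
data Tree = Leaf Int | Fork Tree Tree
  deriving Show

-- One traversal computes (a) a function still waiting for the global minimum
-- and (b) the minimum itself; applying the first to the second needs no
-- circular definition, so the program also works under strict evaluation.
repminHO :: Tree -> Tree
repminHO t = rebuild theMin
  where
    (rebuild, theMin) = go t
    go (Leaf n)   = (Leaf, n)
    go (Fork l r) = (\m -> Fork (fl m) (fr m), min ml mr)
      where
        (fl, ml) = go l
        (fr, mr) = go r

main :: IO ()
main = print (repminHO (Fork (Leaf 3) (Fork (Leaf 1) (Leaf 2))))
-- Fork (Leaf 1) (Fork (Leaf 1) (Leaf 1))
```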

    Fusión en presencia de acumuladores (Fusion in the presence of accumulators)

    In functional programming it is common to write programs as the composition of relatively simple functions. This has all the advantages associated with a modular programming style. However, the resulting programs may lose efficiency due to the generation of the intermediate data structures used as communication between the functions involved in the compositions. There is a set of program transformation techniques, known as fusion or deforestation, that aim at eliminating these intermediate structures and make it possible to obtain more efficient definitions, equivalent to the original ones. The goal is to let the programmer keep writing programs compositionally while generating equivalent, more efficient code, which is what actually gets executed. This work considers the particular case in which the functions involved build their results using an additional accumulating parameter. Classic fusion techniques are usually not effective for such functions, since they fail to eliminate the intermediate structures when these are generated in the accumulating parameters. As a first approach to a solution, we revisit the afold operator proposed by Pardo, which allows functions with accumulators to be defined by structural recursion over polynomial types. We present a modification of the operator that enlarges the class of functions it can represent. The thesis then concentrates on the proposal of a new fusion method, based on "Short-Cut Fusion", that handles the case in which the producer function generates its results using accumulating parameters. We show the application of the new method to Haskell programs and carry out a series of experiments comparing the running time and memory usage of the original and transformed versions of a set of example programs.
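    The sketch below (our own illustration, not the afold operator or the rule proposed in the thesis) shows the kind of producer that classic techniques struggle with: a tree-flattening function whose output list is built through an accumulating parameter, together with a hand-fused version in which the consumer's operations replace the list constructors inside the accumulator.

```haskell
data Tree = Leaf Int | Fork Tree Tree

-- Producer with an accumulating parameter: the intermediate list is
-- assembled in 'acc', out of reach of the classic foldr/build rule.
flatten :: Tree -> [Int]
flatten t = go t []
  where
    go (Leaf n)   acc = n : acc
    go (Fork l r) acc = go l (go r acc)

-- Modular, two-pass pipeline with an intermediate list.
sumLeaves :: Tree -> Int
sumLeaves = sum . flatten

-- Hand-fused version: the consumer's operations ((+) and 0) take the place
-- of the list constructors ((:) and []) inside the accumulating producer,
-- so no list is ever built.
sumLeavesFused :: Tree -> Int
sumLeavesFused t = go t 0
  where
    go (Leaf n)   acc = n + acc
    go (Fork l r) acc = go l (go r acc)

main :: IO ()
main = print (sumLeaves t, sumLeavesFused t)  -- (6,6)
  where t = Fork (Leaf 1) (Fork (Leaf 2) (Leaf 3))
```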

    Type-Inference Based Deforestation of Functional Programs

    In lazy functional programming, modularity is often achieved by using intermediate data structures to combine separate parts of a program. Each intermediate data structure is produced by one part and consumed by another one. Deforestation optimises a functional program by transforming it into a program which does not produce such intermediate data structures.

    In this thesis we present a new method for deforestation, which combines a known method, short cut deforestation, with a new analysis based on type inference. Short cut deforestation eliminates an intermediate list by a single, local transformation. In return, it expects both producer and consumer of the intermediate list in a certain form. Whereas the required form of the consumer is generally considered desirable in a well-structured program anyway, the required form of the producer is only a crutch to enable deforestation. Hence only the list-producing functions of the standard libraries were defined in the desired form, and short cut deforestation has been confined to compositions of these functions. Here, we present an algorithm which transforms an arbitrary producer into the required form. Starting from the observation that short cut deforestation is based on a parametricity theorem of the second-order typed lambda-calculus, we show how the construction of the required form can be reduced to a type inference problem. Typability for the second-order typed lambda-calculus is undecidable, but we only need to solve a partial type inference problem, and for this problem we develop an algorithm based on the well-known Hindley-Milner type inference algorithm.

    The transformation of a producer often requires inlining of some function definitions; type inference even indicates which function definitions need to be inlined. However, only limited inlining across module boundaries is practically feasible. Therefore, we extend the previously developed algorithm to split a function definition into a worker definition and a wrapper definition, so that we only need to inline the small wrapper definition, which transfers all the information required for deforestation. The flexibility of type inference allows us to remove intermediate lists which original short cut deforestation cannot remove, even with hand-crafted producers. In contrast to most previous work on deforestation, we give a detailed proof of completeness and semantic correctness of our transformation.
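    As an informal illustration of the producer transformation described here (not the thesis's algorithm, which derives this form automatically via type inference), the sketch below abstracts a hand-written producer over the list constructors and splits it into a worker and a small wrapper so that a foldr consumer can fuse with it; all names are ours.

```haskell
{-# LANGUAGE RankNTypes #-}

-- build abstracts a list producer over the list constructors.
build :: (forall b. (a -> b -> b) -> b -> b) -> [a]
build g = g (:) []

-- An arbitrary producer, not written with build.
upto :: Int -> Int -> [Int]
upto m n
  | m > n     = []
  | otherwise = m : upto (m + 1) n

-- Worker: the producer with (:) and [] abstracted out; finding exactly these
-- occurrences to abstract is what the type-inference analysis automates.
uptoWorker :: (Int -> b -> b) -> b -> Int -> Int -> b
uptoWorker cons nil m n
  | m > n     = nil
  | otherwise = cons m (uptoWorker cons nil (m + 1) n)

-- Wrapper: small, inlinable definition that restores the original interface.
uptoB :: Int -> Int -> [Int]
uptoB m n = build (\cons nil -> uptoWorker cons nil m n)

-- A foldr-style consumer now fuses:  sum (uptoB 1 100)  rewrites, via
-- foldr k z (build g) = g k z,  to  uptoWorker (+) 0 1 100  with no list.
main :: IO ()
main = print (sum (upto 1 100), uptoWorker (+) 0 1 100)  -- (5050,5050)
```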