15 research outputs found

    Postcondition-preserving fusion of postorder tree transformations

    Tree transformations are commonly used in applications such as program rewriting in compilers. Using a series of simple transformations to build a more complex system can make the resulting software easier to understand, maintain, and reason about. Fusion strategies for combining such successive tree transformations promote this modularity, whilst mitigating the performance impact from increased numbers of tree traversals. However, it is important to ensure that fused transformations still perform their intended tasks. Existing approaches to fusing tree transformations tend to take an informal approach to soundness, or are too restrictive to handle the kinds of transformations needed in a compiler. We use postconditions to define a more useful formal notion of successful fusion, namely postcondition-preserving fusion. We also present criteria that are sufficient to ensure postcondition-preservation and facilitate modular reasoning about the success of fusion.
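
    The flavour of the problem is easy to see on a toy example. The following is a minimal Haskell sketch (rewrites and names of my own invention, not the paper's formalism): two node-local rewrites are each applied in a bottom-up (postorder) traversal, the two traversals are fused into one, and a postcondition of the first pass can be checked on the result.

```haskell
data Expr = Lit Int | Add Expr Expr | Mul Expr Expr
  deriving (Eq, Show)

-- One bottom-up (postorder) traversal applying a node-local rewrite.
postorder :: (Expr -> Expr) -> Expr -> Expr
postorder f = go
  where
    go (Add l r) = f (Add (go l) (go r))
    go (Mul l r) = f (Mul (go l) (go r))
    go e         = f e

foldAdd :: Expr -> Expr               -- fold additions of literals
foldAdd (Add (Lit a) (Lit b)) = Lit (a + b)
foldAdd e                     = e

dropMulOne :: Expr -> Expr            -- drop multiplications by 1
dropMulOne (Mul (Lit 1) e) = e
dropMulOne (Mul e (Lit 1)) = e
dropMulOne e               = e

-- Modular version: two separate traversals, dropMulOne first.
twoPasses :: Expr -> Expr
twoPasses = postorder foldAdd . postorder dropMulOne

-- Fused version: one traversal applying both rewrites at each node.
fused :: Expr -> Expr
fused = postorder (foldAdd . dropMulOne)

-- Postcondition of the foldAdd pass: no addition of two literals remains.
-- It holds for both twoPasses and fused, but composing the unfused passes
-- the other way round (postorder dropMulOne . postorder foldAdd) breaks
-- it, e.g. on Add (Lit 2) (Mul (Lit 3) (Lit 1)); this is the kind of
-- property the paper's criteria are designed to track.
noLitAdd :: Expr -> Bool
noLitAdd (Add (Lit _) (Lit _)) = False
noLitAdd (Add l r)             = noLitAdd l && noLitAdd r
noLitAdd (Mul l r)             = noLitAdd l && noLitAdd r
noLitAdd _                     = True
```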

    Flowchart Programs, Regular Expressions, and Decidability of Polynomial Growth-Rate

    We present a new method for inferring complexity properties for a class of programs in the form of flowcharts annotated with loop information. Specifically, our method can (soundly and completely) decide if computed values are polynomially bounded as a function of the input; and similarly for the running time. Such complexity properties are undecidable for a Turing-complete programming language, and a common work-around in program analysis is to settle for sound but incomplete solutions. In contrast, we consider a class of programs that is Turing-incomplete, but strong enough to include several challenges for this kind of analysis. For a related language that has well-structured syntax, similar to Meyer and Ritchie's LOOP programs, the problem has been previously proved to be decidable. The analysis relied on the compositionality of programs, hence the challenge in obtaining similar results for flowchart programs with arbitrary control-flow graphs. Our answer to the challenge is twofold: first, we propose a class of loop-annotated flowcharts, which is more general than the class of flowcharts that directly represent structured programs; second, we present a technique to reuse the ideas from the work on structured programs and apply them to such flowcharts. The technique is inspired by the classic translation of non-deterministic automata to regular expressions, but we obviate the exponential cost of constructing such an expression, obtaining a polynomial-time analysis. These ideas may well be applicable to other analysis problems. (Comment: In Proceedings VPT 2016, arXiv:1607.0183)
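
    The automaton-to-regular-expression idea the authors adapt is the textbook state-elimination construction. Below is a small Haskell sketch of that classic construction (my illustration, with invented names; note that it materialises the expression, which is exactly the exponential cost the paper shows how to avoid):

```haskell
import Data.List (nub)

-- Regular expressions over edge labels (e.g. basic blocks).
data Re a = Eps | Sym a | Alt (Re a) (Re a) | Seq (Re a) (Re a) | Star (Re a)
  deriving Show

type Edge a = ((Int, Int), Re a)  -- (source, target) with a regex label

-- Remove node k, bridging every path i -> k -> j as: in . (self)* . out.
eliminate :: Int -> [Edge a] -> [Edge a]
eliminate k es = rest ++ bridged
  where
    rest = [ e | e@((i, j), _) <- es, i /= k, j /= k ]
    ins  = [ (i, r) | ((i, j), r) <- es, j == k, i /= k ]
    outs = [ (j, r) | ((i, j), r) <- es, i == k, j /= k ]
    loop = case [ r | ((i, j), r) <- es, i == k, j == k ] of
             []       -> Eps
             (r : rs) -> Star (foldl Alt r rs)
    bridged = [ ((i, j), Seq rIn (Seq loop rOut))
              | (i, rIn) <- ins, (j, rOut) <- outs ]

-- All entry-to-exit paths as one regex (assumes, as in the textbook
-- construction, that entry has no incoming and exit no outgoing edges).
pathsRe :: Int -> Int -> [Edge a] -> Re a
pathsRe entry exit es0 =
  case [ r | ((i, j), r) <- final, i == entry, j == exit ] of
    []       -> Eps  -- no path; a fuller Re type would include Empty
    (r : rs) -> foldl Alt r rs
  where
    inner = nub [ n | ((i, j), _) <- es0, n <- [i, j]
                    , n /= entry, n /= exit ]
    final = foldl (flip eliminate) es0 inner

-- Example: edges 0->1 "a", 1->1 "b", 1->2 "c" describe the paths a b* c:
-- pathsRe 0 2 [((0,1), Sym "a"), ((1,1), Sym "b"), ((1,2), Sym "c")]
--   ==> Seq (Sym "a") (Seq (Star (Sym "b")) (Sym "c"))
```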

    Eliminating intermediate lists in pH using local transformations

    Thesis (M.Eng.), Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1994. Includes bibliographical references (p. 68-69). By Jan-Willem Maessen.

    Towards unifying partial evaluation, deforestation, supercompilation, and GPC

    HFusion: a fusion tool based on Acid Rain plus extensions

    When constructing programs, it is common practice to compose algorithms that solve simpler problems in order to solve a more complex one. This principle adapts so well to software development because it provides a structure with which to understand, design, reuse and test programs. In functional languages, algorithms are usually connected through intermediate data structures, which carry the data from one algorithm to another. These data structures impose a load on the algorithms, which must allocate, traverse and deallocate them. To alleviate this inefficiency, automatic program transformations have been studied which produce equivalent programs that make less use of intermediate data structures. We present a set of automatic program transformation techniques based on algebraic laws known as Acid Rain. These techniques can remove intermediate data structures in programs containing primitive recursive functions, mutually recursive functions and functions with multiple recursive arguments. We also provide an experimental implementation of our techniques which allows them to be applied to user-supplied programs.
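
    The flavour of Acid Rain-style laws is easy to show with the simpler foldr/build rule for lists (the rule behind GHC's shortcut fusion); HFusion itself handles the more general classes of functions listed above. A minimal, self-contained Haskell sketch with illustrative names:

```haskell
{-# LANGUAGE RankNTypes #-}

-- The producer abstracts over the list constructors, so the law
--   foldr c n (build g) = g c n
-- lets the consumer's operators be plugged in directly and the
-- intermediate list is never built.
build :: (forall b. (a -> b -> b) -> b -> b) -> [a]
build g = g (:) []

-- Producer: the list [1..n], written in build form.
upTo :: Int -> (Int -> b -> b) -> b -> b
upTo n cons nil = go 1
  where
    go i | i > n     = nil
         | otherwise = cons i (go (i + 1))

-- Unfused: materialises the intermediate list, then folds it away.
sumUpTo :: Int -> Int
sumUpTo n = foldr (+) 0 (build (upTo n))

-- Fused by the law above: no cons cell is ever allocated.
sumUpTo' :: Int -> Int
sumUpTo' n = upTo n (+) 0
```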

    Declassification: transforming Java programs to remove intermediate classes

    Computer applications are increasingly being written in object-oriented languages like Java and C++. Object-oriented programming encourages the use of small methods and classes. However, this style of programming introduces much overhead, as each method call results in a dynamic dispatch and each field access becomes a pointer dereference to the heap-allocated object. Many of the classes in these programs are included to provide structure rather than to act as reusable code, and can therefore be regarded as intermediate. We have therefore developed an optimisation technique, called declassification, which transforms Java programs into equivalent programs from which these intermediate classes have been removed. The optimisation technique developed involves two phases, analysis and transformation. The analysis involves the identification of intermediate classes for removal. A suitable class is defined to be a class which is used exactly once within a program. Such classes are identified by this analysis. The subsequent transformation involves eliminating these intermediate classes from the program. This involves inlining the fields and methods of each intermediate class within the enclosing class which uses it. In theory, declassification reduces the number of classes which are instantiated and used in a program during its execution. This should reduce the overhead of object creation and maintenance, as child objects are no longer created, and it should also reduce the number of field accesses and dynamic dispatches required by a program to execute. An important feature of the declassification technique, as opposed to other similar techniques, is that it guarantees there will be no increase in code size. An empirical study was conducted on a number of reasonably-sized Java programs and it was found that very few suitable classes were identified for inlining. The results showed that the declassification technique had a small influence on memory consumption and a negligible influence on the run-time performance of these programs. It is therefore concluded that the declassification technique was not successful in optimising the test programs, but further extensions to this technique, combined with an intrinsically object-oriented set of test programs, could greatly improve its success.
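
    The transformation itself operates on Java classes, but its basic shape can be suggested in the functional notation used elsewhere on this page: a type used in exactly one place has its fields inlined into the enclosing type. A rough Haskell analogue only (invented names; the real transformation also inlines methods and must preserve Java's dispatch semantics):

```haskell
-- Before: Point is an intermediate type used in exactly one place
-- (inside Circle); each field access costs an extra indirection.
data Point  = Point  { px :: Double, py :: Double }
data Circle = Circle { centre :: Point, radius :: Double }

distToCentre :: Circle -> Double
distToCentre c = sqrt (px (centre c) ^ 2 + py (centre c) ^ 2)

-- After declassification-style flattening: the fields are inlined
-- into the enclosing type, removing the intermediate object.
data Circle' = Circle' { cx, cy :: Double, radius' :: Double }

distToCentre' :: Circle' -> Double
distToCentre' c = sqrt (cx c ^ 2 + cy c ^ 2)
```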

    Compile-Time Optimisation of Store Usage in Lazy Functional Programs

    Functional languages offer a number of advantages over their imperative counterparts. However, a substantial amount of the time spent on processing functional programs is due to the large amount of storage management which must be performed. Two apparent reasons for this are that the programmer is prevented from including explicit storage management operations in programs which have a purely functional semantics, and that more readable programs are often far from optimal in their use of storage. Correspondingly, two alternative approaches to the optimisation of store usage at compile-time are presented in this thesis. The first approach is called compile-time garbage collection. This approach involves determining at compile-time which cells are no longer required for the evaluation of a program, and making these cells available for further use. This overcomes the problem of a programmer not being able to indicate explicitly that a store cell can be made available for further use. Three different methods for performing compile-time garbage collection are presented in this thesis: compile-time garbage marking, explicit deallocation and destructive allocation. Of these three methods, it is found that destructive allocation is the only method which is of practical use. The second approach to the optimisation of store usage is called compile-time garbage avoidance. This approach involves transforming programs into semantically equivalent programs which produce less garbage at compile-time. This attempts to overcome the problem of more readable programs being far from optimal in their use of storage. In this thesis, it is shown how to guarantee that the process of compile-time garbage avoidance will terminate. Both of the described approaches to the optimisation of store usage make use of the information obtained by usage counting analysis. This involves counting the number of times each value in a program is used. In this thesis, a reference semantics is defined against which the correctness of usage counting analyses can be proved. A usage counting analysis is then defined and proved to be correct with respect to this reference semantics. The information obtained by this analysis is used to annotate programs for compile-time garbage collection, and to guide the transformation when compile-time garbage avoidance is performed. It is found that compile-time garbage avoidance produces greater increases in efficiency than compile-time garbage collection, but much of the garbage which can be collected by compile-time garbage collection cannot be avoided at compile-time. The two approaches are therefore complementary, and the expressions resulting from compile-time garbage avoidance transformations can be annotated for compile-time garbage collection to further optimise the use of storage.
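
    The usage-counting information that both approaches rely on can be sketched very simply. Below is a toy Haskell version over a minimal expression language (my illustration; the thesis defines the analysis for a lazy functional language and proves it correct against a reference semantics):

```haskell
import qualified Data.Map.Strict as M

data Expr
  = Var String
  | Lit Int
  | Add Expr Expr
  | Let String Expr Expr
  deriving Show

-- Syntactic upper bound on how many times each variable is used.
usage :: Expr -> M.Map String Int
usage (Var x)     = M.singleton x 1
usage (Lit _)     = M.empty
usage (Add a b)   = M.unionWith (+) (usage a) (usage b)
usage (Let x e b) = M.unionWith (+) (usage e) (M.delete x (usage b))

-- A let-bound value used at most once is a candidate for destructive
-- allocation: its cell can be reused at the (single) use site.
singleUse :: String -> Expr -> Bool
singleUse x body = M.findWithDefault 0 x (usage body) <= 1
```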

    Algebraic fusion of functions with an accumulating parameter and its improvement
