
    Finite Countermodel Based Verification for Program Transformation (A Case Study)

    Both automatic program verification and program transformation are based on program analysis. In the past decade, a number of approaches using various automatic general-purpose program transformation techniques (partial deduction, specialization, supercompilation) for the verification of unreachability properties of computing systems have been introduced and demonstrated. On the other hand, semantics-based unfold-fold program transformation methods pose diverse kinds of reachability tasks of their own and try to solve them, aiming to improve the semantics tree of the program being transformed. This means that general-purpose verification methods may in turn be used to strengthen program transformation techniques. This paper considers how the finite-countermodel method for safety verification might be used in Turchin's supercompilation. We extract a number of supercompilation sub-algorithms that try to solve reachability problems and demonstrate the use of an external countermodel finder for solving some of them. (In Proceedings VPT 2015, arXiv:1512.0221)
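    As an illustration of the idea behind finite countermodels: a safety (unreachability) property holds if there is a finite abstraction of the system in which no bad state is reachable from the initial states. The Haskell sketch below makes that check concrete for an explicitly given finite model; the names (FiniteModel, reachable, certifiesSafety) and the toy counter example are hypothetical, not taken from the paper, which delegates the search for such models to an external countermodel finder.

        import qualified Data.Set as Set

        -- A finite abstraction of a transition system: its states, step relation,
        -- initial states, and the "bad" states whose unreachability we want to certify.
        data FiniteModel s = FiniteModel
          { states  :: [s]
          , step    :: s -> [s]
          , initial :: [s]
          , bad     :: [s]
          }

        -- Abstract states reachable from the initial ones (fixpoint of the step relation).
        reachable :: Ord s => FiniteModel s -> Set.Set s
        reachable m = go (Set.fromList (initial m))
          where
            go seen
              | Set.null new = seen
              | otherwise    = go (Set.union seen new)
              where
                new = Set.fromList (concatMap (step m) (Set.toList seen))
                        `Set.difference` seen

        -- If no bad state is reachable in the finite abstraction, the abstraction
        -- certifies the safety (unreachability) property of the original system.
        certifiesSafety :: Ord s => FiniteModel s -> Bool
        certifiesSafety m =
          Set.null (Set.fromList (bad m) `Set.intersection` reachable m)

        -- Hypothetical toy example: counting modulo 3; state 2 is unreachable from 0
        -- when every step adds 3.
        example :: FiniteModel Int
        example = FiniteModel
          { states  = [0, 1, 2]
          , step    = \s -> [(s + 3) `mod` 3]
          , initial = [0]
          , bad     = [2]
          }

        main :: IO ()
        main = print (certifiesSafety example)  -- True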

    FliPpr: A Prettier Invertible Printing System

    When implementing a programming language, we often write a parser and a pretty-printer. However, manually writing both programs is not only tedious but also error-prone: a pretty-printed result may fail to be parsed back correctly. In this paper, we propose FliPpr, a program transformation system that uses program inversion to produce a CFG parser from a pretty-printer. This novel approach offers fine-grained control over pretty-printing and easy reuse of existing efficient pretty-printer and parser implementations.
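    A minimal sketch of the consistency obligation that FliPpr discharges automatically: a pretty-printer and a parser that agree, so that parsing a pretty-printed term yields the term back. The expression type, printer, and hand-written parser below are illustrative assumptions, not the FliPpr surface language or API, in which only the printer would be written and the parser derived by inversion.

        -- A tiny expression language.
        data Expr = Num Int | Add Expr Expr
          deriving (Show, Eq)

        -- The pretty-printer; in FliPpr this is the only artefact the programmer writes.
        pretty :: Expr -> String
        pretty (Num n)   = show n
        pretty (Add l r) = "(" ++ pretty l ++ " + " ++ pretty r ++ ")"

        -- A naive recursive-descent parser matching the printer's grammar; FliPpr
        -- would derive a CFG parser like this by inverting the printer.
        parse :: String -> Maybe Expr
        parse s = case parseExpr s of
          Just (e, "") -> Just e
          _            -> Nothing

        parseExpr :: String -> Maybe (Expr, String)
        parseExpr ('(' : rest) = do
          (l, rest1) <- parseExpr rest
          rest2      <- expect " + " rest1
          (r, rest3) <- parseExpr rest2
          rest4      <- expect ")" rest3
          Just (Add l r, rest4)
        parseExpr s =
          case span (`elem` ['0' .. '9']) s of
            ("", _)      -> Nothing
            (digits, cs) -> Just (Num (read digits), cs)

        -- Consume an expected literal prefix.
        expect :: String -> String -> Maybe String
        expect p s
          | take (length p) s == p = Just (drop (length p) s)
          | otherwise              = Nothing

        -- Round trip: parse (pretty e) == Just e.
        main :: IO ()
        main = print (parse (pretty (Add (Num 1) (Num 2))) == Just (Add (Num 1) (Num 2)))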

    Types and verification for infinite state systems

    Server-like or non-terminating programs are central to modern computing. It is a common requirement that such programs always be available to produce a behaviour. One way of showing such availability is to endow a type theory with constraints demonstrating that a program will always produce some behaviour or halt; such a constraint is often called productivity. We introduce a type theory that can be used to type-check a polymorphic functional programming language similar to a fragment of the Haskell programming language, allowing constraints to be placed on program terms such that they will not type-check unless they are productive. We show that, using program transformation techniques, some programs that are not provably productive in our type theory can be restructured into programs that are manifestly productive; this allows greater programmer flexibility in the specification of such programs. We have mechanised some of these important results in the proof assistant Coq, and we have written a program transformation system for this term language in the programming language Haskell.
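    To make the productivity constraint concrete, the Haskell sketch below contrasts two definitions of the same infinite stream: one whose recursive calls hide under zipWith, so a syntactic guardedness check cannot see that output keeps being produced, and a transformed version in which every unfolding emits a constructor before recursing. The example and names are illustrative, not the term language of the thesis.

        -- Not manifestly productive: the recursive occurrences sit under zipWith,
        -- so a guardedness-based productivity checker cannot accept this directly.
        fibs :: [Integer]
        fibs = 0 : 1 : zipWith (+) fibs (tail fibs)

        -- After a program transformation: each step emits a cons cell before the
        -- recursive call, making productivity manifest to the checker.
        fibsFrom :: Integer -> Integer -> [Integer]
        fibsFrom a b = a : fibsFrom b (a + b)

        fibs' :: [Integer]
        fibs' = fibsFrom 0 1

        main :: IO ()
        main = print (take 10 fibs == take 10 fibs')  -- True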

    Refactoring pattern matching

    Defining functions by pattern matching over the arguments is advantageous for understanding and reasoning, but it tends to expose the implementation of a datatype. Significant effort has been invested in tackling this loss of modularity; however, decoupling patterns from concrete representations while maintaining soundness of reasoning has been a challenge. Inspired by the development of invertible programming, we propose an approach to program refactoring based on a right-invertible language, rinv, in which every function has a right (or pre-) inverse. We show how this new design permits a smooth incremental transition from programs with algebraic datatypes and pattern matching to ones with proper encapsulation, while maintaining simple and sound reasoning.
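    A small Haskell sketch of the refactoring style this enables: clients pattern-match on a view of an encapsulated type, and the view's constructor function is a right inverse of the deconstructor, so reasoning on views transfers to the hidden representation. The names Seq, View, toView, and fromView are hypothetical illustrations, not the rinv language itself.

        -- An encapsulated sequence type whose representation we want to hide.
        newtype Seq a = Seq { unSeq :: [a] }

        -- The view clients pattern-match against instead of the representation.
        data View a = Empty | Cons a (Seq a)

        -- Deconstruct the abstract type into the view ...
        toView :: Seq a -> View a
        toView (Seq [])       = Empty
        toView (Seq (x : xs)) = Cons x (Seq xs)

        -- ... and its right inverse: toView (fromView v) gives v back.
        fromView :: View a -> Seq a
        fromView Empty       = Seq []
        fromView (Cons x xs) = Seq (x : unSeq xs)

        -- A client function written by pattern matching on the view only.
        len :: Seq a -> Int
        len s = case toView s of
          Empty     -> 0
          Cons _ xs -> 1 + len xs

        main :: IO ()
        main = print (len (Seq [1, 2, 3 :: Int]))  -- 3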

    Cheap deforestation for non-strict functional languages

    In functional languages, intermediate data structures are often used as glue to connect separate parts of a program. Deforestation is the process of automatically removing intermediate data structures. In this thesis we present and analyse a new approach to deforestation that is both practical and general. We analyse in detail the problem of list removal rather than the more general problem of arbitrary data structure removal; this more limited scope allows a complete evaluation of the pragmatic aspects of using our deforestation technology. We have implemented our list deforestation algorithm in the Glasgow Haskell Compiler, and this implementation has provided practical feedback. One important conclusion is that a new analysis is required to infer function arities and the linearity of lambda abstractions; this analysis renders the basic deforestation algorithm far more effective. We give a detailed assessment of our implementation of deforestation, measuring its effectiveness on a suite of real application programs and observing the costs of the deforestation algorithm.
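    A hand-worked illustration of what deforestation aims to achieve, assuming nothing about the thesis's actual algorithm: the modular pipeline below builds an intermediate list that the deforested version avoids by computing the same result in a single loop.

        -- Modular pipeline: map produces an intermediate list that sum then consumes.
        sumSquares :: Int -> Int
        sumSquares n = sum (map (\x -> x * x) [1 .. n])

        -- The shape a deforestation pass aims for: the same computation as one loop,
        -- with no intermediate list allocated. Written by hand here for illustration.
        sumSquares' :: Int -> Int
        sumSquares' n = go 1 0
          where
            go i acc
              | i > n     = acc
              | otherwise = go (i + 1) (acc + i * i)

        main :: IO ()
        main = print (sumSquares 100 == sumSquares' 100)  -- True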

    Catamorphism-based program transformations for non-strict functional languages

    In functional languages, intermediate data structures are often used as glue to connect separate parts of a program. These intermediate data structures are useful because they allow modularity, but they are also a cause of inefficiency: each element needs to be allocated, examined, and deallocated. Warm fusion is a program transformation technique which aims to eliminate intermediate data structures. Functions in a program are first transformed into the so-called build-cata form, then fused via a one-step rewrite rule, the cata-build rule. In the process of transforming to build-cata form, we attempt to replace explicit recursion with a fixed pattern of recursion (a catamorphism). We analyse in detail the problem of removing (possibly mutually recursive sets of) polynomial datatypes. We have implemented the warm fusion method in the Glasgow Haskell Compiler, which has allowed practical feedback. One important conclusion is that catamorphisms, and fusion in general, deserve a more prominent role in the compilation process. We give detailed measurements of our implementation on a suite of real application programs.
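    A minimal list-only sketch of the build-cata form and the cata-build rewrite rule; the thesis handles arbitrary (possibly mutually recursive) polynomial datatypes, so the definitions below are simplifying assumptions for illustration.

        {-# LANGUAGE RankNTypes #-}

        -- The list catamorphism: replaces (:) and [] by the supplied operations.
        cata :: (a -> b -> b) -> b -> [a] -> b
        cata c n = go
          where
            go []       = n
            go (x : xs) = c x (go xs)

        -- build abstracts a list over its constructors.
        build :: (forall b. (a -> b -> b) -> b -> b) -> [a]
        build g = g (:) []

        -- The cata-build rule:  cata c n (build g)  ==>  g c n
        -- Fusing a producer written with build against a consumer written with cata
        -- eliminates the intermediate list in a single rewrite step.

        -- Example: map written in build-cata form, sum written as a catamorphism.
        mapB :: (a -> b) -> [a] -> [b]
        mapB f xs = build (\c n -> cata (c . f) n xs)

        sumC :: [Int] -> Int
        sumC = cata (+) 0

        -- sumC (mapB (+ 1) xs) fuses to cata ((+) . (+ 1)) 0 xs: no intermediate list.
        main :: IO ()
        main = print (sumC (mapB (+ 1) [1, 2, 3]))  -- 9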

    Transformation of functional programs for identification of parallel skeletons

    Hardware is becoming increasingly parallel. Thus, it is essential to identify and exploit inherent parallelism in a given program to effectively utilise the available computing power. However, parallel programming is tedious and error-prone when done by hand, and is very difficult for a compiler to do automatically to the desired level. One possible approach is to use transformation techniques to automatically identify and explicitly specify parallel computations in a given program using parallelisable algorithmic skeletons. Existing methods for systematic derivation of parallel programs or parallel skeleton identification allow automation, but they place constraints on the programs to which they are applicable, require manual derivation of operators with specific properties for parallel execution, or allow the use of inefficient intermediate data structures in the parallel programs. In this thesis, we present a program transformation method that addresses these issues and has the following attributes: (1) it reduces the number of inefficient data structures used in the parallel program; (2) it transforms a program into a form that is better suited to identifying parallel skeletons; (3) it automatically identifies skeletons that can be efficiently executed using their parallel implementations. Our transformation method does not place restrictions on the program to be parallelised, and allows automatic verification of skeleton operator properties to enable parallel execution. To evaluate the performance of our transformation method, we use a set of benchmark programs; the parallel version of each program produced by our method is compared with other versions of the program, including parallel versions derived by hand. This has allowed us to evaluate the strengths and weaknesses of the proposed transformation method. The results demonstrate improvements in the efficiency of the parallel programs produced in some examples, and also highlight the role of some intermediate data structures required for parallelisation in others.
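    To illustrate the end result of skeleton identification, the sketch below takes a computation recognisable as an instance of the map skeleton and runs it with a parallel implementation from the parallel package (Control.Parallel.Strategies); compile with -threaded. The per-element work function and the skeleton instance are hypothetical examples, not output of the thesis's transformation, which identifies such instances automatically.

        import Control.Parallel.Strategies (parMap, rpar)

        -- A per-element computation heavy enough to be worth parallelising.
        collatzLength :: Int -> Int
        collatzLength = go 0
          where
            go acc 1 = acc
            go acc n
              | even n    = go (acc + 1) (n `div` 2)
              | otherwise = go (acc + 1) (3 * n + 1)

        -- Sequential instance of the map skeleton ...
        lengthsSeq :: [Int] -> [Int]
        lengthsSeq = map collatzLength

        -- ... and the same skeleton executed with a parallel implementation.
        lengthsPar :: [Int] -> [Int]
        lengthsPar = parMap rpar collatzLength

        main :: IO ()
        main = print (sum (lengthsPar [1 .. 10000]) == sum (lengthsSeq [1 .. 10000]))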