
    Lazy Image Processing: An Investigation into Applications of Lazy Functional Languages to Image Processing

    The suitability of lazy functional languages for image processing applications is investigated by writing several image processing algorithms. The evaluation is done from an application programmer's point of view, and the criteria include ease of writing and reading, as well as efficiency. Lazy functional languages are claimed to be easy to write and read, as well as efficient. This is partly because these languages have mechanisms that improve modularity, such as higher-order functions, and partly because no subexpression is evaluated until its value is required: unnecessary operations are eliminated automatically, so programs can be executed efficiently. In image processing the amount of data handled is generally so large that much programming effort is typically spent on tasks such as memory management and routine sequencing operations in order to improve efficiency. Lazy functional languages should therefore be a good tool for writing image processing applications. However, little practical or experimental evidence on this subject has been reported, since image processing has mostly been written in imperative languages. The discussion starts from the implementation of simple algorithms such as pointwise and local operations. It is shown that a large number of algorithms can be composed from a small number of higher-order functions. Geometric transformations are then implemented, for which lazy functional languages are considered particularly suitable. As representations of images, lists and hierarchical data structures, including binary trees and quadtrees, are implemented. Throughout the discussion, it is demonstrated that the laziness of the languages improves modularity and efficiency: in particular, no pixel calculation takes place unless the user explicitly requests pixels, and consecutive transformations are straightforward and involve no quantisation errors. Other topics discussed include a method for combining pixel images with images expressed as continuous functions. Some benchmarks are also presented.
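    As a rough illustration of the style the abstract describes, the sketch below (in Haskell, with all names invented for illustration rather than taken from the thesis) represents an image as a function from coordinates to pixel values, so no pixel is computed until it is demanded, and builds pointwise, local and geometric operations as ordinary higher-order functions:

```haskell
-- Hypothetical sketch: images as functions, evaluated lazily on demand.
type Coord = (Int, Int)
type Image a = Coord -> a

-- Pointwise operations are ordinary higher-order functions.
pointwise :: (a -> b) -> Image a -> Image b
pointwise f img = f . img

-- A local (neighbourhood) operation: 3x3 mean filter over a greyscale image.
mean3x3 :: Image Double -> Image Double
mean3x3 img (x, y) =
  sum [ img (x + dx, y + dy) | dx <- [-1 .. 1], dy <- [-1 .. 1] ] / 9

-- A geometric transformation merely recomputes coordinates, so composing
-- several transformations introduces no intermediate sampling, and hence
-- no quantisation error, until pixels are finally requested.
translate :: Int -> Int -> Image a -> Image a
translate dx dy img (x, y) = img (x - dx, y - dy)
```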

    A Functional Correspondence between Call-by-Need Evaluators and Lazy Abstract Machines


    Profiling of parallel programs in a non-strict functional language

    Purely functional programming languages offer many benefits to parallel programming. The absence of side effects and the provision of higher-level abstractions ease the programming effort. In particular, non-strict functional languages allow further separation of concerns and provide more parallel facilities in the form of semi-implicit parallelism. On the other hand, because the low-level details of the execution are hidden, usually in a runtime system, debugging the performance of parallel applications becomes harder. Currently available parallel profiling tools allow programmers to obtain some information about the execution; however, this information is usually not detailed enough to pinpoint precisely the cause of some performance problems. Often this is because the cost of obtaining that information would be prohibitive for a complete program execution. In this thesis, we design and implement a parallel profiling framework based on execution replay. This debugging technique makes it possible to simulate recorded executions of a program while ensuring that their behaviour remains unchanged. The novelty of our approach is to adapt this technique to the context of parallel profiling and to take advantage of the characteristics of non-strict purely functional semantics to guarantee minimal overhead in the recording process. Our work makes it possible to build more powerful profiling tools that do not affect the parallel behaviour of the program in a meaningful way. We demonstrate our claims through a series of benchmarks and the study of two use cases.
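    For readers unfamiliar with the term, the semi-implicit parallelism mentioned above is exemplified in GHC by the `par` and `pseq` combinators from `Control.Parallel`; the sketch below is a standard textbook illustration, not code from the thesis:

```haskell
import Control.Parallel (par, pseq)

-- `par` sparks the evaluation of its first argument in parallel; the
-- runtime is free to ignore the spark. It is exactly this kind of hidden
-- low-level decision that makes the parallel behaviour hard to profile.
parFib :: Int -> Int
parFib n
  | n < 2     = n
  | otherwise = x `par` (y `pseq` (x + y))
  where
    x = parFib (n - 1)
    y = parFib (n - 2)
```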

    Projection-Based Program Analysis

    Projection-based program analysis techniques are remarkable for their ability to give highly detailed and useful information not obtainable by other methods. The first proposed projection-based analysis techniques were those of Wadler and Hughes for strictness analysis, and of Launchbury for binding-time analysis; both techniques are restricted to the analysis of first-order monomorphic languages. Hughes and Launchbury generalised the strictness analysis technique, and Launchbury the binding-time analysis technique, to handle polymorphic languages, again restricted to first order. Other than a general approach to higher-order analysis suggested by Hughes, and an ad hoc implementation of higher-order binding-time analysis by Mogensen, neither of which had any formal notion of correctness, there has been no successful generalisation to higher-order analysis. We present a complete redevelopment of monomorphic projection-based program analysis from first principles, starting by considering the analysis of functions (rather than programs) to establish bounds on the intrinsic power of projection-based analysis, and showing also that projection-based analysis can capture interesting termination properties. The development of program analysis proceeds in two distinct steps: first for first-order, then for higher-order languages. Throughout, we maintain a rigorous notion of correctness and prove that our techniques satisfy their correctness conditions. Our higher-order strictness analysis technique is able to capture various so-called data-structure-strictness properties, such as head strictness: the fact that a function may safely be assumed to evaluate the head of every cons cell in a list for which it evaluates the cons cell. Our technique, and Hunt's PER-based technique (originally proposed at about the same time as ours), are the first techniques of any kind to capture such properties at higher order. Both the first-order and higher-order techniques are the first projection-based techniques to capture joint strictness properties: for example, the fact that a function may safely be assumed to evaluate at least one of several arguments. The first-order binding-time analysis technique is essentially the same as Launchbury's; the higher-order technique is the first such formally based higher-order generalisation. Ours are the first projection-based termination analysis techniques, and the first techniques of any kind able to detect termination properties such as head termination: the fact that termination of a cons cell implies termination of the head. A notable feature of the development is the method by which the first-order analysis semantics are generalised to higher order: except for the fixed-point constant, the higher-order semantics are all instances of a single higher-order semantics parameterised by the constants defining the various first-order semantics.
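    To make the strictness vocabulary concrete, the following Haskell fragment gives illustrative examples of the properties named above (the analysis itself is not shown):

```haskell
-- `length` forces every cons cell but never a head: it is tail-strict but
-- NOT head-strict, so this evaluates to 2 without touching the elements.
lenOk :: Int
lenOk = length [undefined, undefined]

-- By contrast, `sum` forces the head of every cons cell it forces: it IS
-- head-strict, so `sum [1, undefined]` diverges.

-- Joint strictness: `pick` is sure to evaluate at least one of `x` and `y`
-- (whichever branch is taken), although neither is evaluated on every path.
pick :: Bool -> Int -> Int -> Int
pick b x y = if b then x else y
```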

    Profiling large-scale lazy functional programs

    The LOLITA natural language processing system is an example of one of the ever-increasing number of large-scale systems written entirely in a functional programming language. The system consists of over 50,000 lines of Haskell code and is able to perform a number of tasks such as semantic and pragmatic analysis of text, context scanning and query analysis. Such a system is more useful if its results are calculated in real time, so the efficiency of the system is paramount. For the past three years we have used the profiling tools supplied with the Haskell compilers GHC and HBC to analyse and reason about our programming solutions, with good results; however, our experience has shown that the profiling life-cycle is often too long to make a detailed analysis of a large system possible, and that the profiling results are often misleading. We develop a profiling system which offers three kinds of functionality not previously found in a profiler for lazy functional programs. Firstly, the profiler produces results based on an accurate method of cost inheritance; we have found that this reduces the possibility of the programmer obtaining misleading profiling results. Secondly, the programmer is able to explore the results after the execution of the program, by selecting and deselecting parts of the program using a post-processor. This greatly reduces the analysis time, since no further compilation, execution or profiling of the program is needed. Finally, the new profiling system allows the user to examine aspects of the run-time call structure of the program, which is useful in analysing its run-time behaviour; previous attempts to extend a profiler's results in this way have failed because of exceptionally high overheads. Measurements of the overheads introduced by the new profiling scheme show that typical figures for the LOLITA system are a 10% increase in compilation time, a 7% increase in executable size, and a 70% run-time overhead. In return, the system saves considerable time in the detailed profiling analysis of a large, lazy functional program.
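    For context, the sketch below shows GHC's standard cost-centre annotation mechanism, the kind of facility on which such profilers build; it illustrates today's GHC interface rather than the thesis's own cost-inheritance system:

```haskell
-- Cost-centre annotations: run-time costs are attributed to the labelled
-- expressions when the program is built and run with profiling enabled.
main :: IO ()
main = do
  let xs = {-# SCC "build" #-} [1 .. 1000000 :: Integer]
  print ({-# SCC "fold" #-} sum xs)

-- Compile: ghc -prof -fprof-auto Main.hs
-- Run:     ./Main +RTS -p     (writes a Main.prof cost-centre report)
```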

    A Unifying Approach to Goal-Directed Evaluation


    Extracting total Amb programs from proofs

    We present a logical system CFP (Concurrent Fixed Point Logic) supporting the extraction of nondeterministic and concurrent programs that are provably total and correct. CFP is an intuitionistic first-order logic with inductive and coinductive definitions, extended by two propositional operators: restriction, a strengthening of implication, and an operator for total concurrency. The source of the extraction is formal CFP proofs; the target is a lambda calculus with constructors and recursion, extended by a constructor Amb (for McCarthy's amb) which is interpreted operationally as globally angelic choice and is used to implement nondeterminism and concurrency. The correctness of extracted programs is proven via an intermediate domain-theoretic denotational semantics. We demonstrate the usefulness of our system by extracting a nondeterministic program that translates infinite Gray code into the signed digit representation. A noteworthy feature of CFP is that the proof rules for restriction and concurrency involve variants of the classical law of excluded middle that would not be interpretable computationally without Amb.
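    Operationally, the globally angelic choice that Amb provides can be approximated in Haskell by racing two computations and keeping whichever delivers a value first; the sketch below uses the `async` package and is only an analogy, since the paper's target is its own lambda calculus rather than Haskell:

```haskell
import Control.Concurrent.Async (race)

-- amb l r runs both computations concurrently and returns the result of
-- whichever finishes first; if one diverges, the other can still succeed,
-- which is the "angelic" aspect of the choice.
amb :: IO a -> IO a -> IO a
amb l r = either id id <$> race l r

-- e.g. amb (threadDelay 100000 >> pure "slow") (pure "fast")
--      yields "fast" (threadDelay is from Control.Concurrent).
```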

    Finding parallel functional pearls: automatic parallel recursion scheme detection in Haskell functions via anti-unification

    This work has been partially supported by the EU H2020 grant “RePhrase: Refactoring Parallel Heterogeneous Resource-Aware Applications – a Software Engineering Approach” (ICT-644235), by COST Action IC1202 (TACLe), supported by COST (European Cooperation in Science and Technology), by EPSRC grant “Discovery: Pattern Discovery and Program Shaping for Manycore Systems” (EP/P020631/1), and by Scottish Enterprise PS7305CA44.

    This paper describes a new technique for identifying potentially parallelisable code structures in functional programs. Higher-order functions enable simple and easily understood abstractions that can be used to implement a variety of common recursion schemes, such as maps and folds over traversable data structures. Many of these recursion schemes have natural parallel implementations in the form of algorithmic skeletons. This paper presents a technique that detects instances of potentially parallelisable recursion schemes in Haskell 98 functions. Unusually, we exploit anti-unification to expose these recursion schemes from source-level definitions whose structures match a recursion scheme but which are not necessarily written directly in terms of maps, folds, etc. This allows us to introduce parallelism automatically, without requiring the programmer to structure their code a priori in terms of specific higher-order functions. We have implemented our approach in the Haskell refactoring tool HaRe and demonstrated its use on a range of common benchmarking examples. Using our technique, we show that recursion schemes can be easily detected, that parallel implementations can be easily introduced, and that we can achieve real parallel speedups (up to 23.79× the sequential performance on 28 physical cores, or 32.93× the sequential performance with hyper-threading enabled).
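    To illustrate the kind of rewrite this technique enables (with invented example functions, not the paper's benchmarks): a directly recursive definition whose structure anti-unifies with a fold can be re-expressed in terms of the recursion scheme, after which a map-like component can be handed to a parallel skeleton such as `parMap` from `Control.Parallel.Strategies`:

```haskell
import Control.Parallel.Strategies (parMap, rdeepseq)

-- A direct recursion whose shape matches a fold...
sumSquares :: [Double] -> Double
sumSquares []       = 0
sumSquares (x : xs) = x * x + sumSquares xs

-- ...the same function with the recursion scheme made explicit...
sumSquares' :: [Double] -> Double
sumSquares' = foldr (\x acc -> x * x + acc) 0

-- ...and a parallel version using an algorithmic skeleton: the squaring
-- map is evaluated in parallel before the reduction.
sumSquaresPar :: [Double] -> Double
sumSquaresPar = sum . parMap rdeepseq (\x -> x * x)
```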

    Prototyping parallel functional intermediate languages

    Non-strict higher-order functional programming languages are elegant, concise, mathematically sound and contain few environment-specific features, making them obvious candidates for harnessing high-performance architectures. The validity of this approach has been established by a number of experimental compilers. However, while there have been a number of important theoretical developments in the field of parallel functional programming, implementations have been slow to materialise. The myriad design choices and the demands of specific architectures lead to protracted development times. Furthermore, the resulting systems tend to be monolithic entities that are difficult to extend and test, ultimately discouraging experimentation. The traditional solution to this problem is the use of a rapid prototyping framework. However, as each existing system tends to prefer one specific platform and a particular way of expressing parallelism (including implicit specification), it is difficult to envisage a general-purpose framework. Fortunately, most of these systems have at least one point of commonality: the use of an intermediate form. Typically, these abstract representations explicitly identify all parallel components, but without the background noise of syntactic and (potentially arbitrary) implementation details. To this end, this thesis outlines a framework for rapidly prototyping such intermediate languages. Based on the traditional three-phase compiler model, the design process is driven by the development of various semantic descriptions of the language. Executable versions of the specifications help both to debug and to informally validate these models. A number of case studies, covering the spectrum of modern implementations, demonstrate the utility of the framework.
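    A toy example of such an intermediate form (all constructors invented here for illustration): a tiny expression language in which parallel components are explicit nodes of the abstract syntax, together with an executable reference semantics of the kind the framework uses to debug and validate its specifications:

```haskell
-- An intermediate language with parallelism made explicit in the syntax,
-- free of source-level "background noise".
data Expr
  = Lit Int
  | Add Expr Expr
  | Par Expr        -- subexpression explicitly marked for parallel evaluation
  deriving Show

-- Executable (sequential) reference semantics; a later prototyping phase
-- could replace this interpreter with one that really sparks Par nodes,
-- e.g. using Control.Parallel.par, and compare behaviours.
eval :: Expr -> Int
eval (Lit n)   = n
eval (Add a b) = eval a + eval b
eval (Par e)   = eval e   -- sequentially, the annotation is semantically inert

-- e.g. eval (Add (Par (Lit 1)) (Lit 2)) == 3
```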

    The Europeanisation of local government in Western Scotland, 1975-1997

    This thesis considers the impact of Europeanisation upon local government in the West of Scotland by analysing, first, the response of Strathclyde Regional Council (SRC) to Europeanising influences and the policies which SRC subsequently pursued in response to these developments. The impact of European policies as a form of multi-level governance is then evaluated through research into the role of local government within two institutions established in Strathclyde to deal with aspects of European policy: the Strathclyde European Partnership and the Ouverture programme. Lastly, the impact of local government reorganisation upon the ability of SRC's successor unitary authorities to engage with European policy is considered. The research findings illustrate that the process of Europeanisation has developed through a number of cyclical stages, resulting in changing and varying responses from sub-national authorities (SNAs) to European policy developments. The initial engagement of SRC with the European institutions occurred at an early stage, as the Council attempted to discover new sources of finance. The Council's pro-active stance brought financial benefits to Strathclyde, but also an increasing engagement within SRC with European policy as the Council responded to the emerging Single European Market. This engagement also led SRC to attempt to use a variety of means to influence European policy. The research suggests that while local government was able to influence the European policy process, this tended to occur where the interests of the European Commission and/or member state(s) overlapped with those of local government. While multi-level governance exists in Western Scotland, the key partners remain the European Commission and the member state.