    Profiling large-scale lazy functional programs

    The LOLITA natural language processing system is an example of the ever-increasing number of large-scale systems written entirely in a functional programming language. The system consists of over 50,000 lines of Haskell code and performs tasks such as semantic and pragmatic analysis of text, context scanning and query analysis. Such a system is more useful if its results are calculated in real time, so efficiency is paramount. For the past three years we have used the profiling tools supplied with the Haskell compilers GHC and HBC to analyse and reason about our programming solutions, with good results; however, our experience has shown that the profiling life-cycle is often too long to make a detailed analysis of a large system possible, and the profiling results are often misleading. A profiling system is developed which offers three kinds of functionality not previously found in a profiler for lazy functional programs. Firstly, the profiler produces results based on an accurate method of cost inheritance; we have found that this reduces the likelihood of the programmer obtaining misleading profiling results. Secondly, the programmer can explore the results after the execution of the program, by selecting and deselecting parts of the program using a post-processor. This greatly reduces analysis time, as no further compilation, execution or profiling of the program is needed. Finally, the new profiling system lets the user examine aspects of the run-time call structure of the program, which is useful in analysing the program's run-time behaviour. Previous attempts at extending a profiler's results in this way have failed because of exceptionally high overheads. Measurement of the overheads introduced by the new profiling scheme shows that typical overheads in profiling the LOLITA system are a 10% increase in compilation time, a 7% increase in executable size and a 70% run-time overhead. These overheads represent a considerable saving of time in the detailed profiling analysis of a large, lazy functional program.
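
    The cost-attribution problem described above can be illustrated with GHC's cost-centre annotations, a closely related mechanism that survives in today's compiler. Below is a minimal sketch, assuming a GHC built with profiling support; the function names are illustrative, not taken from the LOLITA system. With accurate cost inheritance, the work hidden in the lazily built list is charged to sumSquares rather than to its caller.

    -- Compile with: ghc -prof -fprof-auto Main.hs
    -- Run with:     ./Main +RTS -p
    -- This writes Main.prof, a time/allocation profile per cost centre.
    module Main where

    import Data.List (foldl')

    -- An expensive computation whose costs we want attributed here,
    -- even though laziness defers the actual work to the call site.
    sumSquares :: Int -> Int
    sumSquares n = {-# SCC "sumSquares" #-} foldl' (+) 0 (map (^ 2) [1 .. n])

    -- A cheap wrapper: under accurate cost inheritance its profile
    -- entry stays small, because the deferred work above is charged
    -- to "sumSquares" when the thunk is finally forced.
    report :: Int -> String
    report n = {-# SCC "report" #-} "total = " ++ show (sumSquares n)

    main :: IO ()
    main = putStrLn (report 1000000)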

    Lazy Image Processing: An Investigation into Applications of Lazy Functional Languages to Image Processing

    The suitability of lazy functional languages for image processing applications is investigated by writing several image processing algorithms. The evaluation is made from an application programmer's point of view; the criteria include ease of writing and reading, and efficiency. Lazy functional languages are claimed to be easy to write and read, as well as efficient. This is partly because these languages have mechanisms that improve modularity, such as higher-order functions, and partly because no subexpression is evaluated until its value is required, so unnecessary operations are eliminated automatically and programs can execute efficiently. In image processing the amount of data handled is generally so large that much programming effort is typically spent on tasks such as managing memory and routine sequencing operations in order to improve efficiency. Lazy functional languages should therefore be a good tool for writing image processing applications. However, little practical or experimental evidence on this subject has been reported, since image processing has mostly been written in imperative languages. The discussion starts from the implementation of simple algorithms such as pointwise and local operations; it is shown that a large number of algorithms can be composed from a small number of higher-order functions. Geometric transformations, for which lazy functional languages are considered particularly suitable, are then implemented. For image representations, lists and hierarchical data structures, including binary trees and quadtrees, are implemented. Throughout the discussion it is demonstrated that the laziness of the languages improves modularity and efficiency. In particular, no pixel calculation takes place unless the user explicitly requests pixels, and consecutive transformations are straightforward and involve no quantisation errors. Also discussed is a method for combining pixel images with images expressed as continuous functions. Some benchmarks are presented as well.
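
    The representation at the heart of these results can be sketched in a few lines of Haskell. The names below (Image, translate, rotate, sample, disc) are illustrative assumptions, not the thesis's actual API; the point is that an image modelled as a function is evaluated only at the pixels the user requests, and composed geometric transformations operate on coordinates, so they introduce no intermediate quantisation.

    module Main where

    type Point = (Double, Double)

    -- An image is a function from continuous coordinates to pixel
    -- values: nothing is computed until a coordinate is sampled.
    type Image a = Point -> a

    -- Geometric transformations act on coordinates (note the inverse
    -- mapping), so composing them builds no intermediate pixel arrays.
    translate :: Double -> Double -> Image a -> Image a
    translate dx dy img (x, y) = img (x - dx, y - dy)

    rotate :: Double -> Image a -> Image a
    rotate t img (x, y) = img (x * cos t + y * sin t, y * cos t - x * sin t)

    -- Quantisation happens once, at sampling time; only the w*h pixels
    -- requested here are ever evaluated.
    sample :: Int -> Int -> Image a -> [[a]]
    sample w h img =
      [ [ img (fromIntegral i, fromIntegral j) | i <- [0 .. w - 1] ]
      | j <- [0 .. h - 1] ]

    -- A grey-level disc of radius 10 centred at the origin.
    disc :: Image Double
    disc (x, y) = if x * x + y * y < 100 then 1 else 0

    main :: IO ()
    main = mapM_ print (sample 8 8 (translate 4 4 (rotate (pi / 4) disc)))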

    Fresh Techniques for Memory Profiling of Lazy Functional Programs

    Lazy functional languages are known for their semantic elegance. They liberate programmers from many difficult responsibilities, such as the operational details of computation, including memory management. However, the productivity and elegant semantics provided by lazy functional languages do not come without a cost: lazy functional programs often suffer from unpredictable space leaks. For over two decades, various lazy functional implementations have been equipped with memory profiling tools. These tools furnish programmers with valuable information about space demands, but there is still scope for further development. This dissertation presents two memory profiling tools. The first is a hotspot heap profiler which presents information in two forms: profile charts and hotspots highlighted by source occurrence. The profile chart represents a hotspot-construction profile, distributed by hotspot temperature. Hotspots are also marked, with the temperature they represent, in a textual display of the source program, and further information about each hotspot is given in individual profiles. The second tool is a stack profiler which yields information about the producers and construction of stack frames.
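
    The space leaks motivating this work are easy to reproduce, and GHC's standard heap profiler (a different tool from the ones presented here, but a useful point of reference) makes them visible. A minimal sketch, assuming a GHC built with profiling libraries; the program and flag choices are illustrative.

    -- Compile with: ghc -prof -fprof-auto LeakDemo.hs
    -- Run with:     ./LeakDemo +RTS -hc
    -- This writes LeakDemo.hp, which hp2ps renders as a heap profile
    -- chart; the leaky version shows residency growing with input size.
    module Main where

    import Data.List (foldl')

    -- Lazy foldl builds a long chain of unevaluated (+) thunks before
    -- anything is added: the classic space leak a heap profile exposes.
    leaky :: [Int] -> Int
    leaky = foldl (+) 0

    -- The strict fold evaluates as it goes and runs in constant space.
    fixed :: [Int] -> Int
    fixed = foldl' (+) 0

    main :: IO ()
    main = do
      print (leaky [1 .. 1000000])
      print (fixed [1 .. 1000000])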

    Execution Profiling for Non-Strict Functional Languages

    Profiling tools, which measure and display the dynamic space and time behaviour of programs, are essential for identifying execution bottlenecks. A variety of such tools exist for conventional languages, but almost none for non-strict functional languages. There is a good reason for this: lazy evaluation means that the program is executed in an order which is not immediately apparent from the source code, so it is difficult to relate dynamically gathered statistics back to the original source. This thesis examines the difficulties of profiling lazy higher-order functional languages and develops a profiling tool which overcomes them. It relates information about both the time and space requirements of the program back to the original source expressions identified by the programmer. Considerable attention is paid to the cost semantics: two abstract cost semantics, lexical scoping and evaluation scoping, are investigated, and experience gained from these two profiling schemes led to the development of a hybrid cost semantics. All three schemes are described and compared in a single formal framework. These abstract cost semantics are mapped onto an operational semantics, and an implementation based on the STG-machine is developed; the manipulation of cost centres is made precise by extending the state-transition operational semantics of the STG-machine. The profiling tool has been incorporated into the Glasgow Haskell compiler ghc. Our approach preserves the correct attribution of costs while allowing program optimisation to proceed largely unhindered. So far as we know, ghc is the only lazy functional language compiler to support source-level time profiling. Use of the profiler has led to significant performance improvements in the compiler itself and in other large application programs.
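
    The scoping question at the centre of the thesis can be made concrete with GHC's SCC (set-cost-centre) annotations. A minimal sketch with illustrative names: the sum below is built as a thunk inside mkThunk but only forced inside consumer, so a lexically scoped cost semantics charges its evaluation to the cost centre where the expression appears in the source, whereas an evaluation-scoped semantics would charge the cost centre active when the thunk is forced.

    -- Compile with: ghc -prof Main.hs
    -- Run with:     ./Main +RTS -p
    module Main where

    -- The list sum is returned as an unevaluated thunk.
    mkThunk :: Int -> Int
    mkThunk n = {-# SCC "mkThunk" #-} sum [1 .. n]

    -- The thunk is forced here, under a different cost centre; the two
    -- cost semantics disagree about which centre should pay for it.
    consumer :: Int -> String
    consumer n = {-# SCC "consumer" #-} show (mkThunk n)

    main :: IO ()
    main = putStrLn (consumer 1000000)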