Program transformation for functional circuit descriptions
We model sequential synchronous circuits at the logical level by signal-processing programs in an extended lambda calculus Lpor with letrec, constructors, case and parallel or (por), employing contextual equivalence. The model describes gates as (parallel) Boolean operators and memory as a delay, which in turn is modeled as a shift of the list of signals; it also permits constructive cycles thanks to the parallel or. This opens up a large set of program transformations that correctly transform the expressions, and thus the represented circuits, and provides basic tools for equivalence testing and for optimizing circuits. A further application is the correct manipulation, by transformations, of software components combined with circuits. The main part of our work consists of proof methods for correct transformations of expressions in the lambda calculus Lpor, together with proposals for appropriate program transformations.
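A minimal sketch of this modeling style, written in Haskell rather than in Lpor itself: signals are infinite lazy lists of Booleans, gates are pointwise operators, and the delay is a one-element shift. The names (Signal, delay, toggle) are illustrative only, and Haskell's ordinary sequential (||) stands in for the parallel or, so constructive cycles that genuinely require por are not captured here.

```haskell
module CircuitSketch where

type Signal = [Bool]          -- one Boolean per clock tick

-- Gates are pointwise Boolean operators on signals.
andG, orG :: Signal -> Signal -> Signal
andG = zipWith (&&)
orG  = zipWith (||)

notG :: Signal -> Signal
notG = map not

-- Memory: a delay shifts the signal by one tick, emitting an initial value.
delay :: Bool -> Signal -> Signal
delay v s = v : s

-- Example sequential circuit: a toggle flip-flop whose feedback loop goes
-- through the delay, expressed with letrec-style recursion.
toggle :: Signal -> Signal
toggle enable = out
  where
    out  = delay False next          -- registered output
    next = zipWith xor enable out    -- combinational part
    xor a b = a /= b

-- take 6 (toggle (repeat True)) == [False,True,False,True,False,True]
```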
Duplo: A framework for OCaml post-link optimisation
We present a novel framework, Duplo, for the low-level post-link optimisation of OCaml programs, achieving a speedup of 7% and a reduction of at least 15% in the code size of widely-used OCaml applications. Unlike existing post-link optimisers, which typically operate on target-specific machine code, our framework operates on a Low-Level Intermediate Representation (LLIR) capable of representing both the OCaml programs and any C dependencies they invoke through the foreign-function interface (FFI). LLIR is analysed, transformed and lowered to machine code by our post-link optimiser, LLIR-OPT. Most importantly, LLIR allows the optimiser to cross the OCaml-C language boundary, mitigating the overhead incurred by the FFI and enabling analyses and transformations in a previously unavailable context. The optimised IR is then lowered to amd64 machine code through the existing target-specific code generator of LLVM, modified to handle garbage collection just as effectively as the native OCaml backend. We equip our optimiser with a suite of SSA-based transformations and points-to analyses capable of capturing the semantics and representing the memory models of both languages, along with a cross-language inliner to embed C methods into OCaml callers. We evaluate the gains of our framework, which can be attributed to both our optimiser and the more sophisticated amd64 backend of LLVM, on a wide range of widely-used OCaml applications, as well as an existing suite of micro- and macro-benchmarks used to track the performance of the OCaml compiler.
EPSRC EP/P020011/1, Cambridge Trust
Making an Embedded DBMS JIT-friendly
While database management systems (DBMSs) are highly optimized, interactions across the boundary between the programming language (PL) and the DBMS are costly, even for in-process embedded DBMSs. In this paper, we show that programs that interact with the popular embedded DBMS SQLite can be significantly optimized - by a factor of 3.4 in our benchmarks - by inlining across the PL / DBMS boundary. We achieved this speed-up by replacing parts of SQLite's C interpreter with RPython code and composing the resulting meta-tracing virtual machine (VM) - called SQPyte - with the PyPy VM. SQPyte does not compromise stand-alone SQL performance and is 2.2% faster than SQLite on the widely used TPC-H benchmark suite.
The HERMIT in the Tree
This paper describes our experience using the HERMIT toolkit to apply well-known transformations to the internal core language of the Glasgow Haskell Compiler. HERMIT provides several mechanisms to support writing general-purpose transformations: a domain-specific language for strategic programming specialized to GHC's core language, a library of primitive rewrites, and a shell-style scripting language for interactive and batch usage. There are many program transformation techniques that have been described in the literature but have not been mechanized and made available inside GHC - either because they are too specialized to include in a general-purpose compiler, or because the developers' interest is in theory rather than implementation. The mechanization process can often reveal pragmatic obstacles that are glossed over in pen-and-paper proofs; understanding and removing these obstacles is our concern. Using HERMIT, we implement eleven examples of three program transformations, report on our experience, and describe improvements made in the process.
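As a concrete illustration of the kind of well-known transformation such a toolkit mechanizes (this example is ours, not one of the paper's eleven case studies), here is map fusion stated as a GHC rewrite rule; HERMIT applies the same law interactively to GHC Core rather than via a RULES pragma.

```haskell
module MapFusion where

-- The classic map-fusion law, written as a GHC rewrite rule: two list
-- traversals collapse into one wherever the left-hand side appears.
{-# RULES
"map/map"  forall f g xs.  map f (map g xs) = map (f . g) xs
  #-}

-- Before the rule fires: two traversals of the list.
pipeline :: [Int] -> [Int]
pipeline xs = map (+ 1) (map (* 2) xs)

-- After the rule fires (with optimisation enabled):
--   pipeline xs = map ((+ 1) . (* 2)) xs
```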
Profiling large-scale lazy functional programs
The LOLITA natural language processing system is an example of one of the ever increasing number of large-scale systems written entirely in a functional programming language. The system consists of over 50,000 lines of Haskell code and is able to perform a number of tasks such as semantic and pragmatic analysis of text, context scanning and query analysis. Such a system is more useful if its results are calculated in real time; its efficiency is therefore paramount. For the past three years we have used the profiling tools supplied with the Haskell compilers GHC and HBC to analyse and reason about our programming solutions, and have achieved good results; however, our experience has shown that the profiling life-cycle is often too long to make a detailed analysis of a large system possible, and the profiling results are often misleading. A profiling system is developed which provides three types of functionality not previously found in a profiler for lazy functional programs. Firstly, the profiler is able to produce results based on an accurate method of cost inheritance; we have found that this reduces the possibility of the programmer obtaining misleading profiling results. Secondly, the programmer is able to explore the results after the execution of the program, by selecting and deselecting parts of the program using a post-processor. This greatly reduces the analysis time, as no further compilation, execution or profiling of the program is needed. Finally, the new profiling system allows the user to examine aspects of the run-time call structure of the program, which is useful in analysing the program's run-time behaviour. Previous attempts at extending the results produced by a profiler in this way have failed due to exceptionally high overheads. An investigation of the overheads of the new profiling scheme shows that typical overheads in profiling the LOLITA system are a 10% increase in compilation time, a 7% increase in executable size, and a 70% run-time overhead. Accepting these overheads yields a considerable saving of time in the detailed profiling analysis of a large, lazy functional program.
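For context, this is roughly how cost centres are annotated when using GHC's stock cost-centre profiler, the kind of tooling the abstract refers to (the module and cost-centre names below are illustrative, not taken from LOLITA, and the thesis's post-processing profiler goes beyond the report this produces).

```haskell
module ProfilingSketch where

import Data.List (foldl')

-- Explicit cost centres mark the phases whose costs we want attributed.
parse :: String -> [Int]
parse = {-# SCC "parse" #-} (map read . words)

analyse :: [Int] -> Int
analyse = {-# SCC "analyse" #-} (foldl' (+) 0)

main :: IO ()
main = interact (show . analyse . parse)

-- Compile with profiling enabled and run to get a cost-centre report:
--   ghc -prof -fprof-auto ProfilingSketch.hs
--   ./ProfilingSketch +RTS -p < input.txt
```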
The HERMIT in the machine: a plugin for the interactive transformation of GHC core language programs
The importance of reasoning about and refactoring programs is a central tenet of functional programming. Yet our compilers and development toolchains provide only rudimentary support for these tasks. This paper introduces a programmatic and compiler-centric interface that facilitates refactoring and equational reasoning. To develop our ideas, we have implemented HERMIT, a toolkit enabling informal but systematic transformation of Haskell programs from inside the Glasgow Haskell Compiler’s optimization pipeline. With HERMIT, users can experiment with optimizations and equational reasoning, while the tedious heavy lifting of performing the actual transformations is done for them. HERMIT provides a transformation API that can be used to build higher-level rewrite tools; one use case is prototyping new optimizations as clients of this API before committing them to the GHC toolchain. We describe a HERMIT application - a read-eval-print shell for performing transformations using HERMIT. We also demonstrate using this shell to prototype an optimization on a specific example, and report our initial experiences and remaining challenges.
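A hand-written before/after sketch of the sort of optimization one might prototype in such a shell (this particular example is illustrative, not necessarily the one from the paper): deriving a linear-time list reversal from the naive quadratic one by a worker/wrapper-style introduction of an accumulator.

```haskell
module ReverseSketch where

-- Before: O(n^2), because (++) retraverses the accumulated prefix.
revSlow :: [a] -> [a]
revSlow []       = []
revSlow (x : xs) = revSlow xs ++ [x]

-- After: O(n), derived by introducing a worker with an accumulator and
-- establishing  revWorker xs acc = revSlow xs ++ acc  by equational reasoning.
revFast :: [a] -> [a]
revFast xs = revWorker xs []
  where
    revWorker []       acc = acc
    revWorker (y : ys) acc = revWorker ys (y : acc)
```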
Program representation size in an intermediate language with intersection and union types
The CIL compiler for core Standard ML compiles whole programs using a novel typed intermediate language (TIL) with intersection and union types and flow labels on both terms and types. The CIL term representation duplicates portions of the program where intersection types are introduced and union types are eliminated. This duplication makes it easier to represent type information and to introduce customized data representations. However, duplication incurs compile-time space costs that are potentially much greater than are incurred in TILs employing type-level abstraction or quantification. In this paper, we present empirical data on the compile-time space costs of using CIL as an intermediate language. The data shows that these costs can be made tractable by using sufficiently fine-grained flow analyses together with standard hash-consing techniques. The data also suggests that non-duplicating formulations of intersection (and union) types would not achieve significantly better space complexity.
National Science Foundation (CCR-9417382, CISE/CCR ESS 9806747); Sun grant (EDUD-7826-990410-US); Faculty Fellowship of the Carroll School of Management, Boston College; U.K. Engineering and Physical Sciences Research Council (GR/L 36963, GR/L 15685)
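A small sketch (ours, not the CIL implementation, and in Haskell rather than SML) of the standard hash-consing idea the abstract appeals to: structurally equal type terms are allocated once and shared via integer identifiers, so the duplication introduced at intersection-introduction and union-elimination sites need not duplicate the underlying representation.

```haskell
module HashConsSketch where

import           Control.Monad.State
import qualified Data.Map.Strict as M

-- A toy type language with an intersection constructor.  Sub-terms are
-- referenced by the Int ids handed out by hash-consing, so equal subtrees
-- are stored exactly once.
data Ty = TyVar String | Arrow Int Int | Inter Int Int
  deriving (Eq, Ord, Show)

type Table = M.Map Ty Int

-- Return the existing id for a node, or allocate a fresh one.
hashCons :: Ty -> State (Table, Int) Int
hashCons t = do
  (tbl, next) <- get
  case M.lookup t tbl of
    Just i  -> return i
    Nothing -> do
      put (M.insert t next tbl, next + 1)
      return next

-- Example: (a -> a) /\ (a -> a) shares a single node for (a -> a).
example :: ((Int, Int), (Table, Int))
example = runState build (M.empty, 0)
  where
    build = do
      a   <- hashCons (TyVar "a")
      arr <- hashCons (Arrow a a)
      i   <- hashCons (Inter arr arr)
      return (arr, i)
```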