
    Benchmarking implementations of functional languages with ‘Pseudoknot’, a float-intensive benchmark

    Over 25 implementations of different functional languages are benchmarked using the same program, a floating-point intensive application taken from molecular biology. The principal aspects studied are compile time and execution time for the various implementations that were benchmarked. An important consideration is how the program can be modified and tuned to obtain maximal performance on each language implementation. With few exceptions, the compilers take a significant amount of time to compile this program, though most compilers were faster than the then-current GNU C compiler (GCC version 2.5.8). Compilers that generate C or Lisp are often slower than those that generate native code directly: the cost of compiling the intermediate form is normally a large fraction of the total compilation time. There is no clear distinction between the runtime performance of eager and lazy implementations when appropriate annotations are used: lazy implementations have clearly come of age when it comes to implementing largely strict applications, such as the Pseudoknot program. The speed of C can be approached by some implementations, but to achieve this performance, special measures such as strictness annotations are required by non-strict implementations. The benchmark results have to be interpreted with care. Firstly, a benchmark based on a single program cannot cover a wide spectrum of ‘typical’ applications. Secondly, the compilers vary in the kind and level of optimisations offered, so the effort required to obtain an optimal version of the program is similarly varied.
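
    The Pseudoknot source itself is not reproduced here, but a minimal Haskell sketch (my own, not from the paper) illustrates the kind of strictness annotation the abstract refers to: forcing the accumulator of a float-intensive loop so that a non-strict implementation does not build a chain of thunks.

    ```haskell
    {-# LANGUAGE BangPatterns #-}

    -- Lazy accumulator: each step allocates an unevaluated thunk.
    sumSqLazy :: [Double] -> Double
    sumSqLazy = go 0
      where
        go acc []       = acc
        go acc (x : xs) = go (acc + x * x) xs

    -- Strict accumulator: the bang pattern forces acc at each step,
    -- so the loop runs in constant space, much like a C loop.
    sumSqStrict :: [Double] -> Double
    sumSqStrict = go 0
      where
        go !acc []       = acc
        go !acc (x : xs) = go (acc + x * x) xs

    main :: IO ()
    main = print (sumSqStrict [1 .. 1000000])
    ```

    Without the annotation, the lazy version allocates one suspension per list element; with it, a compiler can keep the accumulator in a register, which is the kind of tuning the benchmark demanded of the non-strict implementations.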

    Cayenne -- a Language With Dependent Types

    Cayenne is a Haskell-like language. The main difference between Haskell and Cayenne is that Cayenne has dependent types, i.e., the result type of a function may depend on the argument value, and the types of record components (which can be types or values) may depend on other components. Cayenne also combines the syntactic categories for value expressions and type expressions, thus reducing the number of language concepts. Having dependent types and combined type and value expressions makes the language very powerful. It is powerful enough that a special module concept is unnecessary; ordinary records suffice. It is also powerful enough to encode predicate logic at the type level, allowing types to be used as specifications of programs. However, this power comes at a cost: type checking of Cayenne is undecidable. While this may appear to be a steep price to pay, it seems to work well in practice.
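
    Cayenne code is hard to run today, so as a rough analogy only (this is Haskell, not Cayenne, and Haskell indexes types by other types rather than by run-time values), a length-indexed vector shows the flavour of a result type that depends on the arguments:

    ```haskell
    {-# LANGUAGE DataKinds, GADTs, KindSignatures, TypeFamilies #-}

    data Nat = Z | S Nat

    -- The length of the vector is part of its type.
    data Vec (n :: Nat) a where
      VNil  :: Vec 'Z a
      VCons :: a -> Vec n a -> Vec ('S n) a

    type family Add (m :: Nat) (n :: Nat) :: Nat where
      Add 'Z     n = n
      Add ('S m) n = 'S (Add m n)

    -- The result type is computed from the argument types, and the
    -- type checker verifies that the definition respects it.
    vappend :: Vec m a -> Vec n a -> Vec (Add m n) a
    vappend VNil         ys = ys
    vappend (VCons x xs) ys = VCons x (vappend xs ys)
    ```

    In Cayenne the index could be an ordinary run-time value and the same mechanism would serve for modules and specifications; the Haskell encoding only approximates this.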

    Partial Evaluation in Aircraft Crew Planning

    In this paper we investigate how partial evaluation and program transformations can be used on a real problem, namely that of speeding up airline crew scheduling. Scheduling of crew is subject to many rules and restrictions, which are expressed in a rule language. In a given planning situation, however, much of the input is known to be fixed, so the rule set can be partially evaluated with respect to this known input. The approach is somewhat novel in that it uses truly static input data as well as static input data whose values are known only to belong to a set of values. The results of the partial evaluation are quite satisfactory: both compilation and running times have decreased. The partial evaluator is now part of the crew scheduling system that Carmen Systems AB markets, which is in daily production use at most of the major European airlines. Keywords: partial evaluation, program transformation, generalized constant propagation, airline crew scheduling.
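
    The Carmen rule language is proprietary, so the following is only a toy Haskell sketch of the idea: a rule expression is simplified with respect to inputs known to be fixed in a planning situation, leaving a smaller residual rule for the dynamic inputs. (The rule constructors and input names are invented for illustration.)

    ```haskell
    import qualified Data.Map as M

    data Rule
      = Lit Bool
      | Var String        -- a named input, static or dynamic
      | And Rule Rule
      | Or  Rule Rule
      deriving Show

    -- Simplify a rule with respect to the statically known inputs.
    pe :: M.Map String Bool -> Rule -> Rule
    pe env r@(Var v) = maybe r Lit (M.lookup v env)
    pe _   (Lit b)   = Lit b
    pe env (And a b) = case (pe env a, pe env b) of
      (Lit False, _)  -> Lit False
      (_, Lit False)  -> Lit False
      (Lit True, b')  -> b'
      (a', Lit True)  -> a'
      (a', b')        -> And a' b'
    pe env (Or a b) = case (pe env a, pe env b) of
      (Lit True, _)   -> Lit True
      (_, Lit True)   -> Lit True
      (Lit False, b') -> b'
      (a', Lit False) -> a'
      (a', b')        -> Or a' b'

    -- With "longHaul" fixed to False, the first conjunct vanishes and
    -- only the truly dynamic input remains in the residual rule.
    main :: IO ()
    main = print (pe (M.fromList [("longHaul", False)])
                     (Or (And (Var "longHaul") (Var "restOk"))
                         (Var "crewAvailable")))
    -- output: Var "crewAvailable"
    ```

    The paper's generalized constant propagation presumably corresponds to letting the environment map an input to a set of possible values rather than a single one, so that a rule can still be simplified when every value in the set gives the same outcome.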

    Implementing Haskell overloading

    Haskell overloading poses new challenges for compiler writers. Until recently there have been no implementations of it with acceptable performance; users have been advised to avoid it by writing explicit type signatures. This is unfortunate, since it forgoes the reuse of software components that overloading is meant to offer. In this paper we describe a number of ways to improve the speed of Haskell overloading. None of the techniques described here is particularly exciting or complicated, but taken together they may give an order-of-magnitude speedup for some programs that use overloading. The techniques fall into two categories: speeding up overloading, and avoiding overloading altogether. For the second kind we borrow some techniques from partial evaluation. There does not seem to be a single implementation technique that is a panacea; a number of different ones have to be combined to get decent performance.
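
    The standard implementation that such work starts from is dictionary passing; a hand-written sketch (illustrative, not the paper's code) shows both where the cost comes from and the specialisation that removes it:

    ```haskell
    -- Source-level overloaded function:
    --   square :: Num a => a -> a
    --   square x = x * x

    -- After dictionary translation, the class constraint becomes an
    -- extra record of methods, passed and looked up at run time:
    data NumDict a = NumDict
      { times :: a -> a -> a
      , plus  :: a -> a -> a
      }

    squareD :: NumDict a -> a -> a
    squareD d x = times d x x

    intDict :: NumDict Int
    intDict = NumDict (*) (+)

    -- Specialising squareD to Int lets the dictionary lookup be
    -- inlined away, recovering the speed of unoverloaded code:
    squareInt :: Int -> Int
    squareInt x = x * x
    ```

    Avoiding overloading altogether, as the abstract puts it, amounts to generating such specialised versions for the types actually used, which is where the borrowed partial-evaluation techniques come in.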

    Haskell B. user's manual - Version 0.999.4

    This memory is used during execution to lower the working set of the program. How large a part of it is used is determined after each garbage collection: the amount made available for allocation is the amount that was copied when the collection occurred, multiplied by 4. In this way the working set is adapted to the amount of heap that is actually in use.
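
    The sizing rule reduces to one line of arithmetic; a sketch of the heuristic (the names and the Bytes type are illustrative, not from the manual):

    ```haskell
    type Bytes = Int

    -- Allocation area for the next cycle: four times the live data
    -- copied by the collection that just finished, so a mostly-empty
    -- heap yields a small working set and a full one a large one.
    allocationArea :: Bytes -> Bytes
    allocationArea copied = 4 * copied
    -- e.g. 2 MB of live data copied => 8 MB available for allocation
    ```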

    BWM a concrete machine for graph reduction

    This paper describes a computer architecture for the execution of lazy functional languages. The architecture is based on graph reduction of the λ-calculus, but is extended to handle real programs. It is not another abstract machine, but a proposal for how actual hardware could be designed. The machine uses very large memory words. This makes it possible for a single instruction to do a lot (akin to VLIW and superscalar machines), and also to construct and scrutinize large objects with few memory operations. Since the construction of suspensions is a very common operation during graph reduction, this is beneficial. The machine is built around a stack and a multiplexor, not around an arithmetic unit as most stock processors are; the reason is that this machine is aimed not at number crunching but at manipulating data. As with modern RISC processors, the interaction between the compiler and the processor is crucial: the "hardware" has several shortcomings that the compiler has to know about.
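
    As a software analogy only (a sketch, not the BWM instruction set), the heap objects a graph reducer manipulates, and the suspension construction the abstract calls common, can be modelled like this:

    ```haskell
    data Node
      = App Node Node              -- suspension: unevaluated application
      | Fun String (Node -> Node)  -- primitive with its name
      | IntN Int

    -- Reduce to weak head normal form by unwinding the application spine.
    whnf :: Node -> Node
    whnf (App f a) = case whnf f of
      Fun _ body -> whnf (body a)
      g          -> App g a
    whnf n = n

    -- Applying inc to 41 builds one suspension node, which the reducer
    -- then scrutinizes: the two operations wide memory words make cheap.
    inc :: Node
    inc = Fun "inc" $ \x -> case whnf x of
      IntN n -> IntN (n + 1)
      _      -> error "inc: not an integer"

    main :: IO ()
    main = case whnf (App inc (IntN 41)) of
      IntN n -> print n   -- 42
      _      -> error "no integer result"
    ```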