Liveness-Based Garbage Collection for Lazy Languages
We consider the problem of reducing the memory required to run lazy
first-order functional programs. Our approach is to analyze programs for
liveness of heap-allocated data. The analysis results are used to preserve
only live data, a subset of reachable data, during garbage collection. This
increases the amount of garbage reclaimed and reduces the peak memory
requirement of programs. While this technique has already been shown to
yield benefits for eager first-order languages, the lack of a statically
determinable execution order and the presence of closures pose new challenges
for lazy languages. These require changes both in the liveness analysis itself
and in the design of the garbage collector.
To show the effectiveness of our method, we implemented a copying collector
that uses the results of the liveness analysis to preserve live objects, both
evaluated objects (i.e., those in WHNF) and closures. Our experiments confirm
that for programs running with a liveness-based garbage collector, there is a
significant decrease in peak memory requirements. In addition, a sizable
reduction in the number of collections ensures that, in spite of using a more
complex garbage collector, the execution times of programs running with
liveness- and reachability-based collectors remain comparable.
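To make the live/reachable distinction concrete, here is a small illustrative Haskell fragment (our own example, not code from the paper): both components of the pair remain reachable, but only the first is live, so a liveness-based collector may reclaim the second.

    -- Illustrative only, not from the paper: after the pair is built, both
    -- components are reachable from p, but the second is never demanded,
    -- so only the first component is live.
    f :: ([Int], [Int]) -> Int
    f p = sum (fst p)            -- demands only the first component

    main :: IO ()
    main =
      let xs = [1 .. 100000]     -- live: demanded by sum
          ys = [1 .. 100000]     -- reachable through the pair, but dead
      in  print (f (xs, ys))     -- a liveness-based collector may reclaim ys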
An Environment for Analyzing Space Optimizations in Call-by-Need Functional Languages
We present an implementation of an interpreter, LRPi, for the call-by-need
calculus LRP, based on a variant of Sestoft's abstract machine Mark 1 and
extended with an eager garbage collector. It is used as a tool for exact
space-usage analyses in support of our investigations into space improvements
of call-by-need calculi.
Comment: In Proceedings WPTE 2016, arXiv:1701.0023
Compile-Time Optimisation of Store Usage in Lazy Functional Programs
Functional languages offer a number of advantages over their imperative counterparts. However,
a substantial amount of the time spent on processing functional programs is due to
the large amount of storage management which must be performed. Two apparent reasons
for this are that the programmer is prevented from including explicit storage management
operations in programs which have a purely functional semantics, and that more readable
programs are often far from optimal in their use of storage. Correspondingly, two alternative
approaches to the optimisation of store usage at compile-time are presented in this thesis.
The first approach is called compile-time garbage collection. This approach involves determining
at compile-time which cells are no longer required for the evaluation of a program,
and making these cells available for further use. This overcomes the problem of a programmer
not being able to indicate explicitly that a store cell can be made available for further use.
Three different methods for performing compile-time garbage collection are presented in this
thesis: compile-time garbage marking, explicit deallocation, and destructive allocation. Of
these three methods, destructive allocation is found to be the only one of
practical use.
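As a concrete, purely illustrative example (not one drawn from the thesis) of the opportunity destructive allocation exploits: in the function below, each input cons cell is used exactly once and is dead by the time the corresponding output cell is allocated, so a compiler equipped with usage counts may reuse the input cell in place rather than allocating a fresh one.

    -- Illustrative only: a candidate for destructive allocation.  With a
    -- usage count of one on the list argument, the cell matched as (x : xs)
    -- could be reused to hold the output cell ((x + 1) : ...).
    inc :: [Int] -> [Int]
    inc []       = []
    inc (x : xs) = (x + 1) : inc xs

    main :: IO ()
    main = print (inc [1, 2, 3])   -- [2,3,4]; with reuse, no net cons-cell allocation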
The second approach to the optimisation of store usage is called compile-time garbage
avoidance. This approach transforms programs at compile-time into semantically equivalent
programs which produce less garbage. This attempts to overcome the problem
of more readable programs being far from optimal in their use of storage. In this thesis, it is
shown how to guarantee that the process of compile-time garbage avoidance will terminate.
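The flavour of transformation involved can be seen in the following small example (our own illustration in the style of such transformations, not one taken from the thesis): the readable definition builds an intermediate list that is immediately garbage, while the transformed, semantically equivalent definition allocates no intermediate structure.

    -- Illustrative only: garbage avoidance as a source-to-source transformation.
    sumSquares :: [Int] -> Int
    sumSquares xs = sum (map (^ 2) xs)         -- builds an intermediate list

    sumSquares' :: [Int] -> Int                -- transformed: no intermediate list
    sumSquares' = go 0
      where
        go acc []       = acc
        go acc (x : xs) = go (acc + x * x) xs

    main :: IO ()
    main = print (sumSquares [1 .. 10], sumSquares' [1 .. 10])   -- (385,385)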
Both of the described approaches to the optimisation of store usage make use of the
information obtained by usage counting analysis. This involves counting the number of times
each value in a program is used. In this thesis, a reference semantics is defined against which
the correctness of usage counting analyses can be proved. A usage counting analysis is then
defined and proved to be correct with respect to this reference semantics. The information
obtained by this analysis is used to annotate programs for compile-time garbage collection,
and to guide the transformation when compile-time garbage avoidance is performed.
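A minimal sketch of what a usage counting analysis computes, over a toy expression language, is given below; it is our own construction for illustration, not the analysis defined in the thesis, and it ignores laziness and higher-order subtleties.

    -- Illustrative only: count syntactic uses of each free variable.
    import qualified Data.Map as M

    data Expr = Var String
              | Lit Int
              | App Expr Expr
              | Lam String Expr
              | Let String Expr Expr

    usages :: Expr -> M.Map String Int
    usages (Var x)     = M.singleton x 1
    usages (Lit _)     = M.empty
    usages (App f a)   = M.unionWith (+) (usages f) (usages a)
    usages (Lam x b)   = M.delete x (usages b)
    usages (Let x e b) = M.unionWith (+) (usages e) (M.delete x (usages b))

    -- A value whose count is at most one is a candidate for the annotations
    -- that drive compile-time garbage collection.
    main :: IO ()
    main = print (usages (App (Var "f") (App (Var "f") (Var "y"))))
           -- fromList [("f",2),("y",1)]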
It is found that compile-time garbage avoidance produces greater increases in efficiency
than compile-time garbage collection, but much of the garbage which can be collected by
compile-time garbage collection cannot be avoided at compile-time. The two approaches are
therefore complementary, and the expressions resulting from compile-time garbage avoidance
transformations can be annotated for compile-time garbage collection to further optimise the
use of storage.
Modelling Garbage Collection Algorithms --- Extended Abstract
We show how abstract requirements of garbage collection can be captured using temporal logic. The temporal logic specification can then be used as a basis for process algebra specifications which can involve varying amounts of parallelism. We present two simple CCS specifications as examples, followed by a more complex specification of the cyclic reference counting algorithm. The verification of such algorithms is then briefly discussed.
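To give a flavour of the abstract requirements meant here, two properties one might state in temporal logic are shown below; these are our own illustrative formulas, not ones taken from the paper.

    % Illustrative only.  Safety: a reachable cell is never reclaimed.
    \Box \bigl( \mathit{reachable}(c) \rightarrow \neg\, \mathit{reclaimed}(c) \bigr)
    % Liveness: a cell that becomes garbage is eventually reclaimed.
    \Box \bigl( \mathit{garbage}(c) \rightarrow \Diamond\, \mathit{reclaimed}(c) \bigr)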
Benchmarking Implementations of Functional Languages with "Pseudoknot", a Float-Intensive Benchmark
Over 25 implementations of different functional languages are benchmarked using the same program, a floating-point intensive application taken from molecular biology. The principal aspects studied are compile time and execution time for the various implementations that were benchmarked. An important consideration is how the program can be modified and tuned to obtain maximal performance on each language implementation.
With few exceptions, the compilers take a significant amount of time to compile this program, though most compilers were faster than the then current GNU C compiler (GCC version 2.5.8). Compilers that generate C or Lisp are often slower than those that generate native code directly: the cost of compiling the intermediate form is normally a large fraction of the total compilation time.
There is no clear distinction between the runtime performance of eager and lazy implementations when appropriate annotations are used: lazy implementations have clearly come of age when it comes to implementing largely strict applications, such as the Pseudoknot program. The speed of C can be approached by some implementations, but to achieve this performance, special measures such as strictness annotations are required by non-strict implementations.
The benchmark results have to be interpreted with care. Firstly, a benchmark based on a single program cannot cover a wide spectrum of 'typical' applications. Secondly, the compilers vary in the kind and level of optimisations offered, so the effort required to obtain an optimal version of the program is similarly varied.
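For concreteness, the kind of strictness annotation referred to above might look like the following in a non-strict language such as Haskell; this is our own illustration, not code from the Pseudoknot sources.

    {-# LANGUAGE BangPatterns #-}
    -- Illustrative only: bang patterns force the accumulators at each step,
    -- preventing a non-strict implementation from building chains of thunks
    -- in a numeric loop, which is the kind of tuning the benchmark calls for.
    mean :: [Double] -> Double
    mean = go 0 0
      where
        go !s !n []       = s / fromIntegral (n :: Int)
        go !s !n (x : xs) = go (s + x) (n + 1) xs

    main :: IO ()
    main = print (mean [1 .. 1000000])   -- 500000.5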