16 research outputs found

    An Analytical Approach to Programs as Data Objects

    This essay accompanies a selection of 32 articles (referred to in bold face in the text and marginally marked in the bibliographic references) submitted to Aarhus University towards a Doctor Scientiarum degree in Computer Science. The author's previous academic degree, beyond a doctoral degree in June 1986, is an "Habilitation à diriger les recherches" from the Université Pierre et Marie Curie (Paris VI) in France; the corresponding material was submitted in September 1992 and the degree was obtained in January 1993. The present 32 articles have all been written since 1993 and while at DAIMI. Except for one other PhD student, all co-authors are or have been the author's students here in Aarhus.

    Using SPIN to Analyse the Tree Identification Phase of the IEEE 1394 High-Performance Serial Bus (FireWire) Protocol

    We describe how the tree identification phase of the IEEE 1394 high-performance serial bus (FireWire) protocol is modelled in Promela and verified using SPIN. The verification of arbitrary system configurations is discussed.
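
    The abstract above concerns the protocol rather than the Promela model itself, but the tree identification phase it refers to can be summarised as follows: a node that has received parent requests on all but one of its ports sends a parent request on the remaining port, so the tree is built by peeling leaves inward until one node (the root) has received requests on every port. The Haskell sketch below is a rough sequential rendering of that idea under the assumption of an acyclic topology; it is not the Promela model verified with SPIN, and root contention between the last two nodes is resolved arbitrarily rather than by the protocol's timing mechanism.

        import qualified Data.Map.Strict as M
        import           Data.Map.Strict (Map)

        -- Hypothetical node identifiers and an undirected, acyclic topology
        -- given as adjacency lists.
        type Node     = Int
        type Topology = Map Node [Node]

        -- For each node, track the neighbours that have not yet sent it a
        -- parent request; a node whose list has shrunk to a single neighbour
        -- sends its own parent request to that neighbour.
        treeIdentify :: Topology -> Map Node Node   -- maps each child to its parent
        treeIdentify topo = go topo M.empty
          where
            go pending parents =
              case [ (u, v) | (u, [v]) <- M.toList pending ] of
                []         -> parents                -- the node left over is the root
                (u, v) : _ ->                        -- u designates v as its parent
                  let pending' = M.adjust (filter (/= u)) v (M.delete u pending)
                  in  go pending' (M.insert u v parents)

        -- ghci> treeIdentify (M.fromList [(1,[2]), (2,[1,3,4]), (3,[2]), (4,[2])])
        -- fromList [(1,2),(2,4),(3,2)]   -- node 4 wins the (arbitrarily resolved) root contention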

    On Language Processors and Software Maintenance

    This work investigates declarative transformation tools in the context of software maintenance. Besides maintenance of the language specification, evolution of a software language requires the adaptation of the software written in that language as well as the adaptation of the software that transforms software written in the evolving language. This co-evolution is studied to derive automatic adaptations of artefacts from adaptations of the language specification. Furthermore, AOP for Prolog is introduced to improve the maintainability of language specifications and derived tools.
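
    As a rough illustration of the co-evolution idea described above (and not of the thesis's Prolog-based machinery), the Haskell sketch below treats a change to a hypothetical toy language specification, here the renaming of a construct, as a first-class value from which an adaptation of existing programs is derived automatically.

        -- A hypothetical toy language: constructs are identified by name and
        -- programs are plain syntax trees.
        data Term = Node String [Term] deriving Show

        -- A specification-level change: the construct `old` is renamed to `new`.
        data SpecChange = RenameConstruct { old :: String, new :: String }

        -- The program-level adaptation induced by that change: rewrite every
        -- occurrence of the renamed construct and leave everything else alone.
        adapt :: SpecChange -> Term -> Term
        adapt c (Node f ts)
          | f == old c = Node (new c) (map (adapt c) ts)
          | otherwise  = Node f       (map (adapt c) ts)

        -- ghci> adapt (RenameConstruct "while" "loop")
        --             (Node "while" [Node "cond" [], Node "body" []])
        -- Node "loop" [Node "cond" [],Node "body" []]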

    Normalisation by Evaluation in the Compilation of Typed Functional Programming Languages

    This thesis presents a critical analysis of normalisation by evaluation as a technique for speeding up compilation of typed functional programming languages. Our investigation focuses on the SML.NET compiler and its typed intermediate language MIL. We implement and measure the performance of normalisation by evaluation for MIL across a range of benchmarks. Taking a different approach, we also implement and measure the performance of a graph-based shrinking reductions algorithm for SML.NET.

    MIL is based on Moggi’s computational metalanguage. As a stepping stone to normalisation by evaluation, we investigate strong normalisation of the computational metalanguage by introducing an extension of Girard-Tait reducibility. Inspired by previous work on local state and parametric polymorphism, we define reducibility for continuations and more generally reducibility for frame stacks. First we prove strong normalisation for the computational metalanguage. Then we extend that proof to include features of MIL such as sums and exceptions.

    Taking an incremental approach, we construct a collection of increasingly sophisticated normalisation by evaluation algorithms, culminating in a range of normalisation algorithms for MIL. Congruence rules and alpha-rules are captured by a compositional parameterised semantics. Defunctionalisation is used to eliminate eta-rules. Normalisation by evaluation for the computational metalanguage is introduced using a monadic semantics. Variants in which the monadic effects are made explicit, using either state or control operators, are also considered. Previous implementations of normalisation by evaluation with sums have relied on continuation-passing style or control operators. We present a new algorithm which instead uses a single reference cell and a zipper structure. This suggests a possible alternative way of implementing Filinski’s monadic reflection operations.

    In order to obtain benchmark results without having to take into account all of the features of MIL, we implement two different techniques for eliding language constructs. The first is not semantics-preserving, but is effective for assessing the efficiency of normalisation by evaluation algorithms. The second is semantics-preserving, but less flexible. In common with many intermediate languages, but unlike the computational metalanguage, MIL requires all non-atomic values to be named. We use either control operators or state to ensure each non-atomic value is named.

    We assess our normalisation by evaluation algorithms by comparing them with a spectrum of progressively more optimised, rewriting-based normalisation algorithms. The SML.NET front-end is used to generate MIL code from ML programs, including the SML.NET compiler itself. Each algorithm is then applied to the generated MIL code. Normalisation by evaluation always performs faster than the most naïve algorithms, often by orders of magnitude. Some of the algorithms are slightly faster than normalisation by evaluation. Closer inspection reveals that these algorithms are in fact defunctionalised versions of normalisation by evaluation algorithms.

    Our normalisation by evaluation algorithms perform unrestricted inlining of functions. Unrestricted inlining can lead to a super-exponential blow-up in the size of target code with respect to the source. Furthermore, the worst-case complexity of compilation with unrestricted inlining is non-elementary in the size of the source code. SML.NET alleviates both problems by using a restricted form of normalisation based on Appel and Jim’s shrinking reductions. The original algorithm is quadratic in the worst case. Using a graph-based representation for terms we implement a compositional linear algorithm. This speeds up the time taken to perform shrinking reductions by up to a factor of fourteen, which leads to an improvement of up to forty percent in total compile time.
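
    For readers unfamiliar with the technique, the Haskell sketch below shows textbook normalisation by evaluation for the untyped lambda calculus: terms are evaluated into a semantic domain in which functions are represented by host-language functions, and normal forms are then read back ("reified") from that domain. It is only meant to convey the flavour of the algorithms discussed in the abstract; it handles none of MIL's features (computation types, sums, exceptions, named non-atomic values).

        type Name = String

        data Term = Var Name | Lam Name Term | App Term Term deriving Show

        data Value = VLam (Value -> Value)   -- semantic functions
                   | VNeutral Neutral        -- stuck terms headed by a free variable

        data Neutral = NVar Name | NApp Neutral Value

        eval :: [(Name, Value)] -> Term -> Value
        eval env (Var x)   = maybe (VNeutral (NVar x)) id (lookup x env)
        eval env (Lam x b) = VLam (\v -> eval ((x, v) : env) b)
        eval env (App f a) = apply (eval env f) (eval env a)

        apply :: Value -> Value -> Value
        apply (VLam f)     v = f v                       -- beta reduction happens here
        apply (VNeutral n) v = VNeutral (NApp n v)

        -- Read a value back into syntax, inventing fresh names as we go.
        reify :: Int -> Value -> Term
        reify i (VLam f)     = let x = "x" ++ show i
                               in  Lam x (reify (i + 1) (f (VNeutral (NVar x))))
        reify i (VNeutral n) = reifyN i n

        reifyN :: Int -> Neutral -> Term
        reifyN _ (NVar x)   = Var x
        reifyN i (NApp n v) = App (reifyN i n) (reify i v)

        normalise :: Term -> Term
        normalise = reify 0 . eval []

        -- ghci> normalise (App (Lam "x" (Var "x")) (Lam "y" (Var "y")))
        -- Lam "x0" (Var "x0")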

    Sixth Biennial Report: August 2001 - May 2003


    Scalable Logic Defined Static Analysis

    Logic languages such as Datalog have been proposed as a method for specifying flexible and customisable static analysers. Using Datalog, various classes of static analyses can be expressed precisely and succinctly, requiring fewer lines of code than hand-crafted analysers. In this paradigm, a static analysis specification is encoded by a set of declarative logic rules and an off-the-shelf solver is used to compute the result of the static analysis. Unfortunately, when large-scale analyses are employed, Datalog-based tools currently fail to scale in comparison to hand-crafted static analysers. As a result, Datalog-based analysers have largely remained an academic curiosity rather than industrially respected tools. This thesis outlines our efforts in understanding the sources of performance limitations in Datalog-based tools. We propose a novel evaluation technique that is predicated on the fact that, in the case of static analysis, the logical specification is a design-time artefact and hence does not change during evaluation. Thus, instead of directly evaluating Datalog rules, our approach leverages partial evaluation to synthesise a specialised static analyser from these rules. This approach enables a novel indexing optimisation that automatically selects an optimal set of indexes to speed up the Datalog computation and minimise its memory usage. Lastly, we explore the case of more expressive logics, namely constrained Horn clauses, and their use in proving the correctness of programs. We identify a bottleneck in various symbolic evaluation algorithms that centre around Craig interpolation. We improve these evaluation algorithms by proposing a method of guiding theorem provers to discover relevant interpolants with respect to the input logic specification. The culmination of our work is implemented in a general-purpose, high-performance tool called Soufflé. We describe Soufflé and evaluate its performance experimentally, showing significant improvement over alternative techniques and its scalability in real-world industrial use cases.
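
    To make the Datalog-based setting concrete, the Haskell sketch below evaluates the classic transitive-closure program (path(x, y) :- edge(x, y).  path(x, z) :- path(x, y), edge(y, z).) by naive bottom-up iteration to a fixpoint. This only illustrates the declarative semantics that such a specification denotes; it says nothing about the partial evaluation, index selection, or synthesised code that Soufflé itself uses.

        import qualified Data.Set as S

        -- Facts of the hypothetical edge/path relations: pairs of node identifiers.
        type Fact = (Int, Int)

        -- Naive bottom-up evaluation: repeatedly apply the rules until no new
        -- facts are derived (the least fixpoint).
        transitiveClosure :: S.Set Fact -> S.Set Fact
        transitiveClosure edges = go edges
          where
            go path
              | path' == path = path                 -- fixpoint reached
              | otherwise     = go path'
              where
                -- path(x, z) :- path(x, y), edge(y, z).
                derived = S.fromList [ (x, z) | (x, y)  <- S.toList path
                                              , (y', z) <- S.toList edges
                                              , y == y' ]
                path'   = path `S.union` derived

        -- ghci> transitiveClosure (S.fromList [(1,2), (2,3), (3,4)])
        -- fromList [(1,2),(1,3),(1,4),(2,3),(2,4),(3,4)]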