6 research outputs found

    Abstract multiple specialization and its application to program parallelization.

    Program specialization optimizes programs for known values of the input. It is often the case that the set of possible input values is unknown, or this set is infinite. However, a form of specialization can still be performed in such cases by means of abstract interpretation, specialization then being with respect to abstract values (substitutions), rather than concrete ones. We study the multiple specialization of logic programs based on abstract interpretation. This involves, in principle, and based on information from global analysis, generating several versions of a program predicate for different uses of that predicate, optimizing these versions, and, finally, producing a new, "multiply specialized" program. While multiple specialization has received theoretical attention, little previous evidence exists on its practicality. In this paper we report on the incorporation of multiple specialization in a parallelizing compiler and quantify its effects. A novel approach to the design and implementation of the specialization system is proposed. The resulting implementation techniques produce specializations identical to those of the best previously proposed techniques but require little or no modification of some existing abstract interpreters. Our results show that, using the proposed techniques, the resulting "abstract multiple specialization" is indeed a relevant technique in practice. In particular, in the parallelizing compiler application, a good number of run-time tests are eliminated and invariants are extracted automatically from loops, resulting generally in lower overheads and in several cases in increased speedups.
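    The core idea can be sketched in Python (all names below are hypothetical illustrations, not the authors' system, which works on logic programs and abstract substitutions): a global analysis assigns each use of a procedure an abstract value, and one optimized version is emitted per abstract value, eliminating a run-time test when the analysis proves it redundant.

```python
# Hypothetical sketch of abstract multiple specialization: one version of
# a procedure is generated per abstract value of its argument.

def specialize(abstract_value):
    """Emit a version of sum_list optimized for one abstract value."""
    if abstract_value == "ground_numbers":
        # Analysis proved every element is a number: the run-time
        # test is eliminated, which is the key optimization.
        def version(xs):
            return sum(xs)
    else:
        # Generic version keeps the run-time check.
        def version(xs):
            if not all(isinstance(x, (int, float)) for x in xs):
                raise TypeError("non-numeric element")
            return sum(xs)
    return version

# The "multiply specialized" program: one entry per abstract value
# inferred by the (hypothetical) global analysis.
versions = {av: specialize(av) for av in ("ground_numbers", "any")}
print(versions["ground_numbers"]([1, 2, 3]))  # prints 6
```

    Call sites the analysis proves safe are wired to the fast version; all others fall back to the generic one, so the specialized program is never less defined than the original.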

    Compiling Prolog to Logic-inference Virtual Machine

    The Logic-inference Virtual Machine (LVM) is a new Prolog execution model consisting of a set of high-level instructions and a memory architecture for handling control and unification. Unlike the well-known Warren's Abstract Machine [1], which uses the Structure Copying method, the LVM adopts a hybrid of Program Sharing [2] and Structure Copying to represent first-order terms. In addition, the LVM employs a single-stack paradigm for dynamic memory allocation and embeds a very efficient garbage collection algorithm to reclaim useless memory cells. In order to construct a complete Prolog system based on the LVM, a corresponding compiler must be written. In this thesis, a design of such an LVM compiler is presented and all important components of the compiler are described. The LVM compiler is developed to translate Prolog programs into LVM bytecode instructions, so that a Prolog program is compiled once and can run anywhere. The first version of the LVM compiler (about 8000 lines of C code) has been developed. The compilation time is approximately proportional to the size of the source code. About 80 percent of the time is spent on the global analysis. Some compiled programs have been tested under an LVM emulator. Benchmarks show that the LVM system is very promising in memory utilization and performance.
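    The compile-once/run-anywhere bytecode idea can be illustrated with a toy stack machine (purely hypothetical; this is not the LVM's actual instruction set or term representation):

```python
# Toy illustration of compiling a source expression once into bytecode
# and executing it on a small stack-machine emulator.

def compile_expr(expr):
    """Compile a nested ('+'|'*', left, right) tuple into stack bytecode."""
    code = []
    def emit(e):
        if isinstance(e, int):
            code.append(("PUSH", e))
        else:
            op, left, right = e
            emit(left)
            emit(right)
            code.append(("ADD",) if op == "+" else ("MUL",))
    emit(expr)
    return code

def run(code):
    """Emulator: interpret the bytecode on an operand stack."""
    stack = []
    for instr in code:
        if instr[0] == "PUSH":
            stack.append(instr[1])
        else:
            b, a = stack.pop(), stack.pop()
            stack.append(a + b if instr[0] == "ADD" else a * b)
    return stack[0]

bytecode = compile_expr(("+", 2, ("*", 3, 4)))
print(run(bytecode))  # prints 14
```

    The bytecode list plays the role of the portable compiled form: it is produced once by the compiler and can be executed by any conforming emulator.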

    Partial evaluation in an optimizing Prolog compiler

    Specialization of programs and meta-programs written in high-level languages has been an active area of research for some time. Specialization contributes to improvement in program performance. We begin with a hypothesis that partial evaluation provides a framework for several traditional back-end optimizations. The present work proposes a new compiler back-end optimization technique based on specialization of low-level RISC-like machine code. Partial evaluation is used to specialize the low-level code. Berkeley Abstract Machine (BAM) code generated during compilation of Prolog is used as the candidate low-level language to test the hypothesis. A partial evaluator of BAM code was designed and implemented to demonstrate the proposed optimization technique and to study its design issues. The major contributions of the present work are as follows: It demonstrates a new low-level compiler back-end optimization technique. This technique provides a framework for several conventional optimizations in addition to providing opportunities for machine-specific optimizations. It presents a study of various issues, and solutions to several problems, encountered during the design and implementation of a low-level language partial evaluator that is designed to be a back-end phase in a real-world Prolog compiler. We also present an implementation-independent denotational semantics of BAM code, a low-level language. This provides a vehicle for showing the correctness of instruction transformations. We believe this work provides the first concrete step toward the use of partial evaluation on low-level code as a compiler back-end optimization technique in real-world compilers.
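    The optimization can be illustrated on a made-up three-address mini language (hypothetical, far simpler than BAM): instructions whose operands are statically known are executed at specialization time, and the rest are residualized with any known operand values inlined.

```python
# Hedged sketch of partial evaluation over low-level code (hypothetical
# mini instruction set): statically computable instructions are folded
# away; dynamic ones remain in the residual program.

def partial_eval(code, known):
    """Specialize three-address code w.r.t. known register values."""
    env = dict(known)   # registers whose values are known at this time
    residual = []       # instructions that must remain for run time
    def lookup(x):
        if isinstance(x, int):
            return x
        return env.get(x)        # None if the register is dynamic
    for op, dst, a, b in code:
        av, bv = lookup(a), lookup(b)
        if av is not None and bv is not None:
            env[dst] = av + bv if op == "add" else av * bv  # fold now
        else:
            env.pop(dst, None)   # dst becomes dynamic from here on
            # Residualize, inlining any operands that are known.
            residual.append((op, dst,
                             av if av is not None else a,
                             bv if bv is not None else b))
    return residual, env

residual, env = partial_eval(
    [("add", "r1", 2, 3), ("mul", "r2", "r1", "x")], {})
print(residual)  # prints [('mul', 'r2', 5, 'x')]
```

    The first instruction is executed entirely at specialization time; the second survives, but with the computed value 5 substituted for `r1`, which is exactly the kind of constant propagation a conventional back-end would perform as a separate pass.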

    Propagation Networks: A Flexible and Expressive Substrate for Computation

    PhD thesis. I propose a shift in the foundations of computation. Practically all ideas of general-purpose computation today are founded either on execution of sequences of atomic instructions, i.e., assembly languages, or on evaluation of tree-structured expressions, i.e., most higher level programming languages. Both have served us well in the past, but it is increasingly clear that we need something more. I suggest that we can build general-purpose computation on propagation of information through networks of stateful cells interconnected with stateless autonomous asynchronous computing elements. Various forms of this general idea have been used with great success for various special purposes; perhaps the most immediate example is constraint propagation in constraint satisfaction systems. These special-purpose systems, however, are all complex and all different, and neither compose well, nor interoperate well, nor generalize well. A foundational layer is missing. The key insight in this work is that a cell should not be seen as storing a value, but as accumulating information about a value. The cells should never forget information -- such monotonicity prevents race conditions in the behavior of the network. Monotonicity of information need not be a severe restriction: for example, carrying reasons for believing each thing makes it possible to explore but then possibly reject tentative hypotheses, thus appearing to undo something, while maintaining monotonicity. Accumulating information is a broad enough design principle to encompass arbitrary computation.
The object of this dissertation is therefore to architect a general-purpose computing system based on propagation networks; to subsume expression evaluation under propagation just as instruction execution is subsumed under expression evaluation; to demonstrate that a general-purpose propagation system can recover all the benefits that have been derived from special-purpose propagation systems, allow them to compose and interoperate, and offer further expressive power beyond what we have known in the past; and finally to contemplate the lessons that such a fundamental shift can teach us about the deep nature of computation. My graduate career in general, and this work in particular, have been sponsored in part by a National Science Foundation Graduate Research Fellowship, by the Disruptive Technology Office as part of the AQUAINT Phase 3 research program, by the Massachusetts Institute of Technology, by Google, Inc., and by the National Science Foundation Cybertrust (05-518) program. Doctor of Philosophy.
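The cell-as-information-accumulator idea can be sketched concretely (a hypothetical minimal interval implementation, not the dissertation's system): cells hold interval bounds, merging new information only ever narrows them (monotonicity), and a bidirectional adder propagator re-fires whenever an attached cell gains information.

```python
# Minimal propagation-network sketch: cells accumulate information as
# ever-narrowing intervals; stateless propagators fire on any change.

class Cell:
    def __init__(self, net):
        self.lo, self.hi = float("-inf"), float("inf")  # "anything" to start
        self.net, self.watchers = net, []
    def add(self, lo, hi):
        """Merge new information by intersecting intervals (monotonic)."""
        nlo, nhi = max(self.lo, lo), min(self.hi, hi)
        if (nlo, nhi) != (self.lo, self.hi):   # only narrows, never widens
            self.lo, self.hi = nlo, nhi
            self.net.extend(self.watchers)     # schedule watching propagators

def adder(net, a, b, out):
    """Constraint out = a + b, propagated in all three directions."""
    def fire():
        out.add(a.lo + b.lo, a.hi + b.hi)
        a.add(out.lo - b.hi, out.hi - b.lo)
        b.add(out.lo - a.hi, out.hi - a.lo)
    for c in (a, b, out):
        c.watchers.append(fire)
    net.append(fire)

def run(net):
    while net:
        net.pop()()   # terminates: fire reschedules only on real change

net = []
x, y, z = Cell(net), Cell(net), Cell(net)
adder(net, x, y, z)
x.add(3, 3)           # tell the network x = 3 ...
z.add(8, 8)           # ... and z = 8
run(net)
print(y.lo, y.hi)     # prints 5 5 -- the network deduced y
```

Note that the adder runs "backwards": given x and the output z, it deduces y, something an expression evaluator cannot do; and because merges only narrow intervals, the order in which propagators fire cannot change the final answer.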

    Flexible and expressive substrate for computation

    Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2009. Cataloged from PDF version of thesis. Includes bibliographical references (p. 167-174). In this dissertation I propose a shift in the foundations of computation. Modern programming systems are not expressive enough. The traditional image of a single computer that has global effects on a large memory is too restrictive. The propagation paradigm replaces this with computing by networks of local, independent, stateless machines interconnected with stateful storage cells. In so doing, it offers great flexibility and expressive power, and has therefore been much studied, but has not yet been tamed for general-purpose computation. The novel insight that should finally permit computing with general-purpose propagation is that a cell should not be seen as storing a value, but as accumulating information about a value. Various forms of the general idea of propagation have been used with great success for various special purposes; perhaps the most immediate example is constraint propagation in constraint satisfaction systems. This success is evidence both that traditional linear computation is not expressive enough, and that propagation is more expressive. These special-purpose systems, however, are all complex and all different, and neither compose well, nor interoperate well, nor generalize well. A foundational layer is missing. I present in this dissertation the design and implementation of a prototype general-purpose propagation system. I argue that the structure of the prototype follows from the overarching principle of computing by propagation and of storage by accumulating information: there are no important arbitrary decisions.
I illustrate on several worked examples how the resulting organization supports arbitrary computation; recovers the expressivity benefits that have been derived from special-purpose propagation systems in a single general-purpose framework, allowing them to compose and interoperate; and offers further expressive power beyond what we have known in the past. I reflect on the new light the propagation perspective sheds on the deep nature of computation. by Alexey Andreyevich Radul. Ph.D.