
    Code Generation = A* + BURS

    A system called BURS, which is based on term rewrite systems, is combined with the A* search algorithm to produce a code generator that emits optimal code. The theory underlying BURS is re-developed, formalised and explained in this work. The search algorithm is directed by a cost heuristic derived from the term rewrite system. The advantage of using a search algorithm is that only those costs that may be part of an optimal rewrite sequence need to be computed.
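    The core idea can be sketched independently of BURS itself: A* explores rewrite sequences in order of estimated total cost, so costs are only computed for terms that might lie on an optimal sequence. Below is a minimal, generic A* over rewrite states in Python; the `rewrites` and `heuristic` callables are illustrative assumptions, not the paper's actual interface.

```python
import heapq
import itertools

def astar_rewrite(start, is_goal, rewrites, heuristic):
    """Generic A* over rewrite sequences (an illustrative sketch).

    rewrites(term)  -> iterable of (next_term, step_cost) pairs
    heuristic(term) -> lower bound on the remaining rewrite cost; with an
                       admissible bound, the first goal popped is optimal.
    """
    tie = itertools.count()  # tie-breaker so the heap never compares terms
    frontier = [(heuristic(start), next(tie), 0, start, [])]
    best_g = {start: 0}
    while frontier:
        _, _, g, term, path = heapq.heappop(frontier)
        if is_goal(term):
            return path, g                      # optimal rewrite sequence and its cost
        if g > best_g.get(term, float("inf")):
            continue                            # stale queue entry
        for nxt, cost in rewrites(term):
            g2 = g + cost
            if g2 < best_g.get(nxt, float("inf")):
                best_g[nxt] = g2                # only promising terms get costed
                heapq.heappush(
                    frontier, (g2 + heuristic(nxt), next(tie), g2, nxt, path + [nxt]))
    return None, float("inf")                   # no goal term reachable
```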

    Using rewriting techniques to produce code generators and proving them correct

    A major problem in deriving a compiler from a formal definition is the production of correct and efficient object code. In this context, we propose a solution to the problem of code-generator generation. Our approach is based on a target machine description in which the basic concepts used (storage classes, access modes, access classes and instructions) are hierarchically described by tree patterns. These tree patterns are terms of an abstract data type. The program's intermediate representation (the input to the code generator) is a term of the same abstract data type. The code generation process is based on access modes and instruction-template-driven rewritings. The result is that each program instruction is reduced to a sequence of elementary machine instructions, each of which represents an instance of an instruction template. The axioms of the abstract data type are used to prove that the rewritings preserve the semantics of the intermediate representation.
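    To make the template-driven rewriting idea concrete, here is a toy sketch (not the paper's abstract-data-type formalism): intermediate-representation terms are nested tuples, and each instruction template rewrites a matched subtree to a register, emitting one machine instruction. The template set describes a hypothetical machine.

```python
def match(pattern, term):
    """True if term matches pattern; '_' matches any leaf value."""
    if pattern == '_':
        return not isinstance(term, tuple)
    if isinstance(pattern, tuple) and isinstance(term, tuple):
        return (len(pattern) == len(term)
                and all(match(p, t) for p, t in zip(pattern, term)))
    return pattern == term

TEMPLATES = [  # (tree pattern, emitted instruction) -- hypothetical machine
    (('add', 'reg', 'reg'), 'ADD'),
    (('add', 'reg', ('const', '_')), 'ADDI'),
    (('load', ('const', '_')), 'LI'),
]

def select(term, code):
    """Reduce term to 'reg' bottom-up, appending instructions to code."""
    if not isinstance(term, tuple) or term[0] == 'const':
        return term                      # leaves and constants stay as-is
    term = (term[0],) + tuple(select(t, code) for t in term[1:])
    for pattern, instr in TEMPLATES:
        if match(pattern, term):
            code.append(instr)
            return 'reg'                 # subtree now lives in a register
    raise ValueError(f"no template matches {term}")

code = []
select(('add', ('load', ('const', 1)), ('const', 2)), code)
print(code)  # ['LI', 'ADDI']
```

    Each rewrite step replaces one matched subtree by a register operand, so the input term is reduced to a sequence of template instances, mirroring the reduction described in the abstract.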

    Survey on Instruction Selection: An Extensive and Modern Literature Review

    Instruction selection is one of the three optimisation problems involved in the code generator backend of a compiler. The instruction selector is responsible for transforming an input program from its target-independent representation into a target-specific form by making the best use of the available machine instructions. Instruction selection is hence a crucial part of efficient code generation. Despite ongoing research since the late 1960s, the last comprehensive survey on the field was written more than 30 years ago. As new approaches and techniques have appeared since its publication, there is a need for a new, up-to-date review of the current body of literature. This report addresses that need by performing an extensive review and categorisation of existing research. The report therefore supersedes and extends the previous surveys, and also attempts to identify where future research should be directed.

    Comment: Major changes: merged the simulation chapter with the macro expansion chapter; addressed misunderstandings of several approaches; completely rewrote many parts of the chapters and strengthened the discussion of many approaches; revised the drawing of all trees and graphs to put the root at the top instead of at the bottom; added an appendix listing the approaches in a table. See the document for more info.
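    A central distinction in this literature is between greedy macro expansion and cost-based tree covering, where dynamic programming picks the cheapest set of patterns covering the IR tree. A minimal sketch of the latter, over a hypothetical rule set, might look like this:

```python
import functools

RULES = [  # (tree pattern, cost, instruction) -- hypothetical machine
    (('const', '_'), 1, 'LI'),
    (('add', 'reg', 'reg'), 1, 'ADD'),
    (('add', 'reg', ('const', '_')), 1, 'ADDI'),  # folds the immediate
]

def collect(pattern, term, holes):
    """Match pattern against term; subtrees matched by 'reg' go into holes."""
    if pattern == 'reg':
        holes.append(term)
        return True
    if pattern == '_':
        return not isinstance(term, tuple)
    if isinstance(pattern, tuple) and isinstance(term, tuple):
        return (len(pattern) == len(term)
                and all(collect(p, t, holes) for p, t in zip(pattern, term)))
    return pattern == term

@functools.cache
def cost(term):
    """Cheapest cost of reducing term to a register (bottom-up DP)."""
    best = float('inf')
    for pattern, c, _ in RULES:
        holes = []
        if collect(pattern, term, holes):
            best = min(best, c + sum(cost(h) for h in holes))
    return best

print(cost(('add', ('const', 1), ('const', 2))))  # 2: LI + ADDI beats LI, LI, ADD
```

    Here the DP finds that folding one constant into an immediate-form instruction is cheaper than materialising both constants, which is exactly the kind of choice an instruction selector must make.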

    Adaptive Affine Sequence Alignment Using Algebraic Dynamic Programming

    Paaßen B. Adaptive Affine Sequence Alignment Using Algebraic Dynamic Programming. Bielefeld: Bielefeld University; 2015.

    A core issue in machine learning is the classification of data. However, for data structures that cannot easily be summarized in a feature representation, standard vectorial approaches are not suitable. An alternative approach is to represent the data not by features but by their similarities or dissimilarities to each other. In the case of sequential data, dissimilarities can be efficiently calculated by well-established alignment distances. Recently, techniques have been put forward to adapt the parameters of such alignment distances to the specific data set at hand, e.g. using gradient descent on a cost function. In this thesis we provide a comprehensive theory for gradient descent on alignment distances based on Algebraic Dynamic Programming, enabling us to adapt even sophisticated alignment distances. We focus on Affine Sequence Alignment, which we optimize by gradient descent on the Large Margin Nearest Neighbor cost function. Thereby we directly optimize the classification accuracy of the popular k-Nearest Neighbor classifier. We present a free software implementation of this theory, the TCS Alignment Toolbox, which we use for the subsequent experiments. Our experiments entail alignment distance learning on three diverse data sets (two artificial ones and one real-world example), yielding not only an increase in classification accuracy but also interpretable resulting parameter settings.
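    For readers unfamiliar with affine alignment distances, the following is a minimal Gotoh-style dynamic program computing a plain affine-gap distance before any parameter learning; the mismatch and gap costs are illustrative defaults, not the thesis's learned parameters, and this is not the TCS Alignment Toolbox API.

```python
def affine_distance(a, b, mismatch=1.0, gap_open=1.0, gap_extend=0.5):
    """Affine-gap alignment distance: opening a gap costs gap_open,
    each further position in the same gap costs gap_extend."""
    inf = float('inf')
    n, m = len(a), len(b)
    # Three DP states: M aligns a[i-1] with b[j-1]; X gaps b; Y gaps a.
    M = [[inf] * (m + 1) for _ in range(n + 1)]
    X = [[inf] * (m + 1) for _ in range(n + 1)]
    Y = [[inf] * (m + 1) for _ in range(n + 1)]
    M[0][0] = 0.0
    for i in range(1, n + 1):
        X[i][0] = gap_open + (i - 1) * gap_extend
    for j in range(1, m + 1):
        Y[0][j] = gap_open + (j - 1) * gap_extend
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            sub = 0.0 if a[i - 1] == b[j - 1] else mismatch
            M[i][j] = sub + min(M[i-1][j-1], X[i-1][j-1], Y[i-1][j-1])
            X[i][j] = min(M[i-1][j] + gap_open, Y[i-1][j] + gap_open,
                          X[i-1][j] + gap_extend)
            Y[i][j] = min(M[i][j-1] + gap_open, X[i][j-1] + gap_open,
                          Y[i][j-1] + gap_extend)
    return min(M[n][m], X[n][m], Y[n][m])

print(affine_distance("agacta", "agta"))  # 1.5: extending one gap beats opening two
```

    The thesis's contribution sits on top of recurrences like these: by making the dynamic program differentiable within Algebraic Dynamic Programming, cost parameters such as the mismatch and gap costs above can be adapted by gradient descent.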