152 research outputs found

    A lisp oriented architecture

    Thesis (M.S.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1994. Includes bibliographical references (p. 63-67). By John W.F. McClain. M.S.

    Benchmarking Implementations of Functional Languages with ``Pseudoknot'', a Float-Intensive Benchmark

    Over 25 implementations of different functional languages are benchmarked using the same program, a floating-point intensive application taken from molecular biology. The principal aspects studied are compile time and execution time for the various implementations that were benchmarked. An important consideration is how the program can be modified and tuned to obtain maximal performance on each language implementation.
    With few exceptions, the compilers take a significant amount of time to compile this program, though most compilers were faster than the then-current GNU C compiler (GCC version 2.5.8). Compilers that generate C or Lisp are often slower than those that generate native code directly: the cost of compiling the intermediate form is normally a large fraction of the total compilation time.
    There is no clear distinction between the runtime performance of eager and lazy implementations when appropriate annotations are used: lazy implementations have clearly come of age when it comes to implementing largely strict applications, such as the Pseudoknot program. The speed of C can be approached by some implementations, but to achieve this performance, special measures such as strictness annotations are required by non-strict implementations.
    The benchmark results have to be interpreted with care. Firstly, a benchmark based on a single program cannot cover a wide spectrum of 'typical' applications. Secondly, the compilers vary in the kind and level of optimisations offered, so the effort required to obtain an optimal version of the program is similarly varied.

    Parallel processing for scientific computations

    The main contribution of the effort in the last two years is the introduction of the MOPPS system, designed after an extensive literature search and described next. MOPPS employs a new solution to the problem of managing programs which solve scientific and engineering applications on a distributed processing environment. With this solution, autonomous computers cooperate efficiently in solving large scientific problems. MOPPS has the advantage of not assuming the presence of any particular network topology or configuration, computer architecture, or operating system. It imposes little overhead on network and processor resources while efficiently managing programs concurrently. The core of MOPPS is an intelligent program manager that builds a knowledge base of the execution performance of the parallel programs it is managing under various conditions. The manager applies this knowledge to improve the performance of future runs; the program manager learns from experience.
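    The "knowledge base of execution performance" idea can be made concrete with a small sketch. The C fragment below records, for each program, how long it took on each host, then prefers the historically fastest host for the next run; the structure and names are illustrative assumptions, not MOPPS's actual interfaces.

        /* Illustrative sketch of a performance knowledge base: record how long
         * each program took on each host, then prefer the historically fastest
         * host for the next run.  Names and data layout are invented for this
         * example and are not taken from the MOPPS system itself. */
        #include <stdio.h>
        #include <string.h>

        #define MAX_RECORDS 256

        struct run_record {
            char   program[64];
            char   host[64];
            double seconds;                 /* observed execution time */
        };

        static struct run_record history[MAX_RECORDS];
        static int n_records;

        /* Add one observation to the knowledge base. */
        static void record_run(const char *program, const char *host, double seconds)
        {
            if (n_records < MAX_RECORDS) {
                snprintf(history[n_records].program, sizeof history[n_records].program, "%s", program);
                snprintf(history[n_records].host, sizeof history[n_records].host, "%s", host);
                history[n_records].seconds = seconds;
                n_records++;
            }
        }

        /* Return the host with the lowest recorded time for a program,
         * or NULL if the program has never been run before. */
        static const char *best_host(const char *program)
        {
            const char *best = NULL;
            double best_time = 0.0;
            for (int i = 0; i < n_records; i++) {
                if (strcmp(history[i].program, program) == 0 &&
                    (best == NULL || history[i].seconds < best_time)) {
                    best = history[i].host;
                    best_time = history[i].seconds;
                }
            }
            return best;
        }

        int main(void)
        {
            record_run("fluid_sim", "node-a", 341.0);
            record_run("fluid_sim", "node-b", 212.5);
            printf("next run of fluid_sim on: %s\n", best_host("fluid_sim"));
            return 0;
        }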

    Simple optimizing JIT compilation of higher-order dynamic programming languages

    Efficiently implementing dynamic programming languages requires a significant development effort. Over the years, compilers have become more complex: today they typically include an interpretation phase, several compilation phases, several intermediate representations, and code analyses. These techniques make it possible to implement dynamic programming languages efficiently, but they are difficult to apply in contexts where development resources are limited. We propose a new approach and new techniques to build optimizing just-in-time compilers for dynamic languages with relatively good performance and low development effort. We present a simple just-in-time compilation approach that implements a language in a single compilation phase, without the need for code transformations to intermediate representations. We explain how basic block versioning, an existing compilation technique, can be extended without significant development effort to work interprocedurally with higher-order programming languages, allowing interprocedural optimizations on these languages. We also explain how basic block versioning allows removing operations used to implement dynamic languages that degrade performance, such as type checks, and how compilers can use tagging and NaN-boxing value representations to optimize the generated code with low development effort. We present our experience of building a JIT compiler using these techniques for the Scheme programming language, to show that they indeed allow building compilers with less development effort than other implementations and that they generate efficient code that competes with current mature implementations of the Scheme language.
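    The tagging and NaN-boxing representations mentioned above can be illustrated with a short sketch. The C fragment below shows one generic NaN-boxing encoding in which every value fits in a 64-bit word: doubles keep their own bit pattern, while small integers live in the payload of a quiet NaN. The tag constant and helper names are assumptions made for this example, not the encoding used by the compiler the abstract describes.

        /* Generic NaN-boxing sketch (illustrative only): every value is one
         * 64-bit word.  Doubles are stored as themselves; 32-bit integers are
         * hidden in the payload of a quiet NaN, so a single tag test tells the
         * two apart without any heap allocation. */
        #include <stdint.h>
        #include <stdio.h>
        #include <string.h>

        typedef uint64_t value_t;

        /* Exponent all ones plus high payload bits set: no ordinary double,
         * and no canonical NaN, has these bits, so they can serve as a tag. */
        #define INT_TAG  UINT64_C(0x7FFC000000000000)

        static value_t box_double(double d) {
            value_t v;
            memcpy(&v, &d, sizeof v);       /* reinterpret the bits, no conversion */
            return v;
        }

        static double unbox_double(value_t v) {
            double d;
            memcpy(&d, &v, sizeof d);
            return d;
        }

        static value_t box_int(int32_t i) {
            return INT_TAG | (uint32_t)i;   /* integer sits in the low payload bits */
        }

        static int is_int(value_t v) {
            return (v & INT_TAG) == INT_TAG;
        }

        static int32_t unbox_int(value_t v) {
            return (int32_t)(uint32_t)v;
        }

        int main(void) {
            value_t a = box_double(3.25), b = box_int(-7);
            printf("%f %d %d %d\n", unbox_double(a), unbox_int(b), is_int(a), is_int(b));
            return 0;
        }

    With such an encoding, floating-point values stay immediate rather than heap-allocated, and only a cheap tag test like is_int above separates them from integers, which is one reason these representations combine well with a compiler that already removes redundant type checks.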

    Minimizing register usage penalty at procedure calls


    A Parametric View of Retargetable Register Allocation

    We discuss the problems involved in building a retargetable register allocator for use in an optimizing compiler. While the popular "register coloring" method is machine-independent, the allocator as a whole must implement numerous machine-dependent decisions. We present the kinds of information that must be parameterized in order to include register allocation in a retargetable compiler back-end, and discuss a sample solution.
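    One common way to realize the parameterization the abstract calls for is a target-description record that the machine-independent coloring core consults. The C sketch below is a generic illustration with invented field names; it is not the interface proposed in the paper.

        /* Illustrative target description for a retargetable register allocator.
         * The graph-coloring core stays machine-independent and asks this record
         * about everything machine-dependent.  Field names are invented. */
        #include <stdbool.h>

        enum reg_class { CLASS_INT, CLASS_FLOAT, NUM_CLASSES };

        struct target_desc {
            int  num_regs[NUM_CLASSES];         /* registers in each class                   */
            int  num_reserved[NUM_CLASSES];     /* stack/frame pointer etc., never colored   */
            bool caller_saved[NUM_CLASSES][64]; /* true if register i is clobbered by calls  */
            int  spill_load_cost;               /* relative cost of reloading a spilled value */
            int  spill_store_cost;              /* relative cost of storing a spilled value   */
            bool has_reg_pairs;                 /* e.g. doubles occupy an even/odd pair       */
        };

        /* The only machine knowledge the coloring core needs at this point. */
        int colors_available(const struct target_desc *t, enum reg_class c)
        {
            return t->num_regs[c] - t->num_reserved[c];
        }

        int main(void)
        {
            /* Made-up example target: 32 integer registers, 3 of them reserved. */
            struct target_desc example = { {32, 32}, {3, 0}, {{false}}, 2, 2, true };
            return colors_available(&example, CLASS_INT) == 29 ? 0 : 1;
        }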

    A dynamic compiler for Scheme

    Traditionally, dynamically-typed languages have been difficult to compile efficiently. This thesis explores dynamic compilation, a recently developed technique for compiling dynamically-typed languages. A dynamic compiler compiles and optimizes programs as they execute, using information collected from the running program to perform optimizations that are impossible in a conventional batch compiler. To explore these techniques we developed SKI, a dynamic compiler for Scheme. Tests on programs compiled by SKI have shown that dynamic compilation techniques can give a substantial increase in the performance of Scheme programs; in some cases they can increase performance by up to 400%.
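    As a concrete illustration of optimizing with information collected from the running program, the C sketch below profiles the operand types seen at one call site and, once the site looks monomorphic, switches to a specialized fixnum path that skips tag handling. It illustrates the general technique only; it is not SKI's actual design.

        /* Illustrative runtime-feedback specialization with one-bit tagged
         * fixnums: a fixnum n is stored as (n << 1) | 1.  This shows the kind
         * of optimization a dynamic compiler can perform from profile data;
         * it is not taken from SKI. */
        #include <stdio.h>

        #define TAG_FIXNUM(n)   (((long)(n) << 1) | 1)
        #define UNTAG_FIXNUM(v) ((long)(v) >> 1)
        #define IS_FIXNUM(v)    ((v) & 1)

        /* Generic addition: checks tags on every call, as a batch compiler
         * must when it cannot prove the operand types. */
        static long generic_add(long a, long b)
        {
            if (IS_FIXNUM(a) && IS_FIXNUM(b))
                return TAG_FIXNUM(UNTAG_FIXNUM(a) + UNTAG_FIXNUM(b));
            return TAG_FIXNUM(0);   /* a real system would handle flonums, bignums, errors */
        }

        /* Specialized addition emitted once both operands are known to be
         * fixnums: no untagging at all, since (2a+1) + (2b+1) - 1 == 2(a+b)+1. */
        static long fixnum_add(long a, long b) { return a + b - 1; }

        struct call_site { unsigned calls, fixnum_hits; };

        static long profiled_add(struct call_site *s, long a, long b)
        {
            s->calls++;
            if (IS_FIXNUM(a) && IS_FIXNUM(b)) s->fixnum_hits++;
            /* After enough purely-fixnum observations, use the fast path. */
            if (s->calls > 100 && s->fixnum_hits == s->calls)
                return fixnum_add(a, b);
            return generic_add(a, b);
        }

        int main(void)
        {
            struct call_site site = {0, 0};
            long sum = TAG_FIXNUM(0);
            for (int i = 0; i < 1000; i++)
                sum = profiled_add(&site, sum, TAG_FIXNUM(i));
            printf("%ld\n", UNTAG_FIXNUM(sum));   /* prints 499500 */
            return 0;
        }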

    ASLP: a list processor for artificial intelligence applications.

    By Cheang Sin Man. Thesis (M.Phil.)--Chinese University of Hong Kong, 1990. Bibliography: leaves 137-140.
    Contents: Chapter 1, Introduction; Chapter 2, Lisp and Existing Lisp Machines; Chapter 3, Overall Architecture of the ASLP (an Arithmetical & Symbolical List Processor with multiple memory modules, a large number of registers, multiple buses, and special function units); Chapter 4, Parallelism in the ASLP (parallel data movement, wide memory modules, parallel memory access, pipelined micro-instructions, parallel execution with the host computer); Chapter 5, Simulation Study of the ASLP; Chapter 6, Design and Implementation of the ASLP (microprogrammable controller, microprogram development, interfacing to C); Chapter 7, Performance Evaluation of the ASLP; Chapter 8, Functional Evaluation of the ASLP (a relational database and other potential applications); Chapter 9, Future Development of the ASLP (an expert system shell, VLSI implementation); Chapter 10, Conclusion; Appendices A-G (block diagram, circuit diagrams, PC-board layouts, micro-control signal assignment, micro-field definition, macro definition, register assignment); Publications; References.