
    A machine-independent microprogram development system

    The aims of this project are twofold: firstly, to implement a microprogram development system that allows the programmer to write microcode for any microprogrammable machine, and secondly, to build a microprogrammable machine that incorporates the user friendliness of a simulator while still providing the 'hands on' experience obtained from actual hardware. Microprogram development is a two-stage process. The first stage is to describe the target machine, using format descriptions and mnemonic-based template definitions. The second stage is to use the defined mnemonics to write the microcode for the target machine; this includes an assembly phase that translates the mnemonics into binary microinstructions. Three main components constitute the microprogrammable machine: the Arithmetic and Logic Unit (ALU) is built using chips from Advanced Micro Devices' Am2900 bit-slice family, the action of the Microprogram Control Unit (MCU) is simulated by software running on an IBM Personal Computer, and a section of the IBM PC's main memory acts as the Control Store (CS) for the system. The ALU is built on a prototyping card that plugs into one of the slots on the IBM PC's motherboard. A hardware simulator program that reproduces the behaviour of the ALU has also been developed. A small assembly language has been developed using the system to test its various functions, and a mini-assembler has been written to facilitate assembly of this language. A group of honours students at Rhodes University tested the microprogram development system; their ideas and suggestions have been tabulated in this report, and some of them have been used to enhance the system's performance. The concept of allowing 'inline' microinstructions in the macroprogram is also investigated in this report, and a method of implementing this is shown.
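    As a minimal sketch of the two-stage flow the abstract describes (a machine description followed by mnemonic-based assembly), the Python fragment below packs invented mnemonics into an invented 16-bit microword layout; none of the field names, widths, or mnemonics come from the report itself.

```python
# Illustrative sketch only: a hypothetical microword layout and mnemonic
# templates, showing a two-stage describe-then-assemble flow.
# Field names, widths and mnemonics are invented for this example.

# Stage 1: describe the target machine.
# Each field is (name, width_in_bits), listed from most- to least-significant.
FIELDS = [("alu_op", 4), ("src", 3), ("dst", 3), ("next_addr", 6)]

# Mnemonic-based templates: a mnemonic maps to default field settings.
TEMPLATES = {
    "ADD": {"alu_op": 0b0011},
    "PASS": {"alu_op": 0b0000},
}

def assemble(mnemonic, **operands):
    """Stage 2: translate one mnemonic plus operands into a binary microinstruction."""
    settings = dict(TEMPLATES[mnemonic])
    settings.update(operands)
    word = 0
    for name, width in FIELDS:
        value = settings.get(name, 0)
        if value >= (1 << width):
            raise ValueError(f"{name}={value} does not fit in {width} bits")
        word = (word << width) | value
    return word

if __name__ == "__main__":
    # e.g. ADD with source register 2, destination register 5, next address 1
    word = assemble("ADD", src=2, dst=5, next_addr=1)
    print(f"{word:016b}")  # the assembled 16-bit microword, in binary
```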

    ASLP: a list processor for artificial intelligence applications.

    by Cheang Sin Man. Thesis (M.Phil.)--Chinese University of Hong Kong, 1990. Bibliography: leaves 137-140.
    Contents (front matter: abstract, acknowledgements, table of contents):
    Chapter 1, Introduction: Lisp as an AI programming language; assisting list processing with hardware; simulation study; implementation (hardware, software); performance.
    Chapter 2, Lisp and Existing Lisp Machines: Lisp and its internal structure (the list structure in Lisp, data types in Lisp, Lisp functions, storage management of Lisp); existing Lisp machines (types of AI architecture: language-based architecture, knowledge-based architecture, semantic networks; Lisp machines and solving problems of Lisp; classes of Lisp machines: two M Lisp machine examples, a class P machine example, a class S machine example, the best class for Lisp); execution time analysis of a Lisp system (CPU time statistics, statistics analysis).
    Chapter 3, Overall Architecture of the ASLP: an arithmetical and symbolical list processor; multiple memory modules; large number of registers; multiple buses; special function units.
    Chapter 4, Parallelism in the ASLP: parallel data movement; wide memory modules; parallel memory access (parallelism and pipelining); pipelined micro-instructions (memory access pipelining); performance estimation; parallel execution with the host computer.
    Chapter 5, Simulation Study of the ASLP: why simulation is needed for the ASLP; the structure of the HOCB simulator (activity-oriented simulation for the ASLP); the hardware object declaration method; a register-level simulation of the ASLP (a list function simulation).
    Chapter 6, Design and Implementation of the ASLP: hardware (microprogrammable controller and its instruction cycle, chip selection and allocation); software (instruction passing; microprogram development: microprogram field definition, micro-assembly language, macro-instructions, down-loading of micro-codes, interfacing to C language, a Turbo C function library).
    Chapter 7, Performance Evaluation of the ASLP: micro-functions in the ASLP; functions in the C library.
    Chapter 8, Functional Evaluation of the ASLP: a relational database on the ASLP (data representation, performance of the database system); other potential applications.
    Chapter 9, Future Development of the ASLP: an expert system shell on the ASLP (definition of objects, knowledge representation, knowledge representation in the ASLP, overall structure); reducing the physical size by employing VLSIs.
    Chapter 10, Conclusion.
    Appendices: A, block diagram; B, ASLP circuit diagrams; C, ASLP PC-board layouts; D, micro-control signal assignment; E, micro-field definition; F, macro definition; G, register assignment. Publications. References.

    Minicomputer Concepts

    This thesis presents a study of concepts used in the design of minicomputers currently on the market. The material is drawn from research on sixteen minicomputer systems.

    The Second Hungarian Workshop on Image Analysis: Budapest, June 7-9, 1988.


    Techniques for power system simulation using multiple processors

    The thesis describes development work undertaken to improve the speed of a real-time power system simulator used for the development and testing of control schemes. The solution of large, highly sparse matrices was targeted because this is the most time-consuming part of the current simulator. Major improvements in the speed of the matrix ordering phase of the solution were achieved through the development of a new ordering strategy. This was thoroughly investigated and is shown to provide important additional improvements over standard ordering methods, reducing path length and minimising potential pipeline stalls. Alterations were made to the remainder of the solution process to provide more flexibility in scheduling calculations; this was used to greatly ease the run-time generation of efficient code dedicated to the solution of one matrix structure, and also to reduce memory requirements. A survey of the available microprocessors was performed, which concluded that a special-purpose design could best implement the code generated at run-time, and a design was produced using a microprogrammable floating-point processor matched to the code produced by the earlier work. A method of splitting the matrix solution onto parallel processors was investigated, and two methods of producing network splits were developed and their results compared. The best results from each method were found to agree well, with a predicted three-fold speed-up for the matrix solution of the C.E.G.B. transmission system from the use of six processors; this gain will increase for the whole simulator. A parallel processing topology is also proposed to exploit the partitioned network and produce the necessary structures for the remainder of the solution process.
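    As a point of reference for the ordering phase the abstract targets, here is a minimal sketch of the classic minimum-degree ordering of a symmetric sparsity pattern; it is a standard baseline, not the thesis's new ordering strategy, and the adjacency-set representation and function name are assumptions made for this example.

```python
# Illustrative sketch: minimum-degree elimination ordering of a symmetric
# sparse matrix, represented as an adjacency structure {node: set(neighbours)}.
# This is the conventional baseline, not the ordering strategy from the thesis.

def minimum_degree_order(adj):
    """Return an elimination order that greedily picks the node of least degree."""
    # Work on a copy so the caller's structure is untouched.
    graph = {v: set(nbrs) for v, nbrs in adj.items()}
    order = []
    while graph:
        # Pick the remaining node with the fewest neighbours.
        v = min(graph, key=lambda n: len(graph[n]))
        nbrs = graph.pop(v)
        order.append(v)
        # Eliminating v creates fill-in: its neighbours become pairwise connected.
        for a in nbrs:
            graph[a].discard(v)
            graph[a].update(nbrs - {a})
    return order

if __name__ == "__main__":
    # Small 5-bus example network (symmetric pattern, invented for illustration).
    adj = {
        1: {2, 3},
        2: {1, 3, 4},
        3: {1, 2, 4, 5},
        4: {2, 3},
        5: {3},
    }
    print(minimum_degree_order(adj))  # prints one elimination order, e.g. [5, 1, 2, 3, 4]
```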

    The home computer: the making of a consumer electronic
