691 research outputs found

    A Machine-Independent APL Interpreter

    Available in IEEE Xplore digital library. The problem of writing machine-independent APL interpreters is solved by means of a systems programming approach making use of an intermediate-level language specially designed for that purpose. This paper describes the language, as well as the procedure used to build universal interpreters. Three compilers that translate this language for three different machines have been written so far, and an APL interpreter has been finished.

    Feasibility study of an Integrated Program for Aerospace vehicle Design (IPAD). Volume 4: IPAD system design

    The computing system design of IPAD is described and the requirements which form the basis for the system design are discussed. The system is presented in terms of a functional design description and technical design specifications. The functional design specifications give a detailed description of the system design using top-down structured programming methodology. Human behavioral characteristics, which specify the system design at the user interface, security considerations, and standards for system design, implementation, and maintenance are also part of the technical design specifications. Detailed specifications are presented for the two most common computing system types in use by the major aerospace companies which could support the IPAD system design. The report of a study investigating migration of IPAD software between the two candidate 3rd-generation host computing systems, and from these systems to a 4th-generation system, is included.

    Cracking the 500-Language Problem


    Analysis and improvement of a multi-pass compiler for a pipeline architecture

    In this thesis a parallel environment for the execution of a multi-pass Pascal compiler is considered. Some possible and appropriate ways to speed up each pass of the parallelized compiler are investigated. In addition, a new approach, using the concepts of software science, is explored for obtaining gross performance characteristics of a multi-pass compiler. A pipeline architecture is used for the parallel compilation. The performance characteristics of the pipelined compiler are determined by a trace-driven simulation; the actions in the multi-processor system are synchronized by an event-driven simulation of the pipeline system. The pipelined compiler and possible improvements are analyzed in terms of the location of the bottleneck, queue size, overhead factor, and partition policy. The lexical analysis phase is found to be the initial bottleneck. The improvement of this phase and its effects on the other phases are presented. Possible methods for improving the non-lexical analysis phases are also investigated, based on a study of the data structures and operations of these phases. To obtain gross performance characteristics of a multi-pass compiler, an analysis based only on the intermediate code files is performed. One of the key concepts in Halstead's software science, called the language level, is applied to this analysis. The experimental results and statistical verification show a strong correlation between the stand-alone execution time and the language level.

    A new parallelisation technique for heterogeneous CPUs

    Parallelisation has moved into mainstream compilers in recent years, and the demand for parallelising tools that can do a better job of automatic parallelisation is higher than ever. During the last decade considerable attention has been focused on developing programming tools that support both explicit and implicit parallelism, to keep up with the power of the new multi-core technology. Yet success in developing automatically parallelising compilers has been limited, mainly due to the complexity of the analysis required to exploit available parallelism and to manage other parallelisation concerns such as data partitioning, alignment and synchronisation. This dissertation investigates developing a programming tool that automatically parallelises operations on large data structures on a heterogeneous architecture, and whether a high-level programming language compiler can use this tool to exploit implicit parallelism and realise the performance potential of modern multi-core technology. The work involved the development of a fully automatic parallelisation tool, called VSM, that completely hides the underlying details of general-purpose heterogeneous architectures. The VSM implementation provides direct and simple access for users to parallelise array operations on the Cell's accelerators without the need for any annotations or process directives. This work also involved extending the Glasgow Vector Pascal compiler to work with the VSM implementation as a single compiler system. The resulting compiler system, called VP-Cell, takes a single source program and parallelises array expressions automatically. Several experiments conducted with Vector Pascal benchmarks show the validity of the VSM approach: the VP-Cell system achieved significant runtime improvement on one accelerator compared to the master processor's performance, and near-linear speedups for code run across the Cell's accelerators. Though VSM was designed mainly for building parallelising compilers, it also showed considerable performance running C code over the Cell's accelerators.

    Automatic parallelization of array-oriented programs for a multi-core machine

    We present work on automatic parallelization of array-oriented programs for multi-core machines. Source programs written in standard APL are translated by a parallelizing APL-to-C compiler into parallelized C code, i.e. C mixed with OpenMP directives. We describe techniques such as virtual operations and data partitioning used to effectively exploit parallelism structured around array primitives. We present runtime performance data showing the speedup of the resulting parallelized code, using different numbers of threads and different problem sizes, on a 4-core machine, for several examples.

    Mission and data operations IBM 360 user's guide

    The Mission and Data Operations (M and DO) computer systems are introduced and supplemented. The hardware and software status is discussed, along with standard processors and user libraries. Data management techniques are presented, as well as machine independence, debugging facilities, and overlay considerations.

    Automatic Generation of Data Conversion Programs Using a Data Description Language (DDL), Volumes 1 and 2

    The report describes a DDL/DML Processor and a methodology for automatically generating data conversion programs. The Processor accepts as input descriptions of the source and target files in a Data Description Language (DDL) and a Data Manipulation Language (DML). It produces as output a conversion program in PL/I capable of converting the source file and producing the target file.

    A computer-aided design for digital filter implementation


    Retrospective on high-level language computer architecture

    High-level language computers (HLLCs) have attracted interest in the architecture and programming community during the last 15 years; proposals have been made for machines directed towards the execution of various languages such as ALGOL [1,2], APL [3,4,5], and BASIC [6].