12 research outputs found

    Concurrent Scheme

    This paper describes an evolution of the Scheme language to support parallelism with tight coupling of control and data. Mechanisms are presented to address the difficult and related problems of mutual exclusion and data sharing that arise in concurrent language systems. The mechanisms are tailored to preserve Scheme semantics as much as possible while allowing for efficient implementation. Completed prototype implementations of the resulting language are described. A third implementation is underway for the Mayfly, a distributed-memory parallel processor with a twisted-torus communication topology under development at the Hewlett-Packard Research Laboratories. The language model is particularly well suited to the Mayfly processor, as will be shown.
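    The paper's concrete constructs are not given in this abstract; purely as an illustration of the mutual-exclusion and data-sharing problem it refers to, the Python sketch below serialises updates to a shared binding with a lock. The names (SharedCell, swap, bump_many) are illustrative, not taken from the paper.

```python
import threading

# Minimal sketch, assuming nothing about the paper's actual mechanisms:
# a shared mutable binding whose updates are serialised by a lock.
class SharedCell:
    def __init__(self, value):
        self._value = value
        self._lock = threading.Lock()

    def swap(self, fn):
        # Atomically replace the stored value with fn(value); return the new value.
        with self._lock:
            self._value = fn(self._value)
            return self._value

def bump_many(cell, times):
    for _ in range(times):
        cell.swap(lambda v: v + 1)

counter = SharedCell(0)
workers = [threading.Thread(target=bump_many, args=(counter, 1000)) for _ in range(4)]
for w in workers:
    w.start()
for w in workers:
    w.join()
print(counter.swap(lambda v: v))  # 4000: no updates lost under contention
```

    Without the lock, concurrent increments could interleave and be lost, which is the kind of hazard the paper's mechanisms aim to rule out while preserving Scheme semantics.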

    MARS: a RISC-based architecture for Lisp

    A RISC-based chip set architecture for Lisp is presented in this paper. The architecture contains an instruction fetch unit (IFU) and three processing units: an integer processing unit (IPU), a floating-point processing unit (FPU), and a list processing unit (LPU). The IFU feeds instructions to the processing units and supports fast procedure call/return and branches; the IPU and FPU execute operations on different data types; and the LPU handles the Lisp runtime environment, dynamic type checking, and fast list access. In this architecture, the critical path of complex register-file access and ALU operation is distributed across the LPU and IPU, and a list can be traced quickly by the non-delayed car and cdr instructions of the LPU. Performance simulation shows that this architecture would be about 6.2 times faster than SPUR and about 2.2 times faster than MIPS-X.
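    As a rough software model of the behaviour the LPU implements in hardware (dynamic type checking folded into list access), the Python sketch below defines type-checked car and cdr over cons cells. The structure is illustrative only and not taken from the MARS design.

```python
from dataclasses import dataclass

# Illustrative model only; MARS itself is a hardware chip set.
@dataclass
class Cons:
    car: object
    cdr: object

def car(cell):
    # The LPU folds this dynamic type check into the access itself,
    # which is why its car/cdr instructions need no delay slot.
    if not isinstance(cell, Cons):
        raise TypeError("car: argument is not a cons cell")
    return cell.car

def cdr(cell):
    if not isinstance(cell, Cons):
        raise TypeError("cdr: argument is not a cons cell")
    return cell.cdr

def trace_list(lst):
    # Walking a list by repeated cdr is the access pattern the LPU accelerates.
    items = []
    while lst is not None:
        items.append(car(lst))
        lst = cdr(lst)
    return items

print(trace_list(Cons(1, Cons(2, Cons(3, None)))))  # [1, 2, 3]
```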

    Three Highly Parallel Computer Architectures and Their Suitability for Three Representative Artificial Intelligence Problems

    Virtually all current Artificial Intelligence (AI) applications are designed to run on sequential (von Neumann) computer architectures. As a result, current systems do not scale up: as knowledge is added to these systems, a point is reached where their performance quickly degrades. The performance of a von Neumann machine is limited by the bandwidth between memory and processor (the von Neumann bottleneck). The bottleneck is avoided by distributing the processing power across the memory of the computer; in this scheme the memory becomes the processor (a "smart memory"). This paper highlights the relationship between three representative AI application domains, namely knowledge representation, rule-based expert systems, and vision, and their parallel hardware realizations. Three machines, covering a wide range of fundamental properties of parallel processors, namely module granularity, concurrency control, and communication geometry, are reviewed: the Connection Machine (a fine-grained SIMD hypercube), DADO (a medium-grained MIMD/SIMD/MSIMD tree machine), and the Butterfly (a coarse-grained MIMD machine built around a butterfly switch).
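    As a loose illustration of the granularity distinction the survey draws, and not a model of any of the three machines, the Python sketch below contrasts a SIMD-style step (one operation applied across all data elements) with MIMD-style tasks (independent code running on independent data). The task names are placeholders.

```python
from concurrent.futures import ThreadPoolExecutor

data = list(range(8))

# SIMD flavour: a single operation applied to every data element in lockstep.
doubled = [x * 2 for x in data]

# MIMD flavour: unrelated tasks proceed concurrently, each on its own data.
def match_rules(facts):     # stand-in for a rule-based expert-system step
    return [f for f in facts if f % 2 == 0]

def smooth(pixels):         # stand-in for a low-level vision step
    return [min(p, 4) for p in pixels]

with ThreadPoolExecutor() as pool:
    tasks = [pool.submit(match_rules, data), pool.submit(smooth, data)]
    print(doubled, [t.result() for t in tasks])
```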

    Programming Languages for Distributed Computing Systems

    When distributed systems first appeared, they were programmed in traditional sequential languages, usually with the addition of a few library procedures for sending and receiving messages. As distributed applications became more commonplace and more sophisticated, this ad hoc approach became less satisfactory. Researchers all over the world began designing new programming languages specifically for implementing distributed applications. These languages and their history, their underlying principles, their design, and their use are the subject of this paper. We begin by giving our view of what a distributed system is, illustrating with examples to avoid confusion on this important and controversial point. We then describe the three main characteristics that distinguish distributed programming languages from traditional sequential languages, namely, how they deal with parallelism, communication, and partial failures. Finally, we discuss 15 representative distributed languages to give the flavor of each. These examples include languages based on message passing, rendezvous, remote procedure call, objects, and atomic transactions, as well as functional languages, logic languages, and distributed data structure languages. The paper concludes with a comprehensive bibliography listing over 200 papers on nearly 100 distributed programming languages.
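    None of the surveyed languages is reproduced here; the Python sketch below merely illustrates the three distinguishing characteristics the paper identifies: parallelism (a second thread of control), communication (message passing over queues), and partial failure (a receive timeout). All names are illustrative.

```python
import queue
import threading

def server(inbox, outbox):
    # A long-lived "process" that replies to each request it receives.
    while True:
        msg = inbox.get()
        if msg == "stop":
            break
        outbox.put(("reply", msg.upper()))

to_server, from_server = queue.Queue(), queue.Queue()
threading.Thread(target=server, args=(to_server, from_server), daemon=True).start()

to_server.put("hello")
try:
    # Partial failure: the reply may never arrive if the remote side is down,
    # so the receive carries a timeout and the caller decides what to do next.
    print(from_server.get(timeout=1.0))   # ('reply', 'HELLO')
except queue.Empty:
    print("no reply: treating the remote process as failed")
to_server.put("stop")
```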

    Engineering handbook

    1995 handbook for the faculty of Engineering.

    First Annual Workshop on Space Operations Automation and Robotics (SOAR 87)

    Several topics relating to automation and robotics technology are discussed. Automation of checkout, ground support, and logistics; automated software development; man-machine interfaces; neural networks; systems engineering and distributed/parallel processing architectures; and artificial intelligence/expert systems are among the topics covered.

    Engineering handbook

    1996 handbook for the faculty of Engineering.

    1989-1990 Louisiana Tech University Catalog

    The Louisiana Tech University Catalog includes announcements and course descriptions for courses offered at Louisiana Tech University for the academic year 1989-1990.

    A parallel process model and architecture for a Pure Logic Language

    The research presented in this thesis has been concerned with the use of parallel logic systems for the implementation of large knowledge bases. The thesis describes proposals for a parallel logic system based on a new logic programming language, the Pure Logic Language. The work has involved the definition and implementation of a new logic interpreter which incorporates the parallel execution of independent OR processes, and the specification and design of an appropriate non-shared-memory multiprocessor architecture. The Pure Logic Language, which is under development at ICL, Bracknell, differs from Prolog in its expressive powers and implementation. The resolution-based Prolog approach is replaced by a rewrite-rule technique which successively transforms expressions according to logical axioms and user-defined rules until no further rewrites are possible. A review of related work in the field of parallel logic language systems is presented. The thesis describes the different forms of parallelism within logic languages and discusses the decision to concentrate on the efficient implementation of OR parallelism. The parallel process model for the Pure Logic Language uses the same execution technique of rule rewriting but has been adapted to implement the creation of independent OR processes and the required message-passing operations. The parallelism in the system is implemented automatically and, unlike many other parallel logic systems, there are no explicit program annotations for the control of parallel execution. The spawning of processes involves computational overheads within the interpreter; these have been measured and results are presented. The functional requirements of a multiprocessor architecture are discussed: shared-memory machines are not scalable to large numbers of processing elements, but, with no shared memory, data needed by offspring processors must be copied from the parent or else recomputed. The thesis describes an optimised format for the copying of data between processors. Because a one-to-many communication pattern exists between parent and offspring processors, a broadcast architecture is indicated. The development of a system based on the broadcasting of data packets represents a new approach to the parallel execution of logic languages and has led to the design of a novel bus-based multiprocessor architecture. A simulation of this multiprocessor architecture has been produced and the parallel logic interpreter mapped onto it; this provides data on the predicted performance of the system. A detailed analysis of these results is presented and the implications for future developments to the proposed system are discussed.
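    The thesis's interpreter is not reproduced in this abstract; as a toy illustration of OR-parallel rewriting, the Python sketch below treats each alternative produced by a rewrite as an independent branch evaluated in its own task. The rule base and all names are hypothetical.

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical rule base: each goal rewrites to a list of alternative goals
# (its OR branches); goals with no entry are irreducible.
RULES = {
    "ancestor(alice, X)": ["parent(alice, X)", "grandparent(alice, X)"],
    "parent(alice, X)": ["X = bob", "X = carol"],
    "grandparent(alice, X)": ["parent(bob, X)"],
    "parent(bob, X)": ["X = dave"],
}

def rewrite(expr, pool):
    alternatives = RULES.get(expr)
    if alternatives is None:
        return [expr]                 # no further rewrites: a result
    # Each OR alternative is explored in its own task with no shared state;
    # in the thesis's model such alternatives become independent OR processes
    # on separate processors fed by copies of the parent's data.
    branches = [pool.submit(rewrite, alt, pool) for alt in alternatives]
    return [result for b in branches for result in b.result()]

with ThreadPoolExecutor(max_workers=8) as pool:
    print(rewrite("ancestor(alice, X)", pool))  # ['X = bob', 'X = carol', 'X = dave']
```

    Because the branches share nothing, the only cost of distributing them is copying the parent's data to each offspring, which is the overhead the thesis measures and the motivation it gives for a broadcast-based architecture.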