80 research outputs found

    Recurrent ultracomputers are not log n-fast (preprint)


    A Simple Vector Language and its Portable Implementation

    Many explicitly parallel languages have been proposed and implemented, but most such languages are complex and are targeted to specific parallel machines. The goal of this project was to design a very simple, explicitly parallel programming language which could easily be implemented and ported to a wide variety of machines. The result was AJL, a structured language with deterministic vector-oriented parallelism. AJL programs are first compiled into assembly language instructions for an idealized parallel machine; these assembly language instructions are then macro-expanded into C code that implements them for the actual target machine. Finally, the target machine's “native” C compiler is used to generate executable code. Macro definitions for “generic” sequential machines have been implemented; macros for the PASM (PArtitionable SIMD MIMD) prototype parallel computer are under development.
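    The two-stage translation described in the abstract (idealized parallel assembly, then macro expansion into C) can be pictured with a small hedged sketch. The instruction name VADD, the fixed vector length, and the loop-based expansion below are assumptions chosen for illustration, not details of the actual AJL macro package; on a “generic” sequential target, an idealized vector instruction could plausibly be expanded into a plain C loop along these lines.

```c
/* Hypothetical sketch of the macro-expansion stage.  VADD and VEC_LEN are
 * illustrative names, not taken from the actual AJL implementation.  On a
 * "generic" sequential target, an idealized parallel vector-add instruction
 * can be expanded into a plain C loop. */
#include <stdio.h>

#define VEC_LEN 8                       /* assumed fixed vector length   */

/* Expand the idealized instruction  VADD dst, src1, src2
 * into sequential C that adds the vectors element by element. */
#define VADD(dst, src1, src2)                          \
    do {                                               \
        for (int _i = 0; _i < VEC_LEN; _i++)           \
            (dst)[_i] = (src1)[_i] + (src2)[_i];       \
    } while (0)

int main(void)
{
    int a[VEC_LEN] = {1, 2, 3, 4, 5, 6, 7, 8};
    int b[VEC_LEN] = {8, 7, 6, 5, 4, 3, 2, 1};
    int c[VEC_LEN];

    VADD(c, a, b);                      /* every element becomes 9       */

    for (int i = 0; i < VEC_LEN; i++)
        printf("%d ", c[i]);
    printf("\n");
    return 0;
}
```

    On a parallel target, the same macro would instead emit code that distributes the elementwise additions across processing elements, which is what keeps the compiler itself portable.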

    Dynamically allocating sets of fine-grained processors to running computations

    Researchers explore an approach to using general-purpose parallel computers which involves mapping hardware resources onto computations instead of mapping computations onto hardware. Problems such as processor allocation, task scheduling, and load balancing, which have traditionally proven challenging, change significantly under this approach and may become amenable to new attacks. Researchers describe the implementation of this approach used by the FFP Machine, whose computation and communication resources are repeatedly partitioned into disjoint groups that match the needs of the available tasks from moment to moment. Several consequences of this system are examined.
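    As a rough illustration of that partitioning idea (not the FFP Machine's actual hardware mechanism), the following hedged C sketch carves a linear array of fine-grained processors into disjoint, contiguous groups sized to the current tasks; the names task_t, assign_segments, and NUM_PROCESSORS are hypothetical.

```c
/* Illustrative sketch only: the real FFP Machine partitions its resources in
 * hardware.  The structures below merely show the idea of repeatedly carving
 * a linear array of fine-grained processors into disjoint groups that match
 * the needs of the tasks available at the moment. */
#include <stdio.h>

#define NUM_PROCESSORS 64

typedef struct {
    int need;      /* number of processors this task requires   */
    int first;     /* index of first processor assigned, or -1  */
} task_t;

/* Walk the processor array left to right, giving each task a contiguous,
 * disjoint segment; tasks that do not fit wait for the next partitioning. */
static void assign_segments(task_t *tasks, int ntasks)
{
    int next_free = 0;
    for (int t = 0; t < ntasks; t++) {
        if (next_free + tasks[t].need <= NUM_PROCESSORS) {
            tasks[t].first = next_free;
            next_free += tasks[t].need;
        } else {
            tasks[t].first = -1;   /* deferred until resources free up */
        }
    }
}

int main(void)
{
    task_t tasks[] = { { 10, -1 }, { 30, -1 }, { 40, -1 } };
    assign_segments(tasks, 3);
    for (int t = 0; t < 3; t++)
        printf("task %d: need %d, first processor %d\n",
               t, tasks[t].need, tasks[t].first);
    return 0;
}
```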

    Parallel computing in combinatorial optimization


    Parallel algorithms for boundary value problems

    A general approach to solving boundary value problems numerically in a parallel environment is discussed. The basic algorithm consists of two steps: a local step, in which all P available processors work in parallel, and a global step, in which one processor solves a tridiagonal linear system of order P. The main advantages of this approach are twofold. First, the approach is very flexible, especially in the local step, so the algorithm can be used with any number of processors and with any SIMD or MIMD machine. Second, the communication complexity is very small, so the algorithm can be used just as easily with shared-memory machines. Several examples of using this strategy are discussed.
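    A minimal sketch of the global step, assuming the parallel local step has already produced the coefficients of the order-P tridiagonal system. The Thomas algorithm shown here is one standard way a single processor could solve such a system; it is not necessarily the solver used in the paper, and the concrete coefficients are illustrative only.

```c
/* Hedged sketch of the "global step": one processor solving a tridiagonal
 * linear system of order P with the Thomas algorithm.  The arrays a (sub-
 * diagonal), b (diagonal), c (super-diagonal) and d (right-hand side) are
 * assumed to have been filled in by the parallel local step; the numbers in
 * main() are illustrative only. */
#include <stdio.h>

#define P 4   /* number of processors, hence order of the global system */

static void solve_tridiagonal(double a[], double b[], double c[],
                              double d[], double x[], int n)
{
    /* Forward elimination. */
    for (int i = 1; i < n; i++) {
        double m = a[i] / b[i - 1];
        b[i] -= m * c[i - 1];
        d[i] -= m * d[i - 1];
    }
    /* Back substitution. */
    x[n - 1] = d[n - 1] / b[n - 1];
    for (int i = n - 2; i >= 0; i--)
        x[i] = (d[i] - c[i] * x[i + 1]) / b[i];
}

int main(void)
{
    double a[P] = {  0, -1, -1, -1 };   /* sub-diagonal (a[0] unused)     */
    double b[P] = {  2,  2,  2,  2 };   /* main diagonal                  */
    double c[P] = { -1, -1, -1,  0 };   /* super-diagonal (c[P-1] unused) */
    double d[P] = {  1,  0,  0,  1 };   /* right-hand side                */
    double x[P];

    solve_tridiagonal(a, b, c, d, x, P);
    for (int i = 0; i < P; i++)
        printf("x[%d] = %f\n", i, x[i]);   /* this example yields all 1s  */
    return 0;
}
```

    Because the global system has only P equations, its sequential solution costs O(P) work, which is why the serial bottleneck and the communication between the two steps stay small.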