21 research outputs found

    A Scalable Object-Based Architecture

    Although large-scale shared-memory multiprocessors are believed to be easier to program than disjoint-memory multicomputers with similar numbers of processors, they have proven harder to build. To date, the efficiency of software implementations of virtual shared memory (VSM) on multicomputers with even a modest number of processors has not approached that of physical shared memory. VSMs are often implemented by using the local memories of processors as caches for shared data. The overhead of maintaining the consistency of these caches, both in processing time and in bandwidth consumed, is a major contributor to the inefficiency of these implementations. In this paper, we describe an object-based scheme for implementing a VSM on hierarchical multicomputers that is both efficient and scalable. By implementing an object-based style of programming at a low level, we are able to make effective use of bandwidth while supporting modern programming languages. We call our VSM scheme multiple instruction single data (MISD), since processors controlled by separate instruction streams operate, conceptually at least, on a single broadcast stream of shared data. MISD relies on an efficient software coherency scheme and on dedicated hardware to achieve its performance. The hardware required by MISD need not be special purpose; indeed, MISD can be incorporated into existing multicomputer systems, or an MISD machine can be constructed from off-the-shelf components.
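
    The object-granularity idea in the abstract can be made concrete with a short sketch. The C++ below is not the paper's implementation: the names SharedObject, write, and broadcast_update are hypothetical, and the broadcast callback merely stands in for the dedicated hardware that would carry the shared data stream. What it illustrates is that coherence traffic is generated once per object update on release, rather than per page fault as in page-based VSM.

    // Minimal sketch: the object, not the memory page, is the unit of coherence.
    // Names (SharedObject, write, broadcast_update) are illustrative only.
    #include <cstdio>
    #include <functional>
    #include <mutex>

    // A shared object carries its own coherence state, so consistency traffic
    // is proportional to object updates rather than to page faults.
    template <typename T>
    class SharedObject {
    public:
        explicit SharedObject(T initial) : value_(initial) {}

        // Readers see the last released version; no coherence traffic results.
        T read() const {
            std::lock_guard<std::mutex> lock(m_);
            return value_;
        }

        // A writer updates its local copy, then the new value is conceptually
        // broadcast to all processors on the single shared data stream.
        void write(T v, const std::function<void(const T&)>& broadcast_update) {
            std::lock_guard<std::mutex> lock(m_);
            value_ = v;
            broadcast_update(value_);  // one message per object update
        }

    private:
        mutable std::mutex m_;
        T value_;
    };

    int main() {
        SharedObject<int> counter(0);
        // Stand-in for the broadcast hardware: here it just logs the update.
        auto broadcast = [](const int& v) { std::printf("broadcast value %d\n", v); };
        counter.write(42, broadcast);
        std::printf("local read %d\n", counter.read());
        return 0;
    }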

    Using program visualization for tuning parallel-loop scheduling


    Scheduling Divisible Workloads Using the Adaptive Time Factoring Algorithm


    A Performance-Based Parallel Loop Self-scheduling on Grid Computing Environments
