
    Acceleration of a Full-scale Industrial CFD Application with OP2


    PAEAN: portable and scalable runtime support for parallel Haskell dialects

    Over time, several competing approaches to parallel Haskell programming have emerged. Different approaches support parallelism at various scales, ranging from small multicores to massively parallel high-performance computing systems. They also provide varying degrees of control, ranging from completely implicit approaches to ones providing full programmer control. Most current designs assume a shared memory model at the programmer, implementation and hardware levels. This is, however, becoming increasingly divorced from the reality at the hardware level. It also imposes significant unwanted runtime overheads in the form of garbage collection, synchronisation, etc. What is needed is an easy way to abstract over the implementation and hardware levels, while presenting a simple parallelism model to the programmer. The PArallEl shAred Nothing (PAEAN) runtime system design aims to provide a portable and high-level shared-nothing implementation platform for parallel Haskell dialects. It abstracts over major issues such as work distribution and data serialisation, consolidating existing, successful designs into a single framework. It also provides an optional virtual shared-memory programming abstraction for (possibly) shared-nothing parallel machines, such as modern multicore/manycore architectures or cluster/cloud computing systems. It builds on, unifies and extends the existing, well-developed support for shared-memory parallelism provided by the widely used GHC Haskell compiler. This paper summarises the state of the art in shared-nothing parallel Haskell implementations, introduces the PAEAN abstractions, shows how they can be used to implement three distinct parallel Haskell dialects, and demonstrates that good scalability can be obtained on recent parallel machines.

    Efficient shared memory message passing for inter-VM communications

    Thanks to recent advances in virtualization technologies, it is now possible to benefit from the flexibility brought by virtual machines at little cost in terms of CPU performance. On HPC clusters, however, some overheads remain that prevent the widespread use of virtualization. In this article, we tackle the issue of inter-VM MPI communications when VMs are located on the same physical machine. To achieve this, we introduce a virtual device which provides a simple message-passing API to the guest OS. This interface can then be used to implement an efficient MPI library for virtual machines. The use of a virtual device makes our solution easily portable across multiple guest operating systems, since it only requires a small driver to be written for this device. We present an implementation based on Linux, the KVM hypervisor and Qemu as its userspace device emulator. Our implementation achieves near-native performance in terms of MPI latency and bandwidth.
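    The virtual device's interface lives in the guest driver and the MPI library built on top of it, so applications keep using unmodified, standard MPI calls. Below is a minimal sketch of the kind of point-to-point traffic whose latency and bandwidth are at stake here; it uses nothing beyond standard MPI, and any intra-host shared-memory shortcut stays transparent to the program.

        /* Minimal MPI exchange between two ranks. When the two ranks run in
         * VMs co-located on one physical host, traffic like this is what a
         * shared-memory virtual device can carry instead of the network
         * stack; the application code itself does not change. */
        #include <mpi.h>
        #include <stdio.h>

        int main(int argc, char **argv)
        {
            int rank, buf = 0;

            MPI_Init(&argc, &argv);
            MPI_Comm_rank(MPI_COMM_WORLD, &rank);

            if (rank == 0) {
                buf = 42;
                MPI_Send(&buf, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
            } else if (rank == 1) {
                MPI_Recv(&buf, 1, MPI_INT, 0, 0, MPI_COMM_WORLD,
                         MPI_STATUS_IGNORE);
                printf("rank 1 received %d\n", buf);
            }

            MPI_Finalize();
            return 0;
        }

    Launched with one rank in each of two co-located VMs, this is the pattern whose latency and bandwidth the abstract reports as near-native.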

    Generalized Portable SHMEM library for high performance computing

    The Generalized Portable SHMEM library (GPSHMEM) is a portable implementation of the SHMEM library originally released by Cray Research Inc. on the Cray T3D. SHMEM and GPSHMEM realize the distributed shared memory programming model, that is, a shared memory programming model in environments in which memory is physically distributed. It is intended for use on a large variety of hardware platforms, including distributed systems with a network interconnect. The programming interface of GPSHMEM follows that of SHMEM and includes remote memory access operations (one-sided communication) and a set of collective routines such as broadcast, collection and reduction. Programming interfaces for C and Fortran are provided. Because of the minimal assumptions it makes about the underlying hardware, GPSHMEM does not implement the full SHMEM T3D interface. The lack of a few functions is compensated for by a set of extensions, including dynamic memory allocation for Fortran 77. To ease porting of SHMEM-enabled scientific Fortran 77 code from the Cray machines for use with GPSHMEM, a specialized Fortran 77 preprocessor was designed and developed.
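    As an illustration of the SHMEM-style one-sided model described above, the sketch below uses C calls in the modern OpenSHMEM flavour (shmem_init, shmem_malloc, shmem_long_put, shmem_barrier_all). GPSHMEM's own entry points and initialisation differ in detail, so read this as a sketch of the programming style the library supports rather than of its exact API.

        /* One-sided "put" in the SHMEM style: PE 0 writes directly into a
         * symmetric buffer on PE 1, then all PEs synchronise. Illustrative
         * only; GPSHMEM's actual function names and start-up differ from
         * modern OpenSHMEM. */
        #include <shmem.h>
        #include <stdio.h>

        int main(void)
        {
            shmem_init();
            int me = shmem_my_pe();

            /* Symmetric allocation: every PE gets a remotely accessible buffer. */
            long *dest = shmem_malloc(sizeof(long));
            *dest = -1;
            shmem_barrier_all();

            if (me == 0) {
                long value = 123;
                shmem_long_put(dest, &value, 1, 1);   /* write into PE 1's copy */
            }
            shmem_barrier_all();

            if (me == 1)
                printf("PE 1 sees %ld\n", *dest);

            shmem_free(dest);
            shmem_finalize();
            return 0;
        }

    The put needs no matching receive on PE 1; the barriers provide the ordering, which is the main contrast with the two-sided message-passing example earlier in this listing.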