
    Automatic Translation of Data Parallel Programs for Heterogeneous Parallelism Through OpenMP Offloading

    Heterogeneous multicores such as GPGPUs are now commonplace in modern computing systems. Although they offer the potential for high performance, programming such systems remains difficult. This paper presents OAO, a compiler-based approach that automatically translates shared-memory OpenMP data-parallel programs to run on heterogeneous multicores through OpenMP offloading directives. Given the large user base of shared-memory OpenMP programs, our approach allows programmers to continue using a familiar single-source programming model while benefiting from heterogeneous performance. OAO introduces a novel runtime optimization scheme that automatically eliminates unnecessary host–device communication, minimizing transfer overhead between the host and the accelerator device. We evaluate OAO by applying it to 23 benchmarks from the PolyBench and Rodinia suites on two distinct GPU platforms. Experimental results show that OAO achieves up to 32× speedup over the original OpenMP version and reduces host–device communication overhead by up to 99% over the hand-translated version.
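    The abstract does not reproduce OAO's generated code; as a hypothetical illustration of the kind of source-to-source rewrite it describes, the sketch below contrasts a shared-memory OpenMP loop with a hand-written equivalent using OpenMP target offloading directives. The explicit map clauses are the host–device transfers that OAO's runtime scheme aims to minimize; all function names and sizes are illustrative, not from the paper.

    ```c
    /* Minimal sketch (not from the paper): the style of translation OAO
     * automates. Names and sizes are illustrative. */
    #define N 4096

    /* Original shared-memory OpenMP data-parallel loop. */
    void vec_add_host(const float *a, const float *b, float *c) {
        #pragma omp parallel for
        for (int i = 0; i < N; i++)
            c[i] = a[i] + b[i];
    }

    /* Equivalent loop offloaded to an accelerator with OpenMP target
     * directives; the map clauses make host-to-device and device-to-host
     * transfers explicit, which is the traffic a runtime like OAO's
     * tries to eliminate when it is redundant. */
    void vec_add_device(const float *a, const float *b, float *c) {
        #pragma omp target teams distribute parallel for \
            map(to: a[0:N], b[0:N]) map(from: c[0:N])
        for (int i = 0; i < N; i++)
            c[i] = a[i] + b[i];
    }
    ```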

    SemCache++: semantics-aware caching for efficient multi-GPU offloading

    No full text