JACC: An OpenACC Runtime Framework with Kernel-Level and Multi-GPU Parallelization
The rapid development of computing technology has paved the way for
directive-based programming models to take a principal role in maintaining
the software portability of performance-critical applications. Such models
promise computational acceleration on multiple architectures at minimal
engineering cost, since programmers are only required to add metadata to
sequential code. Optimizing for the best possible efficiency, however, is
often challenging. Directives inserted by the programmer can have side
effects that restrict the compiler optimizations available, which can result
in performance degradation. This is exacerbated when targeting multi-GPU
systems, as pragmas do not automatically adapt to such systems and instead
require expensive, time-consuming code adjustment by programmers.
This paper introduces JACC, an OpenACC runtime framework that enables the
dynamic extension of OpenACC programs by serving as a transparent layer between
the program and the compiler. We add a versatile code-translation method for
multi-device utilization, by which manually optimized applications can be
distributed automatically while keeping the original code structure and
parallelism. We show nearly linear scaling of kernel execution in some cases
on NVIDIA V100 GPUs. When multiple GPUs are used adaptively, the resulting
performance improvements amortize the latency of GPU-to-GPU
communication.
Comment: Extended version of a paper to appear in: Proceedings of the 28th
IEEE International Conference on High Performance Computing, Data, and
Analytics (HiPC), December 17-18, 2021
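
As a concrete illustration of the programming model discussed above, the
following is a minimal OpenACC kernel in C. The saxpy routine and its data
clauses are our own sketch, not code from JACC, but they show exactly the kind
of annotated region a runtime layer like the one described could intercept and
redistribute across devices.

/* Minimal OpenACC sketch (ours, not JACC code): the pragma is the only
 * change to the sequential loop, and the data clauses spell out the
 * host-to-device traffic a transparent runtime layer could manage. */
void saxpy(int n, float a, const float *restrict x, float *restrict y)
{
    #pragma acc parallel loop copyin(x[0:n]) copy(y[0:n])
    for (int i = 0; i < n; ++i)
        y[i] = a * x[i] + y[i];
}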
Performance and portability of accelerated lattice Boltzmann applications with OpenACC
An increasingly large number of HPC systems rely on heterogeneous architectures combining traditional multi-core CPUs with power-efficient accelerators. Designing efficient applications for these systems has been troublesome in the past, as accelerators could usually be programmed only in device-specific languages, threatening maintainability, portability, and correctness. Several new programming environments try to tackle this problem. Among them, OpenACC offers a high-level approach based on compiler directives that mark regions of existing C, C++, or Fortran code to run on accelerators. This approach directly addresses code portability, leaving support for each accelerator to the compilers, but one has to carefully assess the relative costs of portable approaches versus computing efficiency. In this paper, we address precisely this issue, using a massively parallel lattice Boltzmann algorithm as a test bench. We first describe our multi-node implementation and optimization of the algorithm, using OpenACC and MPI. We then benchmark the code on a variety of processors, including traditional CPUs and GPUs, and make accurate performance comparisons with other GPU implementations of the same algorithm using CUDA and OpenCL. We also assess the performance impact associated with portable programming, and the actual portability and performance portability of OpenACC-based applications across several state-of-the-art architectures.
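
To make the multi-node structure concrete, here is a hedged sketch of an
OpenACC-plus-MPI halo exchange for a 1-D domain decomposition. The array
layout, function name, and halo width are illustrative assumptions, not the
authors' implementation.

#include <mpi.h>

/* Illustrative halo exchange (not the paper's code). The array f holds
 * nx interior sites plus 'halo' ghost cells at each end, and stays
 * resident on the GPU via an enclosing '#pragma acc data' region. */
static void exchange_halos(double *f, int nx, int halo,
                           int left, int right, MPI_Comm comm)
{
    /* bring the boundary slices back to the host before communicating */
    #pragma acc update self(f[halo:halo], f[nx:halo])
    MPI_Sendrecv(&f[halo],    halo, MPI_DOUBLE, left,  0,
                 &f[nx+halo], halo, MPI_DOUBLE, right, 0,
                 comm, MPI_STATUS_IGNORE);
    MPI_Sendrecv(&f[nx],      halo, MPI_DOUBLE, right, 1,
                 &f[0],       halo, MPI_DOUBLE, left,  1,
                 comm, MPI_STATUS_IGNORE);
    /* push the freshly received ghost cells back to the device */
    #pragma acc update device(f[0:halo], f[nx+halo:halo])
}

With a CUDA-aware MPI library, the two update directives could be dropped and
device pointers passed to MPI directly inside a '#pragma acc host_data
use_device(f)' region, avoiding the host round trip.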
Design and optimization of a portable LQCD Monte Carlo code using OpenACC
The present panorama of HPC architectures is extremely heterogeneous, ranging
from traditional multi-core CPU processors, supporting a wide class of
applications but delivering moderate computing performance, to many-core GPUs,
exploiting aggressive data-parallelism and delivering higher performance for
streaming computing applications. In this scenario, code portability (and
performance portability) becomes necessary for easy maintainability of
applications; this is very relevant in scientific computing, where code changes
are very frequent and keeping different code versions aligned is tedious and
error-prone. In this work we present the design and optimization of a
state-of-the-art production-level LQCD Monte Carlo application, using the
directive-based OpenACC programming model. OpenACC abstracts parallel
programming to a descriptive level, relieving programmers from specifying how
codes should be mapped onto the target architecture. We describe the
implementation of a code fully written in OpenACC, and show that we are able to
target several different architectures, including state-of-the-art traditional
CPUs and GPUs, with the same code. We also measure performance, evaluating the
computing efficiency of our OpenACC code on several architectures, comparing
with GPU-specific implementations and showing that a good level of
performance portability can be reached.
Comment: 26 pages, 2 png figures, preprint of an article submitted for
consideration in International Journal of Modern Physics
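
As a hedged sketch of the descriptive style the abstract refers to (our
illustration, not the paper's LQCD code), an enclosing data region can keep a
field resident on the accelerator across many update sweeps, so that only
directives, never explicit kernels, encode the mapping to the target
architecture.

/* Our illustration, not the paper's code: 'u' stands in for a field
 * kept device-resident across sweeps by the enclosing data region. */
void run_sweeps(double *u, int vol, int nsweeps)
{
    #pragma acc data copy(u[0:vol])
    {
        for (int s = 0; s < nsweeps; ++s) {
            /* 'present' asserts the data is already on the device,
               so no per-sweep host<->device transfers occur */
            #pragma acc parallel loop present(u[0:vol])
            for (int i = 0; i < vol; ++i)
                u[i] = 0.9 * u[i] + 0.1;  /* placeholder for the real update */
        }
    }
}

On a multi-core CPU target the same directives compile to threaded loops,
which is what makes single-source portability across CPUs and GPUs possible.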
Utilizing GPUs to Accelerate Turbomachinery CFD Codes
GPU computing has established itself as a way to accelerate parallel codes in the high-performance computing world. This work focuses on speeding up APNASA, a legacy CFD code used at NASA Glenn Research Center, while also drawing conclusions about the nature of GPU computing and the requirements that make GPGPU worthwhile for legacy codes. Rewriting and restructuring of the source code was avoided to limit the introduction of new bugs. The code was profiled and investigated for parallelization potential, then OpenACC directives were used to mark the parallel parts of the code. The use of OpenACC directives was not able to reduce the runtime of APNASA on either the NVIDIA Tesla discrete graphics card or the AMD accelerated processing unit. Additionally, it was found that in order to justify the use of GPGPU, the amount of parallel work done within a kernel would have to greatly exceed the work done by any one portion of the APNASA code. It was determined that for an application like APNASA to be accelerated on the GPU, it should not be modular in nature, and the parallel portions must account for a large share of the code's computation time.
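
The conclusion above can be made concrete with a hedged sketch (not APNASA
code): when a kernel does only trivial work per element, the data transfers
and launch overhead implied by the directive dominate, and offloading loses to
the plain host loop.

/* Sketch of the overhead trade-off described above (not APNASA code).
 * For small or cheap loops, copy(a[0:n]) plus the kernel launch can
 * cost more than simply running the loop on the CPU. */
void tiny_kernel(float *a, int n)
{
    #pragma acc parallel loop copy(a[0:n])
    for (int i = 0; i < n; ++i)
        a[i] += 1.0f;   /* one flop per 8 bytes moved over the bus */
}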
Evaluation of Image Pixels Similarity Measurement Algorithm Accelerated on GPU with OpenACC
OpenACC is a directive-based parallel programming library that allows for easy acceleration of existing C, C++, and Fortran applications with minimal code modifications. By annotating the bottleneck-causing sections of the code with OpenACC directives, acceleration is simplified, leading to high portability of performance across different target Graphics Processing Units (GPUs). In this work, the portability of a parallelizable chi-square-based pixel similarity measurement algorithm has been evaluated on two GPUs, one consumer-grade and one professional-grade. To the best of our knowledge, this is the first performance evaluation report that utilizes the OpenACC optimization clauses (collapse and tile) on different GPUs to process both a light workload (a low-resolution image of 581x429 pixels) and a heavy workload (a high-resolution image of 4500x3500 pixels) to demonstrate the effectiveness and high portability of OpenACC.
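
For concreteness, the following is a hedged sketch of a chi-square
pixel-similarity kernel using the collapse clause evaluated in the paper; the
image layout, epsilon guard, and function name are our assumptions rather than
the authors' implementation.

/* Our sketch (not the paper's code): chi-square distance between two
 * grayscale images, with collapse(2) fusing the row/column loops into
 * a single parallel iteration space. */
float chi_square(const float *restrict a, const float *restrict b,
                 int h, int w)
{
    float sum = 0.0f;
    #pragma acc parallel loop collapse(2) reduction(+:sum) \
                copyin(a[0:h*w], b[0:h*w])
    for (int y = 0; y < h; ++y)
        for (int x = 0; x < w; ++x) {
            float d = a[y*w + x] - b[y*w + x];
            sum += (d * d) / (a[y*w + x] + b[y*w + x] + 1e-6f);
        }
    return sum;
}

The tile clause offers an alternative mapping, e.g. '#pragma acc parallel loop
tile(32,32)', which groups iterations into 32x32 blocks and can improve
locality on some GPUs.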
ACC Saturator: Automatic Kernel Optimization for Directive-Based GPU Code
Automatic code optimization is a complex process that typically involves the
application of multiple discrete algorithms that modify the program structure
irreversibly. The design of these algorithms is often monolithic, and, lacking
cooperation between passes, similar analyses have to be implemented
repeatedly. To address this issue, modern optimization techniques such as
equality saturation allow exhaustive term rewriting at various levels of
input, thereby simplifying compiler design.
In this paper, we apply equality saturation to optimize the sequential code
used in directive-based GPU programming. Our approach simultaneously achieves
less computation, fewer memory accesses, and high memory throughput. Our
fully automated framework constructs single-assignment forms of its inputs so
that they can be rewritten exhaustively while preserving dependencies, and
then extracts the optimal candidates. Through practical benchmarks, we
demonstrate significant performance improvements with several compilers.
Furthermore, we highlight the advantages of computational reordering and
emphasize the significance of memory-access order for modern GPUs.
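
To illustrate the kind of term rewriting involved (our example, not ACC
Saturator's actual output), equality saturation can explore all equivalent
forms of an expression and extract one that performs fewer operations and
memory accesses:

/* Our illustration, not the tool's output: the original statement
 * evaluates the subterm a[i]*x[i] twice; an equivalent form found by
 * exhaustive rewriting computes it once and factors the sum. */
void axpby(int n, const float *restrict a, const float *restrict b,
           const float *restrict x, float *restrict y)
{
    #pragma acc parallel loop copyin(a[0:n], b[0:n], x[0:n]) copyout(y[0:n])
    for (int i = 0; i < n; ++i) {
        /* before: y[i] = a[i]*x[i] + a[i]*x[i]*b[i]; */
        float t = a[i] * x[i];
        y[i] = t * (1.0f + b[i]);   /* after: one fewer multiply and load */
    }
}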