MG3MConv: Multi-Grained Matrix-Multiplication-Mapping Convolution Algorithm toward the SW26010 Processor
As a core operation of artificial intelligence applications, convolution has
become a hot topic in high-performance computing. With the rapid adoption of
the emerging SW26010 processor for artificial intelligence workloads, there is
an urgent need for high-performance convolution algorithms on this processor.
However, current support for convolution on the SW26010 remains rudimentary:
existing studies achieve good peak runtime performance but lack adaptability
to diverse convolution scenarios. To improve convolution support on the
SW26010, we propose a multi-grained matrix-multiplication-mapping convolution
algorithm, MG3MConv, which targets the architectural features of the SW26010.
MG3MConv supports diversified mapping schemes of convolution tasks based on
the concept of the thread block proposed in this paper. All
architecture-oriented optimization methods are carefully designed at four
levels to fully exploit the hardware efficiency of the SW26010. Experiments
show that the hardware efficiency of MG3MConv reaches up to 84.78%, which is
1.75 times that of cuDNN on an NVIDIA K80m GPU. Moreover, MG3MConv
outperforms cuDNN in most convolution scenarios. We also use six
representative CNNs as real-world cases; the hardware efficiency of MG3MConv
reaches up to 67.04% on the VGG network model, 1.37 times and 1.96 times that
of cuDNN and swDNN, respectively.
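The matrix-multiplication mapping at the heart of algorithms like MG3MConv is, in its simplest grain, the classic im2col transformation: every receptive field of the input is unrolled into a column so that the whole convolution becomes one GEMM. A minimal NumPy sketch of that general idea (the function name, shapes, and stride-1/no-padding restriction here are illustrative assumptions, not details from the paper):

```python
import numpy as np

def im2col_conv2d(x, w):
    """Map a 2-D convolution onto a single matrix multiplication.

    x: input of shape (C_in, H, W); w: filters of shape (C_out, C_in, K, K).
    Stride 1, no padding -- the simplest mapping grain.
    """
    c_in, h, wdt = x.shape
    c_out, _, k, _ = w.shape
    oh, ow = h - k + 1, wdt - k + 1

    # Unroll every KxK receptive field into one column of the patch matrix.
    cols = np.empty((c_in * k * k, oh * ow))
    idx = 0
    for i in range(oh):
        for j in range(ow):
            cols[:, idx] = x[:, i:i + k, j:j + k].ravel()
            idx += 1

    # Convolution is now one GEMM: (C_out, C_in*K*K) @ (C_in*K*K, OH*OW).
    out = w.reshape(c_out, -1) @ cols
    return out.reshape(c_out, oh, ow)
```

Production implementations on SW26010 or GPUs tile this GEMM across cores and memory-hierarchy levels; the multi-grained aspect of MG3MConv refers to choosing among different mappings of convolution work to thread blocks, which this sketch does not model.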
Towards Exascale Computation for Turbomachinery Flows
A state-of-the-art large-eddy simulation code has been developed to solve
compressible flows in turbomachinery. The code has been engineered for a high
degree of scalability, enabling it to effectively leverage the many-core
architecture of the new Sunway system. A consistent performance of 115.8
DP-PFLOPs has been achieved on a high-pressure turbine cascade consisting of
over 1.69 billion mesh elements and 865 billion degrees of freedom (DOFs). By
leveraging a high-order unstructured solver and its portability to large
heterogeneous parallel systems, we have progressed towards solving the grand
challenge problem outlined by NASA: a time-dependent simulation of a complete
engine, incorporating all the aerodynamic and heat-transfer components.
Comment: SC23, November 2023, Denver, CO, US
Seamless optimization of the GEMM kernel for task-based programming models
The general matrix-matrix multiplication (GEMM) kernel is a fundamental building block of many scientific applications. Libraries such as Intel MKL and BLIS provide highly optimized sequential and parallel versions of this kernel. The parallel implementations of the GEMM kernel rely on the well-known fork-join execution model to exploit multi-core systems efficiently. However, these implementations are not well suited for task-based applications, as they break the data-flow execution model. In this paper, we present a task-based implementation of the GEMM kernel that can be seamlessly leveraged by task-based applications while providing better performance than the fork-join version. Our implementation leverages several advanced features of the OmpSs-2 programming model and a new heuristic to select the best parallelization strategy and blocking parameters based on the matrix and hardware characteristics. When evaluating performance and energy consumption on two modern multi-core systems, we show that our implementations provide significant performance improvements over an optimized OpenMP fork-join implementation, and can beat vendor implementations of GEMM (e.g., Intel MKL and AMD AOCL). We also demonstrate that a real application can leverage our optimized task-based implementation to enhance performance.
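The core idea the abstract describes, expressing GEMM as tasks over matrix tiles rather than a fork-join parallel loop, can be sketched with Python's `concurrent.futures` standing in for a task runtime. OmpSs-2's dependence tracking and the paper's heuristic are far richer than this; the block size and worker count below are illustrative assumptions:

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def blocked_gemm(a, b, bs=64, workers=4):
    """Compute C = A @ B as independent per-tile tasks.

    Each (i, j) tile of C is one task that accumulates over the k
    dimension, so tiles can be scheduled like data-flow tasks instead
    of being bound to one fork-join region. Block size bs is the kind
    of blocking parameter a heuristic would tune.
    """
    m, k = a.shape
    k2, n = b.shape
    assert k == k2
    c = np.zeros((m, n))

    def tile_task(i, j):
        # Accumulate the (i, j) tile of C across all k blocks.
        # Tiles are disjoint, so tasks never write the same memory.
        for p in range(0, k, bs):
            c[i:i+bs, j:j+bs] += a[i:i+bs, p:p+bs] @ b[p:p+bs, j:j+bs]

    with ThreadPoolExecutor(max_workers=workers) as pool:
        futures = [pool.submit(tile_task, i, j)
                   for i in range(0, m, bs)
                   for j in range(0, n, bs)]
        for f in futures:
            f.result()  # propagate any exception from a task
    return c
```

In OmpSs-2 the per-tile accumulation can itself be split into tasks with in/out dependences on the tile, which is what lets a consumer task of the application start as soon as its tile of C is ready, the data-flow behavior the fork-join version breaks.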
Low-power System-on-Chip Processors for Energy Efficient High Performance Computing: The Texas Instruments Keystone II
The High Performance Computing (HPC) community recognizes energy
consumption as a major problem. Extensive research is underway to
identify means to increase energy efficiency of HPC systems
including consideration of alternative
building blocks for future systems. This thesis considers one
such system, the Texas Instruments Keystone II, a heterogeneous
Low-Power System-on-Chip (LPSoC) processor that combines a quad
core ARM CPU with an octa-core Digital Signal Processor (DSP). It
was first released in 2012.
Four issues are considered: i) maximizing the Keystone II ARM CPU
performance; ii) implementation and extension of the OpenMP
programming model for the Keystone II; iii) simultaneous use of
ARM and DSP cores across multiple Keystone SoCs; and iv) an
energy model for applications running on LPSoCs like the Keystone
II and heterogeneous systems in general.
Maximizing the performance of the ARM CPU on the Keystone II
system is fundamental to adoption of this system by the HPC
community and, more broadly, of the ARM architecture. Key to
achieving good performance is exploitation of the ARM vector
instructions. This thesis presents the first detailed comparison
of the use of ARM compiler intrinsic functions with automatic
compiler vectorization across four generations of ARM processors.
Comparisons are also made with x86 based platforms and the use of
equivalent Intel vector instructions.
Implementation of the OpenMP programming model on the Keystone II
system presents both challenges and opportunities. The challenge
is that the OpenMP model was originally developed for a
homogeneous programming environment with a common instruction set
architecture, and in 2012 work had only just begun to consider
how OpenMP might work with accelerators. The opportunity is that
shared memory is accessible to all processing elements on the
LPSoC, offering performance advantages over what typically exists
with attached accelerators.
with attached accelerators. This thesis presents an analysis of a
prototype version of OpenMP implemented as a bare-metal runtime
on the DSP of a Keystone I system. An implementation for the
Keystone II that maps OpenMP 4.0 accelerator directives to OpenCL
runtime library operations is presented and evaluated.
Exploitation of some of the underlying hardware features of the
Keystone II is also discussed.
Simultaneous use of the ARM and DSP cores across multiple
Keystone II boards is fundamental to the creation of commercially
viable HPC offerings based on Keystone technology. The nCore
BrownDwarf and HPE Moonshot are two such systems.
This thesis presents a proof-of-concept implementation of matrix
multiplication (GEMM) for the BrownDwarf system. The BrownDwarf
utilizes both Keystone II and Keystone I SoCs through a
point-to-point interconnect called Hyperlink. Details of how a
novel message passing communication framework across Hyperlink
was implemented to support this complex environment are
provided.
An energy model that can be used to predict energy usage as a
function of what fraction of a particular computation is
performed on each of the available compute devices offers the
opportunity for making runtime decisions on how best to minimize
energy usage. This thesis presents a basic energy usage model
that considers rates of executions on each device and their
active and idle power usages. Using this model, it is shown that
only under certain conditions does there exist an energy-optimal
work partition that uses multiple compute devices. To validate
the model a high resolution energy measurement environment is
developed and used to gather energy measurements for a matrix
multiplication benchmark running on a variety of systems. Results
presented support the model.
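The shape of such a model can be sketched directly: give each device an execution rate and active/idle power, split the work by a fraction alpha, and the total energy is each device's active energy plus the idle energy it burns while waiting for the slower device to finish. The rates and powers below are made-up illustrative numbers, not measurements from the thesis:

```python
def energy(alpha, rate_a, rate_b, p_active, p_idle, work=1.0):
    """Energy to finish `work` split as alpha on device A, 1-alpha on B.

    rate_*: work units per second; p_active/p_idle: (device A, device B)
    power in watts. Both devices run concurrently; whichever finishes
    first idles until the other is done.
    """
    t_a = alpha * work / rate_a
    t_b = (1.0 - alpha) * work / rate_b
    t_total = max(t_a, t_b)
    e_a = p_active[0] * t_a + p_idle[0] * (t_total - t_a)
    e_b = p_active[1] * t_b + p_idle[1] * (t_total - t_b)
    return e_a + e_b

def best_partition(rate_a, rate_b, p_active, p_idle, steps=1000):
    """Scan alpha in [0, 1] for the minimum-energy work split."""
    candidates = (i / steps for i in range(steps + 1))
    return min(candidates,
               key=lambda a: energy(a, rate_a, rate_b, p_active, p_idle))
```

With a fast, power-hungry device paired with a slow, frugal one, the scan can land at alpha = 0 or 1, i.e. running everything on a single device minimizes energy; only for certain combinations of rates and powers does the minimum fall strictly between the endpoints, which is the "only under certain conditions" result the model formalizes.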
Drawing on the four issues noted above and other developments
that have occurred since the Keystone II system was first
announced, the thesis concludes by making comments regarding the
future of LPSoCs as building blocks for HPC systems.