Fireiron: A Scheduling Language for High-Performance Linear Algebra on GPUs
Achieving high-performance GPU kernels requires optimizing algorithm
implementations for the targeted GPU architecture. It is of utmost importance to
fully use the compute and memory hierarchy, as well as available specialised
hardware. Currently, vendor libraries like cuBLAS and cuDNN provide the best
performing implementations of GPU algorithms. However, the task of the library
programmer is incredibly challenging: for each provided algorithm,
high-performance implementations have to be developed for all commonly used
architectures, input sizes, and different storage formats. These
implementations are generally provided as optimized assembly code because
performance-critical architectural features are only exposed at this level.
This prevents reuse between different implementations of even the same
algorithm, as simple differences can have major effects on low-level
implementation details. In this paper we introduce Fireiron, a DSL and compiler
which allows the specification of high-performance GPU implementations as
compositions of simple and reusable building blocks. We show how to use
Fireiron to optimize matrix multiplication implementations, achieving
performance matching hand-coded CUDA kernels, even when using specialised
hardware such as NVIDIA Tensor Cores, and outperforming state-of-the-art
implementations provided by cuBLAS by more than 2x.
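To illustrate the idea of composing simple, reusable building blocks into a hierarchical implementation, the sketch below models a matrix multiplication specification and a tiling decomposition in Python. This is a toy illustration only, not the Fireiron DSL: the names `MatMulSpec` and `tile` are hypothetical, and real schedules would also describe data movement through the memory hierarchy.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class MatMulSpec:
    """Toy specification of a C = A * B matmul (names are hypothetical)."""
    m: int  # rows of A and C
    n: int  # columns of B and C
    k: int  # shared (reduction) dimension

def tile(spec: MatMulSpec, tile_m: int, tile_n: int) -> tuple[MatMulSpec, int]:
    """Decompose a matmul into independent sub-matmuls, one per output tile.

    Returns the per-tile sub-specification and the number of tiles,
    assuming the tile sizes divide the problem sizes evenly.
    """
    assert spec.m % tile_m == 0 and spec.n % tile_n == 0
    sub = MatMulSpec(tile_m, tile_n, spec.k)
    num_tiles = (spec.m // tile_m) * (spec.n // tile_n)
    return sub, num_tiles

# The same building block composes at each level of the GPU compute
# hierarchy: block-level tiles first, then warp-level tiles inside each.
full = MatMulSpec(m=1024, n=1024, k=1024)
per_block, num_blocks = tile(full, 128, 128)   # one tile per thread block
per_warp, warps_per_block = tile(per_block, 64, 32)  # one sub-tile per warp
```

The point of the composition is that the warp-level decomposition is the same reusable operation as the block-level one, applied to a smaller sub-problem, which mirrors how a single tiling primitive can be reused across levels of the hierarchy.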