Driven by deep learning, there has been a surge of specialized processors for
matrix multiplication, referred to as Tensor Core Units (TCUs). These TCUs are
capable of performing matrix multiplications on small matrices (usually 4x4 or
16x16) to accelerate convolutional and recurrent neural networks in deep
learning workloads. In this paper we leverage NVIDIA TCUs to express both
reduction and scan with matrix multiplication and show the benefits in terms of
program simplicity, efficiency, and performance. Our algorithm exercises the
NVIDIA TCUs, which would otherwise sit idle, achieves 89%-98% of peak memory
copy bandwidth, and is up to 100x faster for reduction and up to 3x faster for
scan than state-of-the-art methods for small segment sizes, which are common in
machine learning and scientific applications. Our algorithm achieves this while
decreasing power consumption by up to 22% for reduction and 16% for scan.

Comment: In Proceedings of the ACM International Conference on Supercomputing
(ICS '19).
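
To make the core idea concrete, below is a minimal sketch (our illustration,
not the paper's kernel) of a segmented reduction phrased as a tensor-core
matrix multiply via the CUDA WMMA API: a 256-element segment is packed
row-major into a 16x16 matrix A, and one MMA with an all-ones matrix J leaves
the row sums of A in the accumulator, after which adding the 16 row sums
finishes the reduction. The kernel name, layout, and scalar clean-up loop are
assumptions for illustration; the paper's full algorithm keeps more of the
work on the TCUs, handles arbitrary segment sizes, and treats scan
analogously.

// Hypothetical sketch (not the paper's kernel): one warp reduces one
// 256-element half-precision segment with a single tensor-core MMA.
// Requires sm_70+; compile with, e.g., nvcc -arch=sm_70 reduce_tcu.cu
#include <cuda_fp16.h>
#include <mma.h>
using namespace nvcuda;

__global__ void segment_reduce(const half *in,    // num_segments * 256 values
                               const half *ones,  // 16x16 matrix of 1.0
                               float *out,        // one sum per segment
                               int num_segments) {
  int seg = blockIdx.x;        // launch one 32-thread block per segment
  if (seg >= num_segments) return;

  // View the segment as a row-major 16x16 matrix A and compute C = A * J,
  // where J is all ones: every column of C then holds the 16 row sums of A.
  wmma::fragment<wmma::matrix_a, 16, 16, 16, half, wmma::row_major> a;
  wmma::fragment<wmma::matrix_b, 16, 16, 16, half, wmma::row_major> j;
  wmma::fragment<wmma::accumulator, 16, 16, 16, float> c;

  wmma::fill_fragment(c, 0.0f);
  wmma::load_matrix_sync(a, in + seg * 256, 16);  // leading dimension 16
  wmma::load_matrix_sync(j, ones, 16);
  wmma::mma_sync(c, a, j, c);                     // C = A * J + 0

  // Spill the accumulator and add up the 16 row sums (column 0 of C).
  __shared__ float rowsums[16 * 16];
  wmma::store_matrix_sync(rowsums, c, 16, wmma::mem_row_major);
  __syncwarp();                // make the warp's stores visible to lane 0
  if (threadIdx.x == 0) {
    float total = 0.0f;
    for (int i = 0; i < 16; ++i) total += rowsums[i * 16];
    out[seg] = total;
  }
}

Launched as segment_reduce<<<num_segments, 32>>>(in, ones, out, num_segments)
after filling ones with __float2half(1.0f). A scan kernel would follow the
same pattern, multiplying by an upper-triangular ones matrix in place of J so
that each output column accumulates a prefix of the row.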