Tensor Comprehensions: Framework-Agnostic High-Performance Machine Learning Abstractions
Deep learning models with convolutional and recurrent networks are now
ubiquitous and analyze massive amounts of audio, image, video, text and graph
data, with applications in automatic translation, speech-to-text, scene
understanding, ranking user preferences, ad placement, etc. Competing
frameworks for building these networks, such as TensorFlow, Chainer, CNTK,
Torch/PyTorch, Caffe1/2, MXNet, and Theano, explore different tradeoffs between
usability and expressiveness, research or production orientation, and supported
hardware. They operate on a DAG of computational operators, wrapping
high-performance libraries such as CUDNN (for NVIDIA GPUs) or NNPACK (for
various CPUs), and automate memory allocation, synchronization, and distribution.
Custom operators are needed where the computation does not fit existing
high-performance library calls, usually at a high engineering cost. This is
frequently required when new operators are invented by researchers: such
operators suffer a severe performance penalty, which limits the pace of
innovation. Furthermore, even if there is an existing runtime call these
frameworks can use, it often does not offer optimal performance for a user's
particular network architecture and dataset, missing optimizations between
operators as well as optimizations that exploit knowledge of the sizes and shapes
of the data. Our contributions include (1) a language close to the mathematics of
deep learning called Tensor Comprehensions, (2) a polyhedral Just-In-Time
compiler to convert a mathematical description of a deep learning DAG into a
CUDA kernel with delegated memory management and synchronization, also
providing optimizations such as operator fusion and specialization for specific
sizes, and (3) a compilation cache populated by an autotuner. [Abstract truncated]
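
As a rough illustration of contribution (1), a Tensor Comprehension expresses an
operator in index notation directly over tensor dimensions, with loop bounds
inferred from the tensor sizes. The sketch below shows what a matrix-multiplication
comprehension might look like; the surface syntax, including the +=! reduction
operator, follows the style of the project's published examples and is an
assumption rather than a quotation from this truncated abstract.

    # Hypothetical Tensor Comprehension for C = A * B (matrix multiplication).
    # Index variables m, n, k range over the inferred sizes M, N, K;
    # +=! zero-initializes C and then accumulates the sum over the free index k.
    def matmul(float(M, K) A, float(K, N) B) -> (C) {
        C(m, n) +=! A(m, k) * B(k, n)
    }

From such a description, the polyhedral Just-In-Time compiler of contribution (2)
can emit a CUDA kernel specialized to the actual values of M, N, and K, and the
autotuner of contribution (3) caches the best variant it finds for those sizes.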