Workload-aware Automatic Parallelization for Multi-GPU DNN Training
Deep neural networks (DNNs) have emerged as successful solutions for a variety of artificial intelligence applications, but their very large and deep models impose high computational requirements during training. Multi-GPU parallelization is a popular option for accelerating these demanding computations, but most state-of-the-art multi-GPU deep learning frameworks not only require users to have an in-depth understanding of the frameworks' internal implementation, but also apply parallelization in a straightforward way that does not optimize GPU utilization. In this work, we propose a workload-aware auto-parallelization framework (WAP) for DNN training, in which work is automatically distributed to multiple GPUs based on workload characteristics. We evaluate WAP using TensorFlow with popular DNN benchmarks (AlexNet and VGG-16) and show competitive training throughput compared with state-of-the-art frameworks; we also demonstrate that WAP automatically optimizes GPU assignment based on the workload's compute requirements, thereby improving energy efficiency.

Comment: This paper is accepted at ICASSP201
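The abstract does not spell out WAP's placement policy, but the core idea of workload-aware GPU assignment can be illustrated with a minimal cost-model sketch: estimate each layer's compute cost and greedily place layers on the currently least-loaded GPU. The sketch below is purely illustrative and assumes a FLOP-based cost model; the names Layer and assign_devices are hypothetical and are not WAP's or TensorFlow's API.

```python
# Hypothetical sketch of workload-aware device assignment, NOT WAP's actual
# algorithm: a greedy longest-processing-time heuristic that balances
# estimated per-layer compute across GPUs.

from dataclasses import dataclass
from typing import Dict, List

@dataclass
class Layer:
    name: str
    flops: float  # estimated compute cost of this layer (e.g., from profiling)

def assign_devices(layers: List[Layer], num_gpus: int) -> Dict[str, int]:
    """Place each layer, heaviest first, on the least-loaded GPU so that
    the total estimated compute is balanced across devices."""
    load = [0.0] * num_gpus          # running compute load per GPU
    placement: Dict[str, int] = {}
    for layer in sorted(layers, key=lambda l: l.flops, reverse=True):
        gpu = min(range(num_gpus), key=load.__getitem__)
        placement[layer.name] = gpu
        load[gpu] += layer.flops
    return placement

# Toy example: a small VGG-like workload split across 2 GPUs.
layers = [Layer("conv1", 3.2e9), Layer("conv2", 5.1e9),
          Layer("fc1", 0.4e9), Layer("fc2", 0.1e9)]
print(assign_devices(layers, num_gpus=2))
```

In a production framework, the per-layer cost estimates would come from profiling or analytical models rather than hard-coded values, and the resulting placement would be applied through the framework's device-placement mechanism (e.g., TensorFlow device scopes).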