Scalable Distributed DNN Training using TensorFlow and CUDA-Aware MPI: Characterization, Designs, and Performance Evaluation
TensorFlow has been the most widely adopted Machine/Deep Learning framework.
However, little exists in the literature that provides a thorough understanding
of the capabilities which TensorFlow offers for the distributed training of
large ML/DL models that need computation and communication at scale. Most
commonly used distributed training approaches for TF can be categorized as
follows: 1) Google Remote Procedure Call (gRPC), 2) gRPC+X: X=(InfiniBand
Verbs, Message Passing Interface, and GPUDirect RDMA), and 3) No-gRPC: Baidu
Allreduce with MPI, Horovod with MPI, and Horovod with NVIDIA NCCL. In this
paper, we provide an in-depth performance characterization and analysis of
these distributed training approaches on various GPU clusters, including the Piz
Daint system (ranked 6th on the Top500 list). We perform experiments to gain novel insights along
the following vectors: 1) Application-level scalability of DNN training, 2)
Effect of Batch Size on scaling efficiency, 3) Impact of the MPI library used
for no-gRPC approaches, and 4) Type and size of DNN architectures. Based on
these experiments, we present two key insights: 1) Overall, No-gRPC designs
achieve better performance compared to gRPC-based approaches for most
configurations, and 2) The performance of No-gRPC is heavily influenced by the
gradient aggregation using Allreduce. Finally, we propose a truly CUDA-Aware
MPI Allreduce design that exploits CUDA kernels and pointer caching to perform
large reductions efficiently. Our proposed designs offer 5-17X better
performance than NCCL2 for small and medium messages, and reduce latency by
29% for large messages. The proposed optimizations help Horovod-MPI to achieve
approximately 90% scaling efficiency for ResNet-50 training on 64 GPUs.
Further, Horovod-MPI achieves 1.8X and 3.2X higher throughput than the native
gRPC method for ResNet-50 and MobileNet, respectively, on the Piz Daint
cluster.

Comment: 10 pages, 9 figures, submitted to IEEE IPDPS 2019 for peer review
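The Allreduce-based gradient aggregation that dominates No-gRPC performance can be illustrated with a minimal single-process simulation of the ring-allreduce pattern used by Baidu Allreduce and Horovod. This is purely an illustration of the communication pattern, not the paper's CUDA-Aware MPI design; the function name and layout are invented for the sketch:

```python
def ring_allreduce(worker_grads):
    """Simulate ring-allreduce: sum each worker's gradient vector
    across all workers, so every worker ends with the global sum.

    The vector is split into n chunks; chunks circulate around the
    ring in 2*(n-1) steps (reduce-scatter, then allgather), which is
    what makes the algorithm bandwidth-optimal.
    """
    n = len(worker_grads)
    m = len(worker_grads[0])
    # Work on copies so callers' buffers stay untouched.
    bufs = [list(g) for g in worker_grads]
    chunks = [range(c * m // n, (c + 1) * m // n) for c in range(n)]

    # Phase 1, reduce-scatter: at step s, worker w sends chunk
    # (w - s) mod n to its right neighbour, which accumulates it.
    # After n-1 steps, worker w holds the full sum of chunk (w+1) mod n.
    for s in range(n - 1):
        for w in range(n):
            c = (w - s) % n
            dst = (w + 1) % n
            for i in chunks[c]:
                bufs[dst][i] += bufs[w][i]

    # Phase 2, allgather: each worker forwards its fully reduced
    # chunk around the ring; receivers overwrite instead of adding.
    for s in range(n - 1):
        for w in range(n):
            c = (w + 1 - s) % n
            dst = (w + 1) % n
            for i in chunks[c]:
                bufs[dst][i] = bufs[w][i]
    return bufs
```

In a real MPI or NCCL implementation the two inner loops are concurrent point-to-point transfers between ranks; the paper's contribution is making the reduction step itself run as CUDA kernels on GPU-resident buffers.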
CGX: Adaptive System Support for Communication-Efficient Deep Learning
The ability to scale out training workloads has been one of the key
performance enablers of deep learning. The main scaling approach is
data-parallel GPU-based training, which has been boosted by hardware and
software support for highly efficient point-to-point communication, and in
particular via hardware bandwidth overprovisioning. Overprovisioning comes at a
cost: there is an order-of-magnitude price difference between "cloud-grade"
servers with such support and their popular "consumer-grade" counterparts,
even though single server-grade and consumer-grade GPUs can have similar
computational envelopes.
In this paper, we show that the costly hardware overprovisioning approach can
be supplanted via algorithmic and system design, and propose a framework called
CGX, which provides efficient software support for compressed communication in
ML applications, for both multi-GPU single-node training, as well as
larger-scale multi-node training. CGX is based on two technical advances:
\emph{At the system level}, it relies on a re-developed communication stack for
ML frameworks, which provides flexible, highly-efficient support for compressed
communication. \emph{At the application level}, it provides \emph{seamless,
parameter-free} integration with popular frameworks, so that end-users do not
have to modify training recipes or significant amounts of training code. This is
complemented by a \emph{layer-wise adaptive compression} technique which
dynamically balances compression gains with accuracy preservation. CGX
integrates with popular ML frameworks, providing up to 3X speedups for
multi-GPU nodes based on commodity hardware, and order-of-magnitude
improvements in the multi-node setting, with negligible impact on accuracy.
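The layer-wise adaptive compression idea can be sketched roughly as follows: quantize each layer's gradient at the smallest bit-width that keeps the relative quantization error under a bound, so well-behaved layers compress aggressively while sensitive layers keep precision. This is an invented illustration, not CGX's actual scheme; all names, budgets, and thresholds below are assumptions:

```python
import math

def quantize(grad, bits):
    """Uniform quantization of a gradient vector to 2**bits levels
    between its min and max (only the level indices plus the two
    endpoints would need to be transmitted)."""
    lo, hi = min(grad), max(grad)
    if hi == lo:
        return list(grad)
    levels = (1 << bits) - 1
    scale = (hi - lo) / levels
    return [lo + round((g - lo) / scale) * scale for g in grad]

def compress_layer(grad, budgets=(2, 4, 8), max_rel_err=0.05):
    """Layer-wise adaptive choice: pick the smallest bit-width whose
    relative L2 quantization error stays within max_rel_err."""
    norm = math.sqrt(sum(g * g for g in grad)) or 1.0
    for bits in budgets:
        q = quantize(grad, bits)
        err = math.sqrt(sum((g - x) ** 2 for g, x in zip(grad, q)))
        if err / norm <= max_rel_err:
            return bits, q
    return budgets[-1], q
```

The point of making the choice per layer and dynamic is exactly the trade-off the abstract names: compression gain on one side, accuracy preservation on the other.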
PRISMA: a prefetching storage middleware for accelerating deep learning frameworks
Integrated master's dissertation in Informatics Engineering.

Deep Learning (DL) is a widely used technique often applied to many domains, from computer vision to natural language processing. To avoid overfitting, DL applications have to access large amounts of data, which affects training performance. Although significant hardware advances have already been made, current storage systems cannot keep up with the needs of DL techniques. Considering this, multiple storage solutions have been developed to improve the Input/Output (I/O) performance of DL training. Nevertheless, they are either specific to certain DL frameworks or present drawbacks, such as loss of accuracy. Most DL frameworks also contain internal I/O optimizations; however, these cannot be easily decoupled and applied to other frameworks. Furthermore, most of these optimizations have to be manually configured or rely on greedy provisioning algorithms that waste computational resources. To address these issues, we propose PRISMA, a novel storage middleware that employs data prefetching and parallel I/O to improve DL training performance. PRISMA provides an autotuning mechanism to automatically select the optimal configuration, designed to achieve a good trade-off between performance and resource usage. PRISMA is framework-agnostic, meaning that it can be applied to any DL framework, and does not impact the accuracy of the training model. In addition to PRISMA, we provide a thorough study and evaluation of the TensorFlow Dataset Application Programming Interface (API), demonstrating that local DL can benefit from I/O optimization. PRISMA was integrated and evaluated with two popular DL frameworks, namely TensorFlow and PyTorch, proving that it is successful under different I/O workloads.
Experimental results demonstrate that PRISMA is the most efficient solution for the majority of the scenarios studied, while in the remaining scenarios it exhibits performance similar to the built-in optimizations of TensorFlow and PyTorch.

Fundação para a Ciência e a Tecnologia (FCT) - project UIDB/50014/202
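The prefetching idea at the core of such a middleware (staging upcoming batches in memory on a background thread so training steps never block on storage) can be sketched in a few lines of framework-agnostic Python. This is an illustration of the general technique, not PRISMA's implementation; the class name and queue depth are invented:

```python
import queue
import threading

class Prefetcher:
    """Wrap any single-pass batch iterator; a background thread keeps
    up to `depth` batches staged in a bounded queue so the consumer
    (the training loop) overlaps computation with I/O."""

    def __init__(self, batches, depth=4):
        self._q = queue.Queue(maxsize=depth)   # bounds memory usage
        self._done = object()                  # unique end-of-stream marker
        t = threading.Thread(
            target=self._fill, args=(iter(batches),), daemon=True
        )
        t.start()

    def _fill(self, it):
        # Producer: blocks on put() when the queue is full, so at most
        # `depth` batches are ever held in memory.
        for batch in it:
            self._q.put(batch)
        self._q.put(self._done)

    def __iter__(self):
        while True:
            batch = self._q.get()
            if batch is self._done:
                return
            yield batch

# Usage: wrap any iterable of batches from storage.
# for batch in Prefetcher(load_batches_from_disk()):
#     train_step(batch)
```

An autotuner in this style would adjust `depth` (and the number of parallel reader threads) at runtime, which is the trade-off between performance and resource usage the abstract describes.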
A Survey and Empirical Evaluation of Parallel Deep Learning Frameworks
The field of deep learning has witnessed a remarkable shift towards extremely
compute- and memory-intensive neural networks. These newer larger models have
enabled researchers to advance state-of-the-art tools across a variety of
fields. This phenomenon has spurred the development of algorithms for
distributed training of neural networks over a larger number of hardware
accelerators. In this paper, we discuss and compare current state-of-the-art
frameworks for large scale distributed deep learning. First, we survey current
practices in distributed learning and identify the different types of
parallelism used. Then, we present empirical results comparing their
performance on large image and language training tasks. Additionally, we
address their statistical efficiency and memory consumption behavior. Based on
our results, we discuss the algorithmic and implementation aspects of each
framework that hinder performance.
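Two of the parallelism types such surveys distinguish, data parallelism (replicate the model, shard the batch, combine gradients) and model parallelism (shard the model itself across devices), can be contrasted with a toy single-process example. Everything below is illustrative; the "devices" are just loop indices:

```python
def forward(W, x):
    """y = W @ x for a single input vector."""
    return [sum(w * v for w, v in zip(row, x)) for row in W]

def model_parallel_forward(W, x, devices=2):
    """Model parallelism: each 'device' owns a contiguous block of
    output rows; partial outputs are concatenated (in practice this
    concatenation is the inter-device communication)."""
    shard = len(W) // devices
    parts = [forward(W[d * shard:(d + 1) * shard], x) for d in range(devices)]
    return [y for part in parts for y in part]

def data_parallel_grad(xs, ys, w, devices=2):
    """Data parallelism for the scalar model y_hat = w * x with
    L = mean((w*x - y)^2): each replica computes the gradient over
    its shard of the batch, then gradients are averaged (the step
    normally performed by an allreduce)."""
    shard = len(xs) // devices
    local = []
    for d in range(devices):
        sx = xs[d * shard:(d + 1) * shard]
        sy = ys[d * shard:(d + 1) * shard]
        g = sum(2 * (w * x - y) * x for x, y in zip(sx, sy)) / len(sx)
        local.append(g)
    return sum(local) / devices
```

With equal-sized shards the averaged per-replica gradients equal the full-batch gradient, which is why data parallelism preserves the statistical behavior of the sequential algorithm; model parallelism instead trades per-layer communication for the ability to fit models that exceed one device's memory.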