vDNN: Virtualized Deep Neural Networks for Scalable, Memory-Efficient Neural Network Design
The most widely used machine learning frameworks require users to carefully
tune their memory usage so that the deep neural network (DNN) fits into the
DRAM capacity of a GPU. This restriction hampers a researcher's flexibility to
study different machine learning algorithms, forcing them to either use a less
desirable network architecture or parallelize the processing across multiple
GPUs. We propose a runtime memory manager that virtualizes the memory usage of
DNNs such that both GPU and CPU memory can simultaneously be utilized for
training larger DNNs. Our virtualized DNN (vDNN) reduces the average GPU memory
usage of AlexNet by up to 89%, OverFeat by 91%, and GoogLeNet by 95%, a
significant reduction in memory requirements of DNNs. Similar experiments on
VGG-16, one of the deepest and most memory-hungry DNNs to date, demonstrate the
memory-efficiency of our proposal. vDNN enables VGG-16 with batch size 256
(requiring 28 GB of memory) to be trained on a single NVIDIA Titan X GPU card
containing 12 GB of memory, with 18% performance loss compared to a
hypothetical, oracular GPU with enough memory to hold the entire DNN.
Comment: Published as a conference paper at the 49th IEEE/ACM International Symposium on Microarchitecture (MICRO-49), 2016.
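The mechanism can be illustrated with a toy cost model: activations are released to host memory once their forward layer finishes and fetched back only when the backward pass needs them, so peak GPU usage depends on a small window of layers rather than all of them. The sketch below is a rough, pure-Python simulation of that policy with made-up sizes, not the vDNN runtime itself.

```python
# Toy model of vDNN-style activation offloading: feature maps produced in the
# forward pass are moved to host memory and fetched back only when the
# backward pass needs them. All names and sizes here are illustrative.

def peak_gpu_usage(layer_sizes, offload):
    """Return peak GPU memory over one forward+backward pass.

    layer_sizes: activation size per layer (arbitrary units).
    offload: if True, each activation is released to host memory after its
             layer finishes and re-fetched just before its backward step.
    """
    gpu = {}          # layer index -> resident activation size
    peak = 0

    # Forward: layer i produces an activation that later layers do not read
    # again until backward (true for simple feed-forward chains).
    for i, size in enumerate(layer_sizes):
        gpu[i] = size
        peak = max(peak, sum(gpu.values()))
        if offload and i > 0:
            del gpu[i - 1]           # swap previous activation out to host

    # Backward: visit layers in reverse, fetching activations on demand.
    for i in reversed(range(len(layer_sizes))):
        if i not in gpu:
            gpu[i] = layer_sizes[i]  # prefetch back from host
        peak = max(peak, sum(gpu.values()))
        del gpu[i]                   # gradient computed, activation freed

    return peak

baseline = peak_gpu_usage([4, 4, 4, 4], offload=False)
offloaded = peak_gpu_usage([4, 4, 4, 4], offload=True)
print(baseline, offloaded)           # → 16 8
```

The real system must also keep weights and gradients resident and overlap the transfers with computation, which is where the reported 18% overhead comes from.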
Efficient Memory Management for GPU-based Deep Learning Systems
GPUs (graphics processing units) are used for many data-intensive
applications. Among them, deep learning systems are one of the most important
GPU workloads today. As deep learning applications impose deeper
and larger models in order to achieve higher accuracy, memory management
becomes an important research topic for deep learning systems, given that GPU
has limited memory size. Many approaches have been proposed towards this issue,
e.g., model compression and memory swapping. However, they either degrade the
model accuracy or require a lot of manual intervention. In this paper, we
propose two orthogonal approaches to reduce the memory cost from the system
perspective. Our approaches are transparent to the models, and thus do not
affect the model accuracy. They are achieved by exploiting the iterative nature
of the training algorithm of deep learning to derive the lifetime and
read/write order of all variables. With the lifetime semantics, we are able to
implement a memory pool with minimal fragmentation. However, the optimization
problem is NP-complete. We propose a heuristic algorithm that reduces memory
usage by up to 13.3% compared with NVIDIA's default memory pool, at equal time
complexity. With the read/write semantics, the variables that are not in use
can be swapped out from GPU to CPU to reduce the memory footprint. We propose
multiple swapping strategies to automatically decide which variable to swap and
when to swap it out (or in), reducing the memory cost by up to 34.2% without
communication overhead.
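The lifetime idea can be sketched with a standard greedy offset-assignment heuristic for this NP-complete problem: tensors whose lifetimes never overlap may share the same region of the pool, so the pool can be far smaller than the sum of all variable sizes. The greedy rule below (place largest tensors first at the lowest feasible offset) is a common textbook heuristic, not the paper's actual algorithm.

```python
def assign_offsets(variables):
    """variables: iterable of (name, start, end, size); lifetime is [start, end).
    Returns ({name: offset}, total pool size)."""
    placed = []                      # (start, end, offset, size)
    offsets = {}
    for name, start, end, size in sorted(variables, key=lambda v: -v[3]):
        # Offsets already taken by variables whose lifetimes overlap this one.
        busy = sorted((off, off + sz) for (s, e, off, sz) in placed
                      if s < end and start < e)
        offset = 0
        for lo, hi in busy:
            if offset + size <= lo:
                break                # found a gap low enough to fit
            offset = max(offset, hi)
        offsets[name] = offset
        placed.append((start, end, offset, size))
    pool = max((off + sz for (_, _, off, sz) in placed), default=0)
    return offsets, pool

# Three tensors; "a" and "c" never live at the same time, so they share offset 0.
tensors = [("a", 0, 2, 4), ("b", 1, 3, 2), ("c", 2, 4, 4)]
offsets, pool = assign_offsets(tensors)
print(offsets, pool)                 # pool of 6 vs. 10 if nothing is shared
```

Deriving the (start, end) lifetimes automatically from the iterative training loop is exactly what makes the approach transparent to the model.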
TrIMS: Transparent and Isolated Model Sharing for Low Latency Deep Learning Inference in Function as a Service Environments
Deep neural networks (DNNs) have become core computation components within
low latency Function as a Service (FaaS) prediction pipelines: including image
recognition, object detection, natural language processing, speech synthesis,
and personalized recommendation pipelines. Cloud computing, as the de-facto
backbone of modern computing infrastructure for both enterprise and consumer
applications, has to be able to handle user-defined pipelines of diverse DNN
inference workloads while maintaining isolation and latency guarantees, and
minimizing resource waste. The current solution for guaranteeing isolation
within FaaS is suboptimal -- suffering from "cold start" latency. A major cause
of such inefficiency is the need to move large amounts of model data within and
across servers. We propose TrIMS as a novel solution to address these issues.
Our proposed solution consists of a persistent model store across the GPU, CPU,
local storage, and cloud storage hierarchy, an efficient resource management
layer that provides isolation, and a succinct set of application APIs and
container technologies for easy and transparent integration with FaaS, Deep
Learning (DL) frameworks, and user code. We demonstrate our solution by
interfacing TrIMS with the Apache MXNet framework and demonstrate up to 24x
speedup in latency for image classification models and up to 210x speedup for
large models. We achieve up to 8x system throughput improvement.
Comment: In Proceedings of IEEE CLOUD 2019.
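The cold-start point can be illustrated with a hypothetical in-process model store: the first `acquire` pays the load cost, later ones share the cached copy. TrIMS itself shares models across process and server boundaries with isolation guarantees; none of the names below come from its API.

```python
class ModelStore:
    """Keyed, reference-counted model cache shared across invocations (toy)."""
    def __init__(self, loader):
        self._loader = loader       # function: name -> model object
        self._cache = {}            # name -> [model, refcount]

    def acquire(self, name):
        """Return a shared handle to the model, loading it only on first use."""
        if name not in self._cache:
            self._cache[name] = [self._loader(name), 0]
        entry = self._cache[name]
        entry[1] += 1
        return entry[0]

    def release(self, name):
        self._cache[name][1] -= 1   # a real store would evict under memory
                                    # pressure; this toy keeps models resident

loads = []
def slow_loader(name):
    loads.append(name)              # stands in for disk -> CPU -> GPU copies
    return {"name": name, "weights": [0.0] * 4}

store = ModelStore(slow_loader)
m1 = store.acquire("resnet50")      # cold start: loader actually runs
m2 = store.acquire("resnet50")      # warm: served from the shared store
print(len(loads), m1 is m2)         # → 1 True
```

The reported 24x-210x latency speedups come from eliminating exactly this reload on the warm path, with the store spanning the GPU/CPU/storage hierarchy.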
Scalable Distributed DNN Training using TensorFlow and CUDA-Aware MPI: Characterization, Designs, and Performance Evaluation
TensorFlow has been the most widely adopted Machine/Deep Learning framework.
However, little exists in the literature that provides a thorough understanding
of the capabilities which TensorFlow offers for the distributed training of
large ML/DL models that need computation and communication at scale. Most
commonly used distributed training approaches for TF can be categorized as
follows: 1) Google Remote Procedure Call (gRPC), 2) gRPC+X: X=(InfiniBand
Verbs, Message Passing Interface, and GPUDirect RDMA), and 3) No-gRPC: Baidu
Allreduce with MPI, Horovod with MPI, and Horovod with NVIDIA NCCL. In this
paper, we provide an in-depth performance characterization and analysis of
these distributed training approaches on various GPU clusters, including the Piz
Daint system (ranked 6th on the Top500 list). We perform experiments to gain novel insights along
the following vectors: 1) Application-level scalability of DNN training, 2)
Effect of Batch Size on scaling efficiency, 3) Impact of the MPI library used
for no-gRPC approaches, and 4) Type and size of DNN architectures. Based on
these experiments, we present two key insights: 1) Overall, No-gRPC designs
achieve better performance compared to gRPC-based approaches for most
configurations, and 2) The performance of No-gRPC is heavily influenced by the
gradient aggregation using Allreduce. Finally, we propose a truly CUDA-Aware
MPI Allreduce design that exploits CUDA kernels and pointer caching to perform
large reductions efficiently. Our proposed designs offer 5-17X better
performance than NCCL2 for small and medium messages, and reduces latency by
29% for large messages. The proposed optimizations help Horovod-MPI to achieve
approximately 90% scaling efficiency for ResNet-50 training on 64 GPUs.
Further, Horovod-MPI achieves 1.8X and 3.2X higher throughput than the native
gRPC method for ResNet-50 and MobileNet, respectively, on the Piz Daint
cluster.
Comment: 10 pages, 9 figures, submitted to IEEE IPDPS 2019 for peer review.
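The Allreduce that dominates No-gRPC performance can be illustrated with a single-process simulation of the standard ring algorithm (reduce-scatter followed by allgather), the bandwidth-optimal scheme that NCCL- and Horovod-style designs build on. This is a sketch for intuition, not MPI or CUDA code.

```python
def ring_allreduce(grads):
    """Simulate ring Allreduce: each row of `grads` is one worker's vector."""
    n = len(grads)
    data = [list(g) for g in grads]
    k = len(data[0])
    assert k % n == 0, "vector length must divide evenly into n chunks"
    csize = k // n
    sl = lambda c: slice(c * csize, (c + 1) * csize)

    # Reduce-scatter: chunk c travels around the ring accumulating partial
    # sums; after n-1 steps worker w owns the full sum of chunk (w+1) % n.
    for t in range(n - 1):
        msgs = [((w - t) % n, data[w][sl((w - t) % n)]) for w in range(n)]
        for w in range(n):
            c, payload = msgs[(w - 1) % n]      # receive from left neighbor
            s = sl(c)
            for i, v in zip(range(s.start, s.stop), payload):
                data[w][i] += v

    # Allgather: fully reduced chunks circulate once more, overwriting.
    for t in range(n - 1):
        msgs = [((w + 1 - t) % n, data[w][sl((w + 1 - t) % n)]) for w in range(n)]
        for w in range(n):
            c, payload = msgs[(w - 1) % n]
            data[w][sl(c)] = payload
    return data

print(ring_allreduce([[1, 2], [10, 20]]))       # → [[11, 22], [11, 22]]
```

Each worker sends and receives 2(n-1)/n of the vector in total, which is why large-message latency, rather than algorithmic work, dominates, and why the paper's CUDA-aware reduction kernels matter.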
Data Management and Prefetching Techniques for CUDA Unified Memory
Ph.D. dissertation -- Seoul National University, Department of Computer Science and Engineering, August 2022. Advisor: Jaejin Lee.
Unified Memory (UM) is a component of the CUDA programming model that provides a memory pool with a single address space, accessible by both the host and the GPU. When UM is used, a CUDA program does not need to explicitly move data between the host and the device. It also allows GPU memory oversubscription by using CPU memory as a backing store. UM significantly lessens the burden on the programmer and provides great programmability. However, using UM alone does not guarantee good performance. To fully exploit UM and improve performance, the programmer needs to add hints to the source code to prefetch pages that will be accessed during CUDA kernel execution.
In this thesis, we propose three frameworks that exploit UM to improve ease of programming while maximizing application performance. The first framework is HUM, which hides the host-to-device memory copy time of a traditional CUDA program without any code modification. It overlaps the host-to-device memory copy with host computation or CUDA kernel computation by exploiting Unified Memory and its fault mechanism. The evaluation results show that executing applications under HUM is, on average, 1.21 times faster than executing them under vanilla CUDA. The speedup is comparable to the average speedup of 1.22x achieved by hand-optimized Unified Memory implementations.
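Why overlapping the copy with computation helps can be shown with a simple timing model (my own illustration, not HUM's mechanism): chunk the transfer, start each chunk's compute as soon as its data is resident, and the makespan falls from copy + kernel toward max(copy, kernel).

```python
# Back-of-the-envelope pipeline model: a copy of C seconds and a kernel of
# K seconds are split into n chunks; chunk i's compute may start as soon as
# chunk i is resident on the device. Purely illustrative numbers.

def pipelined_makespan(copy_time, kernel_time, n_chunks):
    c = copy_time / n_chunks        # per-chunk host-to-device copy
    k = kernel_time / n_chunks      # per-chunk kernel work
    copy_done = 0.0
    finish = 0.0
    for _ in range(n_chunks):
        copy_done += c                        # copies stream back-to-back
        finish = max(finish, copy_done) + k   # compute waits for its chunk
    return finish

seq = 10.0 + 10.0                             # no overlap: copy, then kernel
pipe = pipelined_makespan(10.0, 10.0, 10)     # 10 chunks
print(seq, pipe)                              # → 20.0 11.0
```

With many chunks the exposed copy cost shrinks to a single chunk's worth, which is the effect HUM obtains transparently through the UM page-fault path.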
The second framework is DeepUM, which exploits UM to allow GPU memory oversubscription for deep neural networks. While UM allows memory oversubscription using a page fault mechanism, page fault handling introduces enormous overhead. We use a correlation prefetching technique to hide this overhead. The evaluation results show that DeepUM achieves performance comparable to other state-of-the-art approaches while running larger batch sizes that the other methods fail to run.
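Pair-based correlation prefetching can be sketched as follows: record which page faults follow which, and on each fault prefetch the recorded successors. Because training is iterative, the second iteration's fault stream repeats the first's, so most faults become hits. The table layout below is illustrative, not DeepUM's actual design.

```python
from collections import defaultdict

class CorrelationPrefetcher:
    """Toy pair-based correlation prefetcher over a page-fault stream."""
    def __init__(self):
        self.successors = defaultdict(set)  # page -> pages seen right after it
        self.last = None
        self.prefetched = set()
        self.hits = 0
        self.faults = 0

    def access(self, page):
        if page in self.prefetched:
            self.hits += 1                  # latency hidden by the prefetch
        else:
            self.faults += 1                # hard page fault
        if self.last is not None:
            self.successors[self.last].add(page)
        self.prefetched |= self.successors[page]   # issue predicted prefetches
        self.last = page

trace = [1, 2, 3, 4]                        # pages touched by one iteration
pf = CorrelationPrefetcher()
for _ in range(3):                          # three training iterations
    for page in trace:
        pf.access(page)
print(pf.faults, pf.hits)                   # → 5 7
```

After the first iteration warms the table, steady-state iterations fault only when the table has no prediction yet; a real implementation must also bound the table and evict prefetched pages under memory pressure.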
The last framework is SnuRHAC, which provides the illusion of a single GPU for the multiple GPUs in a cluster. Under SnuRHAC, a CUDA program designed for a single GPU can utilize multiple GPUs in a cluster without any source code modification. SnuRHAC automatically distributes the workload to the GPUs in the cluster and manages data across the nodes. To manage data efficiently, SnuRHAC extends Unified Memory and exploits its page fault mechanism. We also propose two prefetching techniques to fully exploit UM and maximize performance. The evaluation results show that SnuRHAC significantly improves ease of programming while achieving scalable performance for the cluster environment, depending on the application characteristics.
1 Introduction
2 Related Work
3 CUDA Unified Memory
4 Framework for Maximizing the Performance of Traditional CUDA Program
4.1 Overall Structure of HUM
4.2 Overlapping H2D memcpy and Computation
4.3 Data Consistency and Correctness
4.4 HUM Driver
4.5 HUM H2D memcpy Mechanism
4.6 Parallelizing Memory Copy Commands
4.7 Scheduling Memory Copy Commands
5 Framework for Running Large-scale DNNs on a Single GPU
5.1 Structure of DeepUM
5.1.1 DeepUM Runtime
5.1.2 DeepUM Driver
5.2 Correlation Prefetching for GPU Pages
5.2.1 Pair-based Correlation Prefetching
5.2.2 Correlation Prefetching in DeepUM
5.3 Optimizations for GPU Page Fault Handling
5.3.1 Page Pre-eviction
5.3.2 Invalidating UM Blocks of Inactive PyTorch Blocks
6 Framework for Virtualizing a Single Device Image for a GPU Cluster
6.1 Overall Structure of SnuRHAC
6.2 Workload Distribution
6.3 Cluster Unified Memory
6.4 Additional Optimizations
6.5 Prefetching
6.5.1 Static Prefetching
6.5.2 Dynamic Prefetching
7 Evaluation
7.1 Framework for Maximizing the Performance of Traditional CUDA Program
7.1.1 Methodology
7.1.2 Results
7.2 Framework for Running Large-scale DNNs on a Single GPU
7.2.1 Methodology
7.2.2 Comparison with Naive UM and IBM LMS
7.2.3 Parameters of the UM Block Correlation Table
7.2.4 Comparison with TensorFlow-based Approaches
7.3 Framework for Virtualizing Single Device Image for a GPU Cluster
7.3.1 Methodology
7.3.2 Results
8 Discussions and Future Work
9 Conclusion
Abstract (in Korean)
DiviML: A Module-based Heuristic for Mapping Neural Networks onto Heterogeneous Platforms
Datacenters are increasingly becoming heterogeneous, and are starting to
include specialized hardware for networking, video processing, and especially
deep learning. To leverage the heterogeneous compute capability of modern
datacenters, we develop an approach for compiler-level partitioning of deep
neural networks (DNNs) onto multiple interconnected hardware devices. We
present a general framework for heterogeneous DNN compilation, offering
automatic partitioning and device mapping. Our scheduler integrates both an
exact solver, through a mixed integer linear programming (MILP) formulation,
and a modularity-based heuristic for scalability. Furthermore, we propose a
theoretical lower bound formula for the optimal solution, which enables the
assessment of the heuristic solutions' quality. We evaluate our scheduler in
optimizing both conventional DNNs and randomly-wired neural networks, subject
to latency and throughput constraints, on a heterogeneous system comprised of a
CPU and two distinct GPUs. Compared to naïvely running DNNs on the fastest
GPU, the proposed framework can achieve more than 3 times lower latency
and up to 2.9x higher throughput by automatically leveraging both data
and model parallelism to deploy DNNs on our sample heterogeneous server node.
Moreover, our modularity-based "splitting" heuristic improves the solution
runtime by up to 395x without noticeably sacrificing solution quality
compared to an exact MILP solution, and outperforms all other heuristics by
30-60% in solution quality. Finally, our case study shows how we can extend our
framework to schedule large language models across multiple heterogeneous
servers by exploiting symmetry in the hardware setup. Our code can be easily
plugged in to existing frameworks, and is available at
https://github.com/abdelfattah-lab/diviml
Comment: Accepted at ICCAD'23.
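For intuition, the exact side of such a scheduler can be sketched for a toy case: assign each layer of a linear chain to a device so as to minimize total compute time plus a transfer penalty whenever adjacent layers change devices. The costs below are invented, and the real framework handles much more (throughput constraints, data/model parallelism, MILP at scale).

```python
from itertools import product

def best_mapping(compute, transfer):
    """Brute-force the optimal device per layer for a linear chain.

    compute[l][d] = latency of layer l on device d; `transfer` is a fixed
    cost paid whenever two adjacent layers run on different devices.
    """
    n_layers = len(compute)
    n_dev = len(compute[0])
    best = (float("inf"), None)
    for assign in product(range(n_dev), repeat=n_layers):
        t = sum(compute[l][assign[l]] for l in range(n_layers))
        t += transfer * sum(1 for l in range(1, n_layers)
                            if assign[l] != assign[l - 1])
        best = min(best, (t, assign))
    return best

# Layer latencies on (CPU, fast GPU): conv layers much faster on the GPU,
# a tiny postprocessing layer slightly faster on the CPU. Invented numbers.
compute = [
    [9.0, 2.0],   # layer 0
    [8.0, 2.0],   # layer 1
    [1.0, 2.0],   # layer 2
]
latency, assign = best_mapping(compute, transfer=0.5)
print(latency, assign)   # → 5.5 (1, 1, 0)
```

The brute force is exponential in the number of layers, which is exactly why the paper pairs the exact MILP formulation with a modularity-based heuristic and a lower bound to judge heuristic quality.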