283 research outputs found
Enabling preemptive multiprogramming on GPUs
GPUs are being increasingly adopted as compute accelerators in many domains, spanning environments from mobile systems to cloud computing. These systems usually run multiple applications from one or several users. However, GPUs do not provide the support for resource sharing traditionally expected in these scenarios, so such systems are unable to meet key multiprogrammed-workload requirements, such as responsiveness, fairness, or quality of service. In this paper, we propose a set of hardware extensions that allow GPUs to efficiently support multiprogrammed GPU workloads. We argue for preemptive multitasking and design two preemption mechanisms that can be used to implement GPU scheduling policies. We extend the architecture to allow concurrent execution of GPU kernels from different user processes and implement a scheduling policy that dynamically distributes the GPU cores among concurrently running kernels according to their priorities. We extend an NVIDIA GK110 (Kepler)-like GPU architecture with our proposals and evaluate them on a set of multiprogrammed workloads with up to eight concurrent processes. Our proposals improve the execution time of high-priority processes by 15.6x, average application turnaround time by 1.5x to 2x, and system fairness by up to 3.4x.
We would like to thank the anonymous reviewers, Alexander Veidenbaum, Carlos Villavieja, Lluis Vilanova, Lluc Alvarez, and Marc Jorda for their comments and help in improving our work and this paper. This work is supported by the European Commission through the TERAFLUX (FP7-249013), Mont-Blanc (FP7-288777), and RoMoL (GA-321253) projects, by NVIDIA through the CUDA Center of Excellence program, by the Spanish Government through Programa Severo Ochoa (SEV-2011-0067), and by the Spanish Ministry of Science and Technology through projects TIN2007-60625 and TIN2012-34557.
Peer reviewed. Postprint (author's final draft)
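The priority-driven distribution of GPU cores described in this abstract can be illustrated in software. The following is a minimal, hypothetical sketch (not the paper's actual hardware design): a fixed pool of streaming multiprocessors (SMs) is split among concurrent kernels proportionally to their priorities, so higher-priority kernels receive more cores whenever the allocator runs.

```python
def partition_sms(kernels, total_sms):
    """Divide `total_sms` SMs among kernels proportionally to priority.

    kernels: list of (name, priority) pairs with priority > 0.
    Assumes there are no more kernels than SMs, so every runnable kernel
    can keep at least one SM (mirroring concurrent execution with
    priority-weighted core distribution).
    """
    total_prio = sum(p for _, p in kernels)
    alloc = {name: max(1, p * total_sms // total_prio) for name, p in kernels}
    # Hand any SMs lost to integer rounding to the highest-priority kernel.
    leftover = total_sms - sum(alloc.values())
    if leftover > 0:
        top = max(kernels, key=lambda k: k[1])[0]
        alloc[top] += leftover
    return alloc
```

For example, with priorities 4:1:1 and 12 SMs, the high-priority kernel receives 8 SMs and each low-priority kernel receives 2; a real allocator would re-run this on every kernel arrival or completion and trigger preemption when shares shrink.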
A C-DAG task model for scheduling complex real-time tasks on heterogeneous platforms: preemption matters
Recent commercial hardware platforms for embedded real-time systems feature heterogeneous processing units and computing accelerators on the same System-on-Chip. When designing a complex real-time application for such architectures, the designer must make a number of difficult choices: on which processor should a given task be implemented? Should a component be implemented in parallel or sequentially? These choices may have a great impact on feasibility, as differences in the processors' internal architectures affect the tasks' execution time and preemption cost. To help the designer explore the wide space of design choices and tune the scheduling parameters, in this paper we propose a novel real-time application model, called C-DAG, specifically conceived for heterogeneous platforms. A C-DAG allows the designer to specify alternative implementations of the same component of an application for different processing engines, selected off-line, as well as conditional branches that model if-then-else statements, selected at run-time. We also propose a schedulability analysis for the C-DAG model and a heuristic allocation algorithm that ensures all deadlines are respected. Our analysis takes into account the cost of preempting a task, which can be non-negligible on certain processors. We demonstrate the effectiveness of our approach on a large set of synthetic experiments, comparing it with state-of-the-art algorithms from the literature.
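The two C-DAG ingredients named in the abstract — per-engine alternative implementations chosen off-line, and conditional branches resolved at run-time — can be captured in a small data model. This is an illustrative sketch only; the names, fields, and the toy selection rule are assumptions, and the paper's real heuristic also accounts for deadlines and schedulability.

```python
from dataclasses import dataclass, field

@dataclass
class Implementation:
    engine: str            # e.g. "CPU" or "GPU"
    wcet: float            # worst-case execution time on that engine
    preemption_cost: float # engine-specific cost of preempting this task

@dataclass
class Node:
    name: str
    alternatives: list         # Implementation options; one is chosen off-line
    conditional: bool = False  # if True, successors model an if-then-else branch
    succs: list = field(default_factory=list)

def choose_offline(node, engine_load):
    """Toy off-line selection: pick the alternative whose engine would finish
    earliest given its accumulated load, counting preemption cost as the
    abstract suggests. A stand-in for the paper's heuristic allocation."""
    impl = min(node.alternatives,
               key=lambda i: engine_load[i.engine] + i.wcet + i.preemption_cost)
    engine_load[impl.engine] += impl.wcet
    return impl
```

A task with a cheap GPU variant and an expensive CPU variant would be mapped to the GPU while the GPU is lightly loaded, and could flip to the CPU as GPU load (or GPU preemption cost) grows.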
Tasks Fairness Scheduler for GPU
Nowadays, GPU clusters are available in almost every data-processing center. Their GPUs are typically shared by different applications that may have different processing needs and/or different levels of priority. As current GPUs do not support hardware-based preemption mechanisms, it is not possible to ensure the required Quality of Service (QoS) when application kernels are offloaded to the devices.
In this work, we present an efficient, low-overhead software preemption mechanism that evicts and relaunches GPU kernels to support different preemptive scheduling policies. We also propose a new fairness-based scheduler named the Fair and Responsive Scheduler (FRS), which takes into account the current value of each kernel's slowdown both to select the next kernel to be launched and to establish the time interval it is going to run (its quantum).
Universidad de Málaga. Campus de Excelencia Internacional Andalucía Tech
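One way to read the FRS idea sketched in this abstract is: track each kernel's slowdown (time accumulated under sharing divided by its time when running alone), launch the most-slowed kernel next, and scale its quantum by that slowdown. The sketch below is a hypothetical interpretation, not the paper's actual policy or API.

```python
def pick_next(kernels, base_quantum=2.0):
    """Select the next kernel and its quantum from observed slowdowns.

    kernels: dict name -> {"t_alone": seconds in isolation,
                           "t_shared": seconds accumulated under sharing}.
    Returns (kernel_name, quantum). Slowdown = t_shared / t_alone; the
    most-slowed kernel runs next, with a quantum scaled by its slowdown
    so lagging kernels get proportionally longer turns.
    """
    def slowdown(name):
        s = kernels[name]
        return s["t_shared"] / s["t_alone"]
    name = max(kernels, key=slowdown)
    return name, base_quantum * slowdown(name)
```

With two kernels slowed down 3x and 1.5x, the 3x kernel is launched next with a 3x-scaled quantum; after its quantum expires, the evict-and-relaunch mechanism would preempt it and the scheduler would re-evaluate.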
GPU system optimization for efficient system-wide utilization of general-purpose GPU computing applications in a multitasking environment
Ph.D. dissertation -- Seoul National University Graduate School: Department of Electrical and Computer Engineering, College of Engineering, August 2020.
Recently, general-purpose GPU (GPGPU) applications have been playing key roles in many research fields, such as high-performance computing (HPC) and deep learning (DL). The common feature of these applications is that all of them require massive computational power, which matches the high-parallelism characteristics of the graphics processing unit (GPU). However, because the resource-usage pattern of each GPGPU application varies, a single application cannot fully exploit the GPU system's resources to achieve the GPU's best performance, since the GPU system is designed to provide system-level fairness to all applications instead of being optimized for a specific type. GPU multitasking can address this issue by co-locating multiple kernels with diverse resource-usage patterns so that they share the GPU resources in parallel. However, current GPU multitasking schemes focus merely on co-launching the kernels rather than making them execute more efficiently. Moreover, the current GPU multitasking scheme is not open source, which makes it harder to optimize, since the GPGPU applications and the GPU system are unaware of each other's features. In this dissertation, we claim that support from a framework between the GPU system and the GPGPU applications, without modifying the applications, can yield better performance. We design and implement this framework while addressing two issues in GPGPU applications. First, we introduce a GPU memory checkpointing approach between the host memory and the device memory to address the problem that GPU memory cannot be oversubscribed in a multitasking environment. Second, we present a fine-grained GPU kernel management scheme to avoid the GPU resource under-utilization problem in a
multitasking environment. We implement and evaluate our schemes on a real GPU system. The experimental results show that our proposed approaches solve the problems of GPGPU applications better than the existing approaches while delivering better performance.
Chapter 1 Introduction
1.1 Motivation
1.2 Contribution
1.3 Outline
Chapter 2 Background
2.1 Graphics Processing Unit (GPU) and CUDA
2.2 Checkpoint and Restart
2.3 Resource Sharing Model
2.4 CUDA Context
2.5 GPU Thread Block Scheduling
2.6 Multi-Process Service with Hyper-Q
Chapter 3 Checkpoint-based solution for the GPU memory oversubscription problem
3.1 Motivation
3.2 Related Work
3.3 Design and Implementation
3.3.1 System Design
3.3.2 CUDA API wrapping module
3.3.3 Scheduler
3.4 Evaluation
3.4.1 Evaluation setup
3.4.2 Overhead of FlexGPU
3.4.3 Performance with GPU Benchmark Suites
3.4.4 Performance with Real-world Workloads
3.4.5 Performance of workloads composed of multiple applications
3.5 Summary
Chapter 4 A Workload-aware Fine-grained Resource Management Framework for GPGPUs
4.1 Motivation
4.2 Related Work
4.2.1 GPU resource sharing
4.2.2 GPU scheduling
4.3 Design and Implementation
4.3.1 System Architecture
4.3.2 CUDA API Wrapping Module
4.3.3 smCompactor Runtime
4.3.4 Implementation Details
4.4 Analysis of the relation between performance and workload usage pattern
4.4.1 Workload Definition
4.4.2 Analysis of performance saturation
4.4.3 Predicting the necessary SMs and thread blocks for best performance
4.5 Evaluation
4.5.1 Evaluation Methodology
4.5.2 Overhead of smCompactor
4.5.3 Performance with Different Thread Block Counts on Different Numbers of SMs
4.5.4 Performance with Concurrent Kernel and Resource Sharing
4.6 Summary
Chapter 5 Conclusion
Abstract (in Korean)
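The CUDA API wrapping modules listed in the table of contents suggest a user-level interposition layer: kernel launches are intercepted before they reach the driver so a scheduler can reorder or throttle them without modifying the application. The sketch below illustrates that interception pattern only; all names are hypothetical, and the dissertation's real module wraps the actual CUDA runtime API rather than plain Python callables.

```python
import functools
import queue

# Pending launches captured by the wrapper; a scheduler thread drains this.
launch_queue = queue.Queue()

def wrap_launch(launch_fn):
    """Wrap a launch function so calls are deferred into launch_queue
    instead of reaching the device immediately (the interposition idea
    behind a CUDA API wrapping module, sketched with hypothetical names)."""
    @functools.wraps(launch_fn)
    def wrapper(kernel_name, *args):
        launch_queue.put((kernel_name, args))  # defer: scheduler launches later
    return wrapper

def drain_and_launch(real_launch):
    """Scheduler side: dequeue pending launches and issue them in its own
    order, where priorities or SM quotas could be applied."""
    launched = []
    while not launch_queue.empty():
        kernel_name, args = launch_queue.get()
        launched.append(real_launch(kernel_name, *args))
    return launched
```

An application linked against such a wrapper keeps calling its usual launch entry point, while the framework decides when each kernel actually runs.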
Glider: A GPU Library Driver for Improved System Security
Legacy device drivers implement both device resource management and isolation. This results in a large code base with a wide, high-level interface, making the driver vulnerable to security attacks. This is particularly problematic for increasingly popular accelerators, like GPUs, that have large, complex drivers. We solve this problem with library drivers, a new driver architecture. A library driver implements resource management as an untrusted library in the application's process address space, and implements isolation as a kernel module that is smaller and has a narrower, lower-level interface (i.e., closer to the hardware) than a legacy driver. We articulate a set of device and platform hardware properties that are required to retrofit a legacy driver into a library driver. To demonstrate the feasibility and superiority of library drivers, we present Glider, a library driver implementation for GPUs of two popular brands, Radeon and Intel. Glider reduces the TCB size and attack surface by about 35% and 84%, respectively, for a Radeon HD 6450 GPU, and by about 38% and 90%, respectively, for an Intel Ivy Bridge GPU. Moreover, it incurs no performance cost. Indeed, Glider outperforms a legacy driver for applications requiring intensive interactions with the device driver, such as applications using the OpenGL immediate mode API.
- โฆ