37 research outputs found

    An Intelligent Framework for Oversubscription Management in CPU-GPU Unified Memory

    This paper proposes a novel intelligent framework for oversubscription management in CPU-GPU UVM. We analyze the current rule-based methods of GPU memory oversubscription with unified memory, as well as the current learning-based methods for other computer-architecture components. We identify the performance gap between the existing rule-based methods and the theoretical upper bound, along with the advantages of applying machine intelligence and the limitations of the existing learning-based methods. The proposed framework consists of an access pattern classifier followed by a pattern-specific Transformer-based model that uses a novel loss function aimed at reducing page thrashing. A policy engine is designed to leverage the model's results to perform accurate page prefetching and pre-eviction. We evaluate the framework on a set of 11 memory-intensive benchmarks from popular benchmark suites. Our solution outperforms the state-of-the-art (SOTA) methods for oversubscription management, reducing the number of pages thrashed by 64.4% under 125% memory oversubscription relative to the baseline, whereas the SOTA method reduces it by 17.3%. Our solution achieves an average IPC improvement of 1.52X under 125% memory oversubscription and 3.66X under 150% memory oversubscription. It also outperforms the existing learning-based methods for page address prediction, improving top-1 accuracy on average by 6.45% (up to 41.2%) for a single GPGPU workload and by 10.2% (up to 30.2%) for multiple concurrent GPGPU workloads.
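    A minimal sketch may help picture the pipeline this abstract describes (classifier, pattern-specific predictor, policy engine). The Python below is our own toy illustration, not the paper's code: the Transformer-based model and its thrashing-aware loss are abstracted behind a stand-in predict_next_pages, and all thresholds and page numbers are invented.

        from collections import Counter, deque

        def classify_pattern(deltas):
            """Toy stand-in for the access-pattern classifier: call the stream
            "streaming" if one page-delta dominates, otherwise "irregular"."""
            if len(deltas) < 2:
                return "irregular"
            delta, count = Counter(deltas).most_common(1)[0]
            return "streaming" if count / len(deltas) > 0.8 else "irregular"

        def predict_next_pages(history, pattern, k=4):
            """Stand-in for the pattern-specific Transformer-based model."""
            if pattern == "streaming":
                stride = history[-1] - history[-2]
                return [history[-1] + stride * i for i in range(1, k + 1)]
            return list(dict.fromkeys(reversed(history)))[:k]   # recency guess

        class PolicyEngine:
            """Prefetch predicted pages; pre-evict least-recently-used pages
            once the (oversubscribed) device-memory budget is exceeded."""
            def __init__(self, capacity_pages):
                self.capacity = capacity_pages
                self.resident = deque()            # left end = oldest

            def touch(self, page):
                if page in self.resident:
                    self.resident.remove(page)
                self.resident.append(page)

            def prefetch(self, pages):
                for page in pages:
                    self.touch(page)
                while len(self.resident) > self.capacity:
                    print(f"pre-evict page {self.resident.popleft():#x}")

        engine = PolicyEngine(capacity_pages=6)
        history = [0x1000, 0x2000, 0x3000, 0x4000]
        for page in history:
            engine.touch(page)
        deltas = [b - a for a, b in zip(history, history[1:])]
        pattern = classify_pattern(deltas)          # -> "streaming"
        engine.prefetch(predict_next_pages(history, pattern))
        print("resident:", [hex(p) for p in engine.resident])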

    Holistic Performance Analysis and Optimization of Unified Virtual Memory

    The programming difficulty of creating GPU-accelerated high performance computing (HPC) codes has been greatly reduced by the advent of Unified Memory technologies that abstract the management of physical memory away from the developer. However, these systems incur substantial overhead that, paradoxically, grows for the codes where these technologies are most useful. While these technologies are increasingly adopted in modern HPC frameworks and applications, the performance cost reduces the efficiency of these systems and turns some developers away from adoption entirely. These systems are naturally difficult to optimize due to the large number of interconnected hardware and software components that must be untangled to perform a thorough analysis. In this thesis, we take the first deep dive into a functional implementation of a Unified Memory system, NVIDIA UVM, to evaluate the performance and characteristics of these systems. We show specific hardware and software interactions that cause serialization between the host and devices. We further provide a quantitative evaluation of fault handling for various applications under different scenarios, including prefetching and oversubscription. Through lower-level analysis, we find that the driver workload depends on the interactions among application access patterns, GPU hardware constraints, and host OS components. These findings indicate that the cost of host OS components is significant and present across UM implementations. We also provide a proof-of-concept asynchronous approach to memory management in UVM that reduces system overhead and improves application performance. This study provides constructive insight into future implementations and systems, such as Heterogeneous Memory Management.
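    To make the overhead argument concrete, the back-of-the-envelope Python model below (our own sketch with invented constants, not measurements from the thesis) shows why per-fault driver and OS handling dominates demand paging, and why a batched prefetch, which replaces thousands of fault-handling round trips with one driver operation, amortizes it:

        # Toy cost model of UVM demand paging vs. explicit prefetching.
        PAGE = 64 * 1024            # assumed 64 KiB migration granularity
        FAULT_OVERHEAD_US = 30.0    # assumed per-fault driver/OS handling cost
        TRANSFER_US_PER_MIB = 50.0  # assumed interconnect transfer cost

        def migration_time_us(bytes_needed, prefetch_bytes=0):
            """Faults are taken page by page; a prefetch moves one large
            region up front, avoiding the repeated fault-handling cost."""
            faulted = max(bytes_needed - prefetch_bytes, 0)
            n_faults = -(-faulted // PAGE)                  # ceiling division
            transfer = bytes_needed / (1 << 20) * TRANSFER_US_PER_MIB
            return n_faults * FAULT_OVERHEAD_US + transfer

        working_set = 256 * (1 << 20)                       # 256 MiB
        print(f"demand paging : {migration_time_us(working_set) / 1e3:7.1f} ms")
        print(f"with prefetch : {migration_time_us(working_set, working_set) / 1e3:7.1f} ms")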

    The Virtual Block Interface: A Flexible Alternative to the Conventional Virtual Memory Framework

    Computers continue to diversify with respect to system designs, emerging memory technologies, and application memory demands. Unfortunately, continually adapting the conventional virtual memory framework to each possible system configuration is challenging, and often results in performance loss or requires non-trivial workarounds. To address these challenges, we propose a new virtual memory framework, the Virtual Block Interface (VBI). We design VBI based on the key idea that delegating memory management duties to hardware can reduce the overheads and software complexity associated with virtual memory. VBI introduces a set of variable-sized virtual blocks (VBs) to applications. Each VB is a contiguous region of the globally-visible VBI address space, and an application can allocate each semantically meaningful unit of information (e.g., a data structure) in a separate VB. VBI decouples access protection from memory allocation and address translation. While the OS controls which programs have access to which VBs, dedicated hardware in the memory controller manages the physical memory allocation and address translation of the VBs. This approach enables several architectural optimizations to (1) efficiently and flexibly cater to different and increasingly diverse system configurations, and (2) eliminate key inefficiencies of conventional virtual memory. We demonstrate the benefits of VBI with two important use cases: (1) reducing the overheads of address translation (for both native execution and virtual machine environments), as VBI reduces the number of translation requests and associated memory accesses; and (2) two heterogeneous main memory architectures, where VBI increases the effectiveness of managing fast memory regions. For both cases, VBI significantly improves performance over conventional virtual memory.
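    As a mental model of the decoupling VBI proposes, the short Python sketch below separates the protection check (an OS-managed table) from physical allocation and translation (a controller-managed, per-VB page map, filled lazily on first touch). All names and structures here are our own illustration of the idea, not an API from the paper.

        from dataclasses import dataclass, field

        @dataclass
        class VirtualBlock:
            base: int                 # start in the global VBI address space
            size: int                 # variable per VB
            frames: dict = field(default_factory=dict)   # page -> frame

        PAGE = 4096
        next_free_frame = 0
        access_rights = {}            # (pid, vb_id) -> True, set by the "OS"

        def translate(vb, pid, vb_id, addr):
            """Controller-side translation: protection check first, then
            lazy physical allocation on first touch."""
            global next_free_frame
            if not access_rights.get((pid, vb_id)):
                raise PermissionError("VB not mapped for this program")
            if not vb.base <= addr < vb.base + vb.size:
                raise ValueError("address outside this VB")
            page = (addr - vb.base) // PAGE
            if page not in vb.frames:                 # allocate on demand
                vb.frames[page] = next_free_frame
                next_free_frame += 1
            return vb.frames[page] * PAGE + (addr - vb.base) % PAGE

        vb = VirtualBlock(base=0x10_0000, size=16 * PAGE)  # one VB per structure
        access_rights[(42, 0)] = True
        print(hex(translate(vb, pid=42, vb_id=0, addr=0x10_0040)))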

    Evaluation of Distributed Programming Models and Extensions to Task-based Runtime Systems

    High Performance Computing (HPC) has always been a key foundation for scientific simulation and discovery, and more recently, the training of deep learning models has further accelerated the demand for computational power and lower-precision arithmetic. In this era following the end of Dennard scaling, when Moore's Law seemingly still holds true to a lesser extent, it is no coincidence that HPC systems are equipped with multi-core CPUs and a variety of hardware accelerators that are all massively parallel. Coupled with interconnect network speed improvements lagging behind the increases in computational power, the current state of HPC systems is heterogeneous and extremely complex. This was heralded as a great challenge to software stacks and their ability to extract performance from these systems, but also as a great opportunity to innovate at the programming-model level, to explore different approaches, and to propose new solutions. With usability, portability, and performance as the main factors to consider, this dissertation first evaluates the ability of some widely used parallel programming models (MPI, MPI+OpenMP, and task-based runtime systems) to manage the load imbalance among the processes computing the LU factorization of a large dense matrix stored in the Block Low-Rank (BLR) format. Next, I propose a number of optimizations and implement them in PaRSEC's Dynamic Task Discovery (DTD) model, including user-level graph trimming and direct Application Programming Interface (API) calls to perform the data broadcast operation, to further extend the limits of the STF model. On the other hand, the Parameterized Task Graph (PTG) approach in PaRSEC is the most scalable approach for many different applications, so I then explored the possibility of combining the algorithmic approach of Communication-Avoiding (CA) methods with the communication-computation overlapping benefits provided by runtime systems, using a 2D five-point stencil as the test case. This broad evaluation and extension of programming models highlighted the ability of task-based runtime systems to achieve scalable performance and portability on contemporary heterogeneous HPC systems. Finally, I summarize the profiling capability of the PaRSEC runtime system and demonstrate, with a use case, its important role in identifying the performance bottlenecks that lead to optimizations.
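    The user-level graph trimming idea can be conveyed with a toy sketch. In the Python below (our own illustration; PaRSEC's DTD interface and semantics are considerably richer), every rank unrolls the same sequential stream of discovered tasks but only retains those that touch a tile it owns, so the per-rank bookkeeping shrinks as the number of ranks grows:

        def owner(tile, n_ranks):
            return tile % n_ranks          # toy 1-D block-cyclic ownership

        def discover(tasks, rank, n_ranks):
            """Keep only the tasks this rank must track: those reading or
            writing a locally owned tile."""
            return [t for t in tasks
                    if any(owner(tile, n_ranks) == rank for tile in t["tiles"])]

        # A toy stream of tasks over tiles 0..7, e.g. one factorization sweep.
        tasks = [{"name": f"update({i},{j})", "tiles": [i, j]}
                 for i in range(8) for j in range(i, 8)]

        for rank in range(4):
            local = discover(tasks, rank, n_ranks=4)
            print(f"rank {rank}: tracks {len(local)} of {len(tasks)} tasks")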

    GPU accelerated path tracing of massive scenes

    This article presents a solution for path tracing of massive scenes on multiple GPUs. Our approach analyzes the memory access pattern of a path tracer and defines how the scene data should be distributed across up to 16 GPUs with minimal effect on performance. The key concept is that the parts of the scene with the highest number of memory accesses are replicated on all GPUs. We propose two methods for maximizing the performance of path tracing when working with partially distributed scene data. Both methods work at the memory-management level, so the path tracer's data structures do not have to be redesigned, making our approach applicable to other path tracers with only minor changes to their code. As a proof of concept, we have enhanced the open-source Blender Cycles path tracer. The approach was validated on scenes of sizes up to 169 GB. We show that only 1-5% of the scene data needs to be replicated to all machines for such large scenes. On smaller scenes, we have verified that the performance is very close to rendering a fully replicated scene. In terms of scalability, we achieved a parallel efficiency of over 94% using up to 16 GPUs.
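    A sketch of such a replication policy, with our own invented chunk sizes and access counts (the article's actual methods work at the memory-management level and are more involved), might look like this: sort scene chunks by observed access frequency, replicate the hottest ones on every GPU until a replication budget is spent, and distribute the cold remainder.

        def place_scene(chunks, n_gpus, replicate_budget):
            """chunks: list of (name, size, access_count) tuples."""
            hot_first = sorted(chunks, key=lambda c: c[2], reverse=True)
            replicated, spent = [], 0
            distributed = {g: [] for g in range(n_gpus)}
            for i, (name, size, _) in enumerate(hot_first):
                if spent + size <= replicate_budget:
                    replicated.append(name)              # resident on all GPUs
                    spent += size
                else:
                    distributed[i % n_gpus].append(name) # single owner GPU
            return replicated, distributed

        chunks = [("bvh", 40, 9_000_000), ("textures_hot", 20, 2_000_000),
                  ("textures_cold", 300, 40_000), ("geometry", 500, 15_000)]
        rep, dist = place_scene(chunks, n_gpus=4, replicate_budget=64)
        print("replicated everywhere:", rep)
        print("distributed:", dist)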

    λ™μ‹œμ— μ‹€ν–‰λ˜λŠ” 병렬 처리 μ–΄ν”Œλ¦¬μΌ€μ΄μ…˜λ“€μ„ μœ„ν•œ 병렬성 관리

    Doctoral dissertation (Ph.D.) -- Seoul National University Graduate School, College of Engineering, Department of Electrical and Computer Engineering, August 2020. Advisor: Bernhard Egger.

    Running multiple parallel jobs on the same multicore machine is becoming increasingly important for improving the utilization of the given hardware resources. While co-location of parallel jobs is common practice, it remains a challenge for current parallel runtime systems to execute multiple parallel applications simultaneously and efficiently. Conventional parallelization runtimes such as OpenMP generate a fixed number of worker threads, typically as many as there are cores in the system, to utilize all physical core resources. On such runtime systems, applications may not achieve their peak performance even when given full use of all physical core resources. Moreover, the OS kernel needs to manage all worker threads generated by all running parallel applications, which can incur a huge management cost as the number of co-located applications increases.

    In this thesis, we focus on improving runtime performance for co-located parallel applications. To achieve this goal, the first idea of this work is to use spatial scheduling to execute multiple co-located parallel applications simultaneously. Spatial scheduling, which provides distinct core resources to applications, is considered a promising and scalable approach for executing co-located applications. Despite its growing importance, there are still two fundamental research issues with this approach. First, spatial scheduling requires runtime support for parallel applications to run efficiently under spatial core allocations that can change at runtime. Second, the scheduler needs to assign the proper number of core resources to each application, depending on the application's performance characteristics, for better runtime performance.

    To this end, this thesis presents three novel runtime-level techniques to efficiently execute co-located parallel applications with spatial scheduling. First, we present a cooperative runtime technique that provides malleable parallel execution for OpenMP parallel applications. Malleable execution means that applications can dynamically adapt their degree of parallelism to varying core resource availability; it allows parallel applications to run efficiently under changing core resource availability, compared to conventional runtime systems that do not adjust the degree of parallelism of the application. Second, this thesis introduces an analytical performance model that can estimate the resource utilization and performance of parallel programs as a function of the provided core resources. We observe that the performance of parallel loops is typically limited by memory performance, and employ queueing theory to model the memory performance. The queueing-system-based approach allows us to estimate performance using closed-form equations and hardware performance counters. Third, we present a core allocation framework to manage core resources among co-located parallel applications. With analytical modeling, we observe that maximizing both CPU utilization and memory bandwidth usage generally leads to better performance than conventional core allocation policies that maximize only CPU usage. The presented core allocation framework optimizes the utilization of the multi-dimensional resources of CPU cores and memory bandwidth on multi-socket multicore systems, based on the cooperative parallel runtime support and the analytical model.
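    The queueing-based performance model can be given a flavor with a small sketch. The Python below is a minimal model of our own (exact Mean Value Analysis for a closed network with one memory station and per-core think time), not the hierarchical model of the thesis; Z_ns and S_ns are invented stand-ins for values that would come from hardware performance counters. It reproduces the qualitative behavior the abstract describes: speedup saturates once the memory station is the bottleneck.

        def mva_speedup(n_cores_max, Z_ns, S_ns):
            """Exact MVA: cores compute for think time Z between memory
            requests; the memory station has mean service time S."""
            speedups, q = [], 0.0      # q: mean queue length at the station
            for n in range(1, n_cores_max + 1):
                r = S_ns * (1.0 + q)   # response time seen by a request
                x = n / (Z_ns + r)     # system throughput (requests / ns)
                q = x * r              # Little's law
                speedups.append(x * (Z_ns + S_ns))  # normalized to one core
            return speedups

        for n, s in enumerate(mva_speedup(16, Z_ns=200.0, S_ns=50.0), start=1):
            print(f"{n:2d} cores -> predicted speedup {s:5.2f}")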
μ œμ•ˆλœ ν”„λ ˆμž„μ›Œν¬λŠ” λ™μ‹œμ— 동 μž‘ν•˜λŠ” 병렬 처리 μ–΄ν”Œλ¦¬μΌ€μ΄μ…˜μ˜ 병렬성 및 μ½”μ–΄ μžμ›μ„ κ΄€λ¦¬ν•˜μ—¬ λ©€ν‹° μ†ŒμΌ“ λ©€ν‹°μ½”μ–΄ μ‹œμŠ€ν…œμ—μ„œ CPU μžμ› 및 λ©”λͺ¨λ¦¬ λŒ€μ—­ν­ μžμ› ν™œμš©λ„λ₯Ό λ™μ‹œμ— 졜적 ν™”ν•œλ‹€. 해석적인 λͺ¨λΈλ§κ³Ό μ œμ•ˆλœ μ½”μ–΄ ν• λ‹Ή ν”„λ ˆμž„μ›Œν¬μ˜ μ„±λŠ₯ 평가λ₯Ό ν†΅ν•΄μ„œ, μš°λ¦¬κ°€ μ œμ•ˆν•˜λŠ” 정책이 일반적인 κ²½μš°μ— CPU μžμ›μ˜ ν™œμš©λ„λ§Œμ„ μ΅œμ ν™”ν•˜λŠ” 방법에 λΉ„ν•΄μ„œ ν•¨κ»˜ λ™μž‘ν•˜λŠ” μ–΄ν”Œλ¦¬μΌ€μ΄μ…˜λ“€μ˜ μ‹€ν–‰μ‹œκ°„μ„ κ°μ†Œμ‹œν‚¬ 수 μžˆμŒμ„ 보여쀀닀.1 Introduction 1 1.1 Motivation 1 1.2 Background 5 1.2.1 The OpenMP Runtime System 5 1.2.2 Target Multi-Socket Multicore Systems 7 1.3 Contributions 8 1.3.1 Cooperative Runtime Systems 9 1.3.2 Performance Modeling 9 1.3.3 Parallelism Management 10 1.4 Related Work 11 1.4.1 Cooperative Runtime Systems 11 1.4.2 Performance Modeling 12 1.4.3 Parallelism Management 14 1.5 Organization of this Thesis 15 2 Dynamic Spatial Scheduling with Cooperative Runtime Systems 17 2.1 Overview 17 2.2 Malleable Workloads 19 2.3 Cooperative OpenMP Runtime System 21 2.3.1 Cooperative User-Level Tasking 22 2.3.2 Cooperative Dynamic Loop Scheduling 27 2.4 Experimental Results 30 2.4.1 Standalone Application Performance 30 2.4.2 Performance in Spatial Core Allocation 33 2.5 Discussion 35 2.5.1 Contributions 35 2.5.2 Limitations and Future Work 36 2.5.3 Summary 37 3 Performance Modeling of Parallel Loops using Queueing Systems 38 3.1 Overview 38 3.2 Background 41 3.2.1 Queueing Models 41 3.2.2 Insights on Performance Modeling of Parallel Loops 43 3.2.3 Performance Analysis 46 3.3 Queueing Systems for Multi-Socket Multicores 54 3.3.1 Hierarchical Queueing Systems 54 3.3.2 Computingthe Parameter Values 60 3.4 The Speedup Prediction Model 63 3.4.1 The Speedup Model 63 3.4.2 Implementation 64 3.5 Evaluation 65 3.5.1 64-core AMD Opteron Platform 66 3.5.2 72-core Intel Xeon Platform 68 3.6 Discussion 70 3.6.1 Applicability of the Model 70 3.6.2 Limitations of the Model 72 3.6.3 Summary 73 4 Maximizing System Utilization via Parallelism Management 74 4.1 Overview 74 4.2 Background 76 4.2.1 Modeling Performance Metrics 76 4.2.2 Our Resource Management Policy 79 4.3 NuPoCo: Parallelism Management for Co-Located Parallel Loops 82 4.3.1 Online Performance Model 82 4.3.2 Managing Parallelism 86 4.4 Evaluation of NuPoCo 90 4.4.1 Evaluation Scenario 1 90 4.4.2 Evaluation Scenario 2 98 4.5 MOCA: An Evolutionary Approach to Core Allocation 103 4.5.1 Evolutionary Core Allocation 104 4.5.2 Model-Based Allocation 106 4.6 Evaluation of MOCA 113 4.7 Discussion 118 4.7.1 Contributions and Limitations 118 4.7.2 Summary 119 5 Conclusion and Future Work 120 5.1 Conclusion 120 5.2 Future work 122 5.2.1 Improving Multi-Objective Core Allocation 122 5.2.2 Co-Scheduling of Parallel Jobs for HPC Systems 123 A Additional Experiments for the Performance Model 124 A.1 Memory Access Distribution and Poisson Distribution 124 A.1.1 Memory Access Distribution 124 A.1.2 Kolmogorov Smirnov Test 127 A.2 Additional Performance Modeling Results 134 A.2.1 Results with Intel Hyperthreading 134 A.2.2 Results with Cooperative User-Level Tasking 134 A.2.3 Results with Other Loop Schedulers 138 A.2.4 Results with Different Number of Memory Nodes 138 B Other Research Contributions of the Author 141 B.1 Compiler and Runtime Support for Integrated CPU-GPU Systems 141 B.2 Modeling NUMA Architectures with Stochastic Tool 143 B.3 Runtime Environment for a Manycore Architecture 143 초둝 159 Acknowledgements 161Docto