1,456 research outputs found

    A SYSTEMS ANALYSIS OF A REAL-TIME MULTIPROCESSOR MINICOMPUTER SYSTEM

    Get PDF

    Workload Schedulers - Genesis, Algorithms and Comparisons

    Get PDF
    In this article we provide brief descriptions of three classes of schedulers: Operating Systems Process Schedulers, Cluster Systems Jobs Schedulers, and Big Data Schedulers. We describe their evolution from early adoptions to modern implementations, considering both the use and features of the algorithms. We then discuss the differences between the presented classes of schedulers and their chronological development. In conclusion, we highlight similarities in the focus of scheduling strategy design, applicable to both local and distributed systems.

    λ™μ‹œμ— μ‹€ν–‰λ˜λŠ” 병렬 처리 μ–΄ν”Œλ¦¬μΌ€μ΄μ…˜λ“€μ„ μœ„ν•œ 병렬성 관리

    Get PDF
    Thesis (Ph.D.) -- Graduate School of Seoul National University: College of Engineering, Department of Electrical and Computer Engineering, 2020. 8. Bernhard Egger.

    Running multiple parallel jobs on the same multicore machine is becoming increasingly important for improving the utilization of the given hardware resources. While co-location of parallel jobs is common practice, it remains a challenge for current parallel runtime systems to execute multiple parallel applications efficiently at the same time. Conventional parallelization runtimes such as OpenMP generate a fixed number of worker threads, typically as many as there are cores in the system, to utilize all physical core resources. On such runtime systems, applications may not achieve their peak performance even when given full use of all physical core resources. Moreover, the OS kernel needs to manage all worker threads generated by all running parallel applications, which can incur significant management cost as the number of co-located applications grows.

    In this thesis, we focus on improving the runtime performance of co-located parallel applications. The first idea of this work is to use spatial scheduling to execute multiple co-located parallel applications simultaneously. Spatial scheduling, which provides distinct core resources to each application, is considered a promising and scalable approach for executing co-located applications. Despite its growing importance, two fundamental research issues remain. First, spatial scheduling requires runtime support for parallel applications to run efficiently under spatial core allocations that can change at runtime. Second, the scheduler needs to assign the proper number of cores to each application, depending on the application's performance characteristics, to achieve good runtime performance.

    To this end, this thesis presents three novel runtime-level techniques for efficiently executing co-located parallel applications with spatial scheduling. First, we present a cooperative runtime technique that provides malleable parallel execution for OpenMP applications. Malleable execution means that applications can dynamically adapt their degree of parallelism to the varying availability of core resources; it allows parallel applications to run efficiently under changing core resource availability, in contrast to conventional runtime systems that do not adjust an application's degree of parallelism. Second, the thesis introduces an analytical performance model that estimates resource utilization and the performance of parallel programs as a function of the provided core resources. We observe that the performance of parallel loops is typically limited by memory performance and employ queueing theory to model the memory system. The queueing-system-based approach allows us to estimate performance using closed-form equations and hardware performance counters. Third, we present a core allocation framework that manages core resources between co-located parallel applications. Through analytical modeling, we observe that maximizing both CPU utilization and memory bandwidth usage generally leads to better performance than conventional core allocation policies that maximize CPU usage alone.
    The presented core allocation framework optimizes the utilization of the multi-dimensional resources of CPU cores and memory bandwidth on multi-socket multicore systems, building on the cooperative parallel runtime support and the analytical model.

    Contents: 1 Introduction; 2 Dynamic Spatial Scheduling with Cooperative Runtime Systems; 3 Performance Modeling of Parallel Loops using Queueing Systems; 4 Maximizing System Utilization via Parallelism Management; 5 Conclusion and Future Work; A Additional Experiments for the Performance Model; B Other Research Contributions of the Author.

    Metascheduling of HPC Jobs in Day-Ahead Electricity Markets

    Full text link
    High performance grid computing is a key enabler of large-scale collaborative computational science. With the promise of exascale computing, high performance grid systems are expected to incur electricity bills that grow super-linearly over time. To achieve cost effectiveness in these systems, it is essential for scheduling algorithms to exploit the electricity price variations, both in space and time, that are prevalent in dynamic electricity markets. In this paper, we present a metascheduling algorithm to optimize the placement of jobs in a compute grid that consumes electricity from the day-ahead wholesale market. We formulate the scheduling problem as a Minimum Cost Maximum Flow problem and leverage queue waiting time and electricity price predictions to accurately estimate the cost of job execution at each system. Using trace-based simulation with real and synthetic workload traces and real electricity price data sets, we demonstrate our approach on two currently operational grids, XSEDE and NorduGrid. Our experimental setup collectively comprises more than 433K processors spread across 58 compute systems in 17 geographically distributed locations. Experiments show that our approach simultaneously optimizes the total electricity cost and the average response time of the grid, without being unfair to users of the local batch systems. Comment: Appears in IEEE Transactions on Parallel and Distributed Systems.
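    As a rough illustration of the Minimum Cost Maximum Flow formulation described above, the sketch below places jobs on grid sites with networkx. The site names, free slots, day-ahead prices, and per-job energy estimates are invented; a real cost model would also fold in the predicted queue waiting time, as the paper does.

```python
# Hedged sketch (not the paper's implementation): job placement as Min Cost Max Flow.
import networkx as nx

job_batches = {"batch_A": 3, "batch_B": 2}   # jobs waiting in each batch (made up)
sites = {
    # free slots, day-ahead price ($/MWh), estimated energy per job (MWh) -- illustrative
    "site_1": {"slots": 4, "price": 32.0, "energy": 0.50},
    "site_2": {"slots": 3, "price": 41.0, "energy": 0.40},
}

G = nx.DiGraph()
for batch, njobs in job_batches.items():
    G.add_edge("source", batch, capacity=njobs, weight=0)
for site, info in sites.items():
    G.add_edge(site, "sink", capacity=info["slots"], weight=0)
    cost_cents = int(round(info["price"] * info["energy"] * 100))  # integer edge cost
    for batch, njobs in job_batches.items():
        G.add_edge(batch, site, capacity=njobs, weight=cost_cents)

# Maximum flow places every job; among such placements, minimize total electricity cost.
flow = nx.max_flow_min_cost(G, "source", "sink")
for batch in job_batches:
    for site, amount in flow[batch].items():
        if amount:
            print(f"{amount} job(s) from {batch} -> {site}")
print("total cost: $", nx.cost_of_flow(G, flow) / 100)
```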

    Parameter dependencies for reusable performance specifications of software components

    Get PDF
    To avoid design-related performance problems, model-driven performance prediction methods analyse the response times, throughputs, and resource utilizations of software architectures before and during implementation. This thesis proposes new modeling languages and corresponding model transformations that allow a reusable description of how usage profiles affect the performance of software components. Predictions based on these new methods can support performance-related design decisions.
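    The following sketch illustrates, under assumed names and a deliberately simple M/M/1 approximation, what a usage-profile-parameterized performance specification can look like: each component's resource demand is a function of usage parameters, so the same specifications can be reused to predict utilization and response time under different usage profiles. It is not the thesis's actual modeling language or transformations.

```python
# Minimal sketch (assumed names, simplified queueing approximation).
from dataclasses import dataclass
from typing import Callable, Dict, List

UsageProfile = Dict[str, float]          # e.g. {"n_items": 10_000, "arrival_rate": 5.0}

@dataclass
class ComponentSpec:
    name: str
    cpu_demand: Callable[[UsageProfile], float]   # CPU seconds consumed per request

def predict(specs: List[ComponentSpec], profile: UsageProfile):
    """Single-server open-queue approximation: CPU utilization and mean response time."""
    demand = sum(spec.cpu_demand(profile) for spec in specs)   # total demand per request
    utilization = profile["arrival_rate"] * demand
    if utilization >= 1.0:
        return utilization, float("inf")                       # server overloaded
    return utilization, demand / (1.0 - utilization)           # M/M/1 response time

# Two hypothetical components whose demand depends on the size of the processed collection.
parser = ComponentSpec("Parser", lambda p: 2e-6 * p["n_items"])
sorter = ComponentSpec("Sorter", lambda p: 5e-6 * p["n_items"])

for n in (1_000, 10_000):
    util, rt = predict([parser, sorter], {"n_items": n, "arrival_rate": 5.0})
    print(f"n_items={n}: utilization={util:.2f}, response time={rt * 1e3:.1f} ms")
```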

    D-SPACE4Cloud: A Design Tool for Big Data Applications

    Get PDF
    The last years have seen a steep rise in data generation worldwide, with the development and widespread adoption of several software projects targeting the Big Data paradigm. Many companies currently engage in Big Data analytics as part of their core business activities; nonetheless, there are no tools and techniques to support the design of the underlying hardware configuration backing such systems. In particular, the focus of this report is on Cloud-deployed clusters, which represent a cost-effective alternative to on-premises installations. We propose a novel tool implementing a battery of optimization and prediction techniques, integrated so as to efficiently assess several alternative resource configurations and determine the minimum-cost cluster deployment satisfying QoS constraints. Further, an experimental campaign conducted on real systems shows the validity and relevance of the proposed method.
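    As a hedged sketch of the kind of search such a tool performs (not D-SPACE4Cloud itself), the snippet below enumerates candidate cluster sizes, predicts the job duration with a simple Amdahl-style model, and keeps the cheapest configuration that satisfies a deadline QoS constraint; the VM types, prices, and model parameters are illustrative assumptions.

```python
# Illustrative sketch (assumed VM types, prices, and performance model).
VM_TYPES = {                          # hourly price and relative compute capacity
    "m4.xlarge":  {"price": 0.20, "capacity": 1.0},
    "m4.2xlarge": {"price": 0.40, "capacity": 2.0},
}
SERIAL_FRACTION = 0.05                # fraction of the job that does not parallelize
SINGLE_VM_HOURS = 40.0                # job duration on one unit of capacity
DEADLINE_HOURS = 3.0                  # QoS constraint

def predicted_hours(total_capacity: float) -> float:
    """Amdahl-style estimate of job duration on the given aggregate capacity."""
    return SINGLE_VM_HOURS * (SERIAL_FRACTION + (1 - SERIAL_FRACTION) / total_capacity)

best = None
for vm, info in VM_TYPES.items():
    for n in range(1, 65):
        hours = predicted_hours(n * info["capacity"])
        if hours > DEADLINE_HOURS:
            continue                  # violates the QoS constraint
        cost = n * info["price"] * hours
        if best is None or cost < best[0]:
            best = (cost, vm, n, hours)

if best:
    cost, vm, n, hours = best
    print(f"cheapest feasible cluster: {n} x {vm}, ~{hours:.2f} h, ~${cost:.2f}")
else:
    print("no configuration meets the deadline")
```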