8 research outputs found
DREditor: A Time-efficient Approach for Building a Domain-specific Dense Retrieval Model
Deploying dense retrieval models efficiently is becoming increasingly
important across various industries. This is especially true for enterprise
search services, where customizing search engines to meet the time demands of
different enterprises in different domains is crucial. Motivated by this, we
develop a time-efficient approach called DREditor to edit the matching rule of
an off-the-shelf dense retrieval model to suit a specific domain. This is
achieved by directly calibrating the output embeddings of the model using an
efficient and effective linear mapping. This mapping is powered by an edit
operator that is obtained by solving a specially constructed least squares
problem. Compared to implicit rule modification via long-time finetuning, our
experimental results show that DREditor provides significant advantages on
different domain-specific datasets, dataset sources, retrieval models, and
computing devices. It consistently enhances time efficiency by 100-300 times
while maintaining comparable or even superior retrieval performance. In a
broader context, we take the first step to introduce a novel embedding
calibration approach for the retrieval task, filling a technical gap in the
current field of embedding calibration. This approach also paves the way for
building domain-specific dense retrieval models efficiently and inexpensively.
Comment: 15 pages, 6 figures. Code is available at
https://github.com/huangzichun/DREdito
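To make the embedding-calibration recipe concrete, here is a minimal NumPy sketch of the general idea the abstract describes, not DREditor's exact formulation (see the linked repository for that): fit a linear edit operator by solving a ridge-regularized least-squares problem on domain question-passage embedding pairs, then apply it to the model's output embeddings. The data, dimension, and regularization strength below are illustrative stand-ins.

```python
import numpy as np

# Synthetic stand-ins for domain data: in practice Q and A would be the
# retriever's embeddings of questions and of their matching passages.
rng = np.random.default_rng(0)
d, n = 768, 1000                     # embedding dimension, number of pairs
Q = rng.normal(size=(n, d))
A = rng.normal(size=(n, d))

# Edit operator from a ridge-regularized least-squares problem:
#   min_W ||Q W - A||_F^2 + lam * ||W - I||_F^2
# which has the closed-form solution W = (Q^T Q + lam I)^{-1} (Q^T A + lam I).
lam = 1.0
W = np.linalg.solve(Q.T @ Q + lam * np.eye(d), Q.T @ A + lam * np.eye(d))

def calibrate(embeddings: np.ndarray) -> np.ndarray:
    """Apply the linear edit operator to off-the-shelf output embeddings."""
    return embeddings @ W

# Query time: score calibrated query embeddings against corpus embeddings.
scores = calibrate(Q[:5]) @ A.T      # (5, n) similarity matrix
print(scores.shape)
```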
Efficient Intra-Rack Resource Disaggregation for HPC Using Co-Packaged DWDM Photonics
The diversity of workload requirements and increasing hardware heterogeneity
in emerging high performance computing (HPC) systems motivate resource
disaggregation. Resource disaggregation allows compute and memory resources to
be allocated individually as required to each workload. However, it is unclear
how to efficiently realize this capability and cost-effectively meet the
stringent bandwidth and latency requirements of HPC applications. To that end,
we describe how modern photonics can be co-designed with modern HPC racks to
implement flexible intra-rack resource disaggregation and fully meet the bit
error rate (BER) requirements and high escape bandwidth of all chip types in modern HPC
racks. Our photonic-based disaggregated rack provides an average application
speedup of 11% (46% maximum) for 25 CPU and 61% for 24 GPU benchmarks compared
to a similar system that instead uses modern electronic switches for
disaggregation. Using observed resource usage from a production system, we
estimate that an iso-performance intra-rack disaggregated HPC system using
photonics would require 4x fewer memory modules and 2x fewer NICs than a
non-disaggregated baseline.
Comment: 15 pages, 12 figures, 4 tables. Published in IEEE Cluster 202
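The iso-performance estimate rests on a pooling argument: a disaggregated rack provisions for the rack-wide peak demand rather than the sum of per-node peaks. The toy sketch below, with invented demand traces rather than the paper's production measurements, illustrates why statistical multiplexing of this kind reduces the provisioned hardware.

```python
import numpy as np

# Toy illustration (numbers invented) of why rack-level pooling reduces
# provisioned hardware: per-node provisioning must cover each node's peak
# demand, while a disaggregated rack only covers the rack-wide peak.
rng = np.random.default_rng(3)
nodes, timesteps = 16, 1000
# Hypothetical per-node memory demand traces (GB), bursty and uncorrelated.
demand = rng.gamma(shape=2.0, scale=16.0, size=(nodes, timesteps))

per_node = demand.max(axis=1).sum()   # sum of the individual node peaks
pooled = demand.sum(axis=0).max()     # peak of the rack-wide aggregate
print(f"per-node provisioning: {per_node:.0f} GB")
print(f"pooled provisioning:   {pooled:.0f} GB "
      f"({per_node / pooled:.1f}x fewer resources)")
```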
Automatic regenerative simulation via non-reversible simulated tempering
Simulated Tempering (ST) is an MCMC algorithm for complex target
distributions that operates on a path between the target and a more amenable
reference distribution. Crucially, if the reference enables i.i.d. sampling, ST
is regenerative and can be parallelized across independent tours. However, the
difficulty of tuning ST has hindered its widespread adoption. In this work, we
develop a simple nonreversible ST (NRST) algorithm, a general theoretical
analysis of ST, and an automated tuning procedure for ST. A core contribution
that arises from the analysis is a novel performance metric -- Tour
Effectiveness (TE) -- that controls the asymptotic variance of estimates from
ST for bounded test functions. We use the TE to show that NRST dominates its
reversible counterpart. We then develop an automated tuning procedure for NRST
algorithms that targets the TE while minimizing computational cost. This
procedure enables straightforward integration of NRST into existing
probabilistic programming languages. We provide extensive experimental evidence
that our tuning scheme improves the performance and robustness of NRST
algorithms on a diverse set of probabilistic models.
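For readers unfamiliar with the mechanics, the following toy 1D sketch illustrates non-reversible simulated tempering in the general spirit of the paper, though it is not the authors' implementation: a lifted chain tracks a temperature index and a direction, flips the direction only on a rejection or at the ends of the path, and regenerates by drawing i.i.d. from the reference whenever it returns to level zero (the boundaries between independent tours). The log-normalization constants c are left at zero here; tuning them well is exactly what the paper's automated procedure addresses.

```python
import numpy as np

rng = np.random.default_rng(1)

# Annealing path between a tractable reference and the target:
#   log pi_beta(x) = (1 - beta) * log_ref(x) + beta * log_target(x)
def log_ref(x):
    return -0.5 * (x / 3.0) ** 2                # N(0, 3^2), unnormalized

def log_target(x):
    return np.logaddexp(-0.5 * (x - 4.0) ** 2,
                        -0.5 * (x + 4.0) ** 2)  # equal-weight bimodal mixture

def log_pi(beta, x):
    return (1.0 - beta) * log_ref(x) + beta * log_target(x)

betas = np.linspace(0.0, 1.0, 11)   # beta_0 = 0 (reference) ... beta_N = 1 (target)
c = np.zeros(len(betas))            # log-normalization estimates; left untuned here

def explore(x, beta, steps=5):
    """A few random-walk Metropolis steps at the current inverse temperature."""
    for _ in range(steps):
        y = x + rng.normal()
        if np.log(rng.uniform()) < log_pi(beta, y) - log_pi(beta, x):
            x = y
    return x

samples = []
x, i, eps = rng.normal(scale=3.0), 0, +1    # state, level index, direction
for _ in range(20_000):
    x = explore(x, betas[i])
    j = i + eps
    # Non-reversible level move: keep climbing or descending; flip the
    # direction only on a rejection or at the ends of the path.
    if 0 <= j < len(betas) and \
       np.log(rng.uniform()) < (log_pi(betas[j], x) - c[j]) - (log_pi(betas[i], x) - c[i]):
        i = j
    else:
        eps = -eps
    if i == 0:
        x = rng.normal(scale=3.0)   # i.i.d. refresh from the reference: a
                                    # regeneration point that starts a new tour
    if i == len(betas) - 1:
        samples.append(x)           # approximate draws from the target

print(len(samples), np.mean(np.abs(np.asarray(samples)) > 2.0))
```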
Parallelism Management for Co-Located Parallel Applications
Thesis (Ph.D.) -- Seoul National University Graduate School: College of Engineering, Department of Electrical and Computer Engineering, August 2020. Bernhard Egger.
Running multiple parallel jobs on the same multicore machine is becoming increasingly important to improve utilization of the given hardware resources. While co-location of parallel jobs is common practice, it remains a challenge for current parallel runtime systems to execute multiple parallel applications simultaneously and efficiently. Conventional parallelization runtimes such as OpenMP generate a fixed number of worker threads, typically as many as there are cores in the system, to utilize all physical core resources. On such runtime systems, applications may not achieve their peak performance even when given full use of all physical core resources. Moreover, the OS kernel must manage all worker threads generated by all running parallel applications, which can incur substantial management overhead as the number of co-located applications grows.
In this thesis, we focus on improving runtime performance for co-located parallel applications. To achieve this goal, the first idea of this work is to use spatial scheduling to execute multiple co-located parallel applications simultaneously. Spatial scheduling, which provides distinct core resources to each application, is considered a promising and scalable approach for executing co-located applications. Despite the growing importance of spatial scheduling, there are still two fundamental research issues with this approach. First, spatial scheduling requires runtime support for parallel applications to run efficiently under spatial core allocations that can change at runtime. Second, the scheduler needs to assign the proper number of core resources to each application, depending on the application's performance characteristics, to achieve better runtime performance.
To this end, this thesis presents three novel runtime-level techniques to efficiently execute co-located parallel applications with spatial scheduling. First, we present a cooperative runtime technique that provides malleable parallel execution for OpenMP parallel applications. Malleable execution means that applications can dynamically adapt their degree of parallelism to the varying availability of core resources. It allows parallel applications to run efficiently under changing core resource availability, in contrast to conventional runtime systems that do not adjust the degree of parallelism of the application. Second, this thesis introduces an analytical performance model that can estimate resource utilization and the performance of parallel programs as a function of the provided core resources. We observe that the performance of parallel loops is typically limited by memory performance, and employ queueing theory to model the memory performance. The queueing-system-based approach allows us to estimate the performance using closed-form equations and hardware performance counters.
Third, we present a core allocation framework to manage core resources between co-located parallel applications. Using analytical modeling, we observe that maximizing both CPU utilization and memory bandwidth usage generally leads to better performance than conventional core allocation policies that maximize CPU usage alone. The presented core allocation framework optimizes the utilization of the multi-dimensional resources of CPU cores and memory bandwidth on multi-socket multicore systems, based on the cooperative parallel runtime support and the analytical model.
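As a flavor of the queueing argument, here is a deliberately simplified closed-form sketch, not the thesis's hierarchical model: treat the memory subsystem as a single queue fed by n cores, with a per-core request rate and a memory service rate that one might estimate from hardware performance counters. The rates below are invented for illustration.

```python
# Simplified sketch: model the memory subsystem as one queue fed by n cores.
# Per-core request rate lam and memory service rate mu (requests/s) could be
# estimated from hardware performance counters (e.g., cache misses, cycles).

def speedup(n_cores: int, lam: float, mu: float) -> float:
    """Predicted speedup of a memory-bound parallel loop on n cores.

    Aggregate demand n*lam saturates the memory queue at utilization 1, so
    useful throughput (hence speedup) stops scaling beyond mu/lam cores.
    """
    rho = n_cores * lam / mu      # offered memory utilization
    if rho < 1.0:
        return float(n_cores)     # memory not saturated: near-linear scaling
    return mu / lam               # saturated: throughput-bound plateau

for n in (1, 2, 4, 8, 16, 32, 64):
    print(n, round(speedup(n, lam=2e8, mu=3.2e9), 1))
```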
1 Introduction
1.1 Motivation
1.2 Background
1.2.1 The OpenMP Runtime System
1.2.2 Target Multi-Socket Multicore Systems
1.3 Contributions
1.3.1 Cooperative Runtime Systems
1.3.2 Performance Modeling
1.3.3 Parallelism Management
1.4 Related Work
1.4.1 Cooperative Runtime Systems
1.4.2 Performance Modeling
1.4.3 Parallelism Management
1.5 Organization of this Thesis
2 Dynamic Spatial Scheduling with Cooperative Runtime Systems
2.1 Overview
2.2 Malleable Workloads
2.3 Cooperative OpenMP Runtime System
2.3.1 Cooperative User-Level Tasking
2.3.2 Cooperative Dynamic Loop Scheduling
2.4 Experimental Results
2.4.1 Standalone Application Performance
2.4.2 Performance in Spatial Core Allocation
2.5 Discussion
2.5.1 Contributions
2.5.2 Limitations and Future Work
2.5.3 Summary
3 Performance Modeling of Parallel Loops using Queueing Systems
3.1 Overview
3.2 Background
3.2.1 Queueing Models
3.2.2 Insights on Performance Modeling of Parallel Loops
3.2.3 Performance Analysis
3.3 Queueing Systems for Multi-Socket Multicores
3.3.1 Hierarchical Queueing Systems
3.3.2 Computing the Parameter Values
3.4 The Speedup Prediction Model
3.4.1 The Speedup Model
3.4.2 Implementation
3.5 Evaluation
3.5.1 64-core AMD Opteron Platform
3.5.2 72-core Intel Xeon Platform
3.6 Discussion
3.6.1 Applicability of the Model
3.6.2 Limitations of the Model
3.6.3 Summary
4 Maximizing System Utilization via Parallelism Management
4.1 Overview
4.2 Background
4.2.1 Modeling Performance Metrics
4.2.2 Our Resource Management Policy
4.3 NuPoCo: Parallelism Management for Co-Located Parallel Loops
4.3.1 Online Performance Model
4.3.2 Managing Parallelism
4.4 Evaluation of NuPoCo
4.4.1 Evaluation Scenario 1
4.4.2 Evaluation Scenario 2
4.5 MOCA: An Evolutionary Approach to Core Allocation
4.5.1 Evolutionary Core Allocation
4.5.2 Model-Based Allocation
4.6 Evaluation of MOCA
4.7 Discussion
4.7.1 Contributions and Limitations
4.7.2 Summary
5 Conclusion and Future Work
5.1 Conclusion
5.2 Future Work
5.2.1 Improving Multi-Objective Core Allocation
5.2.2 Co-Scheduling of Parallel Jobs for HPC Systems
A Additional Experiments for the Performance Model
A.1 Memory Access Distribution and Poisson Distribution
A.1.1 Memory Access Distribution
A.1.2 Kolmogorov-Smirnov Test
A.2 Additional Performance Modeling Results
A.2.1 Results with Intel Hyperthreading
A.2.2 Results with Cooperative User-Level Tasking
A.2.3 Results with Other Loop Schedulers
A.2.4 Results with Different Number of Memory Nodes
B Other Research Contributions of the Author
B.1 Compiler and Runtime Support for Integrated CPU-GPU Systems
B.2 Modeling NUMA Architectures with Stochastic Tool
B.3 Runtime Environment for a Manycore Architecture
Abstract (in Korean)
Acknowledgements
Machine learning-based performance analytics for high-performance computing systems
High-performance Computing (HPC) systems play pivotal roles in societal and scientific advancements, executing up to quintillions of calculations every second. As we shift towards exascale computing and beyond, modern HPC systems emphasize resource sharing, where various applications share processors, memory, networks, and other components. While this sharing enhances power efficiency, it complicates performance prediction and introduces significant variations in application running times, affecting overall system efficiency and operational costs.
HPC systems utilize monitoring frameworks that gather numerical telemetry data on resource usage to track operational status. Given the massive complexity and volume of this data, manual analysis is often daunting and inefficient. Machine learning (ML) techniques offer automated performance anomaly diagnosis, but the transition from successful research outcomes to production-scale deployment encounters two critical obstacles. First, the scarcity of labeled training data (i.e., identifying healthy and anomalous runs) in telemetry datasets makes it hard to train these ML systems effectively. Second, runtime analysis, required for providing timely detection and diagnosis of performance anomalies, demands seamless integration of ML-based methods with the monitoring frameworks.
This thesis claims that ML-based performance analytics frameworks that leverage a limited amount of labeled data and ensure runtime analysis can achieve sufficient anomaly diagnosis performance for production HPC systems. To support this claim, we undertake ML-based performance analytics on two fronts. First, we design and develop novel frameworks for anomaly diagnosis that leverage semi-supervised or unsupervised learning techniques to reduce the need for extensive labeled data. Second, we design a simple yet adaptable architecture to enable deployment and demonstrate that these frameworks are feasible for runtime analysis.
This thesis makes the following specific contributions: First, we design a semi-supervised anomaly diagnosis framework, Proctor, which operates with hundreds of labeled samples (in contrast to tens of thousands) and a vast number of unlabeled samples. We show that Proctor outperforms the fully supervised baseline by up to 11% in F1-score for diagnosing anomalies when there are approximately 30 labeled samples. We then reframe the problem and introduce ALBADRoss to determine which samples should be labeled by experts to maximize the model performance using active learning. On a production HPC dataset, ALBADRoss achieves a 0.95 F1-score (the same score that a fully-supervised framework achieved) and a near-zero false alarm rate using 24x fewer labeled samples. Finally, with Prodigy, we solve the anomaly detection problem but with a focus on deployment. Prodigy is designed for detecting performance anomalies on compute nodes using unsupervised learning. Our framework achieves a 0.95 F1-score in detecting anomalies on a production HPC system telemetry dataset. We also design a simple and adaptable software architecture and deploy it on a 1488-node production HPC system, detecting real-world performance anomalies with 88% accuracy.
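To illustrate the unsupervised setting that Prodigy targets, the sketch below trains a detector on presumed-healthy telemetry only and flags outliers at test time. It substitutes scikit-learn's IsolationForest and synthetic features for brevity, so it demonstrates the problem setup rather than Prodigy's actual detector.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(7)

# Stand-in telemetry windows: (cpu_util, mem_bw_util, ipc, net_rx) per node.
# Feature names and distributions are hypothetical.
healthy = rng.normal(loc=[0.60, 0.50, 1.20, 0.30], scale=0.05, size=(5000, 4))
anomalous = rng.normal(loc=[0.95, 0.90, 0.40, 0.30], scale=0.05, size=(50, 4))

# Train on presumed-healthy production telemetry only: no labels required.
detector = IsolationForest(contamination="auto", random_state=0).fit(healthy)

test = np.vstack([healthy[:100], anomalous])
flags = detector.predict(test)               # +1 = normal, -1 = anomaly
print("flagged anomalies:", (flags[100:] == -1).mean())
print("false alarms:     ", (flags[:100] == -1).mean())
```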
Supercomputing Frontiers
This open access book constitutes the refereed proceedings of the 6th Asian Supercomputing Conference, SCFA 2020, which was planned to be held in February 2020; unfortunately, the physical conference was cancelled due to the COVID-19 pandemic. The 8 full papers presented in this book were carefully reviewed and selected from 22 submissions. They cover a range of topics including file systems, memory hierarchy, HPC cloud platforms, container image configuration workflows, large-scale applications, and scheduling.