1,601 research outputs found
An MPI-IO In-Memory driver for non-volatile pooled memory of the Kove XPD
Many scientific applications are limited by the performance offered by parallel file systems. SSD-based burst buffers provide significantly better performance than HDD-backed storage, but at the expense of capacity. Clearly, achieving wire speed of the interconnect and predictable low-latency I/O is the holy grail of storage. In-memory storage promises optimal performance exceeding SSD-based solutions. Kove®'s XPD® offers pooled memory for cluster systems. This remote memory is asynchronously backed up to storage devices of the XPDs and is considered non-volatile. Although the system offers various APIs to access this memory, such as treating it as a block device, it does not expose it as a file system offering POSIX or MPI-IO semantics.
In this paper, we (1) describe the XPD-MPIIO-driver, which supports the scale-out architecture of the XPDs. This MPI-agnostic driver enables high-level libraries to utilize the XPD's memory as storage. (2) We conduct a thorough performance evaluation of the XPD, covering scale-out testing of the infrastructure, "metadata" operations, and performance variability.
We show that the driver and storage architecture can nearly saturate the wire speed of InfiniBand (60+ GiB/s with 14 FDR links) while providing low latency and little performance variability.
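For context, the sketch below (an editorial illustration, not code from the paper) shows the standard MPI-IO collective-write pattern that such a driver must support on the application side; the output file name is a placeholder, and how the XPD back-end gets selected (hints, file-name prefix, or build-time configuration) is deliberately not shown, since the abstract does not specify it.

    #include <mpi.h>
    #include <stdlib.h>

    int main(int argc, char **argv)
    {
        MPI_File fh;
        int rank;
        const int count = 1 << 20;                 /* 1 Mi doubles = 8 MiB per rank */

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        double *buf = malloc(count * sizeof(double));
        for (int i = 0; i < count; i++) buf[i] = (double)rank;

        /* "out.dat" is a placeholder target; the MPI-IO driver underneath
         * (e.g. the XPD-MPIIO-driver described above) decides where the
         * bytes actually land. */
        MPI_File_open(MPI_COMM_WORLD, "out.dat",
                      MPI_MODE_CREATE | MPI_MODE_WRONLY, MPI_INFO_NULL, &fh);

        /* Collective write: every rank stores a disjoint, contiguous block. */
        MPI_Offset offset = (MPI_Offset)rank * count * (MPI_Offset)sizeof(double);
        MPI_File_write_at_all(fh, offset, buf, count, MPI_DOUBLE, MPI_STATUS_IGNORE);

        MPI_File_close(&fh);
        free(buf);
        MPI_Finalize();
        return 0;
    }

Compiled with mpicc and run with one process per node, each rank writes a disjoint 8 MiB block, the kind of checkpoint-style access pattern whose throughput the paper measures against the InfiniBand wire speed.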
An Empirical Implementation of an I/O Separation Scheme for Burst Buffers in High-Performance Computing Systems
Master's thesis, Seoul National University, College of Engineering, Department of Computer Science and Engineering, August 2019. To meet the exascale I/O requirements of High-Performance Computing (HPC), a new I/O subsystem based on non-volatile memory, named the Burst Buffer, has been developed. However, diverse HPC workloads and bursty I/O patterns cause severe data fragmentation on SSDs, which creates the need for expensive garbage collection (GC) and also increases the number of bytes actually written to the SSDs. The new multi-stream feature in SSDs offers an option to reduce the cost of garbage collection. In this work, we leverage the multi-stream feature to group I/O streams by user ID and implement this strategy in a burst buffer we call BIOS, short for Burst buffer with an I/O Separation scheme. Furthermore, to optimize the I/O separation scheme in burst buffer environments, we propose a stream-aware scheduling policy based on burst buffer pools in the workload manager and implement a complete burst buffer system, the BIOS framework, by integrating BIOS with the workload manager. We evaluate BIOS and the framework with burst buffer I/O traces from the Cori supercomputer covering a diverse set of applications, and we analyze the benefits and limitations of using an I/O separation scheme in HPC systems. Experimental results show that BIOS improves performance by 1.44× on average and reduces the Write Amplification Factor (WAF) by up to 1.20×, and that the framework retains the benefits of the I/O separation scheme in the HPC environment. (A minimal sketch of the user-ID-based stream tagging idea follows the table of contents below.)
Abstract
Introduction 1
Background and Challenges 5
Burst Buffer 5
Write Amplification in SSDs 6
Multi-streamed SSD 7
Challenges of Multi-stream Feature in Burst Buffers 7
I/O Separation Scheme in Burst Buffer 10
Stream Allocation Criteria 10
Implementation 12
Limitations of User ID-based Stream Allocation 14
BIOS Framework 15
Support in Workload Manager 15
Burst Buffer Pools 16
Stream-Aware Scheduling Policy 18
Workflow of BIOS Framework 20
Evaluation 21
Experiment Setup 21
Evaluation with Synthetic Workload 21
Evaluation with HPC Applications 25
Evaluation with Emulated Workload 27
Evaluation with Different Striping Configuration 29
Evaluation on BIOS Framework 30
Summary and Lessons Learned 33
An I/O Separation Scheme in Burst Buffer 33
Evaluation with Synthetic Workload 33
Evaluation with HPC Applications 33
Evaluation with Emulated Workload 34
Evaluation with Striping Configurations 34
A BIOS Framework 34
Evaluation with Real Burst Buffer Environments 34
Discussion 36
Limited Number of Nodes 36
Advanced BIOS Framework 37
Related work 38
Conclusions 40
Bibliography 42
Abstract (in Korean) 48
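As referenced above, the following is a minimal, editorial sketch of the stream-allocation idea: data from different users is tagged so the SSD can keep it in different streams. The thesis implements stream allocation inside the burst buffer itself; here the Linux per-file write-hint interface (F_SET_RW_HINT, kernel 4.13+ and glibc 2.27+) merely stands in for "assigning an I/O stream", and the four-way user-ID mapping is an assumption chosen only to make the idea concrete.

    #define _GNU_SOURCE          /* F_SET_RW_HINT / RWH_* need glibc >= 2.27 */
    #include <fcntl.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <sys/types.h>

    /* Map a user ID onto one of a small, fixed set of write streams so that
     * data written by the same user tends to end up in the same stream (and
     * therefore the same erase units). The 4-way mapping is illustrative. */
    static uint64_t stream_hint_for_uid(uid_t uid)
    {
        static const uint64_t hints[4] = {
            RWH_WRITE_LIFE_SHORT, RWH_WRITE_LIFE_MEDIUM,
            RWH_WRITE_LIFE_LONG,  RWH_WRITE_LIFE_EXTREME,
        };
        return hints[uid % 4];
    }

    /* Open a burst-buffer file and tag the descriptor with the user's stream. */
    int open_with_user_stream(const char *path, uid_t uid)
    {
        int fd = open(path, O_CREAT | O_WRONLY, 0644);
        if (fd < 0) { perror("open"); return -1; }

        uint64_t hint = stream_hint_for_uid(uid);
        if (fcntl(fd, F_SET_RW_HINT, &hint) < 0)
            perror("F_SET_RW_HINT");   /* kernel or device may not honor hints */
        return fd;
    }

On devices that honor such hints, writes from the same user are grouped into the same flash streams, which is the property BIOS exploits to reduce garbage-collection cost and write amplification.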
Benefit of DDN's IME-FUSE for I/O intensive HPC applications
Many scientific applications are limited by the I/O performance offered by parallel file systems on conventional storage systems. Flash-based burst buffers provide significantly better performance than HDD-backed storage, but at the expense of capacity. Burst buffers are considered the next step towards achieving wire speed of the interconnect and providing more predictable low-latency I/O, which are the holy grail of storage. A critical evaluation of storage technology is mandatory, as there is no long-term experience with its performance behavior for particular application scenarios. Such an evaluation enables data centers to choose the right products and system architects to plan their integration into HPC architectures. This paper investigates the native performance of DDN IME, a flash-based burst buffer solution. It then takes a closer look at the IME-FUSE file system, which uses IME as a burst buffer and a Lustre file system as back-end. Finally, using a NetCDF benchmark, it estimates the performance benefit for climate applications.
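For readers unfamiliar with the workload, the sketch below (editorial; the file name and grid sizes are made up, not taken from the paper's benchmark) shows the kind of NetCDF write pattern a climate-oriented benchmark exercises. The calls are the standard NetCDF C API, and the file simply lands on whichever mount point backs the path, e.g. an IME-FUSE mount in the setting studied here.

    #include <netcdf.h>
    #include <stdio.h>
    #include <stdlib.h>

    #define NLAT 96
    #define NLON 192
    #define CHECK(e) do { if ((e) != NC_NOERR) { \
        fprintf(stderr, "NetCDF error: %s\n", nc_strerror(e)); exit(1); } } while (0)

    int main(void)
    {
        int ncid, dim_time, dim_lat, dim_lon, varid;
        int dims[3];
        static float field[NLAT][NLON];            /* one 2-D time slice */

        CHECK(nc_create("climate_slice.nc", NC_CLOBBER, &ncid));
        CHECK(nc_def_dim(ncid, "time", NC_UNLIMITED, &dim_time));
        CHECK(nc_def_dim(ncid, "lat",  NLAT,         &dim_lat));
        CHECK(nc_def_dim(ncid, "lon",  NLON,         &dim_lon));
        dims[0] = dim_time; dims[1] = dim_lat; dims[2] = dim_lon;
        CHECK(nc_def_var(ncid, "temperature", NC_FLOAT, 3, dims, &varid));
        CHECK(nc_enddef(ncid));

        /* Append one time step; a benchmark would loop over many steps. */
        size_t start[3] = {0, 0, 0};
        size_t count[3] = {1, NLAT, NLON};
        CHECK(nc_put_vara_float(ncid, varid, start, count, &field[0][0]));
        CHECK(nc_close(ncid));
        return 0;
    }

A real benchmark would write many time steps, typically from many MPI ranks, but the dimension/variable definition followed by strided nc_put_vara_* writes shown here is the core of the access pattern whose cost the paper compares between IME-FUSE and the Lustre back-end.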
ASCR/HEP Exascale Requirements Review Report
This draft report summarizes and details the findings, results, and recommendations derived from the ASCR/HEP Exascale Requirements Review meeting held in June 2015. The main conclusions are as follows. 1) Larger, more capable computing and data facilities are needed to support HEP science goals in all three frontiers: Energy, Intensity, and Cosmic. The expected scale of the demand at the 2025 timescale is at least two orders of magnitude -- and in some cases greater -- than that available currently. 2) The growth rate of data produced by simulations is overwhelming the current ability of both facilities and researchers to store and analyze it. Additional resources and new techniques for data analysis are urgently needed. 3) Data rates and volumes from HEP experimental facilities are also straining the ability to store and analyze large and complex data volumes. Appropriately configured leadership-class facilities can play a transformational role in enabling scientific discovery from these datasets. 4) A close integration of HPC simulation and data analysis will aid greatly in interpreting results from HEP experiments. Such an integration will minimize data movement and facilitate interdependent workflows. 5) Long-range planning between HEP and ASCR will be required to meet HEP's research needs. To best use ASCR HPC resources, the experimental HEP program needs a) an established long-term plan for access to ASCR computational and data resources, b) an ability to map workflows onto HPC resources, c) the ability for ASCR facilities to accommodate workflows run by collaborations that can have thousands of individual members, d) to transition codes to the next-generation HPC platforms that will be available at ASCR facilities, and e) to build up and train a workforce capable of developing and using simulations and analysis to support HEP scientific research on next-generation systems.
Comment: 77 pages, 13 figures; draft report, subject to further revision.
A Stacked Memory Architecture for Improving Performance and Capacity
Doctoral dissertation, Seoul National University, Graduate School of Convergence Science and Technology, Department of Transdisciplinary Studies (Intelligent Convergence Systems Program), February 2019. The advance of DRAM manufacturing technology is slowing down, whereas the density and performance demands on DRAM continue to increase. This demand has motivated the industry to explore emerging non-volatile memory (e.g., 3D XPoint) and high-density DRAM (e.g., Managed DRAM Solution). Since such memory technologies increase density at the cost of longer latency, lower bandwidth, or both, it is essential to use them together with fast memory (e.g., conventional DRAM) to which hot pages are transferred at runtime. Nonetheless, we observe that page transfers to fast memory often block memory channels from servicing memory requests from applications for long periods. This in turn significantly increases the high-percentile response time of latency-sensitive applications. In this thesis, we propose a high-density managed DRAM architecture, dubbed 3D-XPath, for applications demanding both low latency and high memory capacity. 3D-XPath DRAM stacks conventional DRAM dies with the high-density DRAM dies explored in this thesis and connects these dies with 3D-XPath. In particular, 3D-XPath allows unused memory channels to service memory requests from applications when the primary channels supposed to handle those requests are blocked by page transfers, considerably reducing the high-percentile response time. It can also improve the throughput of applications that frequently copy memory blocks between kernel and user memory spaces. Our evaluation shows that 3D-XPath DRAM decreases the high-percentile response time of latency-sensitive applications by ~30% while improving the throughput of I/O-intensive applications by ~39%, compared with DRAM without 3D-XPath.
Recent computer systems are evolving toward integrating more CPU cores into a single socket, which requires higher memory bandwidth and capacity. Increasing the number of channels per socket is a common answer to the bandwidth demand, and to better utilize these additional channels, the data bus width is reduced and the burst length is increased. However, the longer burst length increases DRAM access latency. On the memory capacity side, process scaling has been the answer for decades, but cell capacitance now limits how small a cell can be. 3D stacked memory addresses this problem by stacking dies on top of one another.
We make a key observation on a real multicore machine: the memory controllers are not always fully utilized when running the SPEC CPU 2006 rate benchmarks. To bring these idle channels into play, we propose a memory-channel-sharing architecture that boosts the peak bandwidth of a single memory channel and reduces burst latency on 3D stacked memory. With channel sharing, overall performance on multi-programmed and multi-threaded workloads improves by up to 4.3% and 3.6%, respectively, and the average read latency is reduced by up to 8.22% and 10.18%.

Korean abstract (translated): The advance of DRAM manufacturing technology is slowing down while the density and performance demands on DRAM keep growing. This demand has led to the emergence of new non-volatile memories (e.g., 3D XPoint) and high-density DRAM (e.g., managed asymmetric-latency DRAM solutions). Because these high-density memory technologies gain density at the cost of longer latency, lower bandwidth, or both, they are commonly deployed alongside fast memory (e.g., conventional DRAM) to which hot pages are migrated at runtime. During such migrations, page transfers to the fast memory can keep ordinary application memory requests from being serviced for long stretches, greatly increasing the high-percentile response time of latency-sensitive applications and the standard deviation of response times. To address this problem, this dissertation proposes 3D-XPath, a high-density managed DRAM architecture for applications that demand both low latency and high capacity. A 3D-XPath DRAM stacks the proposed high-density DRAM dies together with conventional DRAM dies on a single chip and connects the dies through the proposed 3D-XPath hardware. 3D-XPath lets hot-page swaps be handled over lightly used memory channels without blocking application memory requests, improving the high-percentile response time of data-intensive applications. The proposed hardware can additionally improve the throughput of applications that frequently copy memory blocks between the OS kernel and user space. Compared with DRAM without 3D-XPath, 3D-XPath DRAM improves the throughput of I/O-intensive applications by up to 39% while reducing the high-percentile response time of latency-sensitive applications by up to 30%.

In addition, recent computer systems are evolving toward integrating more CPU cores, which need more memory bandwidth and capacity, into a single socket. Increasing the number of channels per socket is the usual answer to the bandwidth demand, and as the DRAM interface has evolved, the data bus width has been narrowed and the burst length increased to make better use of these additional channels. The longer burst length, however, increases DRAM access latency. Modern applications also demand more memory capacity; process scaling has provided it for decades, but below 20 nm it has become difficult to increase memory density through further scaling, so stacked memory is used to increase capacity instead.

In this setting, we observed on a real modern multicore machine running the SPEC CPU 2006 workloads across multiple cores that the system's memory controllers are not always fully utilized. To exploit these under-utilized channels, this dissertation proposes a memory-channel-sharing architecture, together with the required hardware blocks, that raises the peak bandwidth of a single memory channel and reduces burst latency on 3D stacked memory. With channel sharing, the performance of multi-programmed and multi-threaded workloads improves by up to 4.3% and 3.6%, respectively, and the average read latency is reduced by up to 8.22% and 10.18%. (A toy illustration of the alternative-path routing idea in 3D-XPath follows the table of contents below.)

Contents
Abstract i
Contents iv
List of Figures vi
List of Tables viii
Introduction 1
1.1 3D-XPath: High-Density Managed DRAM Architecture with Cost-effective Alternative Paths for Memory Transactions 5
1.2 Boosting Bandwidth: Dynamic Channel Sharing on 3D Stacked Memory 9
1.3 Research contribution 13
1.4 Outline 14
3D-stacked Heterogeneous Memory Architecture with Cost-effective Extra Block Transfer Paths 17
2.1 Background 17
2.1.1 Heterogeneous Main Memory Systems 17
2.1.2 Specialized DRAM 19
2.1.3 3D-stacked Memory 22
2.2 HIGH-DENSITY DRAM ARCHITECTURE 27
2.2.1 Key Design Challenges 29
2.2.2 Plausible High-density DRAM Designs 33
2.3 3D-STACKED DRAM WITH ALTERNATIVE PATHS FOR MEMORY TRANSACTIONS 37
2.3.1 3D-XPath Architecture 41
2.3.2 3D-XPath Management 46
2.4 EXPERIMENTAL METHODOLOGY 52
2.5 EVALUATION 56
2.5.1 OLDI Workloads 56
2.5.2 Non-OLDI Workloads 61
2.5.3 Sensitivity Analysis 66
2.6 RELATED WORK 70
Boosting Bandwidth: Dynamic Channel Sharing on 3D Stacked Memory 72
3.1 Background: Memory Operations 72
3.1.1. Memory Controller 72
3.1.2 DRAM column access sequence 73
3.2 Related Work 74
3.3. CHANNEL SHARING ENABLED MEMORY SYSTEM 76
3.3.1 Hardware Requirements 78
3.3.2 Operation Sequence 81
3.4 Analysis 87
3.4.1 Experiment Environment 87
3.4.2 Performance 88
3.4.3 Overhead 90
CONCLUSION 92
REFERENCES 94
Abstract (in Korean) 107
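As referenced above, here is a toy software model of the alternative-path idea (an editorial illustration only; the thesis realizes this in stacked-DRAM hardware, not in software): a memory request whose home channel is blocked by a hot-page migration is serviced over another, currently idle channel instead of waiting.

    #include <stdbool.h>
    #include <stdio.h>

    #define NCHANNELS 4

    typedef struct {
        bool busy_with_migration;   /* channel blocked by a page transfer */
    } channel_t;

    static channel_t channels[NCHANNELS];

    /* Pick the channel to use for a request whose home channel is `primary`. */
    int route_request(int primary)
    {
        if (!channels[primary].busy_with_migration)
            return primary;                      /* common case: home channel   */
        for (int c = 0; c < NCHANNELS; c++)      /* otherwise borrow an idle one */
            if (c != primary && !channels[c].busy_with_migration)
                return c;
        return primary;                          /* all busy: wait on home channel */
    }

    int main(void)
    {
        channels[2].busy_with_migration = true;  /* e.g. a hot-page swap in flight */
        printf("request on channel 2 routed to channel %d\n", route_request(2));
        return 0;
    }

The real design makes this decision per memory transaction inside the stacked DRAM; the sketch only captures the routing choice that keeps latency-sensitive requests from queueing behind page transfers.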
Energy Agile Cluster Communication
Computing researchers have long focused on improving energy efficiency (the amount of computation per joule) under the implicit assumption that all energy is created equal. Energy, however, is not created equal: its cost and carbon footprint fluctuate over time due to a variety of factors. These fluctuations are expected to intensify as renewable penetration increases. In my work I therefore introduce energy-agility, a design concept capturing a platform's ability to rapidly and efficiently adapt to such power fluctuations. I then introduce a representative application to assess energy-agility for the type of long-running, parallel, data-intensive tasks that are both common in data centers and most amenable to delays from variations in available power. Multiple variants of the application are implemented to illustrate the fundamental tradeoffs in designing energy-agile parallel applications. I find that with inactive power-state transition latencies of up to 15 seconds, a design that regularly "blinks" servers outperforms one that minimizes transitions by only changing power states when power varies. While the latter approach has much lower transition overhead, it requires additional I/O, since servers are not always concurrently active. Unfortunately, I find that most server-class platforms today are not energy-agile: they have transition latencies beyond one minute, forcing them to minimize transitions and incur additional I/O.
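To make the 15-second figure concrete, the short calculation below (editorial, with assumed blink periods and a simple model of one deactivation plus one activation per period, each costing the full transition latency) estimates the fraction of time a blinking server spends in power-state transitions; it illustrates why minute-scale latencies make regular blinking impractical.

    #include <stdio.h>

    int main(void)
    {
        const double t_transition = 15.0;                 /* seconds, figure quoted above */
        const double periods[] = { 60.0, 300.0, 1800.0 }; /* assumed blink periods (s)    */
        const int n = sizeof periods / sizeof periods[0];

        for (int i = 0; i < n; i++) {
            /* one power-down plus one power-up per blink period */
            double overhead = 2.0 * t_transition / periods[i];
            printf("blink period %6.0f s -> %5.1f%% of time in transitions\n",
                   periods[i], 100.0 * overhead);
        }
        return 0;
    }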
- …