Reducing consistency traffic and cache misses in the avalanche multiprocessor
Journal Article. For a parallel architecture to scale effectively, communication latency between processors must be minimized. We have found that the source of a large number of avoidable cache misses is the use of hardwired write-invalidate coherency protocols, which often exhibit high cache miss rates due to excessive invalidations and subsequent reloading of shared data. In the Avalanche project at the University of Utah, we are building a 64-node multiprocessor designed to reduce the end-to-end communication latency of both shared memory and message passing programs. As part of our design efforts, we are evaluating the potential performance benefits and implementation complexity of providing hardware support for multiple coherency protocols. Using a detailed architecture simulation of Avalanche, we have found that support for multiple consistency protocols can reduce the time parallel applications spend stalled on memory operations by up to 66% and overall execution time by up to 31%. Most of this reduction in memory stall time is due to a novel release-consistent multiple-writer write-update protocol implemented using a write state buffer.
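The difference between write-invalidate and write-update behavior on producer/consumer sharing can be illustrated with a toy miss-count model. This is purely illustrative: the function, its parameters, and the counting rule are our own simplification, not the Avalanche simulator.

```python
def simulate(protocol, write_rounds, sharers):
    """Count cache misses for a pattern where one node writes a shared
    line and all sharers then read it, repeated write_rounds times.

    'invalidate': each write invalidates the other sharers' copies, so
                  their next read misses and reloads the line.
    'update':     each write pushes the new value to the other copies,
                  so only the initial cold misses occur.
    """
    misses = sharers  # every sharer takes one cold miss up front
    if protocol == "invalidate":
        misses += write_rounds * (sharers - 1)
    return misses

print(simulate("invalidate", 100, 4))  # 304 misses
print(simulate("update", 100, 4))      # 4 misses (cold only)
```

Under this model the update protocol avoids all reload misses, which mirrors the abstract's observation that invalidation-induced reloading of shared data dominates avoidable miss traffic.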
A Study of Client-based Caching for Parallel I/O
The trend in parallel computing toward large-scale cluster computers running thousands of cooperating processes per application has led to an I/O bottleneck that has only grown more severe as the number of processing cores per CPU has increased. Current parallel file systems are able to provide high-bandwidth file access for large contiguous file region accesses; however, applications repeatedly accessing small file regions on unaligned file region boundaries continue to experience poor I/O throughput due to the high overhead associated with accessing parallel file system data. In this dissertation, we demonstrate how client-side file data caching can improve parallel file system throughput for applications performing frequent small and unaligned file I/O. We explore the impacts of cache page size and cache capacity using the popular FLASH I/O benchmark and explore a novel cache sharing approach that leverages the trend toward multi-core processors. We also explore a technique we call progressive page caching that represents cache data using dynamic data structures rather than fixed-size pages of file data. Finally, we explore a cache aggregation scheme that leverages the high-level file I/O interfaces provided by the PVFS file system to provide further performance enhancements. In summary, our results indicate that a correctly configured middleware-based file data cache can dramatically improve the performance of I/O workloads dominated by small unaligned file accesses. Further, we demonstrate that a well-designed cache can offer stable performance even when the selected cache page granularity is not well matched to the provided workload. Finally, we have shown that high-level file system interfaces can significantly accelerate application performance, and interfaces beyond those currently envisioned by the MPI-IO standard could provide further performance benefits.
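To make the client-side caching idea concrete, here is a minimal sketch of a page cache with a fixed page size and LRU eviction. The class name, parameters, and the fetch callback are hypothetical stand-ins, not the dissertation's actual middleware; the point is only that repeated small, unaligned reads are absorbed by already-cached pages instead of hitting the parallel file system.

```python
from collections import OrderedDict

class PageCache:
    """Toy client-side cache: fixed-size pages, LRU eviction."""
    def __init__(self, page_size, capacity_pages, fetch):
        self.page_size = page_size
        self.capacity = capacity_pages
        self.fetch = fetch            # reads one aligned page from storage
        self.pages = OrderedDict()    # page index -> bytes, in LRU order
        self.misses = 0

    def read(self, offset, length):
        data = bytearray()
        first = offset // self.page_size
        last = (offset + length - 1) // self.page_size
        for idx in range(first, last + 1):
            if idx not in self.pages:
                self.misses += 1
                self.pages[idx] = self.fetch(idx * self.page_size,
                                             self.page_size)
                if len(self.pages) > self.capacity:
                    self.pages.popitem(last=False)   # evict LRU page
            self.pages.move_to_end(idx)              # mark most recent
            data += self.pages[idx]
        skip = offset - first * self.page_size
        return bytes(data[skip:skip + length])

# Small unaligned reads are served from cached pages, not storage.
backing = bytes(range(256)) * 16
cache = PageCache(64, 8, lambda off, n: backing[off:off + n])
assert cache.read(10, 100) == backing[10:110]   # touches pages 0 and 1
assert cache.misses == 2
cache.read(20, 50)                              # entirely within cached pages
assert cache.misses == 2
```

A real middleware cache must also handle writes, coherence across clients, and the page-size mismatch effects the dissertation studies; this sketch shows only the read path.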
Improving Parallel I/O Performance Using Interval I/O
Today's most advanced scientific applications run on large clusters consisting of hundreds of thousands of processing cores, access state-of-the-art parallel file systems that allow files to be distributed across hundreds of storage targets, and utilize advanced interconnection systems that allow for theoretical I/O bandwidths of hundreds of gigabytes per second. Despite these advanced technologies, these applications often fail to obtain a reasonable proportion of the available I/O bandwidth. The reasons for the poor performance of application I/O include the noncontiguous I/O access patterns used in scientific computing, contention due to false sharing, and the somewhat finicky nature of parallel file system performance. We argue that a more fundamental cause of this problem is the legacy view of a file as a linear sequence of bytes. To address these issues, we introduce a novel approach to parallel I/O called Interval I/O. Interval I/O uses application access patterns to partition a file into a series of intervals, which are used as the fundamental unit for subsequent I/O operations. This approach provides superior performance for the noncontiguous access patterns frequently used by scientific applications. In addition, it reduces false contention and the unnecessary serialization it causes. Interval I/O also significantly increases the performance of atomic-mode operations. Finally, the Interval I/O approach includes a technique for supporting parallel I/O for cooperating applications. We provide a prototype implementation of our Interval I/O system and use it to demonstrate performance improvements of as much as 1000% compared to ROMIO when using Interval I/O with several common benchmarks.
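As a rough illustration of the interval idea, overlapping per-process byte-range requests can be coalesced into disjoint intervals that then serve as the unit of subsequent I/O and locking. The helper below and its merging rule are our own sketch under that assumption, not the dissertation's actual partitioning algorithm.

```python
def build_intervals(accesses):
    """Merge (offset, length) requests into disjoint half-open
    intervals [start, end) that become the unit of I/O and locking."""
    spans = sorted((off, off + length) for off, length in accesses)
    merged = []
    for start, end in spans:
        if merged and start <= merged[-1][1]:
            # Overlaps or abuts the previous interval: extend it.
            merged[-1] = (merged[-1][0], max(merged[-1][1], end))
        else:
            merged.append((start, end))
    return merged

print(build_intervals([(0, 4), (2, 4), (10, 2)]))  # [(0, 6), (10, 12)]
```

Because two processes whose requests fall into different intervals never share a lock unit, this kind of partitioning avoids the false contention that fixed block boundaries create.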
Memory Subsystem Optimization for Efficient System Resource Utilization in Data-Intensive Applications
Thesis (Ph.D.) -- Seoul National University Graduate School: Dept. of Electrical and Computer Engineering, College of Engineering, August 2020. With explosive data growth, data-intensive applications, such as relational databases and key-value storage, have become increasingly popular in a variety of domains in recent years. To meet the growing performance demands of data-intensive applications, it is crucial to efficiently and fully utilize memory resources for the best possible performance.
However, general-purpose operating systems (OSs) are designed to provide system resources fairly, at the system level, to all applications running on a system. A single application may find it difficult to fully exploit the system's best performance due to this system-level fairness. For performance reasons, many data-intensive applications implement their own versions of mechanisms that OSs already provide, under the assumption that they know their data better than the OS does. Such mechanisms can be greedily optimized for performance, but this may result in inefficient use of system resources.
In this dissertation, we claim that simple OS support combined with minor application modifications can yield even higher application performance without sacrificing system-level resource utilization. We optimize and extend the OS memory subsystem to better support applications while addressing three memory-related issues in data-intensive applications. First, we introduce a memory-efficient cooperative caching approach between the application and the kernel buffer to address the double caching problem, where the same data resides in multiple layers. Second, we present a memory-efficient, transparent zero-copy read I/O scheme to avoid the performance interference caused by memory copying during I/O. Third, we propose a memory-efficient fork-based checkpointing mechanism for in-memory database systems to mitigate the memory footprint problem of the existing fork-based checkpointing scheme, in which memory usage grows incrementally (up to 2x) during checkpointing for update-intensive workloads.
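The fork-based checkpointing baseline that the third scheme improves on can be sketched in a few lines: the child inherits a copy-on-write view of memory frozen at fork time and persists it, while the parent keeps serving updates. This is a minimal POSIX illustration only; the dictionary "database" and the file path are stand-ins, not the dissertation's system.

```python
import os

data = {"balance": 100}            # stand-in for an in-memory database

def checkpoint(path):
    """Fork a child that persists a copy-on-write snapshot of memory."""
    pid = os.fork()
    if pid == 0:
        # Child: its view of `data` is frozen at fork time.
        with open(path, "w") as f:
            f.write(repr(data))
        os._exit(0)
    return pid                     # parent continues serving requests

pid = checkpoint("/tmp/snapshot.txt")
data["balance"] = 999              # parent update after the fork
os.waitpid(pid, 0)
assert "100" in open("/tmp/snapshot.txt").read()   # snapshot is pre-update
```

Every parent-side update during the checkpoint forces the kernel to copy the touched page, which is exactly why update-intensive workloads can inflate memory usage toward 2x under this scheme, as the abstract notes.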
To show the effectiveness of our approach, we implement and evaluate our schemes on real multi-core systems. The experimental results demonstrate that our cooperative approach can more effectively address the above issues related to data-intensive applications than existing non-cooperative approaches while delivering better performance (in terms of transaction processing speed, I/O throughput, or memory footprint).
Chapter 1 Introduction 1
1.1 Motivation 1
1.1.1 Importance of Memory Resources 1
1.1.2 Problems 2
1.2 Contributions 5
1.3 Outline 6
Chapter 2 Background 7
2.1 Linux Kernel Memory Management 7
2.1.1 Page Cache 7
2.1.2 Page Reclamation 8
2.1.3 Page Table and TLB Shootdown 9
2.1.4 Copy-on-Write 10
2.2 Linux Support for Applications 11
2.2.1 fork 11
2.2.2 madvise 11
2.2.3 Direct I/O 12
2.2.4 mmap 13
Chapter 3 Memory Efficient Cooperative Caching 14
3.1 Motivation 14
3.1.1 Problems of Existing Datastore Architecture 14
3.1.2 Proposed Architecture 17
3.2 Related Work 17
3.3 Design and Implementation 19
3.3.1 Overview 19
3.3.2 Kernel Support 24
3.3.3 Migration to DBIO 25
3.4 Evaluation 27
3.4.1 System Configuration 27
3.4.2 Methodology 28
3.4.3 TPC-C Benchmarks 30
3.4.4 YCSB Benchmarks 32
3.5 Summary 37
Chapter 4 Memory Efficient Zero-copy I/O 38
4.1 Motivation 38
4.1.1 The Problems of Copy-Based I/O 38
4.2 Related Work 40
4.2.1 Zero Copy I/O 40
4.2.2 TLB Shootdown 42
4.2.3 Copy-on-Write 43
4.3 Design and Implementation 44
4.3.1 Prerequisites for z-READ 44
4.3.2 Overview of z-READ 45
4.3.3 TLB Shootdown Optimization 48
4.3.4 Copy-on-Write Optimization 52
4.3.5 Implementation 55
4.4 Evaluation 55
4.4.1 System Configurations 56
4.4.2 Effectiveness of the TLB Shootdown Optimization 57
4.4.3 Effectiveness of CoW Optimization 59
4.4.4 Analysis of the Performance Improvement 62
4.4.5 Performance Interference Intensity 63
4.4.6 Effectiveness of z-READ in Macrobenchmarks 65
4.5 Summary 67
Chapter 5 Memory Efficient Fork-based Checkpointing 69
5.1 Motivation 69
5.1.1 Fork-based Checkpointing 69
5.1.2 Approach 71
5.2 Related Work 73
5.3 Design and Implementation 74
5.3.1 Overview 74
5.3.2 OS Support 78
5.3.3 Implementation 79
5.4 Evaluation 80
5.4.1 Experimental Setup 80
5.4.2 Performance 81
5.5 Summary 86
Chapter 6 Conclusion 87
Summary (in Korean) 100
Avalanche: A communication and memory architecture for scalable parallel computing
Technical Report. As the gap between processor and memory speeds widens, system designers will inevitably incorporate increasingly deep memory hierarchies to maintain the balance between processor and memory system performance. At the same time, most communication subsystems are permitted access only to main memory and not a processor's top-level cache. As memory latencies increase, this lack of integration between the memory and communication systems will seriously impede interprocessor communication performance and limit effective scalability. In the Avalanche project we are redesigning the memory architecture of a commercial RISC multiprocessor, the HP PA-RISC 7100, to include a new multi-level context-sensitive cache that is tightly coupled to the communication fabric. The primary goal of Avalanche's integrated cache and communication controller is attacking end-to-end communication latency in all of its forms. This includes cache misses induced by excessive invalidations and reloading of shared data by write-invalidate coherence protocols, and cache misses induced by depositing incoming message data in main memory and faulting it into the cache. An execution-driven simulation study of Avalanche's architecture indicates that it can reduce cache stalls by 5-60% and overall execution times by 10-28%.
2OS
In this book I approach the problem of understanding an OS from the point of view of a C programmer who needs to understand enough of how an OS works to program efficiently and to avoid the traps and pitfalls that arise from not understanding what is happening underneath you. If you have a deep understanding of the memory system, you will not program in a style that loses significant performance by breaking the assumptions of the OS designer. If you have an understanding of how I/O works, you can make good use of OS services. As you work through this book, you will see other examples.
Towards Successful Application of Phase Change Memories: Addressing Challenges from Write Operations
The emerging Phase Change Memory (PCM) technology is drawing increasing attention due to its advantages in non-volatility, byte-addressability, and scalability. It is regarded as a promising candidate for future main memory. However, PCM's write operation has limitations that pose challenges to its adoption as main memory: long write latency, high write power, and limited write endurance.
In this thesis, I present my effort towards successful application of PCM memory. My research consists of several optimizing techniques at both the circuit and architecture level. First, at the circuit level, I propose Differential Write to remove unnecessary bit changes in PCM writes. This is not only beneficial to endurance but also to the energy and latency of writes. Second, I propose two memory scheduling enhancements (AWP and RAWP) for a non-blocking bank design. My memory scheduling enhancements can exploit intra-bank parallelism provided by non-blocking bank design, and achieve significant throughput improvement. Third, I propose Bit Level Power Budgeting (BPB), a fine-grained power budgeting technique that leverages the information from Differential Write to achieve even higher memory throughput under the same power budget. Fourth, I propose techniques to improve the QoS tuning ability of high-priority applications when running on PCM memory.
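The core of a differential (data-comparison) write can be sketched at the bit level: XOR the stored word against the incoming word and program only the positions that differ. This is a simplified model we supply for illustration; a real PCM controller performs the comparison per cell with a read-before-write, which is not shown here.

```python
def differential_write(old, new):
    """Return (set_mask, reset_mask): only bits that actually change
    are programmed; unchanged cells are left untouched entirely."""
    changed = old ^ new
    set_mask = changed & new       # cells flipping 0 -> 1 (SET pulses)
    reset_mask = changed & old     # cells flipping 1 -> 0 (RESET pulses)
    return set_mask, reset_mask

s, r = differential_write(0b1100, 0b1010)
assert s == 0b0010 and r == 0b0100
assert bin(s | r).count("1") == 2  # only 2 of 4 cells are written
```

Because endurance, write energy, and write latency all scale with the number of cells actually programmed, skipping unchanged bits benefits all three at once, which is the abstract's point.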
In summary, the techniques I propose effectively address the challenges of PCM's write operations. In addition, I present the experimental infrastructure used in this work and my vision of potential future research topics, which could be helpful to other researchers in the area.
Rethinking the I/O Stack for Persistent Memory
Modern operating systems have been designed around the hypotheses that (a) memory is both byte-addressable and volatile and (b) storage is block-addressable and persistent. The arrival of new Persistent Memory (PM) technologies has made these assumptions obsolete. Despite much of the recent work in this space, the need for consistently sharing PM data across multiple applications remains an urgent, unsolved problem. Furthermore, the availability of simple yet powerful operating system support remains elusive.
In this dissertation, we propose and build the Region System, a high-performance operating system stack for PM that implements usable consistency and persistence for application data. The region system provides support for consistently mapping and sharing data resident in PM across user application address spaces. The region system introduces a novel IPI-based PMSYNC operation, which ensures atomic persistence of mapped pages across multiple address spaces. This allows applications to consume PM using the well-understood and much-desired memory-like model with an easy-to-use interface. Next, we propose a metadata structure without any redundant metadata to reduce CPU cache flushes. The high-performance design minimizes expensive PM ordering and durability operations by embracing a minimalistic approach to metadata construction and management.
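The memory-like consumption model the region system targets can be approximated today with an ordinary memory-mapped file: load/store access to mapped bytes, then an explicit flush at a persist point. This is a loose analogue only; `mmap` and `flush` here stand in for PM mapping and the PMSYNC operation, whose atomicity and cross-address-space guarantees are much stronger than what this sketch provides.

```python
import mmap
import os

path = "/tmp/pm_region.bin"
with open(path, "wb") as f:              # back the "region" with a file
    f.write(b"\x00" * mmap.PAGESIZE)

fd = os.open(path, os.O_RDWR)
buf = mmap.mmap(fd, mmap.PAGESIZE)       # map it into the address space
buf[0:5] = b"hello"                      # plain store, no write() syscall
buf.flush()                              # persist point, akin in spirit to PMSYNC
buf.close()
os.close(fd)
assert open(path, "rb").read(5) == b"hello"
```

The appeal of this model, which the region system generalizes, is that applications manipulate durable data with ordinary loads and stores and pay the persistence cost only at explicit synchronization points.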
To strengthen the case for the region system, in this dissertation we analyze different types of applications to identify their dependence on memory-mapped data usage, and propose the user-level libraries LIBPM-R and LIBPMEMOBJ-R to support shared persistent containers. The user-level libraries, along with the region system, demonstrate a comprehensive end-to-end software stack for consuming PM devices.