Improving Processor Design by Exploiting Performance Variance
Programs exhibit significant performance variance in their use of microarchitectural structures. There are three types of performance variance. First, semantically equivalent programs running on the same system can yield different performance due to characteristics of microarchitectural structures. Second, program phase behavior varies significantly over the course of execution. Third, different types of operations on a microarchitectural structure can lead to different performance.
In this dissertation, we explore this performance variance and propose techniques to improve processor design.
We explore performance variance caused by microarchitectural structures and propose program interferometry, a technique that perturbs benchmark executables to yield a wide variety of performance points without changing program semantics or other important execution characteristics such as the number of retired instructions. By observing the behavior of the benchmarks over a range of branch prediction accuracies, we can estimate the impact of a microarchitectural optimization itself, rather than that of the rest of the microarchitecture.
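To make the estimation step concrete, here is a minimal Python sketch of how one might fit a performance model over the perturbed runs. The data points, the linear CPI model, and the function names are illustrative assumptions, not the dissertation's actual tooling:

```python
# A minimal sketch of the idea behind program interferometry: perturbing a
# benchmark yields many (misprediction-rate, CPI) points, to which we can
# fit a simple model and estimate what a new predictor would achieve.
# All data points below are hypothetical.

def fit_cpi_model(samples):
    """Least-squares fit of CPI = base_cpi + penalty * mpki."""
    n = len(samples)
    mean_x = sum(x for x, _ in samples) / n
    mean_y = sum(y for _, y in samples) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in samples)
    var = sum((x - mean_x) ** 2 for x, _ in samples)
    penalty = cov / var          # extra cycles per instruction per MPKI
    base_cpi = mean_y - penalty * mean_x
    return base_cpi, penalty

# Hypothetical (branch MPKI, measured CPI) points obtained by perturbing
# the same executable without changing its semantics.
points = [(2.0, 0.92), (4.1, 1.10), (6.3, 1.29), (8.2, 1.46)]
base_cpi, penalty = fit_cpi_model(points)

# Estimate the speedup of a predictor that lowers MPKI from 6.3 to 3.0,
# isolating the predictor's impact from the rest of the microarchitecture.
old_cpi = base_cpi + penalty * 6.3
new_cpi = base_cpi + penalty * 3.0
print(f"estimated speedup: {old_cpi / new_cpi:.3f}x")
```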
We explore performance variance caused by phase changes and develop prediction-driven last-level cache (LLC) writeback techniques. We propose a rank idle time prediction driven LLC writeback technique and a last-write prediction driven LLC writeback technique. These techniques improve performance by reducing the write-induced interference.
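As a rough illustration of what last-write prediction can look like, the sketch below uses an assumed PC-indexed table of saturating counters to flag stores whose blocks can be scheduled for early writeback. It is a simplified stand-in, not the proposed hardware design:

```python
# A minimal sketch of last-write-prediction-driven writeback under assumed
# structures (a PC-indexed table of 2-bit saturating counters). When a
# store is predicted to be the last write to its cache block, the dirty
# block is eagerly queued for writeback so the write does not later
# interfere with demand reads.

TABLE_SIZE = 1024

class LastWritePredictor:
    def __init__(self):
        self.counters = [1] * TABLE_SIZE  # 2-bit saturating counters

    def _index(self, pc):
        return pc % TABLE_SIZE

    def predict_last_write(self, pc):
        return self.counters[self._index(pc)] >= 2

    def train(self, pc, was_last_write):
        i = self._index(pc)
        if was_last_write:
            self.counters[i] = min(3, self.counters[i] + 1)
        else:
            self.counters[i] = max(0, self.counters[i] - 1)

pred = LastWritePredictor()
writeback_queue = []

def on_store(pc, block_addr):
    # If this store is predicted to be the block's final write, queue the
    # block for an early writeback during otherwise idle DRAM cycles.
    if pred.predict_last_write(pc):
        writeback_queue.append(block_addr)

pred.train(0x401200, was_last_write=True)  # this PC's blocks die after writes
pred.train(0x401200, was_last_write=True)
on_store(0x401200, 0x804040)
print(hex(writeback_queue[0]))  # -> 0x804040
```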
We explore performance variance caused by different types of operations to Non-Volatile Memory (NVM) and propose LLC management policies to reduce the write overhead of NVM. We propose an adaptive placement and migration policy for an STT-RAM-based hybrid cache, and writeback-aware dynamic cache management for an NVM-based main memory system. These techniques reduce write latency and write energy, thus leading to performance improvement and energy reduction.
Improving DRAM Performance by Parallelizing Refreshes with Accesses
Modern DRAM cells are periodically refreshed to prevent data loss due to
leakage. Commodity DDR DRAM refreshes cells at the rank level. This degrades
performance significantly because it prevents an entire rank from serving
memory requests while being refreshed. DRAM designed for mobile platforms,
LPDDR DRAM, supports an enhanced mode, called per-bank refresh, that refreshes
cells at the bank level. This enables a bank to be accessed while another in
the same rank is being refreshed, alleviating part of the negative performance
impact of refreshes. However, there are two shortcomings of per-bank refresh.
First, the per-bank refresh scheduling scheme does not exploit the full
potential of overlapping refreshes with accesses across banks because it
restricts the banks to be refreshed in a sequential round-robin order. Second,
accesses to a bank that is being refreshed have to wait.
To mitigate the negative performance impact of DRAM refresh, we propose two
complementary mechanisms, DARP (Dynamic Access Refresh Parallelization) and
SARP (Subarray Access Refresh Parallelization). The goal is to address the
drawbacks of per-bank refresh by building more efficient techniques to
parallelize refreshes and accesses within DRAM. First, instead of issuing
per-bank refreshes in a round-robin order, DARP issues per-bank refreshes to
idle banks in an out-of-order manner. Furthermore, DARP schedules refreshes
during intervals when a batch of writes are draining to DRAM. Second, SARP
exploits the existence of mostly-independent subarrays within a bank. With
minor modifications to DRAM organization, it allows a bank to serve memory
accesses to an idle subarray while another subarray is being refreshed.
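The scheduling decision at the heart of DARP can be illustrated with a small sketch (simplified structures, not the paper's exact microarchitecture):

```python
# A minimal sketch of DARP's first idea: instead of refreshing banks in a
# fixed round-robin order, issue the pending per-bank refresh to a bank
# with no queued demand requests; its second idea is to overlap refreshes
# with write-drain periods. Structures here are illustrative assumptions.

def pick_refresh_bank(pending_refreshes, request_queues, draining_writes):
    """Return a bank to refresh now, or None to postpone the refresh."""
    idle = [b for b in pending_refreshes if not request_queues[b]]
    if idle:
        return idle[0]           # out-of-order: refresh an idle bank first
    if draining_writes:
        # While a batch of writes drains to DRAM, reads are not waiting,
        # so a refresh can be hidden under the write burst.
        return next(iter(pending_refreshes), None)
    return None                  # all banks serving reads: defer the refresh

# Example: banks 1 and 3 owe a refresh; bank 1 has queued demand requests.
queues = {0: [], 1: ["rd A"], 2: ["rd B"], 3: []}
print(pick_refresh_bank({1, 3}, queues, draining_writes=False))  # -> 3
```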
Extensive evaluations show that our mechanisms improve system performance and
energy efficiency compared to state-of-the-art refresh policies and the benefit
increases as DRAM density increases.
Comment: The original paper published in the International Symposium on High-Performance Computer Architecture (HPCA) contains an error. The arXiv version has an erratum that describes the error and the fix for it.
Reducing DRAM Row Activations with Eager Writeback
This thesis describes and evaluates a new approach to optimizing DRAM performance and energy consumption that is based on eagerly writing dirty cache lines to DRAM. Under this approach, dirty cache lines that have not been recently accessed are eagerly written to DRAM when the corresponding row has been activated by an ordinary access, such as a read. This approach enables clustering of reads and writes that target the same row, resulting in a significant reduction in row activations. Specifically, for 29 applications, it reduces the number of DRAM row activations by an average of 38% and a maximum of 81%. The results from a full system simulator show that for the 29 applications, 11 have performance improvements between 10% and 20%, and 9 have improvements in excess of 20%. Furthermore, 10 consume between 10% and 20% less DRAM energy, and 10 have energy consumption reductions in excess of 20%.
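The core policy is simple enough to sketch in a few lines. The following Python sketch illustrates it under assumed, simplified structures (a per-row dirty-line index and a recency filter); it is not the thesis's actual implementation:

```python
# A minimal sketch of the eager-writeback idea: when an ordinary access
# (e.g., a read) activates a DRAM row, dirty cache lines that map to the
# same row and have not been recently accessed are written back while the
# row is already open, clustering writes with reads and avoiding extra
# row activations. The mapping structures are simplified assumptions.

dirty_lines_by_row = {}   # (bank, row) -> set of dirty line addresses
recently_used = set()     # lines touched recently (kept dirty in cache)

def on_row_activate(bank, row, issue_write):
    """Called when a demand access opens (bank, row)."""
    for line in list(dirty_lines_by_row.get((bank, row), ())):
        if line not in recently_used:
            issue_write(line)                  # row already open: no new ACT
            dirty_lines_by_row[(bank, row)].discard(line)

# Example: a read to row 7 of bank 2 lets two cold dirty lines ride along.
dirty_lines_by_row[(2, 7)] = {0xA0, 0xB0, 0xC0}
recently_used.add(0xC0)                        # still hot: keep it in cache
on_row_activate(2, 7, issue_write=lambda a: print(f"eager WB {a:#x}"))
# prints eager WB for 0xa0 and 0xb0 (order may vary)
```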
Understanding and Improving the Latency of DRAM-Based Memory Systems
Over the past two decades, the storage capacity and access bandwidth of main
memory have improved tremendously, by 128x and 20x, respectively. These
improvements are mainly due to the continuous technology scaling of DRAM
(dynamic random-access memory), which has been used as the physical substrate
for main memory. In stark contrast with capacity and bandwidth, DRAM latency
has remained almost constant, reducing by only 1.3x in the same time frame.
Therefore, long DRAM latency continues to be a critical performance bottleneck
in modern systems. Increasing core counts and the emergence of increasingly
data-intensive and latency-critical applications further stress the
importance of providing low-latency memory access.
In this dissertation, we identify three main problems that contribute
significantly to long latency of DRAM accesses. To address these problems, we
present a series of new techniques. Our new techniques significantly improve
both system performance and energy efficiency. We also examine the critical
relationship between supply voltage and latency in modern DRAM chips and
develop new mechanisms that exploit this voltage-latency trade-off to improve
energy efficiency.
The key conclusion of this dissertation is that augmenting DRAM architecture
with simple and low-cost features, and developing a better understanding of
manufactured DRAM chips together lead to significant memory latency reduction
as well as energy efficiency improvement. We hope and believe that the proposed
architectural techniques and the detailed experimental data and observations on
real commodity DRAM chips presented in this dissertation will enable
development of other new mechanisms to improve the performance, energy
efficiency, or reliability of future memory systems.
Comment: PhD Dissertation.
Memory Systems and Interconnects for Scale-Out Servers
The information revolution of the last decade has been fueled by the digitization of almost all human activities through a wide range of Internet services. The backbone of this information age is the scale-out datacenter, which needs to collect, store, and process massive amounts of data. These datacenters distribute vast datasets across a large number of servers, typically into memory-resident shards, so as to maintain strict quality-of-service guarantees. While data is driving the skyrocketing demand for scale-out servers, processor and memory manufacturers have reached fundamental efficiency limits and are no longer able to increase server energy efficiency at a sufficient pace. As a result, energy has emerged as the main obstacle to the scalability of information technology (IT), with huge economic implications. Delivering sustainable IT calls for a paradigm shift in computer system design. As memory has taken a central role in IT infrastructure, memory-centric architectures are required to fully utilize IT's costly memory investment. In response, processor architects are resorting to manycore architectures to leverage the abundant request-level parallelism found in data-centric applications. Manycore processors fully utilize available memory resources, thereby increasing IT efficiency by almost an order of magnitude. Because manycore server chips execute a large number of concurrent requests, they exhibit a high incidence of last-level-cache accesses for fetching instructions (due to large instruction footprints) and of off-chip memory accesses for dataset objects (due to the lack of temporal reuse in on-chip caches). As a result, on-chip interconnects and the memory system are emerging as major performance and energy-efficiency bottlenecks in servers. This thesis seeks to architect on-chip interconnects and memory systems that are tuned for the requirements of memory-centric scale-out servers. By studying a wide range of data-centric applications, we uncover phenomena common to data-centric applications and examine their implications for on-chip network and off-chip memory traffic. Finally, we propose specialized on-chip interconnects and memory systems that leverage these common traffic characteristics, thereby improving server throughput and energy efficiency.
Software and hardware methods for memory access latency reduction on ILP processors
While microprocessors have doubled their speed every 18 months, performance improvement of memory systems has continued to lag behind. To address the speed gap between CPU and memory, a standard multi-level caching organization has been built for fast data accesses before the data have to be accessed in the DRAM core. The existence of these caches in a computer system, such as L1, L2, L3, and DRAM row buffers, does not mean that data locality will be automatically exploited. The effective use of the memory hierarchy mainly depends on how data are allocated and how memory accesses are scheduled. In this dissertation, we propose several novel software and hardware techniques to effectively exploit data locality and to significantly reduce memory access latency.

We first present a case study at the application level that restructures memory-intensive programs by utilizing program-specific knowledge. This study identifies the problem of bit-reversals, a set of data reordering operations extensively used in scientific computing programs such as FFT, whose special data access pattern can cause severe cache conflicts. We propose several software methods, including padding and blocking, to restructure such programs and reduce those conflicts. Our methods outperform existing ones on both uniprocessor and multiprocessor systems.

The access latency to the DRAM core has become increasingly long relative to CPU speed, causing memory accesses to be an execution bottleneck. In order to reduce the frequency of DRAM core accesses and thereby shorten the overall memory access latency, we have conducted three studies at this level of the memory hierarchy. First, motivated by our evaluation of the DRAM row buffer's performance roles and our findings on the causes of its access conflicts, we propose a simple and effective memory interleaving scheme to reduce or even eliminate row buffer conflicts. Second, we propose a fine-grain priority scheduling scheme to reorder the sequence of data accesses on multi-channel memory systems, effectively exploiting the available bus bandwidth and access concurrency. In the final part of the dissertation, we first evaluate the design of cached DRAM and its organization alternatives for ILP processors, and then propose a new memory hierarchy integration that uses cached DRAM to construct a very large off-chip cache. We show that this structure outperforms a standard memory system with an off-chip L3 cache for memory-intensive applications.

Memory access latency has become a major performance bottleneck for memory-intensive applications. As long as DRAM remains the most cost-effective technology for main memory, the memory performance problem will continue to exist. The studies conducted in this dissertation address this important issue. Our proposed software and hardware schemes are effective and applicable, and can be directly used in real-world memory system designs and implementations. Our studies also provide guidance for application programmers to understand memory performance implications, and for system architects to optimize memory hierarchies.
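To illustrate the flavor of a conflict-reducing interleaving scheme, the sketch below XORs the bank index with low-order row bits, so that addresses mapping to the same bank under a conventional scheme are spread across banks. The bit-field widths and the XOR formulation are illustrative assumptions, not necessarily the dissertation's exact scheme:

```python
# A sketch of XOR-permutation interleaving: addresses that would collide
# in one bank (same conventional bank index, different rows) are spread
# across banks, reducing row buffer conflicts. Field widths are assumed.

NUM_BANKS = 8          # 3 bank-index bits
COLUMN_BITS = 11       # address bits below the bank index (column + offset)

def permuted_bank(addr):
    """Bank index after XOR-permutation with low-order row bits."""
    naive_bank = (addr >> COLUMN_BITS) % NUM_BANKS
    row_low = (addr >> (COLUMN_BITS + 3)) % NUM_BANKS  # 3 low row bits
    return naive_bank ^ row_low  # XOR keeps the mapping one-to-one

# Addresses that share a bank under the conventional scheme (same naive
# bank index, different rows) are spread across banks by the permutation.
for i in range(4):
    addr = i << (COLUMN_BITS + 3)
    naive = (addr >> COLUMN_BITS) % NUM_BANKS
    print(f"addr {addr:#08x}: naive bank {naive}, "
          f"permuted bank {permuted_bank(addr)}")
```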
Mitigating bank conflicts in main memory via selective data duplication and migration
Main memory is organized as a hierarchy of banks, rows, and columns. Only data from a single row can be accessed from each bank at any given time. Switching between different rows of the same bank requires serializing long-latency operations to the bank. Consequently, memory performance suffers from bank conflicts when concurrent requests access different rows of the same bank.
Many prior solutions to the bank conflict problem require modifications to the memory device and/or the memory access protocol. Such modifications create hurdles for adoption due to the commodity nature of the memory business. Instead, I propose two new runtime solutions that work with unmodified memory devices and access protocols. The first, Duplicon Cache, duplicates select data to multiple banks, allowing duplicated data to be sourced from either the original bank or the alternate bank, whichever is more lightly loaded, as sketched below. The second, Continuous Row Compaction, identifies data that are frequently accessed together, then migrates them to non-conflicting rows across different banks.
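The following minimal sketch illustrates the Duplicon Cache sourcing decision under assumed structures (a duplicate map and per-bank queue lengths); it is an illustration of the idea, not the proposal's implementation:

```python
# A minimal sketch of the Duplicon Cache idea: a read to duplicated data
# can be served from the original bank or the alternate bank, whichever
# currently has the lighter load. Structures are illustrative assumptions.

duplicate_map = {}          # (bank, row) -> alternate (bank, row) copy
bank_queue_len = [0] * 16   # outstanding requests per bank

def source_bank(bank, row):
    """Pick the less-loaded location for a request to (bank, row)."""
    alt = duplicate_map.get((bank, row))
    if alt is not None and bank_queue_len[alt[0]] < bank_queue_len[bank]:
        return alt          # the copy's bank is more lightly loaded
    return (bank, row)      # fall back to the original location

# Example: row 42 of bank 3 is duplicated into bank 9; bank 3 is congested.
duplicate_map[(3, 42)] = (9, 42)
bank_queue_len[3], bank_queue_len[9] = 12, 1
print(source_bank(3, 42))   # -> (9, 42)
```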
To limit the data transfer overhead from data duplication and migration, only select data are duplicated or migrated. The key is to identify large working sets of the running applications that remain stable over very long time intervals, and to slowly duplicate or migrate them over time, amortizing the cost of duplication and migration. In effect, the duplicated and migrated data form a cache within main memory that captures the large, stable working sets of the application.
Electrical and Computer Engineering
Memory System Design Techniques Based on New Memory Technologies
Thesis (Ph.D.) -- Graduate School of Seoul National University, Department of Electrical and Computer Engineering, February 2017. Advisor: Kiyoung Choi.
Performance and energy efficiency of modern computer systems are largely dominated by the memory system. This memory bottleneck has been exacerbated in the past few years by (1) architectural innovations for improving the efficiency of computation units (e.g., chip multiprocessors), which shift the major cause of inefficiency from processors to memory, and (2) the emergence of data-intensive applications, which demand a large main memory capacity and an excessive amount of memory bandwidth to be handled efficiently. In order to address this memory wall challenge, this dissertation aims at exploring the potential of emerging memory technologies and designing a high-performance, energy-efficient memory hierarchy that is aware of and leverages the characteristics of such new memory technologies.
The first part of this dissertation focuses on energy-efficient on-chip cache design based on a new non-volatile memory technology called Spin-Transfer Torque RAM (STT-RAM). When STT-RAM is used to build on-chip caches, it provides several advantages over conventional charge-based memories (e.g., SRAM or eDRAM), such as non-volatility, lower static power, and higher density. However, simply replacing SRAM caches with STT-RAM actually increases energy consumption, because write operations to STT-RAM are slower and more energy-consuming than those to SRAM.
To address this challenge, we propose four novel architectural techniques that can alleviate the impact of inefficient STT-RAM write operations on system performance and energy consumption. First, we apply STT-RAM to instruction caches (where write operations are relatively infrequent) and devise a power-gating mechanism called LASIC, which leverages the non-volatility of STT-RAM to turn off STT-RAM instruction caches inside small loops. Second, we propose lower-bits cache, which exploits the narrow bit-width characteristics of application data by caching frequent bit-flips at lower bits in a small SRAM cache. Third, we present prediction hybrid cache, an SRAM/STT-RAM hybrid cache whose block placement between SRAM and STT-RAM is determined by predicting the write intensity of each cache block with a new hardware structure called write intensity predictor. Fourth, we propose DASCA, which predicts write operations that can bypass the cache without incurring extra cache misses (called dead writes) and lets the last-level cache bypass such dead writes to reduce write energy consumption.
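As an illustration of the third technique, the sketch below shows how a write intensity predictor indexed by the trigger instruction's PC might steer block placement between SRAM and STT-RAM. The table size, threshold, and training rule are assumptions for illustration, not the dissertation's parameters:

```python
# A minimal sketch of prediction-guided placement in an SRAM/STT-RAM
# hybrid cache: a table indexed by the missing load/store's PC (the
# "trigger instruction") predicts whether the incoming block will be
# write-intensive; predicted-write-intensive blocks go to the SRAM ways.

TABLE_SIZE = 2048
HOT_THRESHOLD = 4      # writes per cached block before a PC counts as hot

write_counts = [0] * TABLE_SIZE          # write-intensity table

def table_index(trigger_pc):
    return trigger_pc % TABLE_SIZE

def place_block(trigger_pc):
    """Choose the region for a block fetched on a miss at trigger_pc."""
    if write_counts[table_index(trigger_pc)] >= HOT_THRESHOLD:
        return "SRAM"        # absorb the expected writes in SRAM
    return "STT-RAM"         # dense, low-leakage ways for read-mostly data

def on_block_eviction(trigger_pc, writes_seen):
    """Train the predictor with the evicted block's observed write count."""
    i = table_index(trigger_pc)
    # Simple moving estimate of writes per block for this trigger PC.
    write_counts[i] = (write_counts[i] + writes_seen) // 2

on_block_eviction(0x400A10, writes_seen=9)
print(place_block(0x400A10))   # -> SRAM
```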
The second part of this dissertation architects intelligent main memory and its host-architecture support based on logic-enabled DRAM. Traditionally, main memory has served the sole purpose of storing data because the extra manufacturing cost of implementing rich functionality (e.g., computation) on a DRAM die was unacceptably high. However, the advent of 3D die stacking now provides a practical, cost-effective way to integrate complex logic circuits into main memory, thereby opening up possibilities for intelligent main memory. For example, such logic can be utilized to implement advanced memory management features (e.g., scheduling, power management, etc.) inside memory; it can also be used to offload computation to main memory, which allows us to overcome the memory bandwidth bottleneck caused by narrow off-chip channels (an approach commonly known as processing-in-memory, or PIM). The remaining questions are what to implement inside main memory and how to integrate and expose such new features to existing systems.
In order to answer these questions, we propose four system designs that utilize logic-enabled DRAM to improve system performance and energy efficiency. First, we utilize the existing logic layer of a Hybrid Memory Cube (a commercial logic-enabled DRAM product) to (1) dynamically turn off some of its off-chip links by monitoring the actual bandwidth demand and (2) integrate a prefetch buffer into main memory to perform aggressive prefetching without consuming off-chip link bandwidth. Second, we propose a scalable accelerator for large-scale graph processing called Tesseract, in which graph processing computation is offloaded to specialized processors inside main memory in order to achieve memory-capacity-proportional performance. Third, we design a low-overhead PIM architecture for near-term adoption called PIM-enabled instructions, where PIM operations are interfaced as cache-coherent, virtually addressed host processor instructions that can be executed either by the host processor or in main memory depending on the data locality. Fourth, we propose an energy-efficient PIM system called aggregation-in-memory, which can adaptively execute PIM operations at any level of the memory hierarchy and provides a fully automated compiler toolchain that transforms existing applications to use PIM operations without programmer intervention.
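The locality-based dispatch of PIM-enabled instructions, the third design above, can be sketched as follows; the locality predictor here is a simplified software stand-in for the proposed hardware:

```python
# A minimal sketch of locality-aware dispatch for PIM-enabled instructions:
# each PEI executes on the host when its operand is likely cached on-chip,
# and in memory when it is not. The predictor below is a simplified
# illustration, not the dissertation's hardware structure.

from collections import OrderedDict

class LocalityPredictor:
    """Tracks recently touched blocks as a proxy for on-chip locality."""
    def __init__(self, capacity=4096):
        self.recent = OrderedDict()
        self.capacity = capacity

    def touch(self, block):
        self.recent.pop(block, None)
        self.recent[block] = True
        if len(self.recent) > self.capacity:
            self.recent.popitem(last=False)   # evict least recently touched

    def likely_cached(self, block):
        return block in self.recent

predictor = LocalityPredictor()

def dispatch_pei(block, host_exec, memory_exec):
    """Run a PEI where its single cache-block operand probably lives."""
    if predictor.likely_cached(block):
        host_exec(block)      # data is hot: avoid an off-chip round trip
    else:
        memory_exec(block)    # data is cold: compute next to DRAM
    predictor.touch(block)

run_host = lambda b: print(f"PEI on host core, block {b:#x}")
run_mem = lambda b: print(f"PEI in memory, block {b:#x}")
dispatch_pei(0x7F00, run_host, run_mem)   # cold block -> executes in memory
dispatch_pei(0x7F00, run_host, run_mem)   # now tracked -> executes on host
```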
Chapter 1 Introduction
1.1 Inefficiencies in the Current Memory Systems
1.1.1 On-Chip Caches
1.1.2 Main Memory
1.2 New Memory Technologies: Opportunities and Challenges
1.2.1 Energy-Efficient On-Chip Caches based on STT-RAM
1.2.2 Intelligent Main Memory based on Logic-Enabled DRAM
1.3 Dissertation Overview
Chapter 2 Previous Work
2.1 Energy-Efficient On-Chip Caches based on STT-RAM
2.1.1 Hybrid Caches
2.1.2 Volatile STT-RAM
2.1.3 Redundant Write Elimination
2.2 Intelligent Main Memory based on Logic-Enabled DRAM
2.2.1 PIM Architectures in the 1990s
2.2.2 Modern PIM Architectures based on 3D Stacking
2.2.3 Modern PIM Architectures on Memory Dies
Chapter 3 Loop-Aware Sleepy Instruction Cache
3.1 Architecture
3.1.1 Loop Cache
3.1.2 Loop-Aware Sleep Controller
3.2 Evaluation and Discussion
3.2.1 Simulation Environment
3.2.2 Energy
3.2.3 Performance
3.2.4 Sensitivity Analysis
3.3 Summary
Chapter 4 Lower-Bits Cache
4.1 Architecture
4.2 Experiments
4.2.1 Simulator and Cache Model
4.2.2 Results
4.3 Summary
Chapter 5 Prediction Hybrid Cache
5.1 Problem and Motivation
5.1.1 Problem Definition
5.1.2 Motivation
5.2 Write Intensity Predictor
5.2.1 Keeping Track of Trigger Instructions
5.2.2 Identifying Hot Trigger Instructions
5.2.3 Dynamic Set Sampling
5.2.4 Summary
5.3 Prediction Hybrid Cache
5.3.1 Need for Write Intensity Prediction
5.3.2 Organization
5.3.3 Operations
5.3.4 Dynamic Threshold Adjustment
5.4 Evaluation Methodology
5.4.1 Simulator Configuration
5.4.2 Workloads
5.5 Single-Core Evaluations
5.5.1 Energy Consumption and Speedup
5.5.2 Energy Breakdown
5.5.3 Coverage and Accuracy
5.5.4 Sensitivity to Write Intensity Threshold
5.5.5 Impact of Dynamic Set Sampling
5.5.6 Results for Non-Write-Intensive Workloads
5.6 Multicore Evaluations
5.7 Summary
Chapter 6 Dead Write Prediction Assisted STT-RAM Cache
6.1 Motivation
6.1.1 Energy Impact of Inefficient Write Operations
6.1.2 Limitations of Existing Approaches
6.1.3 Potential of Dead Writes
6.2 Dead Write Classification
6.2.1 Dead-on-Arrival Fills
6.2.2 Dead-Value Fills
6.2.3 Closing Writes
6.2.4 Decomposition
6.3 Dead Write Prediction Assisted STT-RAM Cache Architecture
6.3.1 Dead Write Prediction
6.3.2 Bidirectional Bypass
6.4 Evaluation Methodology
6.4.1 Simulation Configuration
6.4.2 Workloads
6.5 Evaluation for Single-Core Systems
6.5.1 Energy Consumption and Speedup
6.5.2 Coverage and Accuracy
6.5.3 Sensitivity to Signature
6.5.4 Sensitivity to Update Policy
6.5.5 Implications of Device-/Circuit-Level Techniques for Write Energy Reduction
6.5.6 Impact of Prefetching
6.6 Evaluation for Multi-Core Systems
6.6.1 Energy Consumption and Speedup
6.6.2 Application to Inclusive Caches
6.6.3 Application to Three-Level Cache Hierarchy
6.7 Summary
Chapter 7 Link Power Management for Hybrid Memory Cubes
7.1 Background and Motivation
7.1.1 Hybrid Memory Cube
7.1.2 Motivation
7.2 HMC Link Power Management
7.2.1 Link Delay Monitor
7.2.2 Power State Transition
7.2.3 Overhead
7.3 Two-Level Prefetching
7.4 Application to Multi-HMC Systems
7.5 Experiments
7.5.1 Methodology
7.5.2 Link Energy Consumption and Speedup
7.5.3 HMC Energy Consumption
7.5.4 Runtime Behavior of LPM
7.5.5 Sensitivity to Slowdown Threshold
7.5.6 LPM without Prefetching
7.5.7 Impact of Prefetching on Link Traffic
7.5.8 On-Chip Prefetcher Aggressiveness in 2LP
7.5.9 Tighter Off-Chip Bandwidth Margin
7.5.10 Multithreaded Workloads
7.5.11 Multi-HMC Systems
7.6 Summary
Chapter 8 Tesseract PIM System for Parallel Graph Processing
8.1 Background and Motivation
8.1.1 Large-Scale Graph Processing
8.1.2 Graph Processing on Conventional Systems
8.1.3 Processing-in-Memory
8.2 Tesseract Architecture
8.2.1 Overview
8.2.2 Remote Function Call via Message Passing
8.2.3 Prefetching
8.2.4 Programming Interface
8.2.5 Application Mapping
8.3 Evaluation Methodology
8.3.1 Simulation Configuration
8.3.2 Workloads
8.4 Evaluation Results
8.4.1 Performance
8.4.2 Iso-Bandwidth Comparison
8.4.3 Execution Time Breakdown
8.4.4 Prefetch Efficiency
8.4.5 Scalability
8.4.6 Effect of Higher Off-Chip Network Bandwidth
8.4.7 Effect of Better Graph Distribution
8.4.8 Energy/Power Consumption and Thermal Analysis
8.5 Summary
Chapter 9 PIM-Enabled Instructions
9.1 Potential of ISA Extensions as the PIM Interface
9.2 PIM Abstraction
9.2.1 Operations
9.2.2 Memory Model
9.2.3 Software Modification
9.3 Architecture
9.3.1 Overview
9.3.2 PEI Computation Unit (PCU)
9.3.3 PEI Management Unit (PMU)
9.3.4 Virtual Memory Support
9.3.5 PEI Execution
9.3.6 Comparison with Active Memory Operations
9.4 Target Applications for Case Study
9.4.1 Large-Scale Graph Processing
9.4.2 In-Memory Data Analytics
9.4.3 Machine Learning and Data Mining
9.4.4 Operation Summary
9.5 Evaluation Methodology
9.5.1 Simulation Configuration
9.5.2 Workloads
9.6 Evaluation Results
9.6.1 Performance
9.6.2 Sensitivity to Input Size
9.6.3 Multiprogrammed Workloads
9.6.4 Balanced Dispatch: Idea and Evaluation
9.6.5 Design Space Exploration for PCUs
9.6.6 Performance Overhead of the PMU
9.6.7 Energy, Area, and Thermal Issues
9.7 Summary
Chapter 10 Aggregation-in-Memory
10.1 Motivation
10.1.1 Rethinking PIM for Energy Efficiency
10.1.2 Aggregation as PIM Operations
10.2 Architecture
10.2.1 Overview
10.2.2 Programming Model
10.2.3 On-Chip Caches
10.2.4 Coherence and Consistency
10.2.5 Main Memory
10.2.6 Potential Generalization Opportunities
10.3 Compiler Support
10.4 Contributions over Prior Art
10.4.1 PIM-Enabled Instructions
10.4.2 Parallel Reduction in Caches
10.4.3 Row Buffer Locality of DRAM Writes
10.5 Target Applications
10.6 Evaluation Methodology
10.6.1 Simulation Configuration
10.6.2 Hardware Overhead
10.6.3 Workloads
10.7 Evaluation Results
10.7.1 Energy Consumption and Performance
10.7.2 Dynamic Energy Breakdown
10.7.3 Comparison with Aggressive Writeback
10.7.4 Multiprogrammed Workloads
10.7.5 Comparison with Intrinsic-based Code
10.8 Summary
Chapter 11 Conclusion
11.1 Energy-Efficient On-Chip Caches based on STT-RAM
11.2 Intelligent Main Memory based on Logic-Enabled DRAM
Bibliography
Abstract (in Korean)
Doctor of Philosophy dissertation
The internet-based information infrastructure that has powered the growth of modern personal/mobile computing is composed of powerful, warehouse-scale computers, or datacenters. These heavily subscribed datacenters perform data-processing jobs under intense quality-of-service guarantees. Further, high-performance compute platforms are being used to model and analyze increasingly complex scientific problems and natural phenomena. To ensure that the high-performance needs of these machines are met, it is necessary to increase the efficiency of the memory system that supplies data to the processing cores. Many of the microarchitectural innovations designed to scale the memory wall (e.g., out-of-order instruction execution, on-chip caches) are being rendered less effective by several emerging trends (e.g., increased emphasis on energy consumption, limited access locality). This motivates the optimization of the main memory system itself. The key to an efficient main memory system is the memory controller. In particular, the scheduling algorithm in the memory controller greatly influences its performance. This dissertation explores this hypothesis in several contexts: it develops tools to better understand memory scheduling and develops scheduling innovations for CPUs and GPUs. We propose novel memory scheduling techniques that are strongly aware of the access patterns of the clients as well as the microarchitecture of the memory device. Based on these, we present (i) a Dynamic Random Access Memory (DRAM) chip microarchitecture optimized for reducing write-induced slowdown, (ii) a memory scheduling algorithm that exploits these features, (iii) several memory scheduling algorithms to reduce the memory-related stall experienced by irregular General Purpose Graphics Processing Unit (GPGPU) applications, and (iv) the Utah Simulated Memory Module (USIMM), a detailed, validated simulator for DRAM main memory that we use for analyzing and proposing scheduling algorithms.
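To see why the scheduling algorithm matters so much, consider the classic first-ready, first-come-first-served (FR-FCFS) baseline that memory schedulers commonly extend; the sketch below is a generic illustration, not one of the dissertation's proposed schedulers:

```python
# A sketch of FR-FCFS scheduling: prefer requests that hit the currently
# open row of their bank (no new activation needed), and break ties by
# age. This generic baseline illustrates how much leverage the scheduling
# policy has over row activations and queueing delay.

def fr_fcfs(queue, open_rows):
    """queue: list of (arrival, bank, row), oldest first.
    open_rows: dict mapping bank -> currently open row."""
    # First-ready: the oldest request that hits an already-open row.
    for req in queue:
        _, bank, row = req
        if open_rows.get(bank) == row:
            return req
    # Otherwise plain FCFS: the oldest request overall.
    return queue[0] if queue else None

queue = [(1, 0, 7), (2, 1, 3), (3, 0, 4)]
print(fr_fcfs(queue, open_rows={0: 4}))   # -> (3, 0, 4): row hit beats age
```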