28 research outputs found
Reducing DRAM Row Activations with Eager Writeback
This thesis describes and evaluates a new approach to optimizing DRAM performance and energy consumption that is based on eagerly writing dirty cache lines to DRAM. Under this approach, dirty cache lines that have not been recently accessed are eagerly written to DRAM when the corresponding row has been activated by an ordinary access, such as a read. This approach enables clustering of reads and writes that target the same row, resulting in a significant reduction in row activations. Specifically, for 29 applications, it reduces the number of DRAM row activations by an average of 38% and by as much as 81%. The results from a full-system simulator show that of the 29 applications, 11 have performance improvements between 10% and 20%, and 9 have improvements in excess of 20%. Furthermore, 10 consume between 10% and 20% less DRAM energy, and 10 have energy consumption reductions in excess of 20%.
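As a rough illustration of the policy described above, the following Python sketch eagerly writes back dirty, not-recently-used cache lines that map to a row opened by a demand access. The data structures, the 8 KiB row size, and the function names are assumptions for illustration, not the thesis's hardware design.

```python
# Minimal sketch of the eager-writeback idea (assumed structures, assumed
# 8 KiB rows): when a demand access activates a DRAM row, dirty cache lines
# that map to the same row and have not been recently used are written back
# while the row is still open, so they need no separate activation later.

ROW_SIZE = 8 * 1024  # bytes per DRAM row (assumption for illustration)

class CacheLine:
    def __init__(self, addr, dirty=False, recently_used=False):
        self.addr = addr
        self.dirty = dirty
        self.recently_used = recently_used

def dram_row(addr):
    return addr // ROW_SIZE

def on_row_activation(active_row, cache_lines, write_queue):
    """Called when an ordinary access (e.g., a read) opens `active_row`."""
    for line in cache_lines:
        if (line.dirty and not line.recently_used
                and dram_row(line.addr) == active_row):
            write_queue.append(line.addr)  # piggyback the writeback on the open row
            line.dirty = False             # the line is now clean in the cache
```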
TensorDash: Exploiting Sparsity to Accelerate Deep Neural Network Training and Inference
TensorDash is a hardware-level technique for enabling data-parallel MAC units to take advantage of sparsity in their input operand streams. When used to compose a hardware accelerator for deep learning, TensorDash can speed up the training process while also increasing energy efficiency. TensorDash combines a low-cost, sparse input operand interconnect, comprising an 8-input multiplexer per multiplier input, with an area-efficient hardware scheduler. While the interconnect allows only a very limited set of movements per operand, the scheduler can effectively extract sparsity when it is present in the activations, weights, or gradients of neural networks. Over a wide set of models covering various applications, TensorDash accelerates the training process while also being more energy efficient, even when taking on-chip and off-chip memory accesses into account. While TensorDash works with any datatype, we demonstrate it with both single-precision floating-point and bfloat16 units.
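To make the interconnect/scheduler interplay more concrete, here is a hedged Python sketch of a TensorDash-style front end: each multiplier lane can pull its operand only from a small set of nearby positions (standing in for the 8-input multiplexer per multiplier input), and a greedy scheduler promotes non-zero values into lanes that would otherwise multiply by zero. The lane count, lookahead window, and movement set are illustrative assumptions, not the paper's exact design.

```python
# Hedged sketch of a TensorDash-style operand scheduler (assumed parameters).

LANES = 16       # multipliers fed per cycle (assumed)
LOOKAHEAD = 2    # rows ahead each mux can reach (assumed)

def candidate_positions(lane):
    """(row, source-lane) pairs a lane's mux may select from (at most 8)."""
    cands = []
    for row in range(LOOKAHEAD + 1):
        for offset in (0, -1, 1):
            src = lane + offset
            if 0 <= src < LANES:
                cands.append((row, src))
    return cands[:8]

def schedule(rows):
    """Pick one operand per lane for this cycle from a window of value rows."""
    window = [rows[i] if i < len(rows) else [0.0] * LANES
              for i in range(LOOKAHEAD + 1)]
    used, issue = set(), []
    for lane in range(LANES):
        picked = 0.0
        for row, src in candidate_positions(lane):
            if (row, src) not in used and window[row][src] != 0.0:
                picked = window[row][src]
                used.add((row, src))
                break
        issue.append(picked)
    return issue, used   # `used` records which source values were consumed
```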
Software and hardware methods for memory access latency reduction on ILP processors
While microprocessors have doubled their speed every 18 months, the performance of memory systems has continued to lag behind. To address the speed gap between the CPU and memory, a standard multi-level cache organization is built so that data can be served quickly before an access must reach the DRAM core. The existence of these caches in a computer system (L1, L2, L3, and DRAM row buffers) does not mean that data locality will be exploited automatically. The effective use of the memory hierarchy depends mainly on how data are allocated and how memory accesses are scheduled. In this dissertation, we propose several novel software and hardware techniques to effectively exploit data locality and to significantly reduce memory access latency.
We first present a case study at the application level that restructures memory-intensive programs using program-specific knowledge. The study identifies the problem of bit-reversals, a set of data-reordering operations used extensively in scientific computing programs such as the FFT, whose special data access pattern can cause severe cache conflicts. We propose several software methods, including padding and blocking, to restructure such programs and reduce those conflicts. Our methods outperform existing ones on both uniprocessor and multiprocessor systems.
The access latency to the DRAM core has become increasingly long relative to CPU speed, making memory accesses an execution bottleneck. To reduce the frequency of DRAM core accesses and thereby shorten the overall memory access latency, we conduct three studies at this level of the memory hierarchy. First, motivated by our evaluation of the DRAM row buffer's role in performance and our findings on the causes of its access conflicts, we propose a simple and effective memory interleaving scheme to reduce or even eliminate row buffer conflicts. Second, we propose a fine-grain priority scheduling scheme that reorders data accesses on multi-channel memory systems, effectively exploiting the available bus bandwidth and access concurrency. In the final part of the dissertation, we first evaluate the design of cached DRAM and its organization alternatives for ILP processors. We then propose a new memory hierarchy that uses cached DRAM to construct a very large off-chip cache, and show that this structure outperforms a standard memory system with an off-chip L3 cache for memory-intensive applications.
Memory access latency has become a major performance bottleneck for memory-intensive applications. As long as DRAM remains the most cost-effective technology for main memory, the memory performance problem will persist. The studies in this dissertation address this important issue. Our proposed software and hardware schemes are effective and practical, and can be used directly in real-world memory system designs and implementations. Our studies also provide guidance for application programmers to understand memory performance implications, and for system architects to optimize memory hierarchies.
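As a concrete example of the padding idea mentioned above, the sketch below performs a bit-reversal copy through a row-padded staging buffer so that the permutation's power-of-two strides no longer collide in the same cache sets. The pad size and buffer shape are illustrative assumptions, not the dissertation's exact scheme.

```python
# Illustrative padding sketch for the bit-reversal case study (8-element pad
# and staging-buffer shape are assumptions): copying through a row-padded
# buffer breaks the power-of-two strides that would otherwise map many
# accesses to the same cache sets.

def bit_reverse(i, bits):
    r = 0
    for _ in range(bits):
        r, i = (r << 1) | (i & 1), i >> 1
    return r

def bit_reverse_copy_padded(x, pad_elems=8):
    """Return x permuted into bit-reversed order via a padded staging buffer."""
    n = len(x)                   # n must be a power of two
    bits = n.bit_length() - 1
    rows = 1 << (bits // 2)
    cols = n // rows
    width = cols + pad_elems     # padded row length skews conflicting addresses
    staging = [0] * (rows * width)
    for i, v in enumerate(x):
        staging[(i // cols) * width + (i % cols)] = v
    out = []
    for i in range(n):
        j = bit_reverse(i, bits)
        out.append(staging[(j // cols) * width + (j % cols)])
    return out
```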
Energy-efficient and cost-effective reliability design in memory systems
Reliability of memory systems is an increasing concern as memory density grows, cell dimensions shrink, and new memory technologies move closer to commercial use. Meanwhile, memory power efficiency has become another first-order consideration in memory system design. Conventional reliability schemes use ECC (Error Correcting Codes) and EDC (Error Detecting Codes) to support error correction and detection in memory systems, putting a rigid constraint on memory organizations and incurring significant overhead in power efficiency and area cost.
This dissertation studies energy-efficient and cost-effective reliability design for both cache and main memory systems. It first explores a generic approach called embedded ECC in main memory systems to provide a low-cost and efficient reliability design. A scheme called E3CC (Enhanced Embedded ECC) is proposed for sub-ranked low-power memories to alleviate reliability concerns. The design includes a novel BCRM (Biased Chinese Remainder Mapping) to resolve the address mapping issue in page-interleaving schemes. The proposed BCRM scheme also provides an opportunity to build flexible reliability systems, which benefits consumer-level computers seeking to save power.
Within the proposed E3CC scheme, we further explore address mapping schemes at the DRAM device level to provide SEP (Selective Error Protection). We examine a group of device-level address mapping schemes that steer memory requests to their designated regions. All of the proposed address mapping schemes are based on modulo operations, and this thesis shows them to be efficient, flexible, and adaptable to a variety of scenarios and system requirements.
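For illustration, the sketch below shows a plain Chinese-Remainder-style modulo mapping of block addresses to (bank, row) pairs. The dissertation's BCRM additionally biases the mapping to preserve page interleaving, which is not reproduced here, and the bank and row counts are assumptions chosen to be coprime.

```python
# Illustrative Chinese-Remainder-style modulo mapping (not the exact BCRM).

from math import gcd

BANKS = 7        # assumed bank count
ROWS = 1024      # assumed rows per bank
assert gcd(BANKS, ROWS) == 1   # coprimality makes the mapping one-to-one

def crm_map(block_addr):
    """Map a block address to (bank, row) using modulo arithmetic.
    By the Chinese Remainder Theorem, distinct addresses within a window of
    BANKS * ROWS blocks map to distinct (bank, row) pairs, spreading
    consecutive blocks across banks and reducing row-buffer conflicts."""
    return block_addr % BANKS, block_addr % ROWS
```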
Additionally, we propose the Free ECC reliability design for compressed caches. It utilizes the unused fragments in a compressed cache to store ECC. Such a design not only reduces chip overhead but also improves cache utilization and power efficiency. Within this design, we propose an efficient convergent cache allocation scheme that organizes compressed data blocks more effectively than existing schemes. This new design makes compressed caches an increasingly viable choice for processors that require high reliability.
Furthermore, we propose a novel, system-level memory error detection scheme based on memory integrity checking, called MemGuard. It uses memory log hashes to ensure, with high probability, that the memory read log and write log match each other. It is much stronger than conventional protection in error detection, yet incurs little hardware cost, no storage overhead, and little power overhead. It places no constraints on memory organization and adds no major complications to processor design or operating system design. In the thesis, we show that the MemGuard reliability design is simple, robust, and efficient.
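A minimal sketch of the log-hash idea, under assumptions: writes and reads each fold an (address, value, counter) digest into a running hash, and a final sweep that reads back every live location should make the two hashes agree unless memory was silently corrupted. The XOR-of-SHA-256 multiset hash, the per-location write counters, and the class layout are illustrative, not MemGuard's actual design.

```python
# Hedged sketch of read-log/write-log hash matching (illustrative design).

import hashlib

def entry_hash(addr, value, counter):
    digest = hashlib.sha256(f"{addr}:{value}:{counter}".encode()).digest()
    return int.from_bytes(digest[:8], "little")

class LogHashedMemory:
    def __init__(self):
        self.data = {}        # addr -> (value, counter of the defining write)
        self.write_hash = 0   # XOR multiset hash over the write log
        self.read_hash = 0    # XOR multiset hash over the read log
        self.counter = 0

    def write(self, addr, value):
        old = self.data.get(addr)
        if old is not None:
            # retire the overwritten value into the read log so the logs balance
            self.read_hash ^= entry_hash(addr, old[0], old[1])
        self.counter += 1
        self.data[addr] = (value, self.counter)
        self.write_hash ^= entry_hash(addr, value, self.counter)

    def read(self, addr):
        value, ctr = self.data[addr]
        self.read_hash ^= entry_hash(addr, value, ctr)
        # re-log the value as a fresh write so later reads of this address balance
        self.counter += 1
        self.data[addr] = (value, self.counter)
        self.write_hash ^= entry_hash(addr, value, self.counter)
        return value

    def verify(self):
        """Sweep all live locations into the read log, then compare hashes."""
        for addr in list(self.data):
            value, ctr = self.data.pop(addr)
            self.read_hash ^= entry_hash(addr, value, ctr)
        return self.read_hash == self.write_hash
```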
Doctor of Philosophy
The internet-based information infrastructure that has powered the growth of modern personal/mobile computing is composed of powerful, warehouse-scale computers, or datacenters. These heavily subscribed datacenters perform data-processing jobs under intense quality-of-service guarantees. Further, high-performance compute platforms are being used to model and analyze increasingly complex scientific problems and natural phenomena. To ensure that the high-performance needs of these machines are met, it is necessary to increase the efficiency of the memory system that supplies data to the processing cores. Many of the microarchitectural innovations that were designed to scale the memory wall (e.g., out-of-order instruction execution, on-chip caches) are being rendered less effective by several emerging trends (e.g., increased emphasis on energy consumption, limited access locality). This motivates the optimization of the main memory system itself. The key to an efficient main memory system is the memory controller; in particular, the scheduling algorithm in the memory controller greatly influences its performance. This dissertation explores this hypothesis in several contexts. It develops tools to better understand memory scheduling and develops scheduling innovations for CPUs and GPUs. We propose novel memory scheduling techniques that are strongly aware of the access patterns of the clients as well as the microarchitecture of the memory device. Based on these, we present (i) a Dynamic Random Access Memory (DRAM) chip microarchitecture optimized for reducing write-induced slowdown, (ii) a memory scheduling algorithm that exploits these features, (iii) several memory scheduling algorithms to reduce the memory-related stall experienced by irregular General Purpose Graphics Processing Unit (GPGPU) applications, and (iv) the Utah Simulated Memory Module (USIMM), a detailed, validated simulator for DRAM main memory that we use for analyzing and proposing scheduler algorithms.
Efficient fine-grained virtual memory
Virtual memory in modern computer systems provides a single abstraction of the memory hierarchy.
By hiding fragmentation and overlays of physical memory, virtual memory frees applications from managing physical memory and improves programmability.
However, virtual memory often introduces noticeable overhead.
State-of-the-art systems use paged virtual memory, which maps virtual addresses to physical addresses at page granularity (typically 4 KiB).
This mapping is stored in a page table, which must be accessed to translate virtual addresses to physical addresses before physically addressed memory can be accessed.
Research shows that the overhead of accessing the page table can even exceed the execution time for some important applications.
In addition, this fine-grained mapping changes the access patterns between the virtual and physical address spaces, complicating many architectural techniques such as caches and prefetchers.
In this dissertation, I propose architecture mechanisms to reduce the overhead of accessing and managing fine-grained virtual memory without compromising existing benefits.
There are three main contributions in this dissertation.
First, I investigate the impact of address translation on caches. I examine the restriction that fine-grained paging places on virtually indexed, physically tagged (VIPT) caches and conclude that this restriction may lead to sub-optimal cache designs.
I introduce a novel cache strategy, speculatively indexed, physically tagged (SIPT) to enable flexible cache indexing under fine-grained page mapping.
SIPT speculates on the value of a few additional index bits (1 to 3 in our experiments) to access the cache before translation completes, and then verifies that the physical tag matches after translation.
Utilizing the fact that a simple relation generally exists between virtual and physical addresses, because memory allocators often exhibit contiguity, I also propose low-cost mechanisms to predict and correct potential mis-speculations.
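A hedged sketch of the SIPT lookup flow, assuming 4 KiB pages, 64 B lines, and 2 speculated index bits (the thesis uses 1 to 3): the cache set is selected from virtual-address bits before translation, and the physical index and tag are verified once the TLB result is available.

```python
# Hedged sketch of a speculatively indexed, physically tagged (SIPT) lookup.

PAGE_BITS = 12          # 4 KiB pages (assumed)
LINE_BITS = 6           # 64 B cache lines (assumed)
EXTRA_INDEX_BITS = 2    # index bits above the page offset, speculated

SET_BITS = PAGE_BITS - LINE_BITS + EXTRA_INDEX_BITS

def cache_index(addr):
    return (addr >> LINE_BITS) & ((1 << SET_BITS) - 1)

def sipt_lookup(vaddr, translate, cache_sets):
    """cache_sets: list of 2**SET_BITS sets of physical tags.
    translate: function mapping a virtual address to a physical address."""
    spec_index = cache_index(vaddr)         # started before translation
    spec_set = cache_sets[spec_index]

    paddr = translate(vaddr)                # TLB lookup proceeds in parallel
    if cache_index(paddr) != spec_index:
        return "replay"                     # mis-speculated extra index bits
    tag = paddr >> (LINE_BITS + SET_BITS)
    return "hit" if tag in spec_set else "miss"
```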
Next, I focus on reducing the overhead of address translation for fine-grained virtual memory. I propose a novel architecture mechanism, Embedded Page Translation Information (EMPTI),
to provide general fine-grained page translation information on top of coarse-grained virtual memory.
EMPTI does so by speculating that a virtual address is mapped to a pre-determined physical location and then verifying the translation with a very-low-cost access to metadata embedded with data.
Coarse-grained virtual memory mechanisms (e.g., segmentation) are used to suggest the pre-determined physical location for each virtual page.
Overall, EMPTI achieves the benefits of low overhead translation while keeping the flexibility and programmability of fine-grained paging.
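The sketch below illustrates an EMPTI-style access under stated assumptions (the segment record, the per-page owner metadata, and the function names are hypothetical): a coarse-grained segment suggests where a virtual page should reside, the access proceeds there, and metadata stored alongside the data confirms or refutes the guess, with a full page walk as the fallback.

```python
# Rough sketch of speculation against a coarse-grained mapping with
# embedded-metadata verification (illustrative structures, not EMPTI's exact design).

PAGE = 4096  # bytes per page (assumed)

def empti_access(vaddr, segment, memory, page_walk):
    """segment: dict with 'vbase' and 'pbase' describing a coarse mapping.
    memory: dict mapping a physical page number to (owner_vpn, data_bytes).
    page_walk: fallback translation, mapping vaddr -> physical address."""
    guess_pa = segment["pbase"] + (vaddr - segment["vbase"])
    ppn, offset = guess_pa // PAGE, guess_pa % PAGE
    owner_vpn, data = memory.get(ppn, (None, None))
    if owner_vpn == vaddr // PAGE:   # embedded metadata confirms the speculation
        return data[offset]
    paddr = page_walk(vaddr)         # mis-speculation: fall back to a page walk
    _, data = memory[paddr // PAGE]
    return data[paddr % PAGE]
```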
Finally, I improve the efficiency of metadata caching based on the fact that memory mapping contiguity generally exists beyond a page boundary.
In state-of-the-art architectures, caches treat PTEs (page table entries) as regular data. Although this is simple and straightforward, it fails to maximize the storage efficiency of the metadata.
Each page in a contiguously mapped region costs a full 8-byte PTE, even though the delta between virtual and physical addresses remains the same and most of the metadata are identical.
I propose a novel microarchitectural mechanism that expands the effective PTE storage in the last-level cache (LLC) and reduces the number of page-walk accesses that miss the LLC.
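The observation about constant virtual-to-physical deltas can be made concrete with a small sketch that collapses runs of contiguously mapped PTEs into (base, delta, length, flags) records; this mirrors the observation in the text rather than the proposed LLC mechanism itself.

```python
# Illustrative run-compression of contiguously mapped PTEs.

def compress_ptes(ptes):
    """ptes: list of (vpn, pfn, flags) tuples sorted by vpn. Returns runs."""
    runs = []
    for vpn, pfn, flags in ptes:
        if runs:
            base_vpn, delta, length, run_flags = runs[-1]
            if (vpn == base_vpn + length and pfn - vpn == delta
                    and flags == run_flags):
                runs[-1] = (base_vpn, delta, length + 1, run_flags)
                continue
        runs.append((vpn, pfn - vpn, 1, flags))
    return runs

# Example: four contiguously mapped pages collapse into a single run.
assert compress_ptes([(0x10, 0x90, "rw"), (0x11, 0x91, "rw"),
                      (0x12, 0x92, "rw"), (0x13, 0x93, "rw")]) \
       == [(0x10, 0x80, 4, "rw")]
```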
Architecting heterogeneous memory systems with 3D die-stacked memory
The main objective of this research is to efficiently enable 3D die-stacked memory and heterogeneous memory systems. 3D die-stacking is an emerging technology that allows large amounts of in-package, high-bandwidth memory storage. Die-stacked memory has the potential to provide extraordinary performance and energy benefits for computing environments ranging from data-intensive to mobile computing. However, incorporating die-stacked memory into computing environments requires innovations across the system stack, from hardware to software. This dissertation presents several architectural innovations to practically deploy die-stacked memory in a variety of computing systems.
First, this dissertation proposes using die-stacked DRAM as a hardware-managed cache in a practical and efficient way. The proposed DRAM cache architecture employs two novel techniques: hit-miss speculation and self-balancing dispatch. The proposed techniques virtually eliminate the hardware overhead of maintaining a multi-megabyte SRAM structure when scaling to gigabytes of stacked DRAM cache, and they improve overall memory bandwidth utilization.
Second, this dissertation proposes a DRAM cache organization that provides a high level of reliability for die-stacked DRAM caches in a cost-effective manner. The proposed DRAM cache uses error-correcting codes (ECC), strong checksums (CRCs), and dirty-data duplication to detect and correct a wide range of stacked-DRAM failures, from traditional bit errors to large-scale row, column, bank, and channel failures, within the constraints of commodity, non-ECC DRAM stacks. With only a modest performance degradation compared to a DRAM cache with no ECC support, the proposed organization can correct all single-bit failures and 99.9993% of all row, column, and bank failures.
Third, this dissertation proposes architectural mechanisms to use large, fast, on-chip memory structures as part of memory (PoM) seamlessly through the hardware. The proposed design achieves the performance benefit of on-chip memory caches without sacrificing a large fraction of total memory capacity to serve as a cache. To achieve this, PoM implements the ability to dynamically remap regions of memory based on their access patterns and expected performance benefits.
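A hedged sketch of PoM-style remapping under assumed parameters (2 MiB regions, a four-region stacked-DRAM budget, a fixed swap margin): a small table counts accesses per region and migrates a region into stacked DRAM once it clearly out-accesses the coldest resident region.

```python
# Hedged sketch of part-of-memory (PoM) style dynamic remapping.

REGION = 2 * 1024 * 1024   # remap granularity in bytes (assumed)
FAST_REGIONS = 4           # stacked-DRAM capacity in regions (assumed)
SWAP_MARGIN = 64           # extra accesses required before a swap (assumed)

class PoMRemapper:
    def __init__(self):
        self.counts = {}   # region id -> access count
        self.fast = set()  # regions currently resident in stacked DRAM

    def access(self, addr):
        region = addr // REGION
        self.counts[region] = self.counts.get(region, 0) + 1
        if region in self.fast:
            return "stacked"
        if len(self.fast) < FAST_REGIONS:
            self.fast.add(region)                 # free slot: remap immediately
            return "stacked"
        victim = min(self.fast, key=lambda r: self.counts.get(r, 0))
        if self.counts[region] > self.counts.get(victim, 0) + SWAP_MARGIN:
            self.fast.remove(victim)              # swap the cold region out
            self.fast.add(region)                 # and remap this one in
            return "stacked"
        return "off-package"
```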
Lastly, this dissertation explores a new usage model for die-stacked DRAM involving a hybrid of caching and virtual memory support. In the common case, where the system's physical memory is not over-committed, die-stacked DRAM operates as a cache to provide performance and energy benefits to the system. However, when the workload's active memory demands exceed the capacity of physical memory, the proposed scheme dynamically converts the stacked DRAM cache into a fast swap device to avoid the otherwise grievous performance penalty of swapping to disk.
Memory System Design Techniques Based on New Memory Technologies
Ph.D. dissertation, Department of Electrical and Computer Engineering, Seoul National University Graduate School, February 2017. Advisor: 최기영.
Performance and energy efficiency of modern computer systems are largely dominated by the memory system. This memory bottleneck has been exacerbated in the past few years by (1) architectural innovations for improving the efficiency of computation units (e.g., chip multiprocessors), which shift the major cause of inefficiency from processors to memory, and (2) the emergence of data-intensive applications, which demand a large main memory capacity and an enormous amount of memory bandwidth. To address this memory wall challenge, this dissertation explores the potential of emerging memory technologies and designs a high-performance, energy-efficient memory hierarchy that is aware of, and leverages, the characteristics of these new memory technologies.
The first part of this dissertation focuses on energy-efficient on-chip cache design based on a new non-volatile memory technology called Spin-Transfer Torque RAM (STT-RAM). When STT-RAM is used to build on-chip caches, it provides several advantages over conventional charge-based memories (e.g., SRAM or eDRAM), such as non-volatility, lower static power, and higher density. However, simply replacing SRAM caches with STT-RAM can actually increase energy consumption, because STT-RAM write operations are slower and more energy-consuming than those of SRAM.
To address this challenge, we propose four novel architectural techniques that can alleviate the impact of inefficient STT-RAM write operations on system performance and energy consumption. First, we apply STT-RAM to instruction caches (where write operations are relatively infrequent) and devise a power-gating mechanism called LASIC, which leverages the non-volatility of STT-RAM to turn off STT-RAM instruction caches inside small loops. Second, we propose lower-bits cache, which exploits the narrow bit-width characteristics of application data by caching frequent bit-flips at lower bits in a small SRAM cache. Third, we present prediction hybrid cache, an SRAM/STT-RAM hybrid cache whose block placement between SRAM and STT-RAM is determined by predicting the write intensity of each cache block with a new hardware structure called write intensity predictor. Fourth, we propose DASCA, which predicts write operations that can bypass the cache without incurring extra cache misses (called dead writes) and lets the last-level cache bypass such dead writes to reduce write energy consumption.
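As an illustration of the prediction hybrid cache placement described above, the sketch below keys a small table on the "trigger" instruction PC that fills a block and routes blocks predicted to be write-intensive to SRAM, with the rest going to STT-RAM. The history length and threshold are illustrative assumptions, not the dissertation's tuned values.

```python
# Illustration of write-intensity prediction guiding SRAM/STT-RAM placement.

WRITE_THRESHOLD = 4   # average writes per residency that marks a PC write-hot
HISTORY_LEN = 8       # residencies of history kept per trigger PC (assumed)

class WriteIntensityPredictor:
    def __init__(self):
        self.table = {}   # trigger PC -> recent per-residency write counts

    def predict_write_intensive(self, pc):
        history = self.table.get(pc, [])
        return bool(history) and sum(history) / len(history) >= WRITE_THRESHOLD

    def train(self, pc, writes_during_residency):
        """Update the table when a block filled by `pc` is evicted."""
        history = self.table.setdefault(pc, [])
        history.append(writes_during_residency)
        del history[:-HISTORY_LEN]   # keep only the most recent residencies

def place_block(pc, predictor):
    """Choose the target array for a block being filled by instruction `pc`."""
    return "SRAM" if predictor.predict_write_intensive(pc) else "STT-RAM"
```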
The second part of this dissertation architects intelligent main memory and its host-architecture support based on logic-enabled DRAM. Traditionally, main memory has served the sole purpose of storing data because the extra manufacturing cost of implementing rich functionality (e.g., computation) on a DRAM die was unacceptably high. However, the advent of 3D die stacking now provides a practical, cost-effective way to integrate complex logic circuits into main memory, opening up possibilities for intelligent main memory. For example, it can be utilized to implement advanced memory management features (e.g., scheduling, power management) inside memory; it can also be used to offload computation to main memory, which allows us to overcome the memory bandwidth bottleneck caused by narrow off-chip channels (commonly known as processing-in-memory, or PIM). The remaining questions are what to implement inside main memory and how to integrate and expose such new features to existing systems.
In order to answer these questions, we propose four system designs that utilize logic-enabled DRAM to improve system performance and energy efficiency. First, we utilize the existing logic layer of a Hybrid Memory Cube (a commercial logic-enabled DRAM product) to (1) dynamically turn off some of its off-chip links by monitoring the actual bandwidth demand and (2) integrate a prefetch buffer into main memory to perform aggressive prefetching without consuming off-chip link bandwidth. Second, we propose a scalable accelerator for large-scale graph processing called Tesseract, in which graph processing computation is offloaded to specialized processors inside main memory in order to achieve memory-capacity-proportional performance. Third, we design a low-overhead PIM architecture for near-term adoption called PIM-enabled instructions, where PIM operations are interfaced as cache-coherent, virtually addressed host processor instructions that can be executed either by the host processor or in main memory depending on data locality. Fourth, we propose an energy-efficient PIM system called aggregation-in-memory, which can adaptively execute PIM operations at any level of the memory hierarchy and provides a fully automated compiler toolchain that transforms existing applications to use PIM operations without programmer intervention.
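To make the locality-based execution choice of PIM-enabled instructions concrete, the following hedged sketch dispatches each operation either to the host (when its cache block has been touched recently on-chip) or to main memory. The threshold, the locality tracker, and the function names are assumptions for illustration, not the dissertation's design.

```python
# Hedged sketch of locality-based dispatch for PIM-enabled operations.

LOCALITY_THRESHOLD = 2   # recent on-chip touches that favor host execution

class PEIDispatcher:
    def __init__(self):
        self.touches = {}   # cache-block address -> recent touch count

    def record_touch(self, block_addr):
        self.touches[block_addr] = self.touches.get(block_addr, 0) + 1

    def execute(self, block_addr, operation, host_exec, memory_exec):
        """Run one PIM-enabled operation on the host or inside memory."""
        if self.touches.get(block_addr, 0) >= LOCALITY_THRESHOLD:
            return host_exec(operation, block_addr)    # data likely cached on-chip
        return memory_exec(operation, block_addr)      # offload to the DRAM side
```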
Chapter 1 Introduction 1
1.1 Inefficiencies in the Current Memory Systems 2
1.1.1 On-Chip Caches 2
1.1.2 Main Memory 2
1.2 New Memory Technologies: Opportunities and Challenges 3
1.2.1 Energy-Efficient On-Chip Caches based on STT-RAM 3
1.2.2 Intelligent Main Memory based on Logic-Enabled DRAM 6
1.3 Dissertation Overview 9
Chapter 2 Previous Work 11
2.1 Energy-Efficient On-Chip Caches based on STT-RAM 11
2.1.1 Hybrid Caches 11
2.1.2 Volatile STT-RAM 13
2.1.3 Redundant Write Elimination 14
2.2 Intelligent Main Memory based on Logic-Enabled DRAM 15
2.2.1 PIM Architectures in the 1990s 15
2.2.2 Modern PIM Architectures based on 3D Stacking 15
2.2.3 Modern PIM Architectures on Memory Dies 17
Chapter 3 Loop-Aware Sleepy Instruction Cache 19
3.1 Architecture 20
3.1.1 Loop Cache 21
3.1.2 Loop-Aware Sleep Controller 22
3.2 Evaluation and Discussion 24
3.2.1 Simulation Environment 24
3.2.2 Energy 25
3.2.3 Performance 27
3.2.4 Sensitivity Analysis 27
3.3 Summary 28
Chapter 4 Lower-Bits Cache 29
4.1 Architecture 29
4.2 Experiments 32
4.2.1 Simulator and Cache Model 32
4.2.2 Results 33
4.3 Summary 34
Chapter 5 Prediction Hybrid Cache 35
5.1 Problem and Motivation 37
5.1.1 Problem Definition 37
5.1.2 Motivation 37
5.2 Write Intensity Predictor 38
5.2.1 Keeping Track of Trigger Instructions 39
5.2.2 Identifying Hot Trigger Instructions 40
5.2.3 Dynamic Set Sampling 41
5.2.4 Summary 42
5.3 Prediction Hybrid Cache 43
5.3.1 Need for Write Intensity Prediction 43
5.3.2 Organization 43
5.3.3 Operations 44
5.3.4 Dynamic Threshold Adjustment 45
5.4 Evaluation Methodology 48
5.4.1 Simulator Configuration 48
5.4.2 Workloads 50
5.5 Single-Core Evaluations 51
5.5.1 Energy Consumption and Speedup 51
5.5.2 Energy Breakdown 53
5.5.3 Coverage and Accuracy 54
5.5.4 Sensitivity to Write Intensity Threshold 55
5.5.5 Impact of Dynamic Set Sampling 55
5.5.6 Results for Non-Write-Intensive Workloads 56
5.6 Multicore Evaluations 57
5.7 Summary 59
Chapter 6 Dead Write Prediction Assisted STT-RAM Cache 61
6.1 Motivation 62
6.1.1 Energy Impact of Inefficient Write Operations 62
6.1.2 Limitations of Existing Approaches 63
6.1.3 Potential of Dead Writes 64
6.2 Dead Write Classification 65
6.2.1 Dead-on-Arrival Fills 65
6.2.2 Dead-Value Fills 66
6.2.3 Closing Writes 66
6.2.4 Decomposition 67
6.3 Dead Write Prediction Assisted STT-RAM Cache Architecture 68
6.3.1 Dead Write Prediction 68
6.3.2 Bidirectional Bypass 71
6.4 Evaluation Methodology 72
6.4.1 Simulation Configuration 72
6.4.2 Workloads 74
6.5 Evaluation for Single-Core Systems 75
6.5.1 Energy Consumption and Speedup 75
6.5.2 Coverage and Accuracy 78
6.5.3 Sensitivity to Signature 78
6.5.4 Sensitivity to Update Policy 80
6.5.5 Implications of Device-/Circuit-Level Techniques for Write Energy Reduction 80
6.5.6 Impact of Prefetching 80
6.6 Evaluation for Multi-Core Systems 81
6.6.1 Energy Consumption and Speedup 81
6.6.2 Application to Inclusive Caches 83
6.6.3 Application to Three-Level Cache Hierarchy 84
6.7 Summary 85
Chapter 7 Link Power Management for Hybrid Memory Cubes 87
7.1 Background and Motivation 88
7.1.1 Hybrid Memory Cube 88
7.1.2 Motivation 89
7.2 HMC Link Power Management 91
7.2.1 Link Delay Monitor 91
7.2.2 Power State Transition 94
7.2.3 Overhead 95
7.3 Two-Level Prefetching 95
7.4 Application to Multi-HMC Systems 97
7.5 Experiments 98
7.5.1 Methodology 98
7.5.2 Link Energy Consumption and Speedup 100
7.5.3 HMC Energy Consumption 102
7.5.4 Runtime Behavior of LPM 102
7.5.5 Sensitivity to Slowdown Threshold 104
7.5.6 LPM without Prefetching 104
7.5.7 Impact of Prefetching on Link Traffic 105
7.5.8 On-Chip Prefetcher Aggressiveness in 2LP 107
7.5.9 Tighter Off-Chip Bandwidth Margin 107
7.5.10 Multithreaded Workloads 108
7.5.11 Multi-HMC Systems 109
7.6 Summary 111
Chapter 8 Tesseract PIM System for Parallel Graph Processing 113
8.1 Background and Motivation 115
8.1.1 Large-Scale Graph Processing 115
8.1.2 Graph Processing on Conventional Systems 117
8.1.3 Processing-in-Memory 118
8.2 Tesseract Architecture 119
8.2.1 Overview 119
8.2.2 Remote Function Call via Message Passing 122
8.2.3 Prefetching 124
8.2.4 Programming Interface 126
8.2.5 Application Mapping 127
8.3 Evaluation Methodology 128
8.3.1 Simulation Configuration 128
8.3.2 Workloads 129
8.4 Evaluation Results 130
8.4.1 Performance 130
8.4.2 Iso-Bandwidth Comparison 133
8.4.3 Execution Time Breakdown 134
8.4.4 Prefetch Efficiency 134
8.4.5 Scalability 135
8.4.6 Effect of Higher Off-Chip Network Bandwidth 136
8.4.7 Effect of Better Graph Distribution 137
8.4.8 Energy/Power Consumption and Thermal Analysis 138
8.5 Summary 139
Chapter 9 PIM-Enabled Instructions 141
9.1 Potential of ISA Extensions as the PIM Interface 143
9.2 PIM Abstraction 145
9.2.1 Operations 145
9.2.2 Memory Model 147
9.2.3 Software Modification 148
9.3 Architecture 148
9.3.1 Overview 148
9.3.2 PEI Computation Unit (PCU) 149
9.3.3 PEI Management Unit (PMU) 150
9.3.4 Virtual Memory Support 153
9.3.5 PEI Execution 153
9.3.6 Comparison with Active Memory Operations 154
9.4 Target Applications for Case Study 155
9.4.1 Large-Scale Graph Processing 155
9.4.2 In-Memory Data Analytics 156
9.4.3 Machine Learning and Data Mining 157
9.4.4 Operation Summary 157
9.5 Evaluation Methodology 158
9.5.1 Simulation Configuration 158
9.5.2 Workloads 159
9.6 Evaluation Results 159
9.6.1 Performance 160
9.6.2 Sensitivity to Input Size 163
9.6.3 Multiprogrammed Workloads 164
9.6.4 Balanced Dispatch: Idea and Evaluation 165
9.6.5 Design Space Exploration for PCUs 165
9.6.6 Performance Overhead of the PMU 167
9.6.7 Energy, Area, and Thermal Issues 167
9.7 Summary 168
Chapter 10 Aggregation-in-Memory 171
10.1 Motivation 173
10.1.1 Rethinking PIM for Energy Efficiency 173
10.1.2 Aggregation as PIM Operations 174
10.2 Architecture 176
10.2.1 Overview 176
10.2.2 Programming Model 177
10.2.3 On-Chip Caches 177
10.2.4 Coherence and Consistency 181
10.2.5 Main Memory 181
10.2.6 Potential Generalization Opportunities 183
10.3 Compiler Support 184
10.4 Contributions over Prior Art 185
10.4.1 PIM-Enabled Instructions 185
10.4.2 Parallel Reduction in Caches 187
10.4.3 Row Buffer Locality of DRAM Writes 188
10.5 Target Applications 188
10.6 Evaluation Methodology 190
10.6.1 Simulation Configuration 190
10.6.2 Hardware Overhead 191
10.6.3 Workloads 192
10.7 Evaluation Results 192
10.7.1 Energy Consumption and Performance 192
10.7.2 Dynamic Energy Breakdown 196
10.7.3 Comparison with Aggressive Writeback 197
10.7.4 Multiprogrammed Workloads 198
10.7.5 Comparison with Intrinsic-based Code 198
10.8 Summary 199
Chapter 11 Conclusion 201
11.1 Energy-Efficient On-Chip Caches based on STT-RAM 202
11.2 Intelligent Main Memory based on Logic-Enabled DRAM 203
Bibliography 205
요약 (Abstract in Korean) 227