228 research outputs found

    Randomized cache placement for eliminating conflicts

    Applications with regular patterns of memory access can experience high levels of cache conflict misses. In shared-memory multiprocessors, conflict misses can be increased significantly by the data transpositions required for parallelization. Techniques such as blocking, which are introduced within a single thread to improve locality, can result in yet more conflict misses. The tension between minimizing cache conflicts and the other transformations needed for efficient parallelization leads to complex optimization problems for parallelizing compilers. This paper shows how the introduction of a pseudorandom element into the cache index function can effectively eliminate repetitive conflict misses and produce a cache whose miss ratio depends solely on working-set behavior. We examine the impact of pseudorandom cache indexing on processor cycle times and present practical solutions to some of the major implementation issues for this type of cache. Our conclusions are supported by simulations of a superscalar out-of-order processor executing the SPEC95 benchmarks, as well as by cache simulations of individual loop kernels that illustrate specific effects. We present measurements of instructions committed per cycle (IPC) when comparing the performance of different cache architectures on whole-program benchmarks such as the SPEC95 suite.
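    As a point of reference for the idea of a pseudorandom index function, the minimal C sketch below contrasts conventional modulo set indexing with a simple XOR-folded index. The cache geometry (64-byte lines, 512 sets) and the particular hash are illustrative assumptions, not the paper's design; the example only shows how a power-of-two stride that aliases every access to one set under modulo indexing is dispersed by a pseudorandom index.

    /* Sketch: modulo vs. XOR-folded pseudorandom set indexing.
     * Geometry and hash are assumptions for illustration only. */
    #include <stdio.h>
    #include <stdint.h>

    #define LINE_BITS 6                        /* 64-byte lines */
    #define SET_BITS  9                        /* 512 sets      */
    #define SET_MASK  ((1u << SET_BITS) - 1)

    /* Conventional indexing: low-order block-address bits pick the set. */
    static uint32_t index_modulo(uint64_t addr) {
        return (uint32_t)((addr >> LINE_BITS) & SET_MASK);
    }

    /* Pseudorandom indexing: fold higher (tag) bits into the index so
     * power-of-two strides no longer collide in a single set. */
    static uint32_t index_xor(uint64_t addr) {
        uint64_t block = addr >> LINE_BITS;
        return (uint32_t)((block ^ (block >> SET_BITS)
                                 ^ (block >> (2 * SET_BITS))) & SET_MASK);
    }

    int main(void) {
        /* A column-strided walk: every access hits set 0 under modulo
         * indexing but is spread across sets under XOR indexing. */
        for (int i = 0; i < 4; i++) {
            uint64_t addr = (uint64_t)i << (LINE_BITS + SET_BITS);
            printf("addr %#8llx  modulo set %3u  xor set %3u\n",
                   (unsigned long long)addr,
                   index_modulo(addr), index_xor(addr));
        }
        return 0;
    }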

    Improving the Performance and Energy Efficiency of GPGPU Computing through Adaptive Cache and Memory Management Techniques

    As the performance and energy-efficiency requirements of GPGPUs have risen, GPGPU memory management techniques have improved to meet them by employing hardware caches and heterogeneous memory. These techniques can improve GPGPUs by providing lower memory latency and higher bandwidth. However, they do not always guarantee improved performance and energy efficiency, owing to the small cache size and the heterogeneity of the memory nodes. While prior works have proposed various techniques to address this issue, relatively little work has investigated holistic support for memory management. In this dissertation, we analyze performance pathologies and propose various techniques to improve memory management. First, we investigate the effectiveness of advanced cache indexing (ACI) for high-performance and energy-efficient GPGPU computing. Specifically, we discuss the designs of various static and adaptive cache indexing schemes and present implementations for GPGPUs. We then quantify and analyze the effectiveness of the ACI schemes using a cycle-accurate GPGPU simulator. Our quantitative evaluation shows that ACI schemes achieve significant performance and energy-efficiency gains over the conventional baseline indexing scheme. We also analyze the performance sensitivity of ACI to key architectural parameters (i.e., capacity, associativity, and ICN bandwidth) and to the cache indexing latency, and demonstrate that ACI continues to achieve high performance across these settings. Second, we propose IACM, integrated adaptive cache management for high-performance and energy-efficient GPGPU computing. Based on our performance-pathology analysis of GPGPUs, we integrate state-of-the-art adaptive cache management techniques (i.e., cache indexing, bypassing, and warp limiting) in a unified architectural framework to eliminate performance pathologies. Our quantitative evaluation demonstrates that IACM significantly improves the performance and energy efficiency of various GPGPU workloads over the baseline architecture (i.e., by 98.1% and 61.9% on average, respectively) and achieves considerably higher performance than the state-of-the-art technique (i.e., 361.4% at maximum and 7.7% on average). Furthermore, IACM delivers significant performance and energy-efficiency gains over the baseline GPGPU architecture even when it is enhanced with advanced architectural technologies (e.g., higher capacity and associativity). Third, we propose bandwidth- and latency-aware page placement (BLPP) for GPGPUs with heterogeneous memory. BLPP analyzes the characteristics of an application, determines the optimal page allocation ratio between the GPU and CPU memory, and dynamically allocates pages across the heterogeneous memory nodes accordingly. Our experimental results show that BLPP considerably outperforms the baseline and the state-of-the-art technique (i.e., by 13.4% and 16.7%) and performs similarly to the static-best version (i.e., a 1.2% difference), which requires extensive offline profiling.
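    As a rough, hypothetical illustration of the BLPP idea, the sketch below splits an application's pages between GPU and CPU memory in proportion to each node's bandwidth so that both nodes can serve requests concurrently. The bandwidth figures and the proportional-split policy are assumptions for illustration, not the dissertation's actual profiling mechanism.

    /* Hypothetical sketch of bandwidth-aware page placement: split pages
     * between memory nodes in proportion to their bandwidth. Figures
     * and policy are illustrative assumptions, not BLPP's design. */
    #include <stdio.h>

    int main(void) {
        const double gpu_bw = 700.0;      /* GB/s, assumed GPU memory   */
        const double cpu_bw = 80.0;       /* GB/s, assumed CPU memory   */
        const int total_pages = 1000;     /* pages the application maps */

        /* Bandwidth-proportional allocation ratio for the GPU node. */
        double gpu_ratio = gpu_bw / (gpu_bw + cpu_bw);
        int gpu_pages = (int)(gpu_ratio * total_pages + 0.5);

        printf("GPU memory: %d pages, CPU memory: %d pages (ratio %.2f)\n",
               gpu_pages, total_pages - gpu_pages, gpu_ratio);
        return 0;
    }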

    Exploring Alternate Cache Indexing Techniques

    Cache memory is a bridging component that covers the increasing speed gap between the processor and main memory, and excellent cache performance is crucial to overall system performance. Conflict misses, which occur when many blocks map to the same set and evict one another, are one of the critical limits on cache performance; meanwhile, other sets receive few blocks, so the available space is not utilized efficiently. A direct way to reduce conflict misses is to increase associativity, but this comes at the cost of an increased hit time. Another way is to change the cache-indexing scheme and distribute accesses across all sets. This thesis focuses on the second approach and evaluates the impact of a matrix-based indexing scheme on cache performance against the traditional modulus-based indexing scheme. A correlation between the proposed indexing scheme and different cache replacement policies is also observed. The matrix-based indexing scheme yields a geometric-mean speedup of 1.2% on the SPEC CPU2017 benchmarks in single-core simulations when applied to a direct-mapped last-level cache. In this case, improvements of 1.5% and 4% are observed for at least eighteen and seven of the SPEC CPU2017 applications, respectively. It also yields a 2% performance improvement across sixteen SPEC CPU2006 benchmarks. The new indexing scheme correlates well with multiperspective reuse prediction, and LRU is observed to improve a machine-learning benchmark's performance by 5.1%. In multicore simulations, the new indexing scheme does not improve performance significantly; however, it also does not impact application performance negatively.
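    The sketch below illustrates the general shape of matrix-based indexing: the set index is a GF(2) matrix-vector product of the block-address bits rather than the low-order bits themselves. The tiny 16-set cache and the particular matrix rows are arbitrary illustrative choices, not the configuration evaluated in the thesis.

    /* Sketch: matrix-based set indexing over GF(2). The matrix rows and
     * cache size are illustrative assumptions. */
    #include <stdio.h>
    #include <stdint.h>

    #define SET_BITS 4      /* 16 sets, kept small for illustration */

    /* Row r lists which block-address bits are XORed into index bit r. */
    static const uint32_t ROWS[SET_BITS] = { 0x2D, 0x5A, 0xB4, 0x169 };

    static uint32_t parity(uint32_t x) {   /* XOR of all bits of x */
        x ^= x >> 16; x ^= x >> 8; x ^= x >> 4; x ^= x >> 2; x ^= x >> 1;
        return x & 1u;
    }

    /* Set index = binary matrix times the block-address bit vector. */
    static uint32_t index_matrix(uint32_t block) {
        uint32_t idx = 0;
        for (int r = 0; r < SET_BITS; r++)
            idx |= parity(block & ROWS[r]) << r;
        return idx;
    }

    int main(void) {
        /* A stride of 16 blocks: modulus indexing (block % 16) maps all
         * accesses to set 0, while the matrix scheme spreads them out. */
        for (uint32_t i = 0; i < 4; i++)
            printf("block %3u -> modulus set %2u, matrix set %2u\n",
                   i * 16, (i * 16) % 16, index_matrix(i * 16));
        return 0;
    }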

    Architecting Secure Processor Caches

    Caches in modern processors enable fast access to data and help alleviate the performance overheads of slow DRAM main-memory access. While sharing of cache resources between multiple cores, especially the last-level cache, boosts cache utilization and improves system performance, it has been shown to cause serious security vulnerabilities in the form of cache side-channel attacks. Different cores of a system can simultaneously run sensitive and malicious applications that contend for the shared cache space. As a result, the accesses of a sensitive application can influence the cache utilization and execution time of a malicious application, introducing a side channel of information leakage. Such cache interactions between a sensitive victim and a malicious spy have been shown to leak encryption keys, user-sensitive data such as files or browsing histories, confidential intellectual property such as machine-learning models, etc. Similarly, such cache interactions can also serve as a channel for covert communication between two colluding malicious applications when direct communication via network ports is disabled. The focus of this thesis is to develop principled and practical mitigations for such cache side-channel and covert-channel attacks. Developing principled defenses requires a deep understanding of attacks. So, first, this thesis investigates the capabilities of attackers and in the process develops a new cache covert-channel attack called Streamline, which is considerably faster than current state-of-the-art attacks while imposing fewer requirements. With an asynchronous and flushless information transmission protocol, Streamline reaches bit-rates of more than 1 MB/s while being applicable to all ISAs and micro-architectures, demonstrating the need for effective defenses against cache attacks across all platforms. Second, this thesis develops new principled and practical defenses utilizing cache location randomization. Randomized caches obfuscate the mappings of addresses to cache locations to prevent malicious programs from inferring contention patterns with victim programs on shared last-level caches. However, successive defenses relying on randomization have been broken by recent attacks. To end the arms race in randomized caches, this thesis proposes a principled defense, MIRAGE, which provides the security of a fully-associative design in a practical manner for randomized caches, eliminating set-conflicts and set-conflict-based cache attacks in a future-proof manner. Third, this thesis explores cache-partitioning defenses to eliminate all potential cache side channels through shared last-level caches. Such defenses map mistrusting applications to isolated cache partitions, preventing any information leakage across applications through cache state changes. However, existing solutions are not scalable or do not allow flexible usage of DRAM and cache resources. To address these problems, this thesis provides a scalable and flexible cache-isolation framework, Bespoke Cache Enclaves, supporting hundreds of partitions independent of memory utilization. This work enables practical adoption of cache-isolation defenses against cache side-channel attacks. Lastly, this thesis develops techniques to secure caches against exploitation in transient execution attacks. Attacks like Spectre and Meltdown exploit processor speculation to illegally access secrets and leak them through cache covert channels, i.e., by making transient changes to processor caches. This thesis enables CleanupSpec, one of the first defenses against such attacks, which reverses speculative modifications to caches on mis-speculations to limit such transient information leakage via caches. This solution prevents caches from being exploited by attacks like Spectre with minimal overheads. Overall, this thesis enables several techniques that provide principled yet practical security for processor caches against side channels and covert channels. These techniques can potentially enable the wide adoption of secure cache designs in future processors and support efforts to enable confidential computing in systems.
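    A toy sketch of the location-randomization idea behind such defenses appears below: the set index is derived from a keyed hash of the line address, so a spy without the key cannot predict which addresses contend with a victim. The mix function (a splitmix64-style finalizer) and the key handling are stand-in assumptions with no relation to MIRAGE's actual cipher.

    /* Toy keyed randomized set indexing; hash and key are assumptions. */
    #include <stdio.h>
    #include <stdint.h>

    #define SET_BITS 10
    #define SET_MASK ((1u << SET_BITS) - 1)

    /* splitmix64-style finalizer as a stand-in keyed hash. */
    static uint64_t mix64(uint64_t x) {
        x ^= x >> 30; x *= 0xbf58476d1ce4e5b9ULL;
        x ^= x >> 27; x *= 0x94d049bb133111ebULL;
        return x ^ (x >> 31);
    }

    static uint32_t randomized_index(uint64_t line_addr, uint64_t key) {
        return (uint32_t)(mix64(line_addr ^ key) & SET_MASK);
    }

    int main(void) {
        uint64_t key = 0x243F6A8885A308D3ULL;  /* per-boot secret key */
        /* Addresses that alias in a conventional cache land in
         * unpredictable, key-dependent sets here. */
        for (uint64_t a = 0; a < 4; a++)
            printf("line %4llu -> set %u\n",
                   (unsigned long long)(a << SET_BITS),
                   randomized_index(a << SET_BITS, key));
        return 0;
    }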

    The design and performance of a conflict-avoiding cache

    High-performance architectures depend heavily on efficient multi-level memory hierarchies to minimize the cost of accessing data, and this dependence will only increase with the expected growth in the relative distance to main memory. A number of cache conflict-avoidance schemes have been published. We investigate the design and performance of conflict-avoiding cache architectures based on polynomial modulus functions, which earlier research has shown to be highly effective at reducing conflict miss ratios. We examine a number of practical implementation issues and present experimental evidence to support the claim that pseudo-randomly indexed caches are both effective in performance terms and practical from an implementation viewpoint.
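    The sketch below shows one plausible reading of a polynomial modulus index function: the block address is interpreted as a polynomial over GF(2) and reduced modulo a fixed degree-k polynomial, with the k-bit remainder selecting the set. The particular polynomial, x^9 + x^4 + 1, is an assumption for illustration and not necessarily the one used in the paper.

    /* Sketch: polynomial-modulus set indexing (long division over GF(2)).
     * The modulus polynomial is an illustrative assumption. */
    #include <stdio.h>
    #include <stdint.h>

    #define K    9           /* index width: 512 sets */
    #define POLY 0x211u      /* x^9 + x^4 + 1         */

    static uint32_t poly_mod_index(uint32_t block) {
        /* Reduce: wherever a bit of degree >= K is set, XOR the shifted
         * modulus polynomial under it, as in CRC computation. */
        for (int bit = 31; bit >= K; bit--)
            if (block & (1u << bit))
                block ^= POLY << (bit - K);
        return block;        /* remainder of degree < K */
    }

    int main(void) {
        /* Power-of-two strides that alias under modulo indexing are
         * dispersed by the polynomial remainder. */
        for (uint32_t i = 0; i < 4; i++)
            printf("block %5u -> set %u\n", i << K, poly_mod_index(i << K));
        return 0;
    }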

    Reducing Cache Contention On GPUs

    The use of Graphics Processing Units (GPUs) as application accelerators has become increasingly popular because, compared to traditional CPUs, they are more cost-effective, their highly parallel nature complements a CPU, and they are more energy-efficient. With this popularity, many GPU-based compute-intensive applications (a.k.a. GPGPUs) show significant performance improvement over traditional CPU-based implementations. Caches, which significantly improve CPU performance, were introduced to GPUs to further enhance application performance. However, in GPUs the effect of caches is often insignificant and in some cases even detrimental: the massive parallelism of the GPU execution model, and the memory accesses it generates, cause the GPU memory hierarchy to suffer from significant memory resource contention among threads. One cause of cache contention is the column-strided memory access patterns that many data-intensive GPU applications commonly generate. When such access patterns are mapped to hardware thread groups, they become memory-divergent instructions whose memory requests are unfriendly to the GPU hardware, resulting in serialized accesses and performance degradation. Cache contention also arises from cache pollution caused by lines with low reuse: for the cache to be effective, a cached line must be reused before its eviction, but the streaming character of GPGPU workloads and the massively parallel GPU execution model increase the reuse distance, or equivalently reduce the reuse frequency, of data, and in a GPU the pollution caused by data with large reuse distances is significant. Memory-request stalls are another contention factor: a stalled Load/Store (LDST) unit does not execute memory requests from any ready warps in the issue stage, forfeiting potential cache hits for those warps. This dissertation proposes three novel architectural modifications to reduce the contention: 1) contention-aware selective caching detects the memory-divergent instructions caused by column-strided access patterns, calculates the contending cache sets and locality information, and then caches selectively; 2) locality-aware selective caching dynamically calculates reuse frequency with efficient hardware and caches based on that frequency; and 3) memory request scheduling queues memory requests at the warp issue stage, relieving the LDST-unit stall, and schedules items from the queue to the LDST unit by probing the cache multiple times. Through systematic experiments and comprehensive comparisons with existing state-of-the-art techniques, this dissertation demonstrates the effectiveness of these techniques and the viability of reducing cache contention through architectural support. Finally, this dissertation suggests other promising opportunities for future research on GPU architecture.
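    As a concrete, hypothetical illustration of the divergence test behind contention-aware selective caching, the sketch below counts the distinct cache lines touched by the 32 lanes of a warp and flags the load as a bypass candidate when the count crosses a threshold. The line size, threshold, and counting method are assumptions, not the dissertation's hardware design.

    /* Sketch: flag memory-divergent (e.g., column-strided) warp loads.
     * Parameters are illustrative assumptions. */
    #include <stdio.h>
    #include <stdint.h>

    #define WARP_SIZE 32
    #define LINE_BITS 7                 /* 128-byte GPU cache lines */
    #define DIVERGENT_THRESHOLD 8       /* distinct lines per warp  */

    static int count_distinct_lines(const uint64_t addr[WARP_SIZE]) {
        uint64_t lines[WARP_SIZE];
        int n = 0;
        for (int i = 0; i < WARP_SIZE; i++) {
            uint64_t line = addr[i] >> LINE_BITS;
            int seen = 0;
            for (int j = 0; j < n; j++)
                if (lines[j] == line) { seen = 1; break; }
            if (!seen) lines[n++] = line;
        }
        return n;
    }

    int main(void) {
        uint64_t addrs[WARP_SIZE];
        /* Column-strided pattern: consecutive lanes 4 KB apart. */
        for (int i = 0; i < WARP_SIZE; i++)
            addrs[i] = (uint64_t)i * 4096;
        int n = count_distinct_lines(addrs);
        printf("%d distinct lines -> %s L1\n", n,
               n >= DIVERGENT_THRESHOLD ? "bypass" : "cache in");
        return 0;
    }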