    Multithreading Aware Hardware Prefetching for Chip Multiprocessors

    To take advantage of the processing power in the Chip Multiprocessors design, applications must be divided into semi-independent processes that can run concurrently on multiple cores within a system. Therefore, programmers must insert thread synchronization semantics (i.e. locks, barriers, and condition variables) to synchronize data access between processes. In practice, threads spend a long time waiting to acquire the lock of a critical section. In addition, a processor has to stall execution to wait for load data accesses to complete. Furthermore, there are often independent instructions, including load instructions beyond the synchronization semantics, that could be executed in parallel while a thread waits on the synchronization semantics. The conveniences of cache memories come with some extra cost in Chip Multiprocessors: Cache Coherence mechanisms address the Memory Consistency problem, but they add considerable overhead to memory accesses. An aggressive prefetcher on each core of a Chip Multiprocessor can lead to significant system performance degradation when running multi-threaded applications. This is the result of prefetch-demand interference: when a prefetcher in one core pulls shared data from a producing core before it has been written, the cache block transitions back and forth between the cores, producing useless prefetches, saturating memory bandwidth, and substantially increasing the latency to critical shared data. We present a hardware prefetcher that enables large performance improvements from prefetching in Chip Multiprocessors by significantly reducing prefetch-demand interference. Furthermore, it utilizes the time that a thread spends waiting on synchronization semantics to run ahead of the critical section and speculatively prefetch data for independent load instructions beyond the synchronization semantics.
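
    The paper's mechanism is in hardware, but the run-ahead idea has a rough software analogue, sketched below: while a thread spins on a lock, the otherwise wasted wait cycles are used to prefetch data for the independent work that follows the critical section. Function names, the prefetch stride, and the use of __builtin_prefetch are illustrative assumptions, not the paper's design.

```cpp
// Hypothetical software analogue of the run-ahead idea, not the paper's
// hardware prefetcher: overlap the lock-wait stall with prefetches for
// independent loads beyond the critical section.
#include <atomic>
#include <cstddef>

std::atomic_flag lock = ATOMIC_FLAG_INIT;

long work(int* shared, const int* independent, std::size_t n) {
    std::size_t next = 0;
    // While spinning, prefetch one cache line (16 ints, assumed 64 B) per attempt.
    while (lock.test_and_set(std::memory_order_acquire)) {
        if (next < n) {
            __builtin_prefetch(&independent[next]);  // GCC/Clang prefetch hint
            next += 16;
        }
    }
    shared[0] += 1;                                  // critical section
    lock.clear(std::memory_order_release);

    long sum = 0;
    for (std::size_t i = 0; i < n; ++i)              // independent loads, now
        sum += independent[i];                       // (hopefully) cache-resident
    return sum;
}
```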

    Cost-effective compiler directed memory prefetching and bypassing

    Ever increasing memory latencies and deeper pipelines push memory farther from the processor. Prefetching techniques aim to bridge this gap by fetching data in advance into both the L1 cache and the register file. Our main contribution in this paper is a hybrid approach to the prefetching problem that combines software and hardware prefetching in a cost-effective way, needing very little hardware support and impacting the design of the processor pipeline minimally. The prefetcher is built on top of a static memory-instruction bypassing mechanism, which is in charge of bringing prefetched values into the register file. In this paper we also present a thorough analysis of the limits of both prefetching and memory-instruction bypassing. We also compare our prefetching technique with a prior speculative proposal that attacked the same problem, and we show that, at much lower cost, our hybrid solution is better than a realistic implementation of speculative prefetching and bypassing. On average, our hybrid implementation achieves a 13% speedup over a version with software prefetching in a subset of numerical applications, and an average of 43% over a version with no software prefetching (up to 102% for specific benchmarks).
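
    As a hedged illustration of the software half of such a scheme, the loop below carries the kind of prefetch instruction a compiler pass might insert, fetching operands a fixed distance ahead of their use. The prefetch distance and __builtin_prefetch are assumptions for the sketch; the bypassing half, which moves values straight into the register file, needs hardware support and is not modeled.

```cpp
// Illustrative compiler-style software prefetching with an explicit
// prefetch distance; the hints may be dropped by the hardware at no cost.
#include <cstddef>

double dot(const double* a, const double* b, std::size_t n) {
    const std::size_t dist = 16;  // assumed tuning: ~2 cache lines ahead
    double sum = 0.0;
    for (std::size_t i = 0; i < n; ++i) {
        if (i + dist < n) {
            __builtin_prefetch(&a[i + dist]);  // GCC/Clang prefetch hint
            __builtin_prefetch(&b[i + dist]);
        }
        sum += a[i] * b[i];
    }
    return sum;
}
```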

    A Survey of Techniques for Architecting TLBs

    The translation lookaside buffer (TLB) caches virtual-to-physical address translation information and is used in systems ranging from embedded devices to high-end servers. Since the TLB is accessed very frequently and a TLB miss is extremely costly, prudent management of the TLB is important for improving the performance and energy efficiency of processors. In this paper, we present a survey of techniques for architecting and managing TLBs. We characterize the techniques across several dimensions to highlight their similarities and distinctions. We believe that this paper will be useful for chip designers, computer architects and system engineers.
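
    To make the terminology concrete, here is a minimal direct-mapped TLB model: a lookup maps a virtual page number (VPN) to a physical frame number (PFN) or misses. The capacity is an arbitrary assumption, and real TLBs add associativity, ASIDs, permission bits and multiple levels, all omitted here.

```cpp
// Minimal direct-mapped TLB model (illustrative sizes; 4 KiB pages).
#include <cstdint>
#include <optional>

struct TlbEntry { uint64_t vpn = 0; uint64_t pfn = 0; bool valid = false; };

class Tlb {
    static constexpr int kEntries = 64;  // assumed capacity
    TlbEntry entries_[kEntries];
public:
    // Returns the physical address on a hit, nothing on a miss
    // (a miss would trigger a page-table walk followed by fill()).
    std::optional<uint64_t> lookup(uint64_t vaddr) const {
        uint64_t vpn = vaddr >> 12;
        const TlbEntry& e = entries_[vpn % kEntries];
        if (e.valid && e.vpn == vpn)
            return (e.pfn << 12) | (vaddr & 0xFFF);
        return std::nullopt;
    }
    void fill(uint64_t vpn, uint64_t pfn) {
        entries_[vpn % kEntries] = {vpn, pfn, true};
    }
};
```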

    Compiler Assisted Cache Prefetch Using Procedure Call Hierarchy

    Microprocessor performance has been increasing at an exponential rate while memory system performance has improved at only a linear rate. This widening performance gap increasingly renders advances in computer architecture less useful, as more instructions spend more time waiting for data to be fetched from memory after a cache miss. Data prefetching is a technique that avoids some cache misses by bringing data into the cache before it is actually needed. Different approaches to data prefetching have been developed; however, existing prefetch schemes do not eliminate all cache misses, and even with a smaller miss ratio, miss latency remains an important performance limiter. In this thesis, we propose a technique called Compiler Assisted Cache Prefetch Using Procedure Call Hierarchy (CAPPH). It is a hardware-software prefetch technique that uses a compiler to provide information pertaining to data-structure layout, data flow and the procedure-call hierarchy of the program to a mechanism that prefetches linked data structures (LDS). Using this statically generated information, it can prefetch data for procedures even before they are called. It is also capable of issuing prefetches for recursive functions that access LDS, and for arbitrary access sequences that are otherwise difficult to prefetch. The scheme is simulated using RSIML, a SPARC v8 simulator, with the benchmarks em3d, health and mst from the Olden suite. The scheme was compared with an otherwise identical system with no prefetch and with one using sequential prefetch. Simulations were performed to measure CAPPH performance and the decrease in the miss ratio of loads accessing LDS. Statistics of individual loads were collected, and accuracy, coverage and timeliness were measured against varying cache size and latency. Results for individual loads accessing linked data structures show a considerable decrease in their miss ratios and average access times. CAPPH is found to be more accurate than sequential prefetch, although its coverage and timeliness are lower. We suggest heuristics to further enhance the effectiveness of the prefetch technique.
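
    The difficulty CAPPH targets is easy to see in code. In a pointer chase, the address of the next node is only known once the current node has been loaded, so a purely local software prefetch can hide at most one node's latency, as in the sketch below (illustrative, not CAPPH itself):

```cpp
// Why linked data structures resist prefetching: each address depends on
// the previous load, so software can only prefetch one node ahead.
struct Node { int value; Node* next; };

long sum_list(const Node* head) {
    long sum = 0;
    for (const Node* p = head; p != nullptr; p = p->next) {
        if (p->next)
            __builtin_prefetch(p->next);  // GCC/Clang hint; one node of headroom
        sum += p->value;
    }
    return sum;
}
```

    CAPPH sidesteps this serialization by using the compiler-supplied structure layout and procedure-call hierarchy to compute future node addresses before the traversal reaches them.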

    Best-Offset Hardware Prefetching

    Hardware prefetching is an important feature of modern high-performance processors. When the application working set is too large to fit in on-chip caches, disabling hardware prefetchers may result in severe performance reduction. A new prefetcher was recently introduced, the Sandbox prefetcher, that dynamically searches for the best prefetch offset using the sandbox method. The Sandbox prefetcher uses simple hardware and was shown to be quite effective. However, the sandbox method does not take prefetch timeliness into account. We propose an offset prefetcher with a new method for selecting the prefetch offset that takes prefetch timeliness into account. We show that our Best-Offset prefetcher outperforms the Sandbox prefetcher on the SPEC CPU2006 benchmarks, with equally simple hardware.
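
    A rough sketch of the offset-selection loop, paraphrasing the Best-Offset idea: during a learning phase, each demand access to a cache line X credits a candidate offset d whenever line X - d was requested recently enough that a prefetch of X issued back then would already have completed, which is how timeliness enters the score. The offset list, thresholds and the stand-in for the recent-requests table below are illustrative, not the paper's parameters.

```cpp
// Simplified Best-Offset-style learner (illustrative parameters).
#include <algorithm>
#include <cstdint>
#include <unordered_set>
#include <vector>

class BestOffsetLearner {
    std::vector<int64_t> offsets_{1, 2, 3, 4, 6, 8, 12, 16};  // candidate subset
    std::vector<int> scores_ = std::vector<int>(offsets_.size(), 0);
    std::unordered_set<int64_t> recent_;  // stand-in for the recent-requests table
    std::size_t cursor_ = 0;
    int round_ = 0;
    static constexpr int kScoreMax = 31, kRoundMax = 100;
public:
    int64_t best_offset = 1;  // offset currently used for prefetching

    // Record a line whose fill completed recently (a timeliness witness).
    void on_recent_fill(int64_t line) { recent_.insert(line); }

    // Called on each demand access to line x; tests one candidate offset.
    void on_access(int64_t x) {
        if (recent_.count(x - offsets_[cursor_])) {  // offset would have been timely
            if (++scores_[cursor_] >= kScoreMax) { finish(); return; }
        }
        if (++cursor_ == offsets_.size()) {  // completed one round over the list
            cursor_ = 0;
            if (++round_ >= kRoundMax) finish();
        }
    }
private:
    void finish() {  // adopt the highest-scoring offset and restart learning
        std::size_t best = 0;
        for (std::size_t i = 1; i < scores_.size(); ++i)
            if (scores_[i] > scores_[best]) best = i;
        best_offset = offsets_[best];
        std::fill(scores_.begin(), scores_.end(), 0);
        cursor_ = 0; round_ = 0; recent_.clear();
    }
};
```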