IMP: Indirect Memory Prefetcher
Applications based on machine learning, graph analytics, and sparse linear algebra are dominated by irregular memory accesses that result from following edges in a graph or non-zero elements in a sparse matrix. These accesses have little temporal or spatial locality and thus incur long memory stalls and high bandwidth demands. Traditional streaming or striding prefetchers cannot capture these irregular access patterns.
A majority of these irregular accesses come from indirect patterns of the form A[B[i]]. We propose an efficient hardware indirect memory prefetcher (IMP) to capture this access pattern and hide latency. We also propose a partial cacheline accessing mechanism for these prefetches to reduce the network and DRAM bandwidth pressure from the lack of spatial locality.
Evaluated on 7 applications, IMP shows a 56% speedup on average (up to 2.3×) over a baseline 64-core system with streaming prefetchers, which is within 23% of an idealized system. With partial cacheline accessing, we see a further 9.4% speedup on average (up to 46.6%).
Intel Science and Technology Center for Big Data
An Intelligent Framework for Oversubscription Management in CPU-GPU Unified Memory
This paper proposes a novel intelligent framework for oversubscription management in CPU-GPU unified virtual memory (UVM). We analyze the current rule-based methods for GPU memory oversubscription with unified memory and the current learning-based methods for other computer architecture components. We then identify the performance gap between the existing rule-based methods and the theoretical upper bound, as well as the advantages of applying machine intelligence and the limitations of the existing learning-based methods. The proposed framework consists of an access-pattern classifier followed by a pattern-specific Transformer-based model with a novel loss function aimed at reducing page thrashing. A policy engine leverages the model's predictions to perform accurate page prefetching and pre-eviction. We evaluate the framework on a set of 11 memory-intensive benchmarks from popular benchmark suites. Our solution outperforms the state-of-the-art (SOTA) methods for oversubscription management, reducing the number of pages thrashed by 64.4% under 125% memory oversubscription relative to the baseline, whereas the SOTA method reduces it by 17.3%. Our solution achieves an average IPC improvement of 1.52× under 125% memory oversubscription and 3.66× under 150% memory oversubscription. It also outperforms the existing learning-based methods for page address prediction, improving top-1 accuracy by 6.45% on average (up to 41.2%) for a single GPGPU workload and by 10.2% on average (up to 30.2%) for multiple concurrent GPGPU workloads.
Comment: arXiv admin note: text overlap with arXiv:2203.1267
Fast Key-Value Lookups with Node Tracker
Lookup operations for in-memory databases are heavily memory bound, because they often rely on pointer-chasing traversals of linked data structures. They also contain many branches that are hard to predict due to random key lookups. In this study, we show that although cache misses are the primary bottleneck for these applications, prefetching alone achieves only a small fraction of the potential performance benefit unless branch mispredictions are also eliminated. We propose the Node Tracker (NT), a novel programmable prefetcher/pre-execution unit that is highly effective at exploiting inter-key-lookup parallelism to improve single-thread performance. We extend NT with branch outcome streaming (BOS) to reduce branch mispredictions and show that this achieves an extra 3× speedup. Finally, we evaluate NT as a pre-execution unit and demonstrate that it further improves performance in both single- and multi-threaded execution modes. Our results show that, on average, NT improves single-thread performance by 4.1× when used as a prefetcher; 11.9× as a prefetcher with BOS; 14.9× as a pre-execution unit; and 18.8× as a pre-execution unit with BOS. Finally, with 24 cores of the latter configuration, we achieve speedups of 203× and 11× over the single-core and 24-core baselines, respectively.