Exploiting the DRAM Microarchitecture to Increase Memory-Level Parallelism
This paper summarizes the idea of Subarray-Level Parallelism (SALP) in DRAM,
which was published in ISCA 2012, and examines the work's significance and
future potential. Modern DRAMs have multiple banks to serve multiple memory
requests in parallel. However, when two requests go to the same bank, they must
be served serially, exacerbating the already high latency of off-chip memory.
Adding more banks to the system to mitigate this problem incurs high system
cost. Our
goal in this work is to achieve the benefits of increasing the number of banks
with a low-cost approach. To this end, we propose three new mechanisms, SALP-1,
SALP-2, and MASA (Multitude of Activated Subarrays), to reduce the
serialization of different requests that go to the same bank. The key
observation exploited by our mechanisms is that a modern DRAM bank is
implemented as a collection of subarrays that operate largely independently
while sharing few global peripheral structures.
Our three proposed mechanisms mitigate the negative impact of bank
serialization by overlapping different components of the bank access latencies
of multiple requests that go to different subarrays within the same bank.
SALP-1 requires no changes to the existing DRAM structure and only requires
reinterpreting some of the existing DRAM timing parameters. SALP-2 and MASA
require only modest changes (< 0.15% area overhead) to the DRAM peripheral
structures, which are much less design constrained than the DRAM core. Our
evaluations show that SALP-1, SALP-2 and MASA significantly improve performance
for both single-core systems (7%/13%/17%) and multi-core systems (15%/16%/20%),
averaged across a wide range of workloads. We also demonstrate that our
mechanisms can be combined with application-aware memory request scheduling in
multi-core systems to further improve performance and fairness.
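The overlap idea behind SALP can be illustrated with a toy timing model. The latency values and function names below are hypothetical placeholders chosen for illustration; the paper's actual mechanisms operate on real DRAM timing constraints rather than this simplified three-stage model:

```python
# Toy timing model of Subarray-Level Parallelism (SALP). All latency
# values below are illustrative placeholders, not real DRAM timings.
ACT, RD, PRE = 15, 15, 15  # activate / read / precharge, in cycles

def serialized_latency(n_requests):
    """Baseline: requests to the same bank are fully serialized,
    so each request pays the entire activate-read-precharge sequence."""
    return n_requests * (ACT + RD + PRE)

def salp_latency(n_requests):
    """SALP-style overlap: when consecutive requests target different
    subarrays of the same bank, the next request's activation can be
    overlapped with the previous request's precharge, hiding the
    activation latency of all but the first request."""
    if n_requests == 0:
        return 0
    return ACT + n_requests * (RD + PRE)

# Two same-bank requests: 90 cycles fully serialized vs. 75 with overlap.
print(serialized_latency(2), salp_latency(2))
```

Even this crude model shows the trend the paper measures: the more same-bank requests can be steered to different subarrays, the more of each request's latency is hidden behind its predecessor's.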
Reducing DRAM Refresh Overheads with Refresh-Access Parallelism
This article summarizes the idea of "refresh-access parallelism," which was
published in HPCA 2014, and examines the work's significance and future
potential. The overarching objective of our HPCA 2014 paper is to reduce the
significant negative performance impact of DRAM refresh with intelligent memory
controller mechanisms.
To mitigate the negative performance impact of DRAM refresh, our HPCA 2014
paper proposes two complementary mechanisms, DARP (Dynamic Access Refresh
Parallelization) and SARP (Subarray Access Refresh Parallelization). The goal
is to address the drawbacks of the state-of-the-art per-bank refresh mechanism
building more efficient techniques to parallelize refreshes and accesses within
DRAM. First, instead of issuing per-bank refreshes in a round-robin order, as
is done today, DARP issues per-bank refreshes to idle banks in an
out-of-order manner. Furthermore, DARP proactively schedules refreshes during
intervals when a batch of writes is draining to DRAM. Second, SARP exploits
the existence of mostly-independent subarrays within a bank. With minor
modifications to DRAM organization, it allows a bank to serve memory accesses
to an idle subarray while another subarray is being refreshed. Our extensive
evaluations on a wide variety of workloads and systems show that our mechanisms
improve system performance (and energy efficiency) compared to three
state-of-the-art refresh policies, and their performance benefits increase as
DRAM density increases.
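DARP's first idea, steering per-bank refreshes to idle banks instead of following a fixed round-robin order, can be sketched as a simple scheduling policy. This is a hypothetical simplification for illustration only, not the paper's implementation; the actual mechanism also exploits write-drain intervals, which this sketch omits:

```python
# Simplified sketch of DARP-style per-bank refresh scheduling
# (hypothetical code): prefer refreshing a bank that has no pending
# demand requests, falling back to round-robin only when every bank
# is busy.
def choose_refresh_bank(pending_banks, num_banks, rr_pointer):
    """Return the id of the bank to refresh next.

    pending_banks: set of bank ids with outstanding demand requests.
    rr_pointer:    fallback round-robin position when no bank is idle.
    """
    for bank in range(num_banks):
        if bank not in pending_banks:
            return bank  # idle bank found: refresh it out of order
    return rr_pointer % num_banks  # all banks busy: round-robin fallback

# Demand requests are queued for banks 0 and 1; banks 2 and 3 are idle,
# so the refresh is steered to bank 2 instead of the round-robin bank 0,
# keeping the refresh off the critical path of pending accesses.
print(choose_refresh_bank({0, 1}, 4, 0))
```

The design point the sketch captures is that a refresh issued to an idle bank does not delay any demand request, whereas the rigid round-robin order can refresh exactly the bank an application is waiting on.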