The energy consumption of DRAM is a critical concern in modern computing
systems. Improvements in manufacturing process technology have allowed DRAM
vendors to lower the DRAM supply voltage conservatively, which reduces some of
the DRAM energy consumption. We would like to lower the supply voltage
more aggressively, to save even more energy. Aggressive supply voltage
reduction requires a thorough understanding of the effect voltage scaling has
on DRAM access latency and DRAM reliability.
In this paper, we take a comprehensive approach to understanding and
exploiting the latency and reliability characteristics of modern DRAM when the
supply voltage is lowered below the nominal voltage level specified by DRAM
standards. Using an FPGA-based testing platform, we perform an experimental
study of 124 real DDR3L (low-voltage) DRAM chips manufactured recently by three
major DRAM vendors. We find that reducing the supply voltage below a certain
point introduces bit errors in the data, and we comprehensively characterize
the behavior of these errors. We discover that these errors can be avoided by
increasing the latency of three major DRAM operations (activation, restoration,
and precharge). We perform detailed DRAM circuit simulations to validate and
explain our experimental findings. We also characterize the various
relationships between reduced supply voltage and error locations, stored data
patterns, DRAM temperature, and data retention.
Based on our observations, we propose a new DRAM energy reduction mechanism,
called Voltron. The key idea of Voltron is to use a performance model to
determine by how much we can reduce the supply voltage without introducing
errors and without exceeding a user-specified threshold for performance loss.
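To make this key idea concrete, the following is a minimal sketch, not the paper's implementation, of how such a selection could work. All names and numbers (the candidate voltages and their predicted performance losses) are hypothetical placeholders; the paper's actual performance model is more detailed.

```python
# Hypothetical sketch of Voltron's voltage-selection idea: pick the
# lowest DRAM supply voltage whose predicted performance loss stays
# within a user-specified threshold. The table below is illustrative
# only; it is NOT data from the paper.
CANDIDATES = [
    # (supply voltage in V, predicted performance loss in %)
    (1.35, 0.0),   # nominal DDR3L voltage: no added latency needed
    (1.25, 0.9),   # lower voltages require longer activation,
    (1.15, 1.7),   # restoration, and precharge latencies, which a
    (1.05, 3.4),   # performance model translates into slowdown
]

def select_voltage(max_loss_pct):
    """Return (voltage, loss) for the lowest voltage whose
    predicted performance loss does not exceed the threshold."""
    feasible = [(v, loss) for v, loss in CANDIDATES
                if loss <= max_loss_pct]
    # Among feasible points, the lowest voltage saves the most energy.
    return min(feasible, key=lambda p: p[0])

voltage, loss = select_voltage(max_loss_pct=2.0)
```

With a 2% loss threshold, this sketch would choose 1.15 V (predicted 1.7% loss) rather than 1.05 V, whose 3.4% predicted loss exceeds the threshold.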
Voltron reduces the average system energy by 7.3% while limiting the average
system performance loss to only 1.8%, for a variety of workloads.

Comment: 25 pages, 25 figures, 7 tables, Proceedings of the ACM on Measurement
and Analysis of Computing Systems (POMACS)