Privacy Protection Cache Policy on Hybrid Main Memory
We are the first to propose a privacy-protection cache policy that applies the duty to delete personal information to a hybrid main memory system. The policy generates random data and overwrites the personal information with it. The proposed cache policy is more economical and effective at achieving complete deletion of data.
Comment: 2 pages, 3 figures, IEEE Transactions on Very Large Scale Integration Systems. arXiv admin note: text overlap with arXiv:1707.0284
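The policy's core operation is the overwrite itself: generate random bytes and write them over the region holding personal information before it can be read back. A minimal user-space sketch of that overwrite-before-release idea in C (the erase_personal_data() helper and the use of rand() are illustrative assumptions, not the paper's cache-level implementation):

    #include <stdlib.h>

    /* Overwrite a buffer holding personal information with random
     * bytes before it is released, so no recoverable copy remains.
     * Illustrative sketch only; the paper applies this idea inside
     * a cache policy for hybrid main memory. */
    static void erase_personal_data(unsigned char *buf, size_t len)
    {
        for (size_t i = 0; i < len; i++)
            buf[i] = (unsigned char)rand();  /* generate random data */
    }

    int main(void)
    {
        char record[64] = "name: Jane Doe; id: 000-00-0000";
        erase_personal_data((unsigned char *)record, sizeof record);
        /* record now holds only random bytes; safe to free or evict */
        return 0;
    }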
Voltron: Understanding and Exploiting the Voltage-Latency-Reliability Trade-Offs in Modern DRAM Chips to Improve Energy Efficiency
This paper summarizes our work on experimental characterization and analysis
of reduced-voltage operation in modern DRAM chips, which was published in
SIGMETRICS 2017, and examines the work's significance and future potential.
We take a comprehensive approach to understanding and exploiting the latency
and reliability characteristics of modern DRAM when the DRAM supply voltage is
lowered below the nominal voltage level specified by DRAM standards. We perform
an experimental study of 124 real DDR3L (low-voltage) DRAM chips manufactured
recently by three major DRAM vendors. We find that reducing the supply voltage
below a certain point introduces bit errors in the data, and we comprehensively
characterize the behavior of these errors. We discover that these errors can be
avoided by increasing the latency of three major DRAM operations (activation,
restoration, and precharge). We perform detailed DRAM circuit simulations to
validate and explain our experimental findings. We also characterize the
various relationships between reduced supply voltage and error locations,
stored data patterns, DRAM temperature, and data retention.
Based on our observations, we propose a new DRAM energy reduction mechanism,
called Voltron. The key idea of Voltron is to use a performance model to
determine by how much we can reduce the supply voltage without introducing
errors and without exceeding a user-specified threshold for performance loss.
Our evaluations show that Voltron reduces the average DRAM and system energy
consumption by 10.5% and 7.3%, respectively, while limiting the average system
performance loss to only 1.8%, for a variety of memory-intensive quad-core
workloads. We also show that Voltron significantly outperforms prior dynamic
voltage and frequency scaling mechanisms for DRAM.
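Voltron's key idea can be sketched as a search over supply-voltage levels guided by a performance model: start from an aggressively low voltage and accept the lowest level whose predicted performance loss stays within the user's threshold. The voltage range, step, and the linear predict_perf_loss() model below are invented for illustration and are not the paper's actual model:

    #include <stdio.h>

    /* Hypothetical stand-in for Voltron's performance model: lower
     * voltage requires higher activation/restoration/precharge
     * latencies, which costs performance. Slope is made up. */
    static double predict_perf_loss(double voltage)
    {
        double nominal = 1.35;               /* DDR3L nominal (V) */
        return (nominal - voltage) * 25.0;   /* predicted % loss  */
    }

    /* Return the lowest voltage whose predicted loss is within
     * the user-specified threshold (in percent). */
    static double select_voltage(double max_loss_pct)
    {
        for (double v = 1.05; v <= 1.35; v += 0.05)
            if (predict_perf_loss(v) <= max_loss_pct)
                return v;
        return 1.35;                         /* fall back to nominal */
    }

    int main(void)
    {
        printf("chosen Vdd = %.2f V\n", select_voltage(1.8));
        return 0;
    }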
Predictable Performance and Fairness Through Accurate Slowdown Estimation in Shared Main Memory Systems
This paper summarizes the ideas and key concepts in MISE (Memory
Interference-induced Slowdown Estimation), which was published in HPCA 2013
[97], and examines the work's significance and future potential. Applications
running concurrently on a multicore system interfere with each other at the
main memory. This interference can slow down different applications
differently. Accurately estimating the slowdown of each application in such a
system can enable mechanisms that can enforce quality-of-service. While much
prior work has focused on mitigating the performance degradation due to
inter-application interference, there is little work on accurately estimating
slowdown of individual applications in a multi-programmed environment. Our goal
is to accurately estimate application slowdowns, towards providing predictable
performance.
To this end, we first build a simple Memory Interference-induced Slowdown
Estimation (MISE) model, which accurately estimates slowdowns caused by memory
interference. We then leverage our MISE model to develop two new memory
scheduling schemes: 1) one that provides soft quality-of-service guarantees,
and 2) another that explicitly attempts to minimize maximum slowdown (i.e.,
unfairness) in the system. Our evaluations show that these techniques perform
significantly better than state-of-the-art memory scheduling approaches that
target the same problems.
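The heart of the MISE model, for memory-bound applications, is that performance is roughly proportional to the rate at which the application's memory requests are served: the slowdown estimate is the ratio of the request service rate the application would see running alone to the rate it sees while sharing memory, with the alone rate sampled by periodically giving the application highest priority at the memory controller. A minimal sketch of that ratio (the numbers are illustrative):

    #include <stdio.h>

    /* MISE observation for memory-bound applications:
     *   slowdown ~= alone_service_rate / shared_service_rate.
     * The alone rate is estimated online by periodically giving
     * the application highest priority at the memory controller. */
    static double mise_slowdown(double rate_alone, double rate_shared)
    {
        return rate_alone / rate_shared;
    }

    int main(void)
    {
        /* e.g., 2.0 requests/cycle alone vs 1.25 shared -> 1.6x */
        printf("estimated slowdown: %.2fx\n", mise_slowdown(2.0, 1.25));
        return 0;
    }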
Our proposed model and techniques have enabled significant research in the
development of accurate performance models [35, 59, 98, 110] and interference
management mechanisms [66, 99, 100, 108, 119, 120].
Heterogeneous-Reliability Memory: Exploiting Application-Level Memory Error Tolerance
This paper summarizes our work on characterizing application memory error
vulnerability to optimize datacenter cost via Heterogeneous-Reliability Memory
(HRM), which was published in DSN 2014, and examines the work's significance
and future potential. Memory devices represent a key component of datacenter
total cost of ownership (TCO), and techniques used to reduce errors that occur
on these devices increase this cost. Existing approaches to providing
reliability for memory devices pessimistically treat all data as equally
vulnerable to memory errors. Our key insight is that there exists a diverse
spectrum of tolerance to memory errors in new data-intensive applications, and
that traditional one-size-fits-all memory reliability techniques are
inefficient in terms of cost. This presents an opportunity to greatly reduce
server hardware cost by provisioning the right amount of memory reliability for
different applications.
Toward this end, in our DSN 2014 paper, we make three main contributions to
enable highly-reliable servers at low datacenter cost. First, we develop a new
methodology to quantify the tolerance of applications to memory errors. Second,
using our methodology, we perform a case study of three new data-intensive
workloads (an interactive web search application, an in-memory key-value
store, and a graph mining framework) to identify new insights into the nature
of application memory error vulnerability. Third, based on our insights, we
propose several new hardware/software heterogeneous-reliability memory system
designs to lower datacenter cost while achieving high reliability and discuss
their trade-offs. We show that our new techniques can reduce server hardware
cost by 4.7% while achieving 99.90% single server availability.
Comment: 4 pages, 4 figures, summary report for DSN 2014 paper: "Characterizing Application Memory Error Vulnerability to Optimize Datacenter Cost via Heterogeneous-Reliability Memory"
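The underlying idea of heterogeneous-reliability memory lends itself to a small sketch: classify each allocation by its error tolerance and back it with a matching reliability tier, e.g., ECC-protected memory for vulnerable data and cheaper, less-reliable memory for tolerant data. The hrm_alloc() interface and the two allocator tiers below are hypothetical stand-ins for the paper's hardware/software designs:

    #include <stdio.h>
    #include <stdlib.h>

    /* Place data that cannot tolerate errors in strongly protected
     * (e.g., ECC) memory and error-tolerant data in cheaper memory.
     * Both tiers are simulated with plain malloc() here. */
    enum tolerance { MUST_BE_CORRECT, CAN_TOLERATE_ERRORS };

    static void *reliable_alloc(size_t n)   { return malloc(n); } /* ECC tier   */
    static void *unreliable_alloc(size_t n) { return malloc(n); } /* cheap tier */

    static void *hrm_alloc(size_t n, enum tolerance t)
    {
        return (t == MUST_BE_CORRECT) ? reliable_alloc(n)
                                      : unreliable_alloc(n);
    }

    int main(void)
    {
        /* Pointers/metadata are vulnerable; cached pixels are not. */
        void *index  = hrm_alloc(4096, MUST_BE_CORRECT);
        void *pixels = hrm_alloc(1 << 20, CAN_TOLERATE_ERRORS);
        printf("index=%p pixels=%p\n", index, pixels);
        free(index);
        free(pixels);
        return 0;
    }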