Energy Saving Techniques for Phase Change Memory (PCM)
In recent years, the energy consumption of computing systems has increased, and a large fraction of this energy is consumed in main memory. To address this, researchers have proposed the use of non-volatile memories such as phase change memory (PCM), which offers low read latency, low read power, and nearly zero leakage power. However, the write latency and write power of PCM are very high, and these, together with PCM's limited write endurance, present significant challenges to its widespread adoption. Several architecture-level techniques have been proposed to address these issues. In this report, we review several techniques for managing the power consumption of PCM and classify them based on their characteristics to provide insight into them. The aim of this work is to encourage researchers to propose even better techniques for improving the energy efficiency of PCM-based main memory.
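As an illustration of the kind of architecture-level technique such a survey covers, the sketch below shows data-comparison (differential) write, a well-known approach that reads the stored word and programs only the bits that actually change, cutting both write energy and cell wear. This is not taken from the report above; the function name, the 64-bit word width, and the write_bit callback are all hypothetical.

    def data_comparison_write(old_word, new_word, write_bit):
        """Minimal sketch of a data-comparison (differential) write: compare the
        stored word with the new word and program only the differing bit
        positions, which reduces PCM write energy and cell wear.
        Bit width and the write_bit(pos, value) callback are illustrative."""
        changed = old_word ^ new_word              # bit positions that differ
        bits_written = 0
        for pos in range(64):                      # assume 64-bit memory words
            if (changed >> pos) & 1:
                write_bit(pos, (new_word >> pos) & 1)  # program only changed cells
                bits_written += 1
        return bits_written                        # proxy for write energy spent

    # Example: only one bit differs, so one cell is programmed instead of 64
    n = data_comparison_write(0b1010, 0b1000, lambda pos, val: None)
    print(n)  # -> 1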
On the Intrinsic Locality Properties of Web Reference Streams
There has been considerable work done in the study of Web reference streams: sequences of requests for Web objects. In particular, many studies have looked at the locality properties of such streams, because of the impact of locality on the design and performance of caching and prefetching systems. However, a general framework for understanding why reference streams exhibit given locality properties has not yet emerged.
In this work we take a first step in this direction, based on viewing the Web as a set of reference streams that are transformed by Web components (clients, servers, and intermediaries). We propose a graph-based framework for describing this collection of streams and components. We identify three basic stream transformations that occur at nodes of the graph: aggregation, disaggregation and filtering, and we show how these transformations can be used to abstract the effects of different Web components on their associated reference streams. This view allows a structured approach to the analysis of why reference streams show given properties at different points in the Web.
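As a rough, hypothetical illustration of the three stream transformations (not code from the paper), the sketch below models a reference stream as a list of object identifiers: aggregation merges several streams into one, disaggregation splits a stream into per-destination substreams, and filtering models a cache that absorbs hits so that only misses propagate downstream. The function names and the round-robin/LRU choices are assumptions made for brevity.

    from collections import OrderedDict

    def aggregate(*streams):
        """Merge several reference streams into one (e.g., a proxy serving many
        clients). Interleaved round-robin here; real merging is time-ordered."""
        out, iters = [], [iter(s) for s in streams]
        while iters:
            for it in list(iters):
                try:
                    out.append(next(it))
                except StopIteration:
                    iters.remove(it)
        return out

    def disaggregate(stream, key):
        """Split one stream into per-destination substreams (e.g., a client
        fanning requests out to servers), using key(obj) to pick the substream."""
        parts = {}
        for obj in stream:
            parts.setdefault(key(obj), []).append(obj)
        return parts

    def filter_through_cache(stream, capacity):
        """Model filtering: an LRU cache absorbs hits, so only misses appear in
        the downstream (filtered) reference stream."""
        cache, misses = OrderedDict(), []
        for obj in stream:
            if obj in cache:
                cache.move_to_end(obj)     # hit: absorbed by the cache
            else:
                misses.append(obj)         # miss: propagates downstream
                cache[obj] = True
                if len(cache) > capacity:
                    cache.popitem(last=False)
        return misses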
Applying this approach to the study of locality requires good metrics for locality. These metrics must meet three criteria: 1) they must accurately capture temporal locality; 2) they must be independent of trace artifacts such as trace length; and 3) they must not involve manual procedures or model-based assumptions. We describe two metrics that meet these criteria, each capturing a different kind of temporal locality in reference streams. The popularity component of temporal locality is captured by entropy, while the correlation component is captured by the interreference coefficient of variation. We argue that these metrics are more natural and more useful than previously proposed metrics for temporal locality.
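A minimal sketch of how these two metrics could be computed from a trace is shown below; the exact definitions used in the paper (e.g., any normalization of the entropy, or per-object rather than pooled interreference gaps) may differ, so treat this as illustrative only.

    import math
    from collections import Counter

    def popularity_entropy(stream):
        """Entropy of the empirical popularity distribution: higher entropy means
        references are spread more evenly over objects (weaker popularity skew)."""
        counts, n = Counter(stream), len(stream)
        return -sum((c / n) * math.log2(c / n) for c in counts.values())

    def interreference_cv(stream):
        """Coefficient of variation of interreference gaps (distance between
        successive references to the same object), pooled over all objects.
        A large CV indicates bursty, temporally correlated re-references."""
        last_seen, gaps = {}, []
        for i, obj in enumerate(stream):
            if obj in last_seen:
                gaps.append(i - last_seen[obj])
            last_seen[obj] = i
        if not gaps:
            return float('nan')
        mean = sum(gaps) / len(gaps)
        var = sum((g - mean) ** 2 for g in gaps) / len(gaps)
        return math.sqrt(var) / mean

    # Example: a small skewed, bursty trace
    trace = ['a', 'a', 'b', 'a', 'c', 'a', 'b', 'd']
    print(popularity_entropy(trace), interreference_cv(trace))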
We use this framework to analyze a diverse set of Web reference traces. We find that this framework can shed light on how and why locality properties vary across different locations in the Web topology. For example, we find that filtering and aggregation have opposing effects on the popularity component of temporal locality, which helps to explain why multilevel caching can be effective in the Web. Furthermore, we find that all transformations tend to diminish the correlation component of temporal locality, which has implications for the utility of different cache replacement policies at different points in the Web.
Evaluating Built-in ECC of FPGA on-chip Memories for the Mitigation of Undervolting Faults
Voltage underscaling below the nominal level is an effective solution for improving energy efficiency in digital circuits, e.g., Field Programmable Gate Arrays (FPGAs). However, undervolting below a safe voltage level without accompanying frequency scaling leads to timing-related faults, potentially undermining the energy savings. Through experimental voltage-underscaling studies on commercial FPGAs, we observed that the rate of these faults increases exponentially for on-chip memories, or Block RAMs (BRAMs). To mitigate these faults, we evaluated the efficiency of the built-in Error-Correction Code (ECC) and observed that more than 90% of the faults are correctable and a further 7% are detectable (but not correctable). This efficiency results from the single-bit nature of these faults, which are effectively covered by the Single-Error Correction, Double-Error Detection (SECDED) design of the built-in ECC. Finally, motivated by these experimental observations, we evaluated an FPGA-based Neural Network (NN) accelerator under low-voltage operation, with the built-in ECC leveraged to mitigate undervolting faults and thus prevent significant NN accuracy loss. As a result, we achieve 40% BRAM power savings through undervolting below the minimum safe voltage level, with negligible NN accuracy loss, thanks to the substantial fault coverage provided by the built-in ECC.
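The coverage argument can be illustrated with a small, hypothetical Monte-Carlo sketch: if undervolting faults flip BRAM bits roughly independently and each word is protected by a SECDED code, then words with one flipped bit are corrected and words with two flipped bits are detected but not corrected. The 72-bit word layout (64 data + 8 check bits) and the per-bit fault probability below are assumptions for illustration, not measurements from the paper.

    import random

    WORD_BITS = 72   # assumption: 64 data bits + 8 check bits per SECDED-protected word

    def classify_word(num_flipped_bits):
        """SECDED outcome for one protected word, by construction of the code:
        0 flips -> clean, 1 -> corrected, 2 -> detected, >2 -> not guaranteed."""
        if num_flipped_bits == 0:
            return 'clean'
        if num_flipped_bits == 1:
            return 'corrected'
        if num_flipped_bits == 2:
            return 'detected'
        return 'uncovered'

    def undervolting_coverage(num_words, per_bit_fault_prob, seed=0):
        """Monte-Carlo estimate of how many faulty words SECDED covers, assuming
        undervolting faults hit bits independently (observed faults are mostly
        single-bit, so most faulty words fall in the 'corrected' bucket)."""
        rng = random.Random(seed)
        tally = {'clean': 0, 'corrected': 0, 'detected': 0, 'uncovered': 0}
        for _ in range(num_words):
            flips = sum(rng.random() < per_bit_fault_prob for _ in range(WORD_BITS))
            tally[classify_word(flips)] += 1
        return tally

    print(undervolting_coverage(num_words=100_000, per_bit_fault_prob=1e-4))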