Coset Coding to Extend the Lifetime of Non-Volatile Memory
Modern computing systems increasingly integrate both Phase Change Memory (PCM) and Flash memory technologies, yet the lifetime of these technologies is limited by the number of times their cells can be written. Because of this limited lifetime, PCM and Flash may wear out before other parts of the system. The objective of this dissertation is to increase the lifetime of memory locations composed of either PCM or Flash cells using coset coding.

For PCM, we extend memory lifetime by using coset coding to reduce the number of bit flips per write compared to un-coded writes. The Flash program/erase cycle degrades page lifetime; we extend the lifetime of Flash memory cells by using coset coding to re-program a page multiple times without erasing. We then show how coset coding can be integrated into Flash solid state drives.

We ran simulations to evaluate the effectiveness of using coset coding to extend PCM and Flash lifetime. In our simulations of writes to PCM, coset coding increased PCM lifetime by up to 3x over writing un-coded data directly to the memory location. We extended the lifetime of Flash using coset coding to re-write pages without an intervening erase, and were able to re-write a single Flash page more times than when writing un-coded data or when using prior coding work with the same area overhead. We also found that using coset coding in a Flash SSD results in higher lifetime for a given area overhead compared to un-coded writes.
Dissertation
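The core idea of coset coding can be sketched in a few lines. This is a hypothetical illustration, not the dissertation's exact construction: a few auxiliary bits select an XOR mask, and the encoder writes whichever masked codeword differs least from what the cells already hold, reducing bit flips per write. The mask set below is an assumption.

```python
# Hypothetical sketch of the coset-coding idea (not the dissertation's exact
# construction): 2 auxiliary bits select one of 4 XOR masks, and the encoder
# writes whichever masked codeword differs least from the current cell state.
MASKS = [0x00, 0x0F, 0xF0, 0xFF]   # assumed coset representatives

def hamming(a, b):
    return bin(a ^ b).count("1")

def encode(data, current):
    """Pick the mask minimizing flips between the codeword and current cells."""
    idx = min(range(len(MASKS)), key=lambda i: hamming(data ^ MASKS[i], current))
    return idx, data ^ MASKS[idx]

def decode(idx, codeword):
    return codeword ^ MASKS[idx]

current = 0b00000000                  # 8-bit memory location's current contents
data = 0b11111111                     # new data word
idx, word = encode(data, current)
assert decode(idx, word) == data
# Never worse than writing the data un-coded (auxiliary-bit cost aside):
assert hamming(word, current) <= hamming(data, current)
```

A real design must also account for storing and flipping the auxiliary bits themselves, which is part of the area-overhead trade-off the dissertation evaluates.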
Enabling Fine-Grain Restricted Coset Coding Through Word-Level Compression for PCM
Phase change memory (PCM) has recently emerged as a promising technology to
meet the fast growing demand for large capacity memory in computer systems,
replacing DRAM that is impeded by physical limitations. Multi-level cell (MLC)
PCM offers high density with low per-byte fabrication cost. However, despite
many advantages, such as scalability and low leakage, the energy for
programming intermediate states is considerably larger than for programming
single-level cell PCM. In this paper, we study encoding techniques to reduce
write energy for MLC PCM when the encoding granularity is lowered below the
typical cache line size. We observe that encoding data blocks at small
granularity to reduce write energy actually increases the write energy because
of the auxiliary encoding bits. We mitigate this adverse effect by 1) designing
suitable codeword mappings that use fewer auxiliary bits and 2) proposing a new
Word-Level Compression (WLC) which compresses more than 91% of the memory lines
and provides enough room to store the auxiliary data using a novel restricted
coset encoding applied at small data block granularities.
Experimental results show that the proposed encoding at 16-bit data
granularity reduces the write energy by 39%, on average, versus the leading
encoding approach for write energy reduction. Furthermore, it improves
endurance by 20% and is more reliable than the leading approach. Hardware
synthesis evaluation shows that the proposed encoding can be implemented
on-chip with only a nominal area overhead.
Comment: 12 pages
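The word-level compression test can be illustrated with a small sketch. The rule and widths below are assumptions for illustration, not the paper's design: a 16-bit word compresses when it is just a sign-extension of its low bits, and the freed upper bits can then hold the auxiliary bits of the restricted coset code.

```python
# Hypothetical illustration of the Word-Level Compression (WLC) idea: a 16-bit
# word that is a sign-extension of its low `kept_bits` bits can be stored in
# fewer bits, freeing room for restricted-coset auxiliary bits. The width and
# rule here are assumptions, not the paper's exact scheme.
def compressible(word, kept_bits=12):
    """True if the 16-bit word is a sign-extension of its low kept_bits bits."""
    sign = (word >> (kept_bits - 1)) & 1
    upper = word >> kept_bits
    return upper == (0 if sign == 0 else (1 << (16 - kept_bits)) - 1)

assert compressible(0x0005)       # small positive value: upper bits all 0
assert compressible(0xFFF5)       # small negative value: upper bits all 1
assert not compressible(0x8001)   # upper bits carry information
```

Common data patterns (small integers, sign-extended values) make rules of this kind succeed on most words, which is consistent with the paper's observation that more than 91% of memory lines compress.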
Compressed Encoding for Rank Modulation
Rank modulation has been recently proposed as
a scheme for storing information in flash memories. While
rank modulation has advantages in improving write speed and
endurance, the current encoding approach is based on the "push
to the top" operation that is not efficient in the general case. We
propose a new encoding procedure where a cell level is raised to
be higher than the minimal necessary subset - instead of all - of
the other cell levels. This new procedure leads to a significantly
more compressed (lower charge levels) encoding. We derive an
upper bound for a family of codes that utilize the proposed
encoding procedure, and consider code constructions that achieve
that bound for several special cases.
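The difference between the two programming approaches can be sketched as follows. This is an illustrative toy, not the paper's construction: "push to the top" raises each cell above all current levels, while the compressed procedure raises a cell only above the cells that must rank below it.

```python
# Hypothetical comparison (not the paper's construction) of two ways to
# program a target ranking into rank-modulation cells, where programming
# may only raise charge levels.
def push_to_top(levels, order):
    """Classic approach: each cell is pushed above ALL current levels.
    `order` lists cell indices from lowest to highest desired rank."""
    levels = list(levels)
    for cell in order:
        levels[cell] = max(levels) + 1
    return levels

def compressed(levels, order):
    """Compressed approach: raise a cell only above the cell ranked just below it."""
    levels = list(levels)
    prev = None
    for cell in order:
        if prev is not None and levels[cell] <= levels[prev]:
            levels[cell] = levels[prev] + 1
        prev = cell
    return levels

start = [3, 1, 2]    # current charge levels of cells 0, 1, 2
order = [2, 0, 1]    # desired ranking: cell 2 lowest, cell 1 highest
# The compressed procedure reaches the same ranking with lower charge levels:
assert max(compressed(start, order)) < max(push_to_top(start, order))
```

Lower maximum charge levels mean more rewrites fit before the cells must be reset, which is the endurance benefit the abstract describes.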
When Do WOM Codes Improve the Erasure Factor in Flash Memories?
Flash memory is a write-once medium in which reprogramming cells requires
first erasing the block that contains them. The lifetime of the flash is a
function of the number of block erasures and can be as small as several
thousands. To reduce the number of block erasures, pages, which are the
smallest write unit, are rewritten out-of-place in the memory. A write-once
memory (WOM) code is a coding scheme that enables writing to a block multiple
times before an erasure. However, these codes come with a significant rate
loss. For example, the rate for writing twice (with the same rate on each
write) is at most 0.77.
In this paper, we study WOM codes and their tradeoff between rate loss and
reduction in the number of block erasures, when pages are written uniformly at
random. First, we introduce a new measure, called erasure factor, that reflects
both the number of block erasures and the amount of data that can be written on
each block. A key point in our analysis is that this tradeoff depends upon the
specific implementation of WOM codes in the memory. We consider two systems
that use WOM codes: a conventional scheme in common use, and a recent design
that preserves the overall storage capacity. While the first system can
improve the erasure factor only when the storage rate is at most 0.6442, we
show that the second scheme always improves this figure of merit.
Comment: to be presented at ISIT 201
Rewriting Codes for Joint Information Storage in Flash Memories
Memories whose storage cells transit irreversibly between
states have been common since the start of data storage
technology. In recent years, flash memories have become a very
important family of such memories. A flash memory cell has q
states - states 0, 1, ..., q-1 - and can only transit from a lower
state to a higher state before the expensive erasure operation takes
place. We study rewriting codes that enable the data stored in a
group of cells to be rewritten by only shifting the cells to higher
states. Since the considered state transitions are irreversible, the
number of rewrites is bounded. Our objective is to maximize the
number of times the data can be rewritten. We focus on the joint
storage of data in flash memories, and study two rewriting codes
for two different scenarios. The first code, called floating code, is for
the joint storage of multiple variables, where every rewrite changes
one variable. The second code, called buffer code, is for remembering
the most recent data in a data stream. Many of the codes
presented here are either optimal or asymptotically optimal. We
also present bounds on the performance of general codes. The results
show that rewriting codes can integrate a flash memory's
rewriting capabilities for different variables to a high degree.
Time-Space Constrained Codes for Phase-Change Memories
Phase-change memory (PCM) is a promising non-volatile solid-state memory
technology. A PCM cell stores data by using its amorphous and crystalline
states. The cell changes between these two states using high temperature.
However, since the cells are sensitive to high temperature, it is important,
when programming cells, to balance the heat both in time and space.
In this paper, we study the time-space constraint for PCM, which was
originally proposed by Jiang et al. A code is called an
$(\alpha,\beta,p)$-constrained code if for any $\alpha$ consecutive
rewrites and for any segment of $\beta$ contiguous cells, the total rewrite
cost of the $\beta$ cells over those $\alpha$ rewrites is at most $p$. Here,
the cells are binary and the rewrite cost is defined to be the Hamming distance
between the current and next memory states. First, we show a general upper
bound on the achievable rate of these codes which extends the results of Jiang
et al. Then, we generalize their construction for $(\alpha,\beta=1,p=1)$-constrained
codes and show another construction for $(\alpha=1,\beta,p)$-constrained codes.
Finally, we show that these two constructions can be used to construct codes
for all values of $\alpha$, $\beta$, and $p$.
Leveraging data compression for performance-efficient and long-lasting NVM-based last-level cache
Non-volatile memory (NVM) technologies are interesting alternatives for building on-chip Last-Level Caches (LLCs). Their advantages over SRAM are higher density and lower static power, but each write operation slightly wears out the bitcell, to the point of losing its storage capacity. In this context, this paper summarizes three contributions to the state of the art in NVM-based LLCs.
Data compression reduces the size of the blocks and, together with wear-leveling mechanisms, can defer the wear-out of NVMs. Moreover, as capacity is reduced by write wear, data compression enables degraded cache frames to allocate blocks whose compressed size is adequate. Our first contribution is a microarchitecture design that leverages data compression and intra-frame wear-leveling to gracefully deal with NVM-LLC capacity degradation. The second contribution leverages this microarchitecture design to propose new insertion policies for hybrid LLCs that use Set Dueling and take into account the compression capabilities of the blocks. From a methodological point of view, although different approaches are used in the literature to analyze the degradation of an NVM-LLC, none of them allows its temporal evolution to be studied in detail. In this sense, the third contribution is a forecasting procedure that combines detailed simulation and prediction, enabling an accurate analysis of the effect of different cache content mechanisms (replacement, wear leveling, compression, etc.) on the temporal evolution of the performance of multiprocessor systems employing such NVM-LLCs.
Using this forecasting procedure, we show that the proposed NVM-LLC organizations and the insertion policies for hybrid LLCs significantly outperform the state of the art in both performance and lifetime metrics.
Peer Reviewed. Postprint (published version).