7 research outputs found

    Codes for Limited Magnitude Error Correction in Multilevel Cell Memories

    Multilevel cell (MLC) memories have been advocated for increasing density at low cost in next generation memories. However, storing several bits per cell reduces the distance between levels; this reduced margin makes such memories more vulnerable to defects and parameter variations, leading to errors in the stored data. These errors are typically of limited magnitude, because the induced change causes the stored value to exceed only a few of the level boundaries. To protect these memories from such errors and ensure that the stored data is not corrupted, Error Correction Codes (ECCs) are commonly used. However, most existing codes have been designed to protect memories in which each cell stores a single bit and are therefore not efficient for protecting MLC memories. In this paper, an efficient scheme that can correct errors of magnitude up to 3 is presented and evaluated. The scheme is based on combining ECCs that are commonly used to protect traditional memories. In particular, Interleaved Parity (IP) bits and Single Error Correction and Double Adjacent Error Correction (SEC-DAEC) codes are utilized; both codes are combined in the proposed IP-DAEC scheme to efficiently provide a strong correction capability that exceeds that of most existing coding schemes for limited magnitude errors. The SEC-DAEC code is used to detect the cell in error and correct some bits, while the IP bits identify the remaining erroneous bits in the memory cell. The use of these simple codes results in an efficient implementation of the decoder compared to existing techniques, as shown by the evaluation results presented in this paper. The proposed scheme is also competitive in terms of the number of parity check bits and memory redundancy. Therefore, the proposed IP-DAEC scheme is a very efficient alternative for protecting MLC memories from limited magnitude errors. Pedro Reviriego was partially supported by the TEXEO project (TEC2016-80339-R), funded by the Spanish Research Plan, and by the Madrid Community research project TAPIR-CM, grant no. P2018/TCS-4496.
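    To make the limited-magnitude error model and the role of interleaved parity concrete, the toy Python sketch below shows how one parity bit per bit position across a word of cells pinpoints exactly which bits of a known-erroneous cell were flipped by a bounded level shift. It is not the paper's IP-DAEC construction: the SEC-DAEC part that locates the erroneous cell is simply assumed to have done its job, a plain binary (rather than Gray) level mapping is used, and all names and sizes (BITS_PER_CELL, CELLS_PER_WORD, MAX_MAGNITUDE) are illustrative assumptions.

```python
# Toy illustration of limited-magnitude errors in 3-bit MLC cells and of
# interleaved parity (IP) bits, one per bit position across a word of cells.
# NOT the IP-DAEC construction from the paper; the SEC-DAEC code that flags
# the erroneous cell is assumed to have already located it.

import random

BITS_PER_CELL = 3          # 8-level MLC (illustrative choice)
CELLS_PER_WORD = 8
MAX_MAGNITUDE = 3          # limited-magnitude error model

def cell_bits(level):
    """Binary representation of a cell level, LSB first."""
    return [(level >> i) & 1 for i in range(BITS_PER_CELL)]

def interleaved_parity(levels):
    """One parity bit per bit position, computed across all cells."""
    parity = [0] * BITS_PER_CELL
    for lvl in levels:
        for i, b in enumerate(cell_bits(lvl)):
            parity[i] ^= b
    return parity

# Store a word and its IP bits.
stored = [random.randrange(2 ** BITS_PER_CELL) for _ in range(CELLS_PER_WORD)]
ip_bits = interleaved_parity(stored)

# Inject a limited-magnitude error into one cell.
victim = random.randrange(CELLS_PER_WORD)
delta = random.choice([d for d in range(-MAX_MAGNITUDE, MAX_MAGNITUDE + 1) if d])
corrupted = list(stored)
corrupted[victim] = min(max(corrupted[victim] + delta, 0), 2 ** BITS_PER_CELL - 1)

# Recompute IP bits: the mismatching positions are exactly the flipped bits
# inside the victim cell, so they can be corrected once the cell is known.
mismatch = [a ^ b for a, b in zip(ip_bits, interleaved_parity(corrupted))]
repaired = list(corrupted)
for i, m in enumerate(mismatch):
    if m:
        repaired[victim] ^= 1 << i
assert repaired == stored
```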

    Algorithms and Data Representations for Emerging Non-Volatile Memories

    The evolution of data storage technologies has been extraordinary. Hard disk drives that fit in today's personal computers have a capacity that would have required tons of transistors to achieve in the 1970s. Today, we are at the beginning of the era of non-volatile memory (NVM). NVMs provide excellent performance such as random access, high I/O speed, low power consumption, and so on. The storage density of NVMs keeps increasing following Moore's law. However, higher storage density also brings significant data reliability issues. As chip geometries scale down, memory cells (e.g., transistors) are placed much closer to each other, and noise in the devices is no longer negligible. Consequently, data become more prone to errors and devices have much shorter lifetimes. This dissertation focuses on mitigating the reliability and endurance issues of two major NVMs, namely, NAND flash memory and phase-change memory (PCM). Our main research tools are a set of coding techniques for the communication channels implied by flash memory and PCM. To approach these problems, at the bit level we design error-correcting codes tailored to the asymmetric errors in flash and PCM, propose a joint coding scheme for endurance and reliability, develop error scrubbing methods for controlling storage channel quality, and study codes that are inherently resistant to the typical errors in flash and PCM; at higher levels, we analyze the structure and meaning of the stored data and propose methods that pass such metadata down to further improve the coding performance at the bit level. The highlights of this dissertation include the first set of write-once memory code constructions that correct a significant number of errors, a practical framework that corrects errors by utilizing the redundancy in texts, the first report of the performance of polar codes for flash memories, and the emulation of rank modulation codes in NAND flash chips.
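    As background for the write-once memory (WOM) codes mentioned above, the sketch below implements the classic two-write [3,2] WOM code of Rivest and Shamir, which stores 2 bits twice in 3 binary cells that can only change from 0 to 1. It illustrates only the basic WOM idea the dissertation builds on, not the error-correcting WOM constructions contributed by the thesis; the function names are illustrative.

```python
# Classic [3,2] two-write WOM code (Rivest & Shamir): two data bits are
# written twice into three write-once cells that may only go from 0 to 1.
# Background illustration only; the thesis contributes WOM codes that also
# correct errors, which this sketch does not attempt.

FIRST = {(0, 0): (0, 0, 0), (0, 1): (0, 1, 0),
         (1, 0): (1, 0, 0), (1, 1): (0, 0, 1)}
# Second-generation codewords are the bitwise complements of the first.
SECOND = {v: tuple(1 - c for c in cw) for v, cw in FIRST.items()}

def write_first(value):
    return FIRST[value]

def write_second(cells, value):
    """Second write: cells may only be raised from 0 to 1."""
    if decode(cells) == value:           # value unchanged, nothing to write
        return cells
    target = SECOND[value]
    assert all(t >= c for c, t in zip(cells, target)), "0->1 only"
    return target

def decode(cells):
    weight = sum(cells)
    table = FIRST if weight <= 1 else SECOND
    return next(v for v, cw in table.items() if cw == cells)

# Exhaustive check: every first value followed by every second value works.
for v1 in FIRST:
    for v2 in FIRST:
        cells = write_first(v1)
        assert decode(cells) == v1
        cells = write_second(cells, v2)
        assert decode(cells) == v2
```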

    Storage Techniques in Flash Memories and Phase-change Memories

    Non-volatile memories are an emerging storage technology with wide applications in many important areas. This study focuses on new storage techniques for flash memories and phase-change memories. Flash memories are currently the most widely used type of non-volatile memory, and phase-change memories (PCMs) are the most promising candidate for the next generation of non-volatile memories. Like magnetic recording and optical recording, flash memories and PCMs have their own distinct properties, which introduce very interesting data storage problems. They include error correction, cell programming and other coding problems that affect the reliability and efficiency of data storage. Solutions to these problems can significantly improve the longevity and performance of storage systems based on flash memories and PCMs. In this work, we study several new techniques for data storage in flash memories and PCMs. First, we study new types of error-correcting codes for flash memories – called error scrubbing codes – that correct errors by only increasing cell levels. Error scrubbing codes can correct errors without the costly block erasure operations, and we show how they can outperform conventional error-correcting codes. Next, we study programming strategies for flash memory cells and present an adaptive algorithm that optimizes the expected precision of cell programming. We then study data storage in PCMs, where thermal interference is a major challenge for data reliability. We present two new coding techniques that reduce thermal interference, and study their storage capacities and code constructions.
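    The cell-programming problem above hinges on a one-directional constraint: charge can only be added, never removed without an erase. The following toy sketch illustrates that constraint with a simple adaptive pulse loop that shrinks pulses as the level approaches the target; it is a loose illustrative heuristic under assumed parameters (noise, safety, program_cell are all made up here), not the adaptive algorithm proposed in this work.

```python
# Toy model of one-directional flash cell programming with noisy charge
# injection: an illustrative heuristic, not the dissertation's algorithm.

import random

def program_cell(target, noise=0.2, safety=0.5, steps=20):
    """Raise the cell level toward `target` while rarely overshooting.

    Each pulse nominally adds `step` charge but actually adds a random
    amount in [step*(1-noise), step*(1+noise)]. Because charge cannot be
    removed, each pulse aims at only a `safety` fraction of the remaining
    gap, so pulses shrink as the level approaches the target.
    """
    level = 0.0
    for _ in range(steps):
        gap = target - level
        if gap <= 0:
            break
        step = max(gap * safety, 0.01)
        level += step * random.uniform(1 - noise, 1 + noise)
    return level

random.seed(1)
trials = [program_cell(target=3.0) for _ in range(1000)]
error = [abs(t - 3.0) for t in trials]
overshoot = sum(t > 3.0 for t in trials) / len(trials)
print(f"mean |error| = {sum(error)/len(error):.3f}, overshoot rate = {overshoot:.2%}")
```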

    Bit-Flip Aware Data Structures for Phase Change Memory

    Big, byte-addressable, low-cost, and fast non-volatile memories like Phase Change Memory are appearing in the marketplace. They have the capability to unify both memory and storage and allow us to rethink the present memory hierarchy. An important drawback of Phase Change Memory is its limited write endurance. In addition, Phase Change Memory shares with other Non-Volatile Random Access Memories an asymmetry in the energy costs of writes and reads. Making the best use of Non-Volatile Random Access Memories therefore means limiting the number of times a cell changes its contents, called a bit-flip. While the future of main memory is still unclear, we should already start to design data structures for these memories in order to shape that future. This thesis investigates the creation of bit-flip-aware data structures. The thesis first considers general ways in which a data structure can save bit-flips through smart overwrites and by using the exclusive-or of pointers. It then shows how a simple content-dependent encoding can reduce bit-flips for web corpora. It then shows how to build hash-based dictionary structures for Linear Hashing and Spiral Storage. Finally, the thesis presents Gray counters, close-to-bit-flip-optimal counters that even enable age-based wear leveling, with counters managed by the Non-Volatile Random Access Memories themselves instead of by the operating system.
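    The Gray-counter idea can be illustrated directly: a reflected-binary Gray code flips exactly one bit per increment, whereas a plain binary counter flips two on average. The sketch below is a minimal illustration of that property only, not the thesis's NVM-managed, wear-leveling-aware counter design; the function names are illustrative.

```python
# Bit-flips of a plain binary counter versus a Gray-code counter, which
# flips exactly one bit per increment. Minimal sketch of the idea behind
# Gray counters; not the thesis's actual counter design.

def to_gray(n):
    """Standard reflected binary Gray code of n."""
    return n ^ (n >> 1)

def bit_flips(a, b):
    """Number of bits that differ between two encodings."""
    return bin(a ^ b).count("1")

N = 1 << 16
binary_flips = sum(bit_flips(i, i + 1) for i in range(N))
gray_flips = sum(bit_flips(to_gray(i), to_gray(i + 1)) for i in range(N))

print(f"binary counter: {binary_flips} bit-flips over {N} increments "
      f"({binary_flips / N:.2f} per increment)")
print(f"gray counter:   {gray_flips} bit-flips over {N} increments "
      f"({gray_flips / N:.2f} per increment)")
# The Gray counter flips exactly one cell per increment (about half the
# writes of a binary counter on average), which directly reduces cell wear.
```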

    Low Energy Solutions for Multi- and Triple-Level Cell Non-Volatile Memories

    Due to the high refresh power and scalability issues of DRAM, non-volatile memories (NVM) such as phase change memory (PCM) and resistive RAM (RRAM) are being actively investigated as viable replacements for DRAM. However, although these NVMs are more scalable than DRAM, they have shortcomings such as higher write energy and lower endurance. Further, the increased capacity of multi- and triple-level cells (MLC/TLC) in these NVM technologies comes at the cost of even higher write energies and lower endurance, attributed to the MLC/TLC program-and-verify (P&V) techniques. This dissertation makes the following contributions to address the high write energy associated with MLC/TLC NVMs. First, we describe MFNW, a Flip-N-Write encoding that effectively reduces the write energy and improves the endurance of MLC NVMs. MFNW encodes an MLC/TLC word into a number of codewords and selects the one resulting in the lowest write energy. Second, we present another encoding solution that is based on perfect-knowledge frequent value encoding (FVE). This encoding technique leverages machine learning to cluster a set of general-purpose applications according to their frequency profiles and generates a dedicated offline FVE for every cluster to maximize energy reduction across a broad spectrum of applications. Whereas the proposed encodings are used as an add-on layer on top of the MLC/TLC P&V solutions, the third contribution is a low-latency, low-energy P&V (L3EP) approach for MLC/TLC PCM. The primary motivation of L3EP is to fix the problem at its origin by crafting a higher-speed programming algorithm. A reduction in write latency implies a reduction in write energy as well as an improvement in cell endurance. Directions for future research include the integration and evaluation of a software-based hybrid encoding mechanism for MLC/TLC NVMs; this is a page-level encoding that employs a DRAM cache for coding/decoding purposes. The main challenges include how the cache block replacement algorithm can easily access the page-level auxiliary cells to encode the cache block correctly. In summary, this work presents multiple solutions to address major challenges of MLC/TLC NVMs, including write latency, write energy, and cell endurance.
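    For context on the encoding family MFNW belongs to, the sketch below shows the classic binary Flip-N-Write idea: before overwriting an old word, store either the new word or its complement (plus one flag bit), whichever changes fewer cells. This is only the single-level-cell baseline that MFNW generalizes to MLC/TLC codeword selection, not the MFNW encoding itself; WORD_BITS, flip_n_write, and the example values are illustrative assumptions.

```python
# Minimal sketch of binary Flip-N-Write: write the new word or its
# complement (with a flag bit), whichever flips fewer cells. Background
# only; not the MLC/TLC MFNW codeword search described in the abstract.

WORD_BITS = 32
MASK = (1 << WORD_BITS) - 1

def flip_n_write(old_word, old_flag, new_word):
    """Return (stored_word, flag, cells_changed), minimizing changed bits."""
    direct = bin((old_word ^ new_word) & MASK).count("1") + (old_flag != 0)
    flipped = bin(old_word ^ (~new_word & MASK)).count("1") + (old_flag != 1)
    if direct <= flipped:
        return new_word, 0, direct
    return ~new_word & MASK, 1, flipped

def read(stored_word, flag):
    return (~stored_word & MASK) if flag else stored_word

# Example: the new value differs from the old one in most bit positions,
# so storing the complement touches far fewer cells.
old, flag = 0x00000000, 0
new = 0xFFFFFF0F
stored, flag, changed = flip_n_write(old, flag, new)
assert read(stored, flag) == new
print(f"cells changed: {changed} (a direct write would change "
      f"{bin(old ^ new).count('1')})")
```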

    Design and Code Optimization for Systems with Next-generation Racetrack Memories

    With the rise of computationally expensive application domains such as machine learning, genomics, and fluid simulation, the quest for performance and energy-efficient computing has gained unprecedented momentum. The significant increase in computing and memory devices in modern systems has resulted in an unsustainable surge in energy consumption, a substantial portion of which is attributed to the memory system. The scaling of conventional memory technologies and their suitability for next-generation systems are also questionable. This has led to the emergence and rise of non-volatile memory (NVM) technologies. Today, several NVM technologies, at different stages of development, are competing for rapid access to the market. Racetrack memory (RTM) is one such non-volatile memory technology that promises SRAM-comparable latency, reduced energy consumption, and unprecedented density compared to other technologies. However, RTM is sequential in nature, i.e., data in an RTM cell need to be shifted to an access port before they can be accessed. These shift operations incur performance and energy penalties. An ideal RTM, requiring at most one shift per access, can easily outperform SRAM. However, in the worst-case shifting scenario, RTM can be an order of magnitude slower than SRAM. This thesis presents an overview of the RTM device physics, its evolution, strengths and challenges, and its application in the memory subsystem. We develop tools that allow the programmability and modeling of RTM-based systems. For shift minimization, we propose a set of techniques, including optimal, near-optimal, and evolutionary algorithms, for efficient scalar and instruction placement in RTMs. For array accesses, we explore schedule and layout transformations that eliminate the longer-overhead shifts in RTMs. We present an automatic compilation framework that analyzes static control flow programs and transforms the loop traversal order and memory layout to maximize accesses to consecutive RTM locations and minimize shifts. We develop a simulation framework called RTSim that models various RTM parameters and enables accurate architectural-level simulation. Finally, to demonstrate the potential of RTM in non-von Neumann in-memory computing paradigms, we exploit its device attributes to implement logic and arithmetic operations. As a concrete use case, we implement an entire hyperdimensional computing framework in RTM to accelerate the language recognition problem. Our evaluation shows considerable performance and energy improvements compared to conventional von Neumann models and state-of-the-art accelerators.
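    To make the shift-cost problem concrete, the toy sketch below models a single-port RTM track where accessing a location costs the number of shifts needed to bring it under the port, and compares a naive program-order placement against a simple frequency-based placement on a skewed access trace. It is only a rough illustration of why placement matters; the thesis's optimal, near-optimal, and evolutionary placement algorithms are far more sophisticated, and the trace, track size, and heuristic here are invented for the example.

```python
# Toy single-port racetrack track: the cost of a trace is the sum of
# distances between consecutively accessed positions. Compares naive
# program-order placement with a frequency-based placement heuristic.
# Illustration only; not the placement algorithms from the thesis.

import random
from collections import Counter

def shift_cost(trace, placement):
    """Total shifts, with the port initially aligned to position 0."""
    pos, cost = 0, 0
    for var in trace:
        target = placement[var]
        cost += abs(target - pos)
        pos = target
    return cost

# A skewed access trace over 16 scalars (a few hot variables dominate).
random.seed(0)
variables = list(range(16))
weights = [2 ** v for v in variables]           # variable 15 is hottest
trace = random.choices(variables, weights=weights, k=10_000)

# Naive placement: program order. Heuristic: hottest variables closest to
# the port end of the track, so frequent accesses need fewer shifts.
naive = {v: i for i, v in enumerate(variables)}
by_freq = [v for v, _ in Counter(trace).most_common()]
by_freq += [v for v in variables if v not in by_freq]   # never-accessed vars
greedy = {v: i for i, v in enumerate(by_freq)}

print("naive placement shifts:   ", shift_cost(trace, naive))
print("frequency-based placement:", shift_cost(trace, greedy))
```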