
    Constructions of Rank Modulation Codes

    Rank modulation is a way of encoding information to correct errors in flash memory devices as well as impulse noise in transmission lines. Modeling rank modulation involves construction of packings of the space of permutations equipped with the Kendall tau distance. We present several general constructions of codes in permutations that cover a broad range of code parameters. In particular, we show a number of ways in which conventional error-correcting codes can be modified to correct errors in the Kendall space. The codes that we construct afford simple encoding and decoding algorithms of essentially the same complexity as required to correct errors in the Hamming metric. For instance, from binary BCH codes we obtain codes correcting $t$ Kendall errors in $n$ memory cells that support the order of $n!/(\log_2 n!)^t$ messages, for any constant $t = 1, 2, \ldots$ We also construct families of codes that correct a number of errors that grows with $n$ at varying rates, from $\Theta(n)$ to $\Theta(n^2)$. One of our constructions gives rise to a family of rank modulation codes for which the trade-off between the number of messages and the number of correctable Kendall errors approaches the optimal scaling rate. Finally, we list a number of possibilities for constructing codes of finite length, and give examples of rank modulation codes with specific parameters. Comment: Submitted to IEEE Transactions on Information Theory.
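
    A quick illustration of the metric involved (my own Python sketch, not code from the paper): the Kendall tau distance between two permutations counts the pairs whose relative order differs, which equals the minimum number of adjacent transpositions needed to turn one ranking of cells into the other, i.e., the number of Kendall errors separating them.

    ```python
    from itertools import combinations

    def kendall_tau_distance(sigma, pi):
        """Number of pairs ordered differently by the two permutations;
        equals the minimum number of adjacent transpositions turning
        sigma into pi."""
        assert sorted(sigma) == sorted(pi)
        pos = {v: i for i, v in enumerate(pi)}  # position of each element in pi
        dist = 0
        for a, b in combinations(sigma, 2):     # a appears before b in sigma
            if pos[a] > pos[b]:                 # pi reverses their order: discordant pair
                dist += 1
        return dist

    # A single adjacent transposition is one Kendall error.
    print(kendall_tau_distance((1, 2, 3, 4), (2, 1, 3, 4)))  # -> 1
    print(kendall_tau_distance((1, 2, 3, 4), (4, 3, 2, 1)))  # -> 6
    ```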

    Balanced Modulation for Nonvolatile Memories

    This paper presents a practical writing/reading scheme for nonvolatile memories, called balanced modulation, for minimizing the asymmetric component of errors. The main idea is to encode data using a balanced error-correcting code. When reading information from a block, the scheme adjusts the reading threshold such that the resulting word is also balanced or approximately balanced. Balanced modulation has suboptimal performance for any cell-level distribution, and it can be easily implemented in current nonvolatile memory systems. Furthermore, we study the construction of balanced error-correcting codes, in particular balanced LDPC codes, which have very efficient encoding and decoding algorithms and are more efficient than prior constructions of balanced error-correcting codes.
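
    The threshold-adjustment step described above can be pictured with a small sketch (an illustrative assumption on my part, not the paper's implementation): given the analog levels read from a block, place the threshold so that exactly half of the cells read as 1, which is what a balanced codeword requires; a common asymmetric drift of all levels then leaves the decoded word unchanged.

    ```python
    def balanced_read(levels):
        """Read a block of analog cell levels with a data-dependent threshold
        so that the resulting binary word is balanced (half zeros, half ones).
        len(levels) is assumed even."""
        n = len(levels)
        order = sorted(range(n), key=lambda i: levels[i])
        word = [0] * n
        for i in order[n // 2:]:   # the n/2 cells with the highest levels read as 1
            word[i] = 1
        return word

    # Asymmetric drift lowers all levels, but the relative order,
    # and hence the balanced word, is preserved.
    print(balanced_read([0.9, 0.2, 0.8, 0.1]))    # -> [1, 0, 1, 0]
    print(balanced_read([0.5, 0.1, 0.45, 0.05]))  # same word after drift
    ```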

    Data Representation for Efficient and Reliable Storage in Flash Memories

    Recent years have witnessed a proliferation of flash memories as an emerging storage technology with wide applications in many important areas. Like magnetic recording and optical recording, flash memories have their own distinct properties and usage environment, which introduce very interesting new challenges for data storage. They include accurate programming without overshooting, error correction, reliably writing data to flash memories at low voltages, and file recovery for flash memories. Solutions to these problems can significantly improve the longevity and performance of storage systems based on flash memories. In this work, we explore several new data representation techniques for efficient and reliable data storage in flash memories. First, we present a new data representation scheme, rank modulation with multiplicity, to eliminate the overshooting and charge leakage problems in flash memories. Next, we study Half-Wits, the stochastic behavior of writing data to embedded flash memories at voltages lower than recommended by a microcontroller's specifications, and propose three software-only algorithms that enable reliable storage at low voltages without modifying hardware, which can reduce energy consumption by 30%. Then, we address the file erasure recovery problem in flash memories. Instead of relying only on traditional error-correcting codes, we design a new content-assisted decoder (CAD) to recover text files. The new CAD can be combined with existing error-correcting codes, and the experimental results show that CAD outperforms traditional error-correcting codes.
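
    To make the rank-modulation idea above concrete, here is a minimal reading sketch (my own illustration, not from the dissertation): the stored symbol is the ranking of the cells by charge level, so only relative levels matter and uniform charge leakage does not corrupt the data; rank modulation with multiplicity generalizes this by letting groups of cells share a rank.

    ```python
    def read_ranking(levels):
        """Recover the stored permutation from analog cell levels: the value
        is the ranking of cells from highest to lowest charge, so a common
        charge loss across the block leaves the data unchanged."""
        return tuple(sorted(range(len(levels)), key=lambda i: -levels[i]))

    print(read_ranking([0.7, 0.3, 0.9, 0.5]))  # -> (2, 0, 3, 1)
    print(read_ranking([0.5, 0.1, 0.7, 0.3]))  # uniform leakage of 0.2: same ranking
    ```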

    Algorithms and Data Representations for Emerging Non-Volatile Memories

    The evolution of data storage technologies has been extraordinary. Hard disk drives that fit in today's personal computers have a capacity that would have required tons of transistors to achieve in the 1970s. Today, we are at the beginning of the era of non-volatile memory (NVM). NVMs provide excellent performance such as random access, high I/O speed, and low power consumption. The storage density of NVMs keeps increasing following Moore's law. However, higher storage density also brings significant data reliability issues. As chip geometries scale down, memory cells (e.g., transistors) are packed much closer to each other, and noise in the devices is no longer negligible. Consequently, data become more prone to errors and devices have much shorter lifetimes. This dissertation focuses on mitigating the reliability and endurance issues of two major NVMs, namely NAND flash memory and phase-change memory (PCM). Our main research tools are a set of coding techniques for the communication channels implied by flash memory and PCM. To approach these problems, at the bit level we design error-correcting codes tailored to the asymmetric errors in flash and PCM, propose a joint coding scheme for endurance and reliability, develop error-scrubbing methods for controlling storage channel quality, and study codes that are inherently resistant to typical errors in flash and PCM; at higher levels, we analyze the structure and meaning of the stored data and propose methods that pass such metadata down to further improve the coding performance at the bit level. The highlights of this dissertation include the first set of write-once memory code constructions that correct a significant number of errors, a practical framework that corrects errors by exploiting the redundancy in text, the first report of the performance of polar codes for flash memories, and the emulation of rank modulation codes in NAND flash chips.
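
    As one concrete instance of the write-once memory (WOM) codes mentioned above, here is the classic Rivest-Shamir construction in a short sketch of my own (not a construction from the dissertation): two successive 2-bit messages are stored in 3 binary cells whose levels may only increase, which is exactly the constraint flash cells obey between block erasures.

    ```python
    # Rivest-Shamir WOM code: write a 2-bit message twice into 3 cells
    # whose bits may only change from 0 to 1.
    FIRST = {'00': '000', '01': '100', '10': '010', '11': '001'}
    SECOND = {m: ''.join('1' if b == '0' else '0' for b in c) for m, c in FIRST.items()}

    def decode(state):
        table = FIRST if state.count('1') <= 1 else SECOND
        return next(m for m, c in table.items() if c == state)

    def write(state, message):
        """Return the new cell state after writing `message`."""
        if decode(state) == message:
            return state                      # same message: nothing to change
        target = FIRST[message] if state == '000' else SECOND[message]
        # No cell is ever forced from 1 back to 0.
        assert all(not (s == '1' and t == '0') for s, t in zip(state, target))
        return target

    s = '000'
    s = write(s, '10'); print(s, decode(s))   # 010 10
    s = write(s, '01'); print(s, decode(s))   # 011 01
    ```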

    Low-Complexity LP Decoding of Nonbinary Linear Codes

    Linear Programming (LP) decoding of Low-Density Parity-Check (LDPC) codes has attracted much attention in the research community in the past few years. LP decoding has been derived for binary and nonbinary linear codes. However, the most important problem with LP decoding for both binary and nonbinary linear codes is that the complexity of standard LP solvers such as the simplex algorithm remains prohibitively large for codes of moderate to large block length. To address this problem, two low-complexity LP (LCLP) decoding algorithms for binary linear codes have been proposed by Vontobel and Koetter, henceforth called the basic LCLP decoding algorithm and the subgradient LCLP decoding algorithm. In this paper, we generalize these LCLP decoding algorithms to nonbinary linear codes. The computational complexity per iteration of the proposed nonbinary LCLP decoding algorithms scales linearly with the block length of the code. A modified BCJR algorithm for efficient check-node calculations in the nonbinary basic LCLP decoding algorithm is also proposed, which has complexity linear in the check node degree. Several simulation results are presented for nonbinary LDPC codes defined over Z_4, GF(4), and GF(8) using quaternary phase-shift keying and 8-phase-shift keying, respectively, over the AWGN channel. It is shown that for some group-structured LDPC codes, the error-correcting performance of the nonbinary LCLP decoding algorithms is similar to or better than that of the min-sum decoding algorithm. Comment: To appear in IEEE Transactions on Communications, 201
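
    For orientation, the binary LP decoding problem that these algorithms relax and generalize can be written as follows (standard Feldman-style notation, summarized by me rather than excerpted from the paper; the symbols $\gamma_i$, $f_i$, and $N(j)$ are that standard notation): minimize the correlation with the channel log-likelihood ratios over the fundamental polytope, whose local constraints come from each check node.

    ```latex
    % LP decoding of a binary linear code (Feldman et al.), where \gamma_i is
    % the channel LLR for bit i and N(j) is the set of bits in check j.
    \begin{align*}
      \min_{f \in [0,1]^n} \quad & \sum_{i=1}^{n} \gamma_i f_i,
          \qquad \gamma_i = \log \frac{P(y_i \mid c_i = 0)}{P(y_i \mid c_i = 1)} \\
      \text{s.t.} \quad & \sum_{i \in S} f_i - \sum_{i \in N(j) \setminus S} f_i \le |S| - 1
          \qquad \text{for every check } j \text{ and every } S \subseteq N(j),\ |S| \text{ odd.}
    \end{align*}
    ```

    The number of odd-subset constraints grows exponentially with the check-node degree, which is one motivation for iterative low-complexity solvers whose per-iteration cost is linear in the block length.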

    Towards Endurable, Reliable and Secure Flash Memories-a Coding Theory Application

    Storage systems are experiencing a historical paradigm shift from hard disks to nonvolatile memories due to their advantages such as higher density, smaller size, and non-volatility. On the other hand, solid-state drives (SSDs) also pose critical challenges to application and system designers. The first challenge is endurance: flash memory can experience only a limited number of program/erase cycles, and beyond that the cell quality degradation can no longer be accommodated by the memory system's fault-tolerance capacity. The second challenge is reliability: flash cells are sensitive to various noise and disturb mechanisms, i.e., data may change unintentionally after experiencing noise or disturbs. The third challenge is security: it is impossible or costly to delete files from flash memory securely without leaking information to possible eavesdroppers. In this dissertation, we first study noise modeling and capacity analysis for NAND flash memories (the most popular flash memory on the market), which gives us insight into how flash memories work and into their unique noise. Second, based on the characteristics of content-replication codewords in flash memories, we propose a joint decoder to enhance flash memory reliability. Third, we explore data representation schemes in flash memories and optimal rewriting code constructions in order to solve the endurance problem. Fourth, in order to make our rewriting codes more practical, we study noisy write-efficient memories and Write-Once Memory (WOM) codes against inter-cell interference in NAND memories. Finally, motivated by the secure deletion problem in flash memories, we study coding schemes to solve both the endurance and the security issues in flash memories. This work presents a series of information-theoretic and coding-theoretic studies on the three critical issues above, and shows how coding theory can be utilized to address these challenges.
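
    As a small illustration of the noise-modeling and capacity-analysis step (a generic sketch under the common simplifying assumption that retention errors behave like a binary asymmetric channel; the dissertation's actual channel model is more detailed), the following Python code numerically maximizes mutual information over the input distribution.

    ```python
    import numpy as np

    def mutual_information(p1, p01, p10):
        """I(X;Y) in bits for a binary asymmetric channel with
        P(Y=1|X=0)=p01, P(Y=0|X=1)=p10, and P(X=1)=p1."""
        px = np.array([1 - p1, p1])
        pygx = np.array([[1 - p01, p01], [p10, 1 - p10]])  # rows: x, cols: y
        pxy = px[:, None] * pygx
        py = pxy.sum(axis=0)
        with np.errstate(divide='ignore', invalid='ignore'):
            terms = pxy * np.log2(pxy / (px[:, None] * py[None, :]))
        return np.nansum(terms)

    def capacity(p01, p10, grid=2001):
        """Approximate channel capacity by a grid search over P(X=1)."""
        return max(mutual_information(p1, p01, p10) for p1 in np.linspace(0.0, 1.0, grid))

    # Flash-like asymmetry: 1 -> 0 (charge loss) errors dominate 0 -> 1 errors.
    print(round(capacity(p01=0.001, p10=0.05), 4))
    ```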

    Error Correcting Coding for a Non-symmetric Ternary Channel

    Ternary channels can be used to model the behavior of some memory devices, where information is stored in three different levels. In this paper, error-correcting coding is considered for a ternary channel where some of the error transitions are not allowed. The resulting channel is non-symmetric; therefore, classical linear codes are not optimal for it. We define the maximum-likelihood (ML) decoding rule for ternary codes over this channel and show that it is complex to compute, since it depends on the channel error probability. A simpler alternative decoding rule which depends only on code properties, called $d_A$-decoding, is then proposed. It is shown that $d_A$-decoding and ML decoding are equivalent, i.e., $d_A$-decoding is optimal, under certain conditions. Assuming $d_A$-decoding, we characterize the error-correcting capabilities of ternary codes over the non-symmetric ternary channel. We also derive an upper bound and a constructive lower bound on the size of codes, given the code length and the minimum distance. The results arising from the constructive lower bound are then compared, for short code lengths, to optimal codes (in terms of code size) found by a clique-based search. It is shown that the proposed construction method gives good codes, and that in some cases the codes are optimal. Comment: Submitted to IEEE Transactions on Information Theory. Part of this work was presented at the Information Theory and Applications Workshop 200
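
    The clique-based search mentioned at the end can be sketched as follows (a hedged illustration of mine: it uses the plain Hamming distance over the ternary alphabet as a stand-in for the paper's channel-specific distance, and networkx for the clique search): build a graph whose vertices are the ternary words and whose edges join words at distance at least d, so that a code of minimum distance d is exactly a clique, then ask for a maximum clique.

    ```python
    from itertools import product
    import networkx as nx  # pip install networkx

    def hamming(u, v):
        return sum(a != b for a, b in zip(u, v))

    def best_code(n, d, q=3):
        """Largest length-n code over a q-ary alphabet with minimum pairwise
        distance >= d, found as a maximum clique (small instances only)."""
        words = list(product(range(q), repeat=n))
        g = nx.Graph()
        g.add_nodes_from(words)
        g.add_edges_from((u, v) for i, u in enumerate(words)
                         for v in words[i + 1:] if hamming(u, v) >= d)
        clique, _ = nx.max_weight_clique(g, weight=None)
        return clique

    code = best_code(n=4, d=3)
    print(len(code))  # -> 9: matches the perfect ternary Hamming code of length 4
    print(code[:3])
    ```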