
    When Do WOM Codes Improve the Erasure Factor in Flash Memories?

    Flash memory is a write-once medium in which reprogramming cells requires first erasing the block that contains them. The lifetime of the flash is a function of the number of block erasures and can be as small as several thousand. To reduce the number of block erasures, pages, which are the smallest write unit, are rewritten out-of-place in the memory. A write-once memory (WOM) code is a coding scheme that enables writing to a block multiple times before an erasure. However, these codes come with significant rate loss. For example, the rate for writing twice (with the same rate on each write) is at most 0.77. In this paper, we study WOM codes and their tradeoff between rate loss and reduction in the number of block erasures when pages are written uniformly at random. First, we introduce a new measure, called the erasure factor, that reflects both the number of block erasures and the amount of data that can be written on each block. A key point in our analysis is that this tradeoff depends on the specific implementation of WOM codes in the memory. We consider two systems that use WOM codes: a conventional scheme that has been commonly used, and a recent design that preserves the overall storage capacity. While the first system can improve the erasure factor only when the storage rate is at most 0.6442, we show that the second scheme always improves this figure of merit. Comment: to be presented at ISIT 201
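
    To make the rewriting idea and its rate loss concrete, the following minimal sketch (Python, illustrative only and not from the paper) implements the classic Rivest–Shamir WOM code, which stores a 2-bit message twice in 3 binary cells while only ever raising cell levels between writes; its per-write rate of 2/3 is an example of the kind of loss the abstract quantifies.

    ```python
    # Minimal sketch of the classic Rivest-Shamir [2,3] WOM code:
    # two writes of a 2-bit message into 3 binary cells, where cells may
    # only change from 0 to 1 between writes (no erasure in between).

    FIRST_WRITE = {
        (0, 0): (0, 0, 0),
        (0, 1): (0, 0, 1),
        (1, 0): (0, 1, 0),
        (1, 1): (1, 0, 0),
    }

    def decode(cells):
        """Recover the 2-bit message from the 3 cells."""
        if sum(cells) <= 1:                      # first-generation codeword
            return {v: k for k, v in FIRST_WRITE.items()}[cells]
        comp = tuple(1 - c for c in cells)       # second generation: complement
        return {v: k for k, v in FIRST_WRITE.items()}[comp]

    def write_first(msg):
        return FIRST_WRITE[msg]

    def write_second(cells, msg):
        """Rewrite without lowering any cell: keep the state if the message
        is unchanged, otherwise store the complement of msg's first-write
        codeword."""
        if decode(cells) == msg:
            return cells
        new = tuple(1 - c for c in FIRST_WRITE[msg])
        assert all(n >= c for n, c in zip(new, cells)), "cells may only increase"
        return new

    cells = write_first((1, 0))
    assert decode(cells) == (1, 0)
    cells = write_second(cells, (0, 1))
    assert decode(cells) == (0, 1)
    ```

    The second write stores the complement of the new message's first-write codeword, so no cell ever needs to be lowered before the next block erasure; the price is that each write carries only 2 bits in 3 cells.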

    Time-Space Constrained Codes for Phase-Change Memories

    Phase-change memory (PCM) is a promising non-volatile solid-state memory technology. A PCM cell stores data by using its amorphous and crystalline states. The cell changes between these two states using high temperature. However, since the cells are sensitive to high temperature, it is important, when programming cells, to balance the heat both in time and space. In this paper, we study the time-space constraint for PCM, which was originally proposed by Jiang et al. A code is called an \emph{$(\alpha,\beta,p)$-constrained code} if, for any $\alpha$ consecutive rewrites and for any segment of $\beta$ contiguous cells, the total rewrite cost of the $\beta$ cells over those $\alpha$ rewrites is at most $p$. Here, the cells are binary and the rewrite cost is defined to be the Hamming distance between the current and next memory states. First, we show a general upper bound on the achievable rate of these codes which extends the results of Jiang et al. Then, we generalize their construction for $(\alpha\geq 1, \beta=1, p=1)$-constrained codes and show another construction for $(\alpha=1, \beta\geq 1, p\geq 1)$-constrained codes. Finally, we show that these two constructions can be used to construct codes for all values of $\alpha$, $\beta$, and $p$.
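
    The constraint itself is easy to state operationally. The sketch below (illustrative Python, not from the paper) checks whether a given sequence of binary memory states satisfies the $(\alpha,\beta,p)$ condition by summing Hamming-distance rewrite costs over every window of $\alpha$ consecutive rewrites and $\beta$ contiguous cells.

    ```python
    # A minimal sketch that checks whether a sequence of binary memory
    # states satisfies the (alpha, beta, p) time-space constraint: over any
    # alpha consecutive rewrites and any window of beta contiguous cells,
    # the total number of cell flips is at most p.

    def rewrite_cost(prev, curr, lo, hi):
        """Hamming distance between two states, restricted to cells lo..hi-1."""
        return sum(a != b for a, b in zip(prev[lo:hi], curr[lo:hi]))

    def satisfies_constraint(states, alpha, beta, p):
        n = len(states[0])
        rewrites = list(zip(states, states[1:]))          # consecutive rewrites
        for t in range(len(rewrites) - alpha + 1):        # every alpha-rewrite window
            for c in range(n - beta + 1):                 # every beta-cell window
                cost = sum(rewrite_cost(prev, curr, c, c + beta)
                           for prev, curr in rewrites[t:t + alpha])
                if cost > p:
                    return False
        return True

    # Three states of 4 cells; with alpha=1, beta=2, p=1 the second rewrite
    # violates the constraint because two adjacent cells flip at once.
    states = [[0, 0, 0, 0],
              [1, 0, 0, 0],
              [1, 1, 1, 0]]
    print(satisfies_constraint(states, alpha=1, beta=2, p=1))  # False
    ```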

    Trajectory Codes for Flash Memory

    Flash memory is well known for its inherent asymmetry: the flash-cell charge levels are easy to increase but hard to decrease. In a general rewriting model, the stored data changes its value according to certain patterns. The patterns of data updates are determined by the data structure and the application, and are independent of the constraints imposed by the storage medium. Thus, an appropriate coding scheme is needed so that the data changes can be updated and stored efficiently under the storage medium's constraints. In this paper, we define the general rewriting problem using a graph model. It extends many known rewriting models such as floating codes, WOM codes, buffer codes, etc. We present a new rewriting scheme for flash memories, called the trajectory code, for rewriting the stored data as many times as possible without block erasures. We prove that the trajectory code is asymptotically optimal in a wide range of scenarios. We also present randomized rewriting codes optimized for expected performance (given arbitrary rewriting sequences). Our rewriting codes are shown to be asymptotically optimal. Comment: Submitted to IEEE Trans. on Inform. Theor
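
    The graph model can be pictured as follows: vertices are the possible data values, a directed edge means that one update may change the first value into the second, and a sequence of updates is then a walk in this graph. The minimal sketch below (illustrative only, not the paper's construction) encodes exactly that check.

    ```python
    # A toy version of the graph rewriting model: vertices are data values,
    # edges are the value changes one update may cause, and a sequence of
    # stored values is valid if it traces a walk in this graph.
    from typing import Dict, Hashable, Iterable, Set

    def is_valid_update_sequence(graph: Dict[Hashable, Set[Hashable]],
                                 values: Iterable[Hashable]) -> bool:
        values = list(values)
        return all(nxt in graph.get(cur, set())
                   for cur, nxt in zip(values, values[1:]))

    # Example: a counter that can only stay the same or increase by one per update.
    counter_graph = {0: {0, 1}, 1: {1, 2}, 2: {2, 3}, 3: {3}}
    print(is_valid_update_sequence(counter_graph, [0, 1, 1, 2]))  # True
    print(is_valid_update_sequence(counter_graph, [0, 2]))        # False
    ```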

    Coding scheme for 3D vertical flash memory

    Recently introduced 3D vertical flash memory is expected to be a disruptive technology, since it overcomes the scaling challenges of conventional 2D planar flash memory by stacking cells in the vertical direction. However, 3D vertical flash memory suffers from a new problem known as fast detrapping, a rapid charge-loss phenomenon. In this paper, we propose a scheme to compensate for the effect of fast detrapping by intentional inter-cell interference (ICI). In order to properly control the intentional ICI, our scheme relies on a coding technique that incorporates side information about fast detrapping during the encoding stage. This technique is closely connected to the well-known problem of coding for a memory with defective cells. Numerical results show that the proposed scheme can effectively address the problem of fast detrapping. Comment: 7 pages, 9 figures. Accepted to ICC 2015. arXiv admin note: text overlap with arXiv:1410.177
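
    For readers unfamiliar with coding for memories with defective cells, the following toy sketch (an assumed setup for illustration, not the paper's construction) shows the core idea: an encoder that knows a cell is stuck chooses between a data word and its complement, signalled by a flag cell, so that the stuck cell's value is respected while the decoder needs no knowledge of the defect.

    ```python
    # A toy "stuck-at cell" masking scheme: 3 data bits are stored in 4
    # cells either directly or complemented, with cell 0 acting as a flag.
    # The encoder, which knows the defect, picks whichever variant agrees
    # with the stuck cell; the decoder never needs the defect location.

    def encode(data, stuck):
        """data: 3 bits; stuck: dict {cell_index: forced_value}."""
        for flag in (0, 1):
            word = [flag] + [(b + flag) % 2 for b in data]
            if all(word[i] == v for i, v in stuck.items()):
                return word
        raise ValueError("defect pattern cannot be masked")

    def decode(word):
        flag = word[0]
        return [(b + flag) % 2 for b in word[1:]]

    data = [1, 0, 1]
    word = encode(data, stuck={2: 0})   # cell 2 is stuck at value 0
    assert word[2] == 0 and decode(word) == data
    ```

    A single stuck cell can always be masked here because the two candidate words differ in every position; richer defect patterns require larger masking codes, which is where the classical theory of memories with defective cells comes in.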

    Signal Processing for Caching Networks and Non-volatile Memories

    The recent information explosion has created a pressing need for faster and more reliable data storage and transmission schemes. This thesis focuses on two systems: caching networks and non-volatile storage systems. It proposes network protocols to improve the efficiency of information delivery, as well as signal processing schemes to reduce errors at the physical layer. The thesis first investigates caching and delivery strategies for content delivery networks. Caching is a useful technique for reducing the network burden by prefetching some contents during off-peak hours. Coded caching [1], proposed by Maddah-Ali and Niesen, is the foundation of our algorithms; it has been shown to reduce peak traffic rates by encoding transmissions so that different users can extract different information from the same packet. Content delivery networks store information distributed across multiple servers, so as to balance the load and avoid unrecoverable losses in case of node or disk failures. On the one hand, distributed storage limits the ability to combine content from different servers into a single message, causing performance losses in coded caching schemes. On the other hand, the inherent redundancy in distributed storage systems can be used to improve the performance of those schemes through parallelism. This thesis proposes a scheme that combines distributed storage of the content across multiple servers with an efficient coded caching algorithm for delivery to the users. This scheme is shown to reduce the peak transmission rate below that of state-of-the-art algorithms.
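
    As a concrete illustration of the coded-caching idea (a minimal sketch under assumed parameters, not taken from the thesis), consider 2 users and 2 files, each split into two halves, with caches filled during off-peak hours; a single XOR-ed broadcast at peak time then serves both users' requests, halving the peak transmission relative to sending an uncoded file.

    ```python
    # Minimal sketch of the basic Maddah-Ali/Niesen coded-caching example
    # with 2 users and 2 files. Each file is split into 2 halves; user 1
    # caches the first half of each file, user 2 the second half.

    def xor(a: bytes, b: bytes) -> bytes:
        return bytes(x ^ y for x, y in zip(a, b))

    # Files split into halves: A = A1 + A2, B = B1 + B2 (toy contents).
    A1, A2 = b"AAAA", b"aaaa"
    B1, B2 = b"BBBB", b"bbbb"

    cache_user1 = {"A1": A1, "B1": B1}     # placed during off-peak hours
    cache_user2 = {"A2": A2, "B2": B2}

    # Peak time: user 1 requests file A, user 2 requests file B.
    broadcast = xor(A2, B1)                # single coded transmission

    # Each user removes what it already caches to extract its missing half.
    A2_at_user1 = xor(broadcast, cache_user1["B1"])
    B1_at_user2 = xor(broadcast, cache_user2["A2"])

    assert A2_at_user1 == A2 and B1_at_user2 == B1
    file_A_user1 = cache_user1["A1"] + A2_at_user1     # full file A at user 1
    file_B_user2 = B1_at_user2 + cache_user2["B2"]     # full file B at user 2
    ```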