2,468 research outputs found

    Error control for reliable digital data transmission and storage systems

    A problem in designing semiconductor memories is to provide some measure of error control without requiring excessive coding overhead or decoding time. In LSI and VLSI technology, memories are often organized on a multiple-bit (or byte) per chip basis. For example, some 256K-bit DRAMs are organized as 32K x 8-bit bytes. Byte-oriented codes such as Reed-Solomon (RS) codes can provide efficient, low-overhead error control for such memories. However, the standard iterative algorithm for decoding RS codes is too slow for these applications. In this paper we present some special decoding techniques, capable of high-speed operation, for extended single- and double-error-correcting RS codes. These techniques are designed to find the error locations and the error values directly from the syndrome, without having to use the iterative algorithm to find the error locator polynomial. Two codes are considered: (1) a d_min = 4 single-byte-error-correcting (SBEC), double-byte-error-detecting (DBED) RS code; and (2) a d_min = 6 double-byte-error-correcting (DBEC), triple-byte-error-detecting (TBED) RS code.
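    The direct-from-syndrome idea can be illustrated with a minimal sketch. It assumes a conventional GF(2^8) representation with reduction polynomial 0x11D, primitive element alpha = 0x02, and a two-row parity-check matrix for which a single byte error of value e at position j produces syndromes S0 = e and S1 = e*alpha^j; it shows only the generic single-error case and is not the paper's extended SBEC/DBED construction.

```python
# Minimal sketch of direct-from-syndrome decoding for a single-byte-error-
# correcting RS-style code over GF(2^8) (reduction polynomial 0x11D,
# primitive element alpha = 0x02). Illustrative only; this is the generic
# single-error case, not the paper's extended SBEC/DBED construction.

def gf_mul(a, b):
    """Carry-less multiply in GF(2^8), reducing modulo x^8+x^4+x^3+x^2+1."""
    p = 0
    while b:
        if b & 1:
            p ^= a
        a <<= 1
        if a & 0x100:
            a ^= 0x11D
        b >>= 1
    return p

# Log/antilog tables for alpha = 0x02.
EXP, LOG = [0] * 512, [0] * 256
x = 1
for i in range(255):
    EXP[i], LOG[x] = x, i
    x = gf_mul(x, 0x02)
for i in range(255, 512):
    EXP[i] = EXP[i - 255]

def gf_div(a, b):
    return 0 if a == 0 else EXP[(LOG[a] - LOG[b]) % 255]

def syndromes(word):
    """S0 = sum of r_i, S1 = sum of r_i * alpha^i over the received bytes."""
    s0 = s1 = 0
    for i, r in enumerate(word):
        s0 ^= r
        s1 ^= gf_mul(r, EXP[i])
    return s0, s1

def correct_single_byte(word):
    """Find the error location and value directly from (S0, S1)."""
    s0, s1 = syndromes(word)
    if s0 == 0 and s1 == 0:
        return list(word)                     # no error
    if s0 == 0 or s1 == 0:
        raise ValueError("uncorrectable: more than one byte in error")
    j = LOG[gf_div(s1, s0)]                   # location: j = log_alpha(S1 / S0)
    if j >= len(word):
        raise ValueError("uncorrectable: more than one byte in error")
    fixed = list(word)
    fixed[j] ^= s0                            # error value is S0 itself
    return fixed
```

    Starting from any byte vector whose two syndromes are zero, XOR-ing a nonzero value into one position and calling correct_single_byte recovers the original word; the extended codes treated in the paper add further checks to also detect double- and triple-byte errors.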

    Fast decoding techniques for extended single-and-double-error-correcting Reed Solomon codes

    A problem in designing semiconductor memories is to provide some measure of error control without requiring excessive coding overhead or decoding time. For example, some 256K-bit dynamic random access memories are organized as 32K x 8-bit bytes. Byte-oriented codes such as Reed-Solomon (RS) codes provide efficient, low-overhead error control for such memories. However, the standard iterative algorithm for decoding RS codes is too slow for these applications. Some special high-speed decoding techniques for extended single- and double-error-correcting RS codes are presented. These techniques are designed to find the error locations and the error values directly from the syndrome, without having to form the error locator polynomial and solve for its roots.

    Reed Solomon codes for error control in byte organized computer memory systems

    A problem in designing semiconductor memories is to provide some measure of error control without requiring excessive coding overhead or decoding time. In LSI and VLSI technology, memories are often organized on a multiple-bit (or byte) per chip basis. For example, some 256K-bit DRAMs are organized as 32K x 8-bit bytes. Byte-oriented codes such as Reed-Solomon (RS) codes can provide efficient, low-overhead error control for such memories. However, the standard iterative algorithm for decoding RS codes is too slow for these applications. Some special decoding techniques for extended single- and double-error-correcting RS codes which are capable of high-speed operation are presented. These techniques are designed to find the error locations and the error values directly from the syndrome, without having to use the iterative algorithm to find the error locator polynomial.

    Error control coding for semiconductor memories

    All modern computers have memories built from VLSI RAM chips. Individually, these devices are highly reliable and any single chip may perform for decades before failing. However, when many chips are combined in a single memory, the time until at least one of them fails can shrink to a few hours. The presence of failed chips causes errors when binary data are stored in and read out from the memory, and as a consequence the reliability of the computer memory degrades. These errors are classified into hard errors and soft errors, which can also be termed permanent and temporary errors respectively. In some situations errors show up as random errors, in which both 1-to-0 and 0-to-1 errors occur randomly in a memory word. In other situations the most likely errors are unidirectional errors, in which 1-to-0 errors or 0-to-1 errors may occur, but not both in one particular memory word. To achieve a high-speed and highly reliable computer, we need a large-capacity memory. Unfortunately, with a high density of semiconductor cells in memory, the error rate increases dramatically. In particular, VLSI RAMs suffer from soft errors caused by alpha-particle radiation, so the reliability of the computer could become unacceptable without error-reducing schemes. Several schemes to reduce the effects of memory errors are commonly used in practice, but most of them are valid only for hard errors. As an efficient and economical method, error control coding can be used to overcome both hard and soft errors, and it is therefore becoming a widely used scheme in the computer industry today.
    In this thesis, we discuss error control coding for semiconductor memories. The thesis consists of six chapters. Chapter one is an introduction to error detecting and correcting codes for computer memories: semiconductor memories and their problems are discussed, some schemes for error reduction in computer memories are given, and the advantages of using error control coding over other schemes are presented. In chapter two, after a brief review of memory organizations, memory cells, their physical construction, and the principle of storing data are described. We then analyze the mechanisms of the various errors occurring in semiconductor memories so that, for different errors, different coding schemes can be selected. Chapter three is devoted to fundamental coding theory; background on encoding and decoding algorithms is presented. In chapter four, random error control codes are discussed: error detecting codes, single-error-correcting/double-error-detecting codes, and multiple-error-correcting codes are analyzed, and the decoding implementations for parity codes, Hamming codes, modified Hamming codes, and majority logic codes are demonstrated through examples. This chapter also shows that by combining error control coding with other schemes, the reliability of the memory can be improved by many orders of magnitude. For unidirectional errors, we introduce unordered codes in chapter five. Two types of unordered codes are discussed, systematic and nonsystematic, both of which are very powerful for unidirectional error detection. As an example of an optimal nonsystematic unordered code, an efficient balanced code is analyzed; then, as an example of systematic unordered codes, Berger codes are analyzed. Considering that in practice random errors may still occur in memories dominated by unidirectional errors, some recently developed t-random-error-correcting/all-unidirectional-error-detecting codes are introduced, with illustrative examples to facilitate the explanation. Chapter six presents the conclusions of the thesis. The whole thesis is oriented toward the applications of error control coding for semiconductor memories, and most of the codes discussed are widely used in practice. Throughout the thesis we attempt to provide a review of coding in computer memories and to emphasize its advantages. With the demand for higher speed and higher capacity semiconductor memories, error control coding will clearly play an even more important role in the future.
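    As a concrete illustration of the systematic unordered codes mentioned above, the following minimal sketch (an illustrative example, not drawn from the thesis) implements a Berger code: the check symbol is the binary count of zeros in the data word. Because any unidirectional error pattern moves the data's actual zero count and the stored check value in opposite directions, every such error is detected.

```python
# Minimal sketch of a Berger code: the check symbol is the count of zeros
# in the data bits. A unidirectional error pattern (all 1->0 or all 0->1)
# shifts the data's zero count and the stored check in opposite directions,
# so the mismatch is always detected. Illustrative example, not from the thesis.
from math import ceil, log2

def berger_encode(data_bits):
    """Append the zero-count of data_bits as a fixed-width check field."""
    k = len(data_bits)
    check_width = ceil(log2(k + 1))
    zeros = data_bits.count(0)
    check = [(zeros >> i) & 1 for i in reversed(range(check_width))]
    return data_bits + check

def berger_check(codeword, k):
    """Return True if the stored check matches the data's zero count."""
    check_width = ceil(log2(k + 1))
    data, check = codeword[:k], codeword[k:k + check_width]
    stored = 0
    for b in check:
        stored = (stored << 1) | b
    return stored == data.count(0)

# Example: a single 1->0 error in the data part is caught.
cw = berger_encode([1, 0, 1, 1, 0, 1, 1, 1])
assert berger_check(cw, 8)
cw[0] = 0                      # unidirectional 1->0 error
assert not berger_check(cw, 8)
```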

    Product Error-Correcting Codes That Span NAND-Flash Dies

    NAND-flash memories comprise a matrix of pages, with each column of the matrix located in one physical semiconductor die. Because errors are correlated such that they occur in groups within a die, error-correcting codes (ECC) are optimally constructed across dies (matrix rows), a principle known as cross-die design. A product ECC encodes the rows of the matrix with one constituent code and the columns with another. Although the row constituents of product codes are cross-die, the column constituents are not. This disclosure describes product ECCs in which both constituent codes span semiconductor dies. The described cross-die product codes provide better performance for random errors while maintaining performance comparable to traditional codes for correlated errors, at nearly the same coding overhead. A page in error can be repaired in two distinct ways, both of which are cross-die, thereby improving data reliability and speed of repair.
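    The two repair paths can be illustrated with a toy sketch, which uses simple XOR parity in place of the real constituent codes and a hypothetical diagonal page-to-die mapping. It is meant only to show how both constituents can be made to span dies and how a lost page then has two independent, cross-die recovery paths; it is not the disclosure's actual construction.

```python
# Toy sketch of a product code over a matrix of pages, with XOR parity
# standing in for the real row/column constituent codes. The diagonal
# page-to-die mapping is a hypothetical illustration of making *both*
# constituents span dies; it is not the disclosure's actual construction.
import os

ROWS, COLS, PAGE = 3, 3, 16                    # 3x3 data pages of 16 bytes each

def xor_pages(pages):
    out = bytearray(PAGE)
    for p in pages:
        for i, b in enumerate(p):
            out[i] ^= b
    return bytes(out)

data = [[os.urandom(PAGE) for _ in range(COLS)] for _ in range(ROWS)]
row_parity = [xor_pages(data[r]) for r in range(ROWS)]
col_parity = [xor_pages([data[r][c] for r in range(ROWS)]) for c in range(COLS)]

# Cross-die placement: page (r, c) lands on die (r + c) % DIES, so neither a
# row codeword nor a column codeword keeps two of its pages on the same die.
DIES = max(ROWS, COLS) + 1

def die_of(r, c):
    return (r + c) % DIES

assert all(len({die_of(r, c) for c in range(COLS)}) == COLS for r in range(ROWS))
assert all(len({die_of(r, c) for r in range(ROWS)}) == ROWS for c in range(COLS))

# A page lost at (1, 2) can be rebuilt from its row *or* its column.
lr, lc = 1, 2
from_row = xor_pages([data[lr][c] for c in range(COLS) if c != lc] + [row_parity[lr]])
from_col = xor_pages([data[r][lc] for r in range(ROWS) if r != lr] + [col_parity[lc]])
assert from_row == data[lr][lc] == from_col
```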

    The reliability of single-error protected computer memories

    The lifetimes of computer memories protected with single-error-correcting, double-error-detecting (SEC-DED) codes are studied. The authors assume that there are five possible types of memory chip failure (single-cell, row, column, row-column, and whole chip) and, after making a simplifying assumption (the Poisson assumption), substantiate it experimentally. A simple closed-form expression is derived for the system reliability function. Using this formula and chip reliability data taken from published tables, it is possible to compute the mean time to failure for realistic memory systems.
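    The kind of computation such a reliability formula enables can be sketched with a toy model. It assumes, purely for illustration, only single-cell failures at a guessed per-bit rate, one bit per chip per word, and independent words, and it integrates the resulting system reliability function numerically to estimate the mean time to failure. It does not reproduce the authors' closed-form expression or their five-failure-mode model.

```python
# Toy sketch of the kind of MTTF calculation a SEC-DED reliability formula
# supports. Simplified model: only single-cell failures at an assumed rate,
# one bit per chip per word, independent words. This is NOT the authors'
# closed-form expression; the rate and sizes below are illustrative guesses.
import math

LAMBDA = 1e-9          # assumed per-bit failure rate (failures/hour)
BITS_PER_WORD = 72     # a (72,64) SEC-DED word
WORDS = 1 << 20        # one million words

def r_word(t):
    """Word survives while at most one of its bits has failed."""
    p = math.exp(-LAMBDA * t)
    return p ** BITS_PER_WORD + BITS_PER_WORD * p ** (BITS_PER_WORD - 1) * (1 - p)

def r_system(t):
    """All words must survive (independence assumed in this toy model)."""
    return r_word(t) ** WORDS

def mttf(dt=100.0, horizon=1e6):
    """MTTF = integral of R_sys(t) dt, evaluated with the trapezoid rule."""
    total, t = 0.0, 0.0
    while t < horizon:
        total += dt * (r_system(t) + r_system(t + dt)) / 2.0
        t += dt
    return total

print(f"estimated MTTF for the toy model: {mttf():.3e} hours")
```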

    EDACs and test integration strategies for NAND flash memories

    Mission-critical applications usually present several critical issues: the required level of dependability of the whole mission always implies addressing different and contrasting dimensions and evaluating the trade-offs among them. A mass-memory device is needed in virtually all mission-critical applications, and NAND flash memories can be used for this purpose. Error Detection And Correction (EDAC) techniques are needed to improve the dependability of flash-memory devices, but testing strategies also need to be explored in order to provide highly dependable systems. Integrating these two aspects results in a fault-tolerant mass-memory device, yet no systematic approach has so far been proposed to consider them as a whole. Consequently, a novel strategy integrating a particular code-based design environment with newly selected testing strategies is presented in this paper.

    Testing Embedded Memories in Telecommunication Systems

    Extensive system testing is mandatory nowadays to achieve high product quality. Telecommunication systems are particularly sensitive to this requirement; to maintain market competitiveness, manufacturers need to combine reduced costs, shorter life cycles, advanced technologies, and high quality. Moreover, strict reliability constraints usually impose very low fault latencies and a high degree of fault detection for both permanent and transient faults. This article analyzes the major problems related to testing complex telecommunication systems, with particular emphasis on their memory modules, which are often critical from the reliability point of view. In particular, advanced BIST-based solutions are analyzed, and two significant industrial case studies are presented.
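    To give a concrete flavor of the memory-test algorithms that BIST schemes typically implement, the sketch below runs the classic March C- test over a simulated memory array. This is a standard textbook algorithm shown for illustration only; the article's own BIST solutions are not reproduced here.

```python
# Sketch of the classic March C- memory test over a simulated RAM:
#   up(w0); up(r0,w1); up(r1,w0); down(r0,w1); down(r1,w0); up(r0)
# A standard textbook BIST algorithm, shown only to give the flavor of the
# memory tests discussed in the article; it is not the article's own solution.

def march_c_minus(mem):
    """Return True if the bit array `mem` passes March C-."""
    n = len(mem)

    def element(addresses, expect, write):
        for a in addresses:
            if expect is not None and mem[a] != expect:
                return False              # read value differs from expected
            if write is not None:
                mem[a] = write
        return True

    return (element(range(n), None, 0)                # up(w0)
            and element(range(n), 0, 1)               # up(r0, w1)
            and element(range(n), 1, 0)               # up(r1, w0)
            and element(reversed(range(n)), 0, 1)     # down(r0, w1)
            and element(reversed(range(n)), 1, 0)     # down(r1, w0)
            and element(range(n), 0, None))           # up(r0)

# A cell stuck at 1 (simulated by overriding writes to address 3) is detected.
class StuckAt1(list):
    def __setitem__(self, i, v):
        super().__setitem__(i, 1 if i == 3 else v)

assert march_c_minus([0] * 16)
assert not march_c_minus(StuckAt1([0] * 16))
```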