Phased burst error-correcting array codes
Various aspects of single-phased burst-error-correcting array codes are explored. These codes are composed of two-dimensional arrays with row and column parities and a diagonally cyclic readout order; they are capable of correcting a single burst error along one diagonal. Optimal codeword sizes are found to have dimensions n1×n2 such that n2 is the smallest prime number larger than n1. These codes are capable of reaching the Singleton bound. A new type of error, approximate errors, is defined; in q-ary applications, these errors cause data to be slightly corrupted and therefore still close to the true data level. Phased burst array codes can be tailored to correct these errors at even higher rates than before.
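The optimal-size rule quoted above (n2 is the smallest prime larger than n1) is easy to compute. A minimal sketch, with function names of our own invention rather than anything from the paper:

```python
def is_prime(n):
    """Trial-division primality test; adequate for small array dimensions."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def optimal_columns(n1):
    """Smallest prime n2 strictly larger than the row count n1."""
    n2 = n1 + 1
    while not is_prime(n2):
        n2 += 1
    return n2

for n1 in (4, 6, 10, 12):
    print(n1, "->", optimal_columns(n1))  # 4 -> 5, 6 -> 7, 10 -> 11, 12 -> 13
```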
On-Chip ECC for Multi-Level Random Access Memories
In this talk we investigate a number of on-chip coding techniques for the protection of random access memories which use multi-level as opposed to binary storage cells. The motivation for such RAM cells is of course the storage of several bits per cell as opposed to one bit per cell [1]. Since the typical number of levels which a multi-level RAM can handle is 16 (the cell being based on a standard DRAM cell which has varying amounts of voltage stored on it), four bits are recorded into each cell [2]. The disadvantage of multi-level RAMs is that they are much more prone to errors, so on-chip ECC is essential for reliable operation.
There are essentially three reasons for error-control coding in multi-level RAMs: to correct soft errors, to correct hard errors, and to correct read errors. The sources of these errors are, respectively, alpha-particle radiation, hardware faults, and data-level ambiguities. On-chip error correction can be used to increase the mean life before failure for all three types of errors.
Coding schemes can be both bitwise and cellwise. Bitwise schemes include simple parity checks and SEC-DED codes, either by themselves or as product codes [3]. Data organization should allow for burst error correction, since an alpha particle can wipe out all four bits in a single cell and, for dense memory chips, data in surrounding cells as well. This latter effect becomes more serious as feature sizes are scaled and a single alpha-particle hit affects many adjacent cells. Burst codes such as those in [4] can be used to correct these errors. Bitwise coding schemes are more efficient in correcting read errors, since they can correct single-bit errors and leave the remaining error-correction power to be used elsewhere. Read errors essentially affect one bit only, since the use of Gray codes for encoding the bits into the memory cells ensures that at most one bit is flipped with each successive change in level.
Cellwise schemes include Reed-Solomon codes, hexadecimal codes, and product codes. However, simple encoding and decoding algorithms are necessary, since the excessive chip area taken by powerful but complex encoding/decoding circuits may be better spent on more parity cells and simpler codes. These techniques are more useful for correcting hard and soft errors, which affect the entire cell. They tend to be more complex, and they are not as efficient in correcting read errors as the bitwise codes.
In the talk we will investigate the suitability and performance of various multi-level RAM coding schemes, such as row-column codes, burst codes, hexadecimal codes, Reed-Solomon codes, concatenated codes, and some new majority-logic decodable codes. In particular we investigate their tolerance to soft errors and to feature-size scaling.
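The Gray-code property invoked above, that a one-level read error flips at most one bit, can be checked directly. A quick sketch using the standard binary-reflected Gray code (not necessarily the specific mapping used in any particular chip):

```python
def gray(n):
    """Binary-reflected Gray code of n."""
    return n ^ (n >> 1)

# 16-level cell -> 4 bits per cell; adjacent levels must differ in one bit.
levels = [gray(v) for v in range(16)]
for a, b in zip(levels, levels[1:]):
    # Hamming distance between codes of adjacent levels is exactly 1.
    assert bin(a ^ b).count("1") == 1
print("a one-level read error flips exactly one bit")
```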
A static RAM chip with on-chip error correction
This paper describes a 2-kb CMOS static RAM with on-chip error-correction capability (ECCRAM chip). The chip employs the linear sum code (LSC) technique to perform error detection and correction. The ECCRAM chip has been fabricated in a double-metal scalable CMOS process with 3-µm feature size. Testing results of the actual chip show a significant improvement in random error tolerance.
Single Phased Burst Error Correcting Array Codes
Array codes composed of row and column parities with a diagonally cyclic readout order are capable of correcting
a single burst error along one diagonal. A new equation which defines permissible array sizes is presented. These codes have an optimal size which is shown to be a number-theoretic problem. In addition, correction of approximate errors is presented; this can be generalized for many classes of error-correcting codes.
Locally Adaptive Vector Quantization For Image Compression
In this paper we study various improvements to a locally
adaptive vector quantization (LAVQ) algorithm. The
effects of including bit stripping, index compression, and filtering techniques will be discussed. Software implementation and comparisons with non-adaptive vector
quantization algorithms will be studied.
Analyses of coding and compression strategies for data storage and transmission
Selected topics in error correction coding and data compression for data storage and transmission will be analyzed here. In particular, a model for the mean time to failure for computer memories protected by error correction coding, characteristics and applications of phased burst error correcting array codes, and locally adaptive vector quantization for image and data compression will be examined.
A model of the mean time to failure (MTTF) of semiconductor random access memories protected by single error correcting-double error detecting (SEC-DED) codes on the chip and with soft error scrubbing and multiple types of hard failures will be presented. Only a few assumptions and approximations will be made. This model will provide a more complete picture of the expected failure modes, reliability, and the mean time to failure of memory systems protected by on-chip error correction coding. Special cases will also be addressed, such as slow or fast scrubbing and dominance of hard or soft errors.
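The intuition behind scrubbing in such a model can be illustrated with a toy Monte Carlo simulation. This sketch is our own illustration, not the model presented in this work, and all rates and sizes are made-up values: a SEC-DED-protected word is lost only when a second soft error lands in it before a scrub pass clears the first.

```python
import random

def time_to_double_error(words, soft_rate, scrub_interval):
    """Simulate until some word holds two soft errors at once.

    words: number of protected memory words
    soft_rate: per-word soft-error rate (errors per unit time)
    scrub_interval: time between scrub passes that rewrite corrected words
    """
    t = 0.0
    next_scrub = scrub_interval
    errors = [0] * words
    while True:
        # Exponential waiting time for the next soft error anywhere in memory.
        t += random.expovariate(words * soft_rate)
        # Apply any scrub passes that occurred before this error arrived.
        while t >= next_scrub:
            errors = [0] * words
            next_scrub += scrub_interval
        w = random.randrange(words)
        errors[w] += 1
        if errors[w] == 2:  # double error: beyond SEC-DED's correction power
            return t

random.seed(1)
trials = 200
fast = sum(time_to_double_error(64, 0.01, 1.0) for _ in range(trials)) / trials
slow = sum(time_to_double_error(64, 0.01, 1e12) for _ in range(trials)) / trials
print(f"mean time to failure: {fast:.1f} with scrubbing, {slow:.1f} without")
```

With fast scrubbing, single errors rarely survive long enough to pair up, so the simulated MTTF is far larger than in the unscrubbed case; this is the qualitative effect the analytic model captures.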
Characteristics of a family of phased burst error correcting array codes will be addressed. In particular, allowable and optimal code sizes will be examined. When used in non-binary applications, these codes retain their characteristics and can correct "approximate" errors at an even higher rate: if the amount by which any q-ary symbol can be in error is bounded by some value, these codes can be designed to address this type of error with even fewer check symbols.
Improvements to a locally adaptive vector quantization compression strategy will be discussed. The basic strategy involves reorganization of the code book after each use so that the most recent codewords are moved to the front. With the various improvements covered in this work, the algorithm is capable of matching the performance of other more computationally intensive algorithms at a fraction of the computational complexity.
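The "move the most recent codewords to the front" step described above is a move-to-front (MTF) policy. A minimal sketch over a symbol alphabet (the alphabet and function names are illustrative, not the LAVQ codebook itself):

```python
def mtf_encode(data, alphabet):
    """Emit each symbol's current position, then move it to the front.

    Recently used symbols get small indices, which a later entropy coder
    can represent cheaply."""
    book = list(alphabet)
    out = []
    for s in data:
        i = book.index(s)
        out.append(i)
        book.insert(0, book.pop(i))
    return out

def mtf_decode(indices, alphabet):
    """Invert mtf_encode by replaying the same front-moving updates."""
    book = list(alphabet)
    out = []
    for i in indices:
        s = book[i]
        out.append(s)
        book.insert(0, book.pop(i))
    return out

msg = "banana"
codes = mtf_encode(msg, "abn")
print(codes)  # -> [1, 1, 2, 1, 1, 1]
assert "".join(mtf_decode(codes, "abn")) == msg
```

Because repeated symbols sit at the front of the book, the index stream is dominated by small values, which is what makes the adaptive reorganization pay off in compression.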