Exploration and Analysis of Combinations of Hamming Codes in 32-bit Memories
Reducing the threshold voltage of electronic devices dramatically increases
their sensitivity to electromagnetic radiation, raising the probability that
the content of memory cells will change. Designers mitigate such failures
using techniques such as Error Correction Codes (ECCs) to maintain information
integrity. Although there are several studies of ECC usage in memories for
space applications, there is still no consensus on the choice of ECC or on its
organization in memory. This work analyzes configurations of Hamming codes
applied to 32-bit memories intended for space applications. It proposes the
use of three Hamming codes, Ham(31,26), Ham(15,11), and Ham(7,4), as well as
combinations of these codes, and analyzes them against 36 error patterns
ranging from one to four bit flips. The experimental results show that the
Ham(31,26) configuration, which uses five redundancy bits, obtained the
highest single-error correction rate, almost 97%, with double-, triple-, and
quadruple-error correction rates of 78.7%, 63.4%, and 31.4%, respectively. In
contrast, an ECC configuration comprising four Ham(7,4) codes, which uses
twelve redundancy bits, corrects only 87.5% of single errors.
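As background for the Ham(7,4) building block discussed above, a minimal single-error-correcting encoder/decoder can be sketched as follows. This is the illustrative textbook construction, not the specific 32-bit memory organization evaluated in the paper:

```python
# Minimal Hamming(7,4) sketch: one parity bit per syndrome position.
# Codeword layout (1-indexed positions): p1 p2 d1 p3 d2 d3 d4.

def ham74_encode(d):
    """Encode 4 data bits [d1, d2, d3, d4] into a 7-bit codeword."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4
    p2 = d1 ^ d3 ^ d4
    p3 = d2 ^ d3 ^ d4
    return [p1, p2, d1, p3, d2, d3, d4]

def ham74_correct(c):
    """Compute the syndrome and flip the indicated bit, if any.
    Returns (corrected codeword, decoded data bits)."""
    c = list(c)
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]   # checks positions 1, 3, 5, 7
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]   # checks positions 2, 3, 6, 7
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]   # checks positions 4, 5, 6, 7
    syndrome = s1 + 2 * s2 + 4 * s3  # 0 = no error, else error position
    if syndrome:
        c[syndrome - 1] ^= 1
    return c, [c[2], c[4], c[5], c[6]]

# Flip one bit and verify that the decoder repairs it.
word = ham74_encode([1, 0, 1, 1])
corrupted = list(word)
corrupted[4] ^= 1
fixed, data = ham74_correct(corrupted)
assert fixed == word and data == [1, 0, 1, 1]
```

Any single bit flip yields a nonzero syndrome equal to the flipped position, which is why one of these codes corrects every 1-bit error but, as the abstract notes, only a fraction of multi-bit patterns.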
A Novel Encoding Scheme for Cross-Talk Effect Minimization Using Error Detecting and Correcting Codes
In this paper, a new bus encoding method is presented that reduces crosstalk effects while also providing error correction. The method finds a subset of a crosstalk avoidance code (CAC) that provides error correction, reducing crosstalk-induced delay on buses that implement an error-detecting/correcting code. We propose a Fibonacci representation of single-error-correcting Hamming codes to avoid crosstalk-induced delay. The proposed method never requires extra wires for a check bus, and it can also improve bus performance and reduce power dissipation. We give algorithms for obtaining optimal encodings and present a particular class of error-free codes. By contrast, other bus encoding techniques have been used to prevent crosstalk but do not correct errors.
Simple Rate-1/3 Convolutional and Tail-Biting Quantum Error-Correcting Codes
Simple rate-1/3 single-error-correcting unrestricted and CSS-type quantum
convolutional codes are constructed from classical self-orthogonal
F_4-linear and F_2-linear convolutional codes, respectively. These
quantum convolutional codes have higher rate than comparable quantum block
codes or previous quantum convolutional codes, and are simple to decode. A
block single-error-correcting [9, 3, 3] tail-biting code is derived from the
unrestricted convolutional code, and similarly a [15, 5, 3] CSS-type block code
from the CSS-type convolutional code.
Comment: 5 pages; to appear in Proceedings of the 2005 IEEE International Symposium on Information Theory
Simple Quantum Error Correcting Codes
Methods of finding good quantum error correcting codes are discussed, and
many example codes are presented. The recipe C_2^{\perp} \subseteq C_1, where
C_1 and C_2 are classical codes, is used to obtain codes for up to 16
information qubits with correction of small numbers of errors. The results are
tabulated. More efficient codes are obtained by allowing C_1 to have reduced
distance, and introducing sign changes among the code words in a systematic
manner. This systematic approach leads to single-error correcting codes for 3,
4 and 5 information qubits with block lengths of 8, 10 and 11 qubits
respectively.
Comment: Submitted to Phys. Rev. A in May 1996. 21 pages, no figures. Further information at http://eve.physics.ox.ac.uk/ASGhome.htm
Error control coding for semiconductor memories
All modern computers have memories built from VLSI RAM chips.
Individually, these devices are highly reliable, and any single chip
may perform for decades before failing. However, when many of the
chips are combined in a single memory, the time before at least one
of them fails can drop to a few hours. Failed chips cause errors
when binary data are stored in and read out of the memory, and as a
consequence the reliability of computer memories degrades. These
errors are classified into hard errors and soft errors, also termed
permanent and temporary errors, respectively.
In some situations errors appear as random errors, in which both
1-to-0 and 0-to-1 errors occur randomly in a memory word. In other
situations the most likely errors are unidirectional errors, in
which 1-to-0 errors or 0-to-1 errors may occur, but not both in one
particular memory word.
To achieve a high-speed and highly reliable computer, we need
large-capacity memory. Unfortunately, as the density of
semiconductor cells in memory grows, the error rate increases
dramatically. In particular, VLSI RAMs suffer from soft errors
caused by alpha-particle radiation, so the reliability of a computer
could become unacceptable without error-reduction schemes.
In practice, several schemes to reduce the effects of memory errors
are commonly used, but most of them address only hard errors. Error
control coding, an efficient and economical method, can overcome
both hard and soft errors, and it is therefore becoming a widely
used scheme in the computer industry today.
In this thesis, we discuss error control coding for
semiconductor memories. The thesis consists of six chapters.
Chapter one is an introduction to error detecting and correcting
coding for computer memories. Firstly, semiconductor memories and
their problems are discussed. Then some schemes for error reduction
in computer memories are given and the advantages of using error
control coding over other schemes are presented.
In chapter two, after a brief review of memory organizations,
memory cells and their physical constructions and principle of
storing data are described. Then we analyze mechanisms of various
errors occurring in semiconductor memories so that, for different
errors different coding schemes could be selected.
Chapter three is devoted to fundamental coding theory; background
on encoding and decoding algorithms is presented.
In chapter four, random error control codes are discussed. Among
them, error-detecting codes, single-error-correcting/double-error-detecting
codes, and multiple-error-correcting codes are analyzed. Using
examples, the decoding implementations for parity codes, Hamming
codes, modified Hamming codes, and majority logic codes are
demonstrated. This chapter also shows that by combining error
control coding with other schemes, the reliability of the memory
can be improved by many orders of magnitude.
For unidirectional errors, unordered codes are introduced in
chapter five. Two types of unordered codes are discussed:
systematic and nonsystematic. Both are very powerful for
unidirectional error detection. As an example of an optimal
nonsystematic unordered code, an efficient balanced code is
analyzed; then, as an example of systematic unordered codes,
Berger codes are analyzed. Because in practice random errors may
still occur in memories dominated by unidirectional errors, some
recently developed t-random-error-correcting/all-unidirectional-error-detecting
codes are introduced. Illustrative examples are included to
facilitate the explanation.
Chapter six presents the conclusions of the thesis.
The whole thesis is oriented to the application of error control
coding to semiconductor memories. Most of the codes discussed are
widely used in practice. Throughout the thesis we attempt to
provide a review of coding in computer memories and to emphasize
the advantages of coding. With the demand for higher-speed and
higher-capacity semiconductor memories, error control coding will
play an even more important role in the future.
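The Berger codes covered in chapter five can be sketched briefly: the check symbol is the binary count of zeros in the information bits, which is why any purely unidirectional error pattern is detected. The following is a minimal illustrative construction, not code from the thesis itself:

```python
# Minimal Berger-code sketch for all-unidirectional error detection.
# Check symbol = binary zero-count of the information bits.
from math import ceil, log2

def berger_encode(info):
    """Append the zero-count of the info bits as the check symbol."""
    k = len(info)
    r = ceil(log2(k + 1))            # check bits needed for counts 0..k
    zeros = info.count(0)
    check = [(zeros >> i) & 1 for i in range(r - 1, -1, -1)]
    return info + check

def berger_check(word, k):
    """Return True iff the check symbol matches the info zero-count."""
    info, check = word[:k], word[k:]
    value = 0
    for b in check:
        value = (value << 1) | b
    return value == info.count(0)

# 1-to-0 errors in the info bits raise the zero-count, while 1-to-0
# errors in the check bits can only lower the stored count, so a
# unidirectional error burst always produces a mismatch.
w = berger_encode([1, 1, 0, 1])
assert berger_check(w, 4)
bad = list(w)
bad[0] = 0
bad[1] = 0                           # two 1-to-0 errors, same direction
assert not berger_check(bad, 4)
```

This makes concrete why the thesis calls unordered codes "very powerful" for unidirectional error detection: no number of same-direction flips can move one valid word onto another.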
The Error-Pattern-Correcting Turbo Equalizer
The error-pattern-correcting code (EPCC) is incorporated in the design of a
turbo equalizer (TE) with the aim of correcting dominant error events of the
inter-symbol interference (ISI) channel at the output of its matching Viterbi
detector. By targeting the low-Hamming-weight interleaved errors of the outer
convolutional code, which are responsible for low Euclidean-weight errors in
the Viterbi trellis, the turbo equalizer with an error-pattern correcting code
(TE-EPCC) exhibits a much lower bit-error rate (BER) floor compared to the
conventional non-precoded TE, especially for high rate applications. A
maximum-likelihood upper bound is developed on the BER floor of the TE-EPCC for
a generalized two-tap ISI channel, in order to study TE-EPCC's signal-to-noise
ratio (SNR) gain for various channel conditions and design parameters. In
addition, the SNR gain of the TE-EPCC relative to an existing precoded TE is
compared to demonstrate the present TE's superiority for short interleaver
lengths and high coding rates.Comment: This work has been submitted to the special issue of the IEEE
Transactions on Information Theory titled: "Facets of Coding Theory: from
Algorithms to Networks". This work was supported in part by the NSF
Theoretical Foundation Grant 0728676