111 research outputs found
Unordered Error-Correcting Codes and their Applications
We give efficient constructions for error-correcting unordered (ECU) codes, i.e., codes such that any pair of codewords is at least a certain minimum distance apart and at the same time unordered. These codes are used for detecting a predetermined number of (symmetric) errors and for detecting all unidirectional errors. We also give an application in parallel asynchronous communications.
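The two defining properties named in the abstract, minimum distance plus unorderedness, can be checked concretely. The sketch below is a toy illustration under assumed names (`unordered`, `distance`, and the four-word codeword set are hypothetical, not the paper's construction):

```python
from itertools import combinations

def unordered(a, b):
    """True if neither word's set of 1-positions contains the other's."""
    a_covers_b = all(x >= y for x, y in zip(a, b))
    b_covers_a = all(y >= x for x, y in zip(a, b))
    return not a_covers_b and not b_covers_a

def distance(a, b):
    """Hamming distance between two equal-length binary words."""
    return sum(x != y for x, y in zip(a, b))

# A toy code: constant-weight words are automatically unordered, and
# this particular set also has pairwise Hamming distance >= 2.
code = [(1, 1, 0, 0), (1, 0, 1, 0), (0, 1, 0, 1), (0, 0, 1, 1)]
assert all(unordered(a, b) for a, b in combinations(code, 2))
assert all(distance(a, b) >= 2 for a, b in combinations(code, 2))

# Unorderedness is what catches unidirectional errors: turning 1s into
# 0s (or 0s into 1s) in a codeword can never yield another codeword.
corrupted = (1, 0, 0, 0)  # (1,1,0,0) after one 1->0 error
assert corrupted not in code
```

Because a unidirectional error pattern moves a word strictly up or down in the containment order, and no codeword covers another, such errors always land outside the code.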
Delay-insensitive pipelined communication on parallel buses
Consider a communication channel that consists of several subchannels transmitting simultaneously and asynchronously. As an example of this scheme, consider a board with several chips. The subchannels represent the wires connecting the chips, where differences in the lengths of the wires might result in asynchronous reception. In current technology, the receiver acknowledges reception of a message before the transmitter sends the following one; that is, pipelined utilization of the channel is not possible. Our main contribution is a scheme that enables transmission without an acknowledgment of each message, therefore enabling pipelined communication and providing higher bandwidth. However, our scheme allows a certain number of transitions from a second message to arrive before reception of the current message has been completed, a condition that we call skew. We have derived necessary and sufficient conditions for codes that can tolerate a certain amount of skew among adjacent messages (therefore allowing continuous operation) and detect a larger amount of skew when the tolerated skew is exceeded. These results generalize previously known results. We have constructed codes that satisfy the necessary and sufficient conditions, studied their optimality, and devised efficient decoding algorithms. To the best of our knowledge, this is the first known scheme that permits efficient asynchronous communication without acknowledgment. Potential applications are in on-chip, on-board, and board-to-board communications, enabling much higher communication bandwidth.
A Computational Framework for Efficient Error Correcting Codes Using an Artificial Neural Network Paradigm.
The quest for an efficient computational approach to neural connectivity problems has undergone a significant evolution in the last few years. The current best systems are far from equaling human performance, especially when a program of instructions is executed sequentially as in a von Neumann computer. On the other hand, neural net models are potential candidates for parallel processing, since they explore many competing hypotheses simultaneously using massively parallel nets composed of many computational elements connected by links with variable weights. Thus, neural network modeling must be complemented by deep insight into how to embed algorithms in an error-correcting paradigm in order to gain the advantage of parallel computation. In this dissertation, we construct a neural network for single error detection and correction in linear codes. We then present an error-detecting paradigm in the framework of neural networks. We consider the problem of error detection for systematic unidirectional codes which are assumed to have double or triple errors. The generalization of the network construction to error-detecting codes is discussed with a heuristic algorithm. We also describe models for the code construction, detection, and correction of t-EC/d-ED/AUED (t-Error Correcting/d-Error Detecting/All Unidirectional Error Detecting) codes, which are more general codes in the error-correcting paradigm.
Unidirectional error correcting/detecting codes
An extensive theory of symmetric error control coding has been developed in the last few decades. Recently developed VLSI circuits and ROM and RAM memories have given an impetus to extending error control coding to include asymmetric and unidirectional types of error control. The maximal numbers of unidirectional errors that can be detected by systematic codes using r check bits are investigated. They are found for codes with k, the number of information bits, equal to 2^r and 2^r + 1. The importance of their characteristics in unidirectional error detection is discussed. A new method of constructing a systematic t-error correcting/all-unidirectional error detecting (t-EC/AUED) code, which uses fewer check bits than any of the previous methods, is developed. It is constructed by appending t + 1 check symbols to a systematic t-error correcting and (t+1)-error detecting code. Its decoding algorithm is developed. A bound on the number of check bits for a systematic t-EC/AUED code is also discussed. Bose-Rao codes, which are the best known single error correcting/all-unidirectional error detecting (SEC/AUED) codes, are completely analyzed. The maximal Bose-Rao codes for a fixed weight and for all weights are found, as are the base group and the group element that make the Bose-Rao code maximal. The bounds on the size of SEC/AUED codes are discussed. Nonsystematic single error correcting/d-unidirectional error detecting codes are constructed. Three methods for constructing systematic t-error correcting/d-unidirectional error detecting (t-EC/d-UED) codes are developed. From these, simple and efficient t-EC/(t+2)-UED codes are derived. The decoding algorithm for one of these methods, which can be applied to the other two with slight modification, is described. A lower bound on the number of check bits for a systematic t-EC/d-UED code is derived. Finally, future research efforts are proposed.
Error control coding for semiconductor memories
All modern computers have memories built from VLSI RAM chips. Individually, these devices are highly reliable, and any single chip may perform for decades before failing. However, when many of the chips are combined in a single memory, the time until at least one of them fails can decrease to a mere few hours. The presence of failed chips causes errors when binary data are stored in and read out from the memory. As a consequence, the reliability of computer memories degrades. These errors are classified into hard errors and soft errors, which can also be termed permanent and temporary errors, respectively.
In some situations errors may show up as random errors, in which both 1-to-0 errors and 0-to-1 errors occur randomly in a memory word. In other situations the most likely errors are unidirectional errors, in which 1-to-0 errors or 0-to-1 errors may occur, but not both in one particular memory word.
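The distinction can be made concrete with a small classifier (a hypothetical example, not from the thesis) that compares a written word against the word read back:

```python
def error_type(written, read):
    """Classify the error pattern between a written and a read word."""
    ones_to_zeros = any(w == 1 and r == 0 for w, r in zip(written, read))
    zeros_to_ones = any(w == 0 and r == 1 for w, r in zip(written, read))
    if ones_to_zeros and zeros_to_ones:
        return "symmetric"       # flips in both directions: random errors
    if ones_to_zeros or zeros_to_ones:
        return "unidirectional"  # flips in one direction only
    return "error-free"

assert error_type([1, 1, 0, 1], [1, 0, 0, 0]) == "unidirectional"
assert error_type([1, 1, 0, 1], [1, 0, 1, 1]) == "symmetric"
assert error_type([1, 1, 0, 1], [1, 1, 0, 1]) == "error-free"
```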
To achieve a high-speed and highly reliable computer, we need large-capacity memory. Unfortunately, with the high density of semiconductor cells in memory, the error rate increases dramatically. In particular, VLSI RAMs suffer from soft errors caused by alpha-particle radiation. Thus the reliability of a computer could become unacceptable without error-reducing schemes. In practice, several schemes to reduce the effects of memory errors are commonly used, but most of them are effective only against hard errors. As an efficient and economical method, error control coding can be used to overcome both hard and soft errors. It is therefore becoming a widely used scheme in the computer industry today.
In this thesis, we discuss error control coding for semiconductor memories. The thesis consists of six chapters. Chapter one is an introduction to error detecting and correcting coding for computer memories. First, semiconductor memories and their problems are discussed. Then some schemes for error reduction in computer memories are given, and the advantages of using error control coding over other schemes are presented.
In chapter two, after a brief review of memory organizations, memory cells, their physical construction, and the principle of storing data are described. Then we analyze the mechanisms of various errors occurring in semiconductor memories so that different coding schemes can be selected for different errors.
Chapter three is devoted to fundamental coding theory. In this chapter, background on encoding and decoding algorithms is presented.
In chapter four, random error control codes are discussed. Among them, error detecting codes, single error correcting/double error detecting codes, and multiple error correcting codes are analyzed. Using examples, the decoding implementations for parity codes, Hamming codes, modified Hamming codes, and majority logic codes are demonstrated. It is also shown in this chapter that by combining error control coding with other schemes, the reliability of the memory can be improved by many orders of magnitude.
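As a sketch of the single-error correction such chapters typically demonstrate, the fragment below implements a Hamming(7,4) code in systematic form. The matrices are a standard textbook choice assumed for illustration, not necessarily the thesis's own:

```python
# Hamming(7,4): 4 information bits, 3 check bits, systematic form.
G = [[1, 0, 0, 0, 0, 1, 1],   # generator matrix: [ I4 | P ]
     [0, 1, 0, 0, 1, 0, 1],
     [0, 0, 1, 0, 1, 1, 0],
     [0, 0, 0, 1, 1, 1, 1]]
H = [[0, 1, 1, 1, 1, 0, 0],   # parity-check matrix: [ P^T | I3 ]
     [1, 0, 1, 1, 0, 1, 0],
     [1, 1, 0, 1, 0, 0, 1]]

def encode(msg):
    """Multiply the 4-bit message by G over GF(2)."""
    return [sum(m * g for m, g in zip(msg, col)) % 2 for col in zip(*G)]

def correct(word):
    """Compute the syndrome H*word; a nonzero syndrome equals the
    column of H at the error position, so flip that bit."""
    syndrome = [sum(h * w for h, w in zip(row, word)) % 2 for row in H]
    if any(syndrome):
        pos = [list(col) for col in zip(*H)].index(syndrome)
        word = word[:]
        word[pos] ^= 1
    return word

cw = encode([1, 0, 1, 1])      # -> [1, 0, 1, 1, 0, 1, 0]
noisy = cw[:]; noisy[2] ^= 1   # inject a single bit error
assert correct(noisy) == cw    # the error is located and corrected
```

The syndrome-to-column lookup is why a single flipped bit can always be located: each of the seven nonzero syndromes points at exactly one position.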
For unidirectional errors, we introduce unordered codes in chapter five. Two types of unordered codes are discussed: systematic and nonsystematic. Both are very powerful for unidirectional error detection. As an example of an optimal nonsystematic unordered code, an efficient balanced code is analyzed. Then, as an example of systematic unordered codes, Berger codes are analyzed. Considering the fact that in practice random errors may still occur in memories prone to unidirectional errors, some recently developed t-random error correcting/all unidirectional error detecting codes are introduced. Illustrative examples are included to facilitate the explanation.
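The Berger construction mentioned above is simple enough to sketch. The toy code below (function names assumed for illustration) appends the count of 0s in the information bits, written in binary, as the check field; this catches every purely 1-to-0 or purely 0-to-1 error pattern:

```python
import math

def berger_encode(info):
    """Append the number of 0s among the information bits, in binary."""
    r = max(1, math.ceil(math.log2(len(info) + 1)))  # check bits needed
    zeros = info.count(0)
    check = [(zeros >> i) & 1 for i in reversed(range(r))]
    return info + check

def berger_check(word, k):
    """A valid word's check field equals the 0-count of its info field."""
    info, check = word[:k], word[k:]
    value = int("".join(map(str, check)), 2)
    return info.count(0) == value

w = berger_encode([1, 0, 1, 1, 0, 1])  # 6 info bits, two 0s
assert berger_check(w, 6)

# 1->0 errors raise the 0-count of the info field but can only lower
# the check value, so the two sides can never agree again: detected.
bad = w[:]; bad[0] = 0; bad[3] = 0
assert not berger_check(bad, 6)
```

The same argument applies symmetrically to 0-to-1 errors, which is what makes the code all-unidirectional-error detecting.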
Chapter six is the conclusions of the thesis.
The whole thesis is oriented toward the applications of error control coding for semiconductor memories. Most of the codes discussed in the thesis are widely used in practice. Throughout the thesis we attempt to provide a review of coding in computer memories and to emphasize the advantages of coding. It is clear that, with the requirement for higher speed and higher capacity semiconductor memories, error control coding will play an even more important role in the future.
Jitter model and signal processing techniques for pulse width modulation optical recording
A jitter model and signal processing techniques are discussed for data recovery in Pulse Width Modulation (PWM) optical recording. In PWM, information is stored by modulating the sizes of sequential marks alternating in magnetic polarization or in material structure. Jitter, defined as the deviation from the original mark size in the time domain, will result in detection errors if it is excessively large. A new approach is taken in data recovery by first using a high-speed counter clock to convert time marks to amplitude marks; signal processing techniques are then used to minimize jitter according to the jitter model. The signal processing techniques include motor speed and intersymbol interference equalization, differential and additive detection, and differential and additive modulation.
Book announcements
The Spanish version is available at: http://hdl.handle.net/11703/10236
Annual reports 1991 town officers town of Freedom, New Hampshire for the fiscal year ending December 31, 1991, vital statistics for 1991.
This is an annual report containing vital statistics for a town/city in the state of New Hampshire.