
    Correction of single error bursts beyond the code correction capability using information sets

    The most important method of ensuring data integrity is correcting errors that occur during information storage, processing, or transmission; error-correcting codes are used for this purpose. In real systems, noise processes are correlated. Traditional coding and decoding methods, however, rely on decorrelation, and it is known that this procedure reduces the maximum achievable performance of coding. Constructing computationally efficient decoding methods that correct grouped errors for a wide class of codes therefore remains an important open problem. In this paper, decoding by information sets is used to correct single bursts. This method has exponential complexity when correcting independent errors; the proposed approach uses a number of information sets that grows linearly with code length, which yields polynomial decoding complexity. A further reduction in the number of information sets is possible with the proposed method of dense information sets, which allows evaluating both the set of errors potentially corrected by the code and the characteristics of the decoder. An improvement of the decoding method using an error vector counter is proposed, which in some cases increases the number of corrected error vectors. The method significantly reduces the number of information sets or increases the number of corrected error vectors under the minimum burst length criterion. The proposed decoders correct single error bursts in polynomial time for arbitrary linear codes. Experiments based on the standard array show that the decoders correct not only all errors within the burst-correcting capability of the code but also a significant number of error vectors beyond it. Possible directions of further research are the analysis of the proposed decoding algorithms for long codes, where analysis based on the standard array is not applicable, and the development and analysis of decoding methods for multiple bursts and for the joint correction of grouped and random errors.
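
    The sketch below is a minimal illustration of burst correction with information sets, assuming a binary linear code given by a generator matrix G and a caller-supplied list of candidate information sets; the function names, the NumPy representation, and the acceptance rule (accept the first re-encoding whose residual error fits in one burst window) are illustrative assumptions, not the paper's algorithm. The intuition it captures is that if an information set avoids the positions hit by the burst, re-encoding from that set reproduces the transmitted codeword, so a family of sets whose members dodge every possible burst window, roughly one per window position, keeps the number of sets linear in the code length.

    import numpy as np

    def _solve_gf2(A, b):
        """Solve x @ A = b over GF(2); return x, or None if A is singular."""
        k = A.shape[0]
        M = np.concatenate([A.T % 2, (b % 2).reshape(-1, 1)], axis=1).astype(np.uint8)
        for col in range(k):
            piv = next((r for r in range(col, k) if M[r, col]), None)
            if piv is None:
                return None
            M[[col, piv]] = M[[piv, col]]
            for r in range(k):
                if r != col and M[r, col]:
                    M[r] ^= M[col]
        return M[:, -1]

    def burst_span(e):
        """Length of the shortest window covering all nonzero positions of e."""
        idx = np.flatnonzero(e)
        return 0 if idx.size == 0 else int(idx[-1] - idx[0] + 1)

    def decode_by_information_sets(G, r, info_sets, max_burst):
        """Re-encode r from each information set; accept the first candidate
        codeword whose difference from r is a single burst of length <= max_burst."""
        for I in info_sets:
            m = _solve_gf2(G[:, I], r[I])        # message consistent with r on I
            if m is None:                        # columns of G on I are dependent
                continue
            c = (m @ G) % 2
            e = (r + c) % 2                      # candidate error pattern
            if burst_span(e) <= max_burst:
                return c, e
        return None, None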

    Phased burst error-correcting array codes

    Various aspects of single-phased burst-error-correcting array codes are explored. These codes are composed of two-dimensional arrays with row and column parities and a diagonally cyclic readout order; they are capable of correcting a single burst error along one diagonal. Optimal codeword sizes are found to have dimensions n1×n2 such that n2 is the smallest prime number larger than n1. These codes are capable of reaching the Singleton bound. A new type of error, the approximate error, is defined; in q-ary applications, such errors corrupt data only slightly, so the corrupted values remain close to the true data levels. Phased burst array codes can be tailored to correct these errors with even higher rates than before.
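
    As a rough illustration of the row/column-parity structure described above (not the paper's exact construction), the sketch below appends an even-parity column and row to a binary data array and locates errors confined to a single wrapped diagonal by intersecting the failing row parities with the failing column parities. The wrapped-diagonal convention and the assumption n2 >= n1 (so the diagonal meets each row and column at most once) are illustrative choices.

    import numpy as np

    def encode(data):
        """Append an even-parity column and an even-parity row to a binary array."""
        data = np.asarray(data, dtype=np.uint8) % 2
        with_col = np.concatenate([data, data.sum(axis=1, keepdims=True) % 2], axis=1)
        return np.concatenate([with_col, with_col.sum(axis=0, keepdims=True) % 2], axis=0)

    def correct_single_diagonal(R):
        """Correct errors confined to one (unknown) wrapped diagonal of an
        n1 x n2 array with n2 >= n1."""
        R = np.array(R, dtype=np.uint8) % 2
        n1, n2 = R.shape
        bad_rows = set(np.flatnonzero(R.sum(axis=1) % 2))
        bad_cols = set(np.flatnonzero(R.sum(axis=0) % 2))
        for d in range(n2):                          # try every diagonal offset
            cells = [(i, (i + d) % n2) for i in bad_rows]
            if {j for _, j in cells} == bad_cols:    # offset consistent with parities
                for i, j in cells:
                    R[i, j] ^= 1                     # flip the flagged cells
                return R
        return R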

    Two-dimensional burst identification codes and their use in burst correction

    A new class of codes, called burst identification codes, is defined and studied. These codes can be used to determine the patterns of burst errors. Two-dimensional burst-correcting codes can be easily constructed from burst identification codes. The resulting class of codes is simple to implement and has lower redundancy than other comparable codes. The results are pertinent to the study of radiation effects on VLSI RAM chips, which can cause two-dimensional bursts of errors.

    X-code: MDS array codes with optimal encoding

    We present a new class of MDS (maximum distance separable) array codes of size n×n (n a prime number) called X-code. The X-codes have minimum column distance 3, namely, they can correct either one column error or two column erasures. The key novelty in X-code is a simple geometrical construction that achieves optimal encoding/update complexity, i.e., a change of any single information bit affects exactly two parity bits. The key idea in our constructions is that all parity symbols are placed in rows rather than columns.
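
    A rough encoding sketch of this idea follows, assuming the commonly described layout: an n x n array (n prime) with information in the first n - 2 rows and two parity rows computed along diagonals of opposite slopes. The exact column offsets below are an assumption made for illustration; the property to observe is that each information symbol enters exactly one cell of each parity row.

    import numpy as np

    def xcode_encode(info, n):
        """info: (n-2) x n binary array of information symbols; returns the n x n array."""
        assert info.shape == (n - 2, n)
        A = np.zeros((n, n), dtype=np.uint8)
        A[: n - 2] = info % 2
        for i in range(n):
            # parity along one diagonal direction
            A[n - 2, i] = np.bitwise_xor.reduce(
                [A[k, (i + k + 2) % n] for k in range(n - 2)])
            # parity along the opposite diagonal direction
            A[n - 1, i] = np.bitwise_xor.reduce(
                [A[k, (i - k - 2) % n] for k in range(n - 2)])
        return A

    With this indexing, flipping information bit (k, j) changes only A[n-2, (j - k - 2) % n] and A[n-1, (j + k + 2) % n], which is the two-parity-write update cost described in the abstract.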

    Interleaving schemes for multidimensional cluster errors

    We present two-dimensional and three-dimensional interleaving techniques for correcting two- and three-dimensional bursts (or clusters) of errors, where a cluster of errors is characterized by its area or volume. Correction of multidimensional error clusters is required in holographic storage, an emerging application of considerable importance. Our main contribution is the construction of efficient two-dimensional and three-dimensional interleaving schemes. The proposed schemes are based on t-interleaved arrays of integers, defined by the property that every connected component of area or volume t consists of distinct integers. In the two-dimensional case, our constructions are optimal: they have the lowest possible interleaving degree. That is, the resulting t-interleaved arrays contain the smallest possible number of distinct integers, hence minimizing the number of codewords required in an interleaving scheme. In general, we observe that the interleaving problem can be interpreted as a graph-coloring problem, and introduce the useful special class of lattice interleavers. We employ a result of Minkowski, dating back to 1904, to establish both upper and lower bounds on the interleaving degree of lattice interleavers in three dimensions. For the case t ≡ 0 (mod 6), the upper and lower bounds coincide, and the Minkowski lattice directly yields an optimal lattice interleaver. For t ≢ 0 (mod 6), we construct efficient lattice interleavers using approximations of the Minkowski lattice.
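
    The defining property of a t-interleaved array is concrete enough to check by brute force: the sketch below enumerates every 4-connected set of t cells in a small labeled array and verifies that its labels are distinct. It is only a verifier for toy-sized arrays (the enumeration grows quickly), not one of the paper's constructions; the 10 x 10 example labeling (2*i + j) mod 5 screened at the end is just an illustrative candidate, not a claim about optimal degree.

    from itertools import product

    def is_t_interleaved(A, t):
        """Check that every 4-connected set of t cells holds t distinct labels
        (exhaustive; practical only for small arrays)."""
        n1, n2 = len(A), len(A[0])

        def neighbors(cell):
            i, j = cell
            return {(i + di, j + dj) for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1))
                    if 0 <= i + di < n1 and 0 <= j + dj < n2}

        # Grow connected cell sets one cell at a time, deduplicating as we go.
        level = {frozenset([c]) for c in product(range(n1), range(n2))}
        for _ in range(t - 1):
            level = {s | {c} for s in level
                     for c in set().union(*(neighbors(x) for x in s)) - s}
        return all(len({A[i][j] for (i, j) in s}) == t for s in level)

    # Example screen: a diagonal labeling with 5 symbols against clusters of area 3.
    A = [[(2 * i + j) % 5 for j in range(10)] for i in range(10)]
    print(is_t_interleaved(A, 3))   # True for this labeling on a 10 x 10 array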

    An investigation of error characteristics and coding performance

    The performance of forward error correcting coding schemes on errors anticipated for the Earth Observation System (EOS) Ku-band downlink is studied. The EOS transmits picture frame data to the ground via the Tracking and Data Relay Satellite System (TDRSS) to a ground-based receiver at White Sands. Due to unintentional RF interference from other systems operating in the Ku band, the noise at the receiver is non-Gaussian, which may result in non-random errors at the demodulator output. That is, the downlink channel cannot be modeled as a simple memoryless Gaussian-noise channel. From previous experience, it is believed that these errors are bursty. The research proceeded by developing a computer-based simulation, called Communication Link Error ANalysis (CLEAN), to model the downlink errors, forward error correcting schemes, and interleavers used with TDRSS. To date, the bulk of CLEAN has been written, documented, debugged, and verified. The procedures for using CLEAN to investigate code performance have been established and are discussed.
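
    To make "bursty, non-memoryless errors" concrete, the sketch below draws an error pattern from a two-state Gilbert-Elliott chain, in which errors cluster while the chain sits in its bad state. This is not CLEAN's channel model; the model choice and every parameter value are assumptions picked purely for illustration.

    import random

    def gilbert_elliott_errors(n_bits, p_good_to_bad=0.01, p_bad_to_good=0.2,
                               err_good=1e-4, err_bad=0.3, seed=0):
        """Return a 0/1 error pattern of length n_bits from a two-state chain."""
        rng = random.Random(seed)
        bad = False
        pattern = []
        for _ in range(n_bits):
            # state transition
            if bad:
                if rng.random() < p_bad_to_good:
                    bad = False
            else:
                if rng.random() < p_good_to_bad:
                    bad = True
            # error emission depends on the current state
            p_err = err_bad if bad else err_good
            pattern.append(1 if rng.random() < p_err else 0)
        return pattern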