121 research outputs found

    A New Chase-type Soft-decision Decoding Algorithm for Reed-Solomon Codes

    This paper addresses three key issues arising in the design of Chase-type algorithms for Reed-Solomon codes: 1) how to choose the set of testing patterns; 2) given the set of testing patterns, what is the optimal testing order, in the sense that the most-likely codeword is expected to appear earlier; and 3) how to identify the most-likely codeword. A new Chase-type soft-decision decoding algorithm is proposed, referred to as the tree-based Chase-type algorithm. The proposed algorithm takes the set of all vectors as the set of testing patterns, and hence is guaranteed to deliver the most-likely codeword provided that sufficient computational resources are available. All testing patterns are arranged in an ordered rooted tree according to the likelihood bounds of the codewords they can generate. As the algorithm runs, the ordered rooted tree is constructed progressively by adding at most two leaves at each trial. The ordered tree naturally induces a sufficient condition for the most-likely codeword: whenever the proposed algorithm exits before a preset maximum number of trials is reached, the output codeword must be the most-likely one. When the proposed algorithm is combined with the Guruswami-Sudan (GS) algorithm, each trial can be implemented in an extremely simple way, by removing one old point and interpolating one new point. Simulation results show that, given the same maximum number of trials, the proposed algorithm performs better than the Chase-type algorithm recently proposed by Bellorado et al. while requiring fewer trials. Also proposed are simulation-based performance bounds on the maximum-likelihood decoding (MLD) algorithm, which are used to illustrate the near-optimality of the proposed algorithm in the high-SNR region. In addition, the proposed algorithm admits decoding with a likelihood threshold, which searches for the most-likely codeword within a Euclidean sphere rather than a Hamming sphere.
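
    What follows is a minimal sketch of the best-first tree search the abstract describes, not the authors' implementation. Test patterns (sets of flipped positions) are explored in nondecreasing order of a likelihood bound via a priority queue, and each popped node spawns at most two new nodes (a child and a sibling), mirroring the "at most two leaves per trial" tree growth. The names reliab (a hypothetical vector of positive per-position reliabilities, smaller meaning less reliable) and decode (a stand-in for one GS re-interpolation trial, returning a (codeword, metric) pair or None) are assumptions, not the paper's API.

    import heapq

    def chase_tree_search(reliab, decode, max_trials=32):
        # Visit positions from least to most reliable; a test pattern is a
        # subset of these positions to flip.
        order = sorted(range(len(reliab)), key=lambda i: reliab[i])
        n = len(order)
        # Heap entry: (likelihood bound, flipped positions, frontier index j,
        # where order[j] is the most recently flipped position).
        heap = [(0.0, (), -1)]
        best, trials = None, 0
        while heap and trials < max_trials:
            cost, pattern, j = heapq.heappop(heap)
            trials += 1
            cand = decode(pattern)  # one decoding trial on this test pattern
            if cand is not None and (best is None or cand[1] < best[1]):
                best = cand
            # Early exit mirroring the sufficient condition the ordered tree
            # induces: every unexplored pattern is bounded below by the heap
            # top, so the best codeword found so far must be the most likely.
            if best is not None and heap and best[1] <= heap[0][0]:
                return best
            if j + 1 < n:
                nxt = order[j + 1]
                # Child: additionally flip the next least-reliable position.
                heapq.heappush(heap, (cost + reliab[nxt],
                                      pattern + (nxt,), j + 1))
                if pattern:
                    # Sibling: swap the most recent flip for the next position.
                    heapq.heappush(heap, (cost - reliab[pattern[-1]] + reliab[nxt],
                                          pattern[:-1] + (nxt,), j + 1))
        return best

    Because each newly enqueued pattern differs only slightly from an already-visited one, this structure is consistent with the paper's observation that a trial can be realized by removing one old interpolation point and adding one new one.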

    Application of Module to Coding Theory: A Systematic Literature Review

    A systematic literature review is a research process that identifies, evaluates, and interprets all relevant study findings connected to specific research questions, topics, or phenomena of interest. In this work, a thorough review of the literature on the link between module structure and coding theory was conducted. A literature search yielded 470 articles from the Google Scholar, Dimensions, and Science Direct databases. After a further article selection process, 14 articles were chosen for in-depth study. The articles retrieved were from the preceding ten years, 2012 to 2022. The PRISMA analytical approach and bibliometric analysis were employed in this investigation, and a more detailed description of the PRISMA technique and the significance of the bibliometric analysis is provided. The findings of this study are presented as brief summaries of the 14 articles together with research recommendations. The study concludes with recommendations for the future development of the code structures used in the articles investigated in further depth.

    PARALLEL SUBSPACE SUBCODES OF REED-SOLOMON CODES FOR MAGNETIC RECORDING CHANNELS

    Read-channel architectures based on a single low-density parity-check (LDPC) code are being considered for the next generation of hard disk drives. However, LDPC-only solutions suffer from the error-floor problem, which may compromise reliability if not handled properly. Concatenated architectures using an LDPC code plus a Reed-Solomon (RS) code lower the error floor at high signal-to-noise ratio (SNR) at the price of a reduced coding gain and a less sharp waterfall region at lower SNR. This architecture fails to deal with the error-floor problem when the number of errors caused by multiple dominant trapping sets exceeds the error-correction capability of the outer RS code. The ultimate goal of a sharper waterfall in the low-SNR region and a lower error floor at high SNR can be approached by introducing a parallel subspace subcode RS (SSRS) code (PSSRS) to replace the conventional RS code. In this new LDPC+PSSRS system, the PSSRS code helps localize and partially destroy the most dominant trapping sets, and with the proposed iterative parallel local decoding algorithm the LDPC decoder can correct the remaining errors by itself. The contributions of this work are: 1) a PSSRS code with a parallel local SSRS structure and a three-level decoding architecture, which enables a trade-off between performance and complexity; 2) a new LDPC+PSSRS system with a new iterative parallel local decoding algorithm that gains more than 0.5 dB over the conventional two-level system, and whose performance for 4K-byte sectors is close to that of multiple LDPC-only architectures for perpendicular magnetic recording channels; 3) a new decoding concept that changes the major role of the RS code from error correction to a "partial" trapping-set destroyer.
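
    As a rough illustration of the three-level idea, here is a control-flow sketch; it is not the thesis implementation, and ldpc_decode, pssrs_local_decode, and the LLR-pinning convention are all hypothetical stand-ins.

    def decode_sector(llrs, ldpc_decode, pssrs_local_decode, max_rounds=3):
        # Level 1: plain LDPC decoding of the whole sector.
        word, ok = ldpc_decode(llrs)
        for _ in range(max_rounds):
            if ok:
                return word
            # Level 2: the parallel local SSRS decoders each work on their own
            # span of symbols, localizing and partially destroying the symbols
            # caught in the dominant trapping sets.
            pinned = pssrs_local_decode(word)  # [(position, saturated LLR), ...]
            # Level 3: re-run the LDPC decoder with the locally corrected
            # symbols pinned, so it can resolve the remaining errors itself.
            for pos, llr in pinned:
                llrs[pos] = llr
            word, ok = ldpc_decode(llrs)
        return word  # may still be in error; a real system would flag the sector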

    List Decoding of Algebraic Codes


    Error-Correction Coding and Decoding: Bounds, Codes, Decoders, Analysis and Applications

    Coding; Communications; Engineering; Networks; Information Theory; Algorithm

    A STUDY OF LINEAR ERROR CORRECTING CODES

    Since Shannon's ground-breaking work in 1948, there have been two main development streams of channel coding in approaching the limit of communication channels: classical coding theory, which aims at designing codes with large minimum Hamming distance, and probabilistic coding, which places the emphasis on low-complexity probabilistic decoding using long codes built from simple constituent codes. This work presents further investigations in these two channel coding development streams. Low-density parity-check (LDPC) codes form a class of capacity-approaching codes with a sparse parity-check matrix and a low-complexity decoder. Two novel methods of constructing algebraic binary LDPC codes are presented. These methods are based on the theory of cyclotomic cosets, idempotents and Mattson-Solomon polynomials, and are complementary to each other. In addition to some new cyclic iteratively decodable codes, the two methods generate the well-known Euclidean and projective geometry codes, and their extension to non-binary fields is shown to be straightforward. These algebraic cyclic LDPC codes converge considerably well under iterative decoding for short block lengths. It is also shown that, for some of these codes, maximum-likelihood performance may be achieved by a modified belief-propagation decoder which uses a different subset of codewords of the dual code for each iteration. Following a property of the revolving-door combination generator, multi-threaded minimum Hamming distance computation algorithms are developed. Using these algorithms, the previously unknown minimum Hamming distance of the quadratic residue code for prime 199 has been evaluated. In addition, the highest minimum Hamming distance attainable by all binary cyclic codes of odd lengths from 129 to 189 has been determined, and as many as 901 new binary linear codes with higher minimum Hamming distance than the previously best known linear codes have been found. It is shown that, by exploiting the structure of circulant matrices, the number of codewords required to compute the minimum Hamming distance and the number of codewords of a given Hamming weight of binary double-circulant codes based on primes may be reduced. A means of independently verifying the exhaustively computed number of codewords of a given Hamming weight of these double-circulant codes is developed; in conjunction with this, it is proved that some published results are incorrect, and the correct weight spectra are presented. Moreover, it is shown that it is possible to estimate the minimum Hamming distance of this family of prime-based double-circulant codes. It is shown that linear codes may be efficiently decoded using the incremental-correlation Dorsch algorithm. By extending this algorithm, a list decoder is derived, along with a novel CRC-less error-detection mechanism that offers much better throughput and performance than the conventional CRC scheme; using the same method, it is shown that the performance of the conventional CRC scheme may be considerably enhanced. Error detection is an integral part of an incremental-redundancy communications system, and it is shown that sequences of good error-correction codes suitable for use in such systems may be obtained using Constructions X and XX. Examples are given and their performance is presented in comparison with conventional CRC schemes.
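
    The exhaustive minimum-distance computations mentioned above all rest on the same basic fact: the minimum Hamming distance of a linear code equals the minimum weight over its 2^k - 1 nonzero codewords. The brute-force sketch below, feasible only for small dimension k, illustrates that fact; the thesis's revolving-door combination generation, multi-threading, and circulant-symmetry reductions are not shown here.

    from itertools import product

    def min_hamming_distance(G):
        """G: list of k generator rows, each a list of n bits (0/1)."""
        k, n = len(G), len(G[0])
        best = n
        for msg in product((0, 1), repeat=k):
            if not any(msg):
                continue  # skip the all-zero codeword
            # Encode: XOR together the generator rows selected by the message.
            codeword = [0] * n
            for bit, row in zip(msg, G):
                if bit:
                    codeword = [c ^ r for c, r in zip(codeword, row)]
            best = min(best, sum(codeword))
        return best

    # Example: a systematic generator matrix of the [7,4] Hamming code,
    # whose minimum distance is 3.
    G_hamming = [[1, 0, 0, 0, 1, 1, 0],
                 [0, 1, 0, 0, 0, 1, 1],
                 [0, 0, 1, 0, 1, 1, 1],
                 [0, 0, 0, 1, 1, 0, 1]]
    assert min_hamming_distance(G_hamming) == 3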

    NASA Tech Briefs, April 1992

    Topics covered include: New Product Ideas; Electronic Components and Circuits; Electronic Systems; Physical Sciences; Materials; Computer Programs; Mechanics; Machinery; Fabrication Technology; Mathematics and Information Sciences