A New Chase-type Soft-decision Decoding Algorithm for Reed-Solomon Codes
This paper addresses three relevant issues arising in designing Chase-type
algorithms for Reed-Solomon codes: 1) how to choose the set of testing
patterns; 2) given the set of testing patterns, what is the optimal testing
order in the sense that the most-likely codeword is expected to appear earlier;
and 3) how to identify the most-likely codeword. A new Chase-type soft-decision
decoding algorithm is proposed, referred to as the tree-based Chase-type algorithm. The proposed algorithm takes the set of all vectors as the set of testing patterns, and hence is guaranteed to deliver the most-likely codeword provided that sufficient computational resources are available. All the testing patterns are arranged
in an ordered rooted tree according to the likelihood bounds of the possibly
generated codewords. As the algorithm proceeds, the ordered rooted tree is constructed progressively by adding at most two leaves at each trial. The
ordered tree naturally induces a sufficient condition for the most-likely
codeword. That is, whenever the proposed algorithm exits before a preset
maximum number of trials is reached, the output codeword must be the
most-likely one. When the proposed algorithm is combined with the Guruswami-Sudan (GS) algorithm, each trial can be implemented in an extremely simple way by
removing one old point and interpolating one new point. Simulation results show
that the proposed algorithm outperforms the recently proposed Chase-type algorithm of Bellorado et al., requiring fewer trials when the maximum number of trials is the same. Also proposed are simulation-based performance bounds on the maximum-likelihood decoding (MLD) algorithm, which are utilized to illustrate the
near-optimality of the proposed algorithm in the high SNR region. In addition,
the proposed algorithm admits decoding with a likelihood threshold, which searches for the most-likely codeword within a Euclidean sphere rather than a Hamming sphere.
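The abstract does not spell out the tree construction, but the "at most two leaves per trial" property matches a standard lazy enumeration of flip patterns in nondecreasing cost order. The sketch below is only an illustration of that enumeration (Python, with hypothetical names; the actual algorithm interleaves it with GS re-interpolation and a real decoding step): test patterns are ordered by the sum of the reliabilities of the flipped positions, and each popped node spawns at most two children, so an early exit certifies the most-likely codeword.

```python
import heapq

def ordered_test_patterns(reliabilities, max_trials):
    """Lazily enumerate flip-position sets in nondecreasing cost order.

    reliabilities: per-symbol |LLR|-like values; flipping a symbol costs
    its reliability, so cheaper patterns bound more likely codewords.
    Each popped node pushes at most two children, mirroring the ordered
    rooted tree in which at most two leaves are added per trial.
    """
    order = sorted(range(len(reliabilities)), key=lambda i: reliabilities[i])
    cost = [reliabilities[i] for i in order]      # ascending flip costs
    # Heap entry: (pattern cost, rank of last flipped position, pattern).
    heap = [(0.0, -1, ())]                        # root: flip nothing
    trials = 0
    while heap and trials < max_trials:
        c, last, pattern = heapq.heappop(heap)
        yield c, tuple(order[r] for r in pattern) # positions to flip
        trials += 1
        nxt = last + 1
        if nxt < len(cost):
            # Child 1: extend the pattern with the next ranked position.
            heapq.heappush(heap, (c + cost[nxt], nxt, pattern + (nxt,)))
            if pattern:
                # Child 2: slide the last flipped position one rank up.
                heapq.heappush(heap, (c - cost[last] + cost[nxt], nxt,
                                      pattern[:-1] + (nxt,)))

# Toy usage: five symbols, smaller value = less reliable.
for c, flips in ordered_test_patterns([0.2, 1.5, 0.4, 2.0, 0.9], 6):
    print(f"cost={c:.1f}  flip positions={flips}")
```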
Application of Module to Coding Theory: A Systematic Literature Review
A systematic literature review is a research process that identifies,
evaluates, and interprets all relevant study findings connected to specific
research questions, topics, or phenomena of interest. In this work, a thorough review of the literature on the link between module structure and coding theory was conducted. A literature search yielded 470 articles from the Google Scholar, Dimensions, and Science Direct databases. After a further selection process, 14 articles were chosen for in-depth study. The retrieved items spanned the preceding ten years, 2012 to 2022. The PRISMA
analytical approach and bibliometric analysis were employed in this
investigation. A more detailed description of the PRISMA technique and the
significance of the bibliometric analysis is provided. The findings of this
study are presented in the form of brief summaries of the 14 articles and
research recommendations. At the end of the study, recommendations are made for the future development of the code structures used in the articles examined in depth.
PARALLEL SUBSPACE SUBCODES OF REED-SOLOMON CODES FOR MAGNETIC RECORDING CHANNELS
Read-channel architectures based on a single low-density parity-check (LDPC) code are being considered for the next generation of hard disk drives. However, LDPC-only solutions suffer from the error-floor problem, which may compromise reliability if not handled properly. Concatenated architectures using an LDPC code plus a Reed-Solomon (RS) code lower the error floor at high signal-to-noise ratio (SNR) at the price of a reduced coding gain and a less sharp waterfall region at lower SNR. This architecture fails to deal with the error-floor problem when the number of errors caused by multiple dominant trapping sets exceeds the error-correction capability of the outer RS code. The ultimate goal of a sharper waterfall in the low-SNR region and a lower error floor at high SNR can be approached by introducing a parallel subspace subcode RS (SSRS) code (PSSRS) to replace the conventional RS code. In this new LDPC+PSSRS system, the PSSRS code can help localize and partially destroy the most dominant trapping sets. With the proposed iterative parallel local decoding algorithm, the LDPC decoder can correct the remaining errors by itself. The contributions of this work are: 1) we propose a PSSRS code with a parallel local SSRS structure and a three-level decoding architecture, which enables a trade-off between performance and complexity; 2) we propose a new LDPC+PSSRS system with a new iterative parallel local decoding algorithm that achieves a gain of more than 0.5 dB over the conventional two-level system, with performance for 4K-byte sectors close to that of multiple LDPC-only architectures for perpendicular magnetic recording channels; 3) we develop a new decoding concept that changes the major role of the RS code from error correction to that of a "partial" trapping-set destroyer.
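The abstract gives only the architecture, not the algorithmic details. Purely for orientation, a control-flow sketch of the three decoding levels might look as follows (Python; `ldpc_decode` and `pssrs_local_decode` are hypothetical injected stand-ins, since their internals are not described here):

```python
def decode_sector(llrs, ldpc_decode, pssrs_local_decode, max_outer_iters=3):
    """Control-flow sketch of a three-level LDPC+PSSRS architecture.

    Assumed interfaces (not from the paper):
      ldpc_decode(llrs, pinned) -> (success, hard_word)
      pssrs_local_decode(hard_word) -> set of recovered positions
    """
    # Level 1: plain LDPC decoding covers the waterfall region.
    ok, word = ldpc_decode(llrs, pinned=set())
    for _ in range(max_outer_iters):
        if ok:
            return word
        # Level 2: parallel local SSRS decoders each cover a short span;
        # the spans they decode successfully localize and partially
        # destroy the trapping sets that stalled the LDPC decoder.
        pinned = pssrs_local_decode(word)
        # Level 3: rerun LDPC decoding with the recovered symbols pinned
        # (their LLRs saturated), letting it clear the remaining errors.
        ok, word = ldpc_decode(llrs, pinned=pinned)
    return word if ok else None  # None signals a sector failure
```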
Reed-Solomon turbo product codes for optical communications: from code optimization to decoder design
Turbo product codes (TPCs) are an attractive solution to improve link budgets and reduce system costs by relaxing the requirements on expensive optical devices in high-capacity optical transport systems. In this paper, we investigate the use of Reed-Solomon (RS) turbo product codes for 40 Gbps transmission over optical transport networks and 10 Gbps transmission over passive optical networks. An algorithmic study is first performed in order to design RS TPCs that are compatible with the performance requirements imposed by the two applications. Then, a novel ultrahigh-speed parallel architecture for turbo decoding of product codes is described. A comparison with binary Bose-Chaudhuri-Hocquenghem (BCH) TPCs is performed. The results show that high-rate RS TPCs offer a better complexity/performance trade-off than BCH TPCs for low-cost Gbps fiber-optic communications.
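As a rough, self-contained illustration of the product-code principle and its iterative row/column decoding (with the RS/BCH constituents replaced by a toy Hamming(7,4) hard-decision decoder, rather than the soft-input soft-output elementary decoders a real TPC decoder uses):

```python
import numpy as np

# Toy constituent code: Hamming(7,4). Column j of H is the binary
# expansion of j + 1, so the syndrome reads out the error position.
H = np.array([[int(b) for b in f"{c:03b}"] for c in range(1, 8)]).T

def correct_row(word):
    """Hard-decision correction of at most one error, in place."""
    syndrome = (H @ word) % 2
    pos = int("".join(map(str, syndrome)), 2)  # 0 means no error found
    if pos:
        word[pos - 1] ^= 1

def tpc_decode(block, iters=3):
    """Iterate hard-decision decoding over the rows, then the columns."""
    for _ in range(iters):
        for i in range(7):
            correct_row(block[i, :])
        for j in range(7):
            correct_row(block[:, j])
    return block

# Demo: the all-zero array is a valid product codeword; inject three
# errors, two of which defeat a single row decoder on their own.
block = np.zeros((7, 7), dtype=int)
block[0, 0] = block[0, 3] = block[2, 0] = 1
print(np.count_nonzero(tpc_decode(block)))  # 0: all errors cleared
```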
A STUDY OF ERASURE CORRECTING CODES
This work focuses on erasure codes, particularly high-performance ones, and on the related decoding algorithms, especially those of low computational complexity. The work is composed of different pieces, but the main components are developed within the following two main themes.
Ideas of message passing are applied to resolve erasures after transmission. An efficient matrix representation of the belief-propagation (BP) decoding algorithm on the binary erasure channel (BEC) is introduced as the recovery algorithm. Gallager's bit-flipping algorithm is further developed into the guess and multi-guess algorithms, particularly for recovering the erasures left unsolved by the recovery algorithm.
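For reference, BP decoding on the BEC reduces to the classic peeling procedure: repeatedly find a parity check with exactly one erased participant and solve it. A minimal sketch (Python; the thesis's matrix representation is not reproduced here):

```python
import numpy as np

def peel_erasures(H, word, erased):
    """Belief-propagation (peeling) erasure recovery on the BEC.

    H: parity-check matrix (0/1); word: received bits, with erased
    positions holding arbitrary values; erased: boolean mask.
    """
    H = np.asarray(H); word = np.array(word); erased = np.array(erased)
    progress = True
    while erased.any() and progress:
        progress = False
        for row in H:
            unknown = np.flatnonzero(row & erased)
            if len(unknown) == 1:
                # Solve the lone erased bit from the parity equation.
                known = row.astype(bool) & ~erased
                word[unknown[0]] = word[known].sum() % 2
                erased[unknown[0]] = False
                progress = True
    return word, erased  # remaining True flags are stuck erasures

# Toy usage with the (7,4) Hamming parity-check matrix.
H = [[1,0,1,0,1,0,1],
     [0,1,1,0,0,1,1],
     [0,0,0,1,1,1,1]]
recovered, stuck = peel_erasures(H, [0,1,0,1,0,0,1],
                                 [True,False,False,False,True,False,False])
print(recovered, stuck.any())  # [1 1 0 1 0 0 1] False
```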
A novel maximum-likelihood decoding algorithm, the In-place algorithm, is proposed with a reduced computational complexity. A further study of the marginal number of erasures correctable by the In-place algorithm determines a lower bound on the average number of correctable erasures. In the same spirit of searching for the most likely codeword given the received vector, we propose a new branch-evaluation search-on-the-code-tree (BESOT) algorithm, which is powerful enough to approach ML performance for all linear block codes.
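The In-place algorithm itself is not described in the abstract. For context only, a standard maximum-likelihood erasure decoder solves the erased positions as a linear system over GF(2); it succeeds exactly when the erased columns of H are linearly independent, including patterns on which peeling stalls. A sketch:

```python
import numpy as np

def ml_erasure_decode(H, word, erased):
    """ML erasure decoding over GF(2): solve H_e x_e = H_k x_k."""
    H = np.asarray(H) % 2
    word = np.array(word) % 2
    erased = np.asarray(erased, dtype=bool)
    e = np.flatnonzero(erased)       # erased positions (the unknowns)
    k = np.flatnonzero(~erased)      # known positions
    A = H[:, e].copy()
    b = (H[:, k] @ word[k]) % 2      # right-hand side from known bits
    row = 0
    for col in range(len(e)):        # Gauss-Jordan over GF(2)
        pr = np.flatnonzero(A[row:, col]) + row
        if len(pr) == 0:
            return None              # dependent columns: ambiguous
        A[[row, pr[0]]] = A[[pr[0], row]]
        b[[row, pr[0]]] = b[[pr[0], row]]
        for r in np.flatnonzero(A[:, col]):
            if r != row:
                A[r] = (A[r] + A[row]) % 2
                b[r] = (b[r] + b[row]) % 2
        row += 1
    word[e] = b[:len(e)]
    return word

# Hamming(7,4) parity checks again; the erasure pattern {2, 4, 6}
# stalls peeling (every check touches >= 2 erasures) but is solvable.
H = [[1,0,1,0,1,0,1],
     [0,1,1,0,0,1,1],
     [0,0,0,1,1,1,1]]
print(ml_erasure_decode(H, [1,1,0,1,0,0,0], [0,0,1,0,1,0,1]))
```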
To maximise the recovery capability of the In-place algorithm in network transmissions, we propose the product packetisation structure to contain the computational complexity of the In-place algorithm. Combined with the proposed product packetisation structure, the computational complexity is kept below the quadratic complexity bound. We then extend this to the Rayleigh fading channel to handle both errors and erasures. By concatenating an outer code, such as a BCH code, the product-packetised RS codes under the hard-decision In-place algorithm perform significantly better than soft-decision iterative algorithms on optimally designed LDPC codes.
Error-Correction Coding and Decoding: Bounds, Codes, Decoders, Analysis and Applications
Coding; Communications; Engineering; Networks; Information Theory; Algorithm
A STUDY OF LINEAR ERROR CORRECTING CODES
Since Shannon's ground-breaking work in 1948, there have been two main development streams of channel coding in approaching the limit of communication channels, namely classical coding theory, which aims at designing codes with large minimum Hamming distance, and probabilistic coding, which emphasizes low-complexity probabilistic decoding using long codes built from simple constituent codes. This work presents some further investigations in these two channel coding development streams.
Low-density parity-check (LDPC) codes form a class of capacity-approaching codes with sparse parity-check matrices and low-complexity decoders. Two novel methods of constructing algebraic binary LDPC codes are presented. These methods are based on the theory of cyclotomic cosets, idempotents and Mattson-Solomon polynomials, and are complementary to each other. In addition to some new cyclic iteratively decodable codes, the two methods generate the well-known Euclidean and projective geometry codes. Their extension to non-binary fields is shown to be straightforward. These algebraic cyclic LDPC codes, for short block lengths, converge considerably well under iterative decoding. It is also shown that for some of these codes, maximum-likelihood performance may be achieved by a modified belief-propagation decoder which uses a different subset of codewords of the dual code for each iteration.
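Both construction methods rest on the cyclotomic cosets of 2 modulo the code length n. A minimal helper for computing them (assuming n coprime to q, so that multiplication by q permutes the residues):

```python
def cyclotomic_cosets(n, q=2):
    """Cyclotomic cosets {s, qs, q^2 s, ...} modulo n (n coprime to q)."""
    seen, cosets = set(), []
    for s in range(n):
        if s in seen:
            continue
        coset, x = [], s
        while x not in coset:
            coset.append(x)
            x = (x * q) % n
        cosets.append(coset)
        seen.update(coset)
    return cosets

# For n = 7: [[0], [1, 2, 4], [3, 6, 5]]. Idempotent-based
# constructions pick exponent sets that are unions of such cosets.
print(cyclotomic_cosets(7))
```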
Following a property of the revolving-door combination generator, multi-threaded minimum Hamming distance computation algorithms are developed. Using these algorithms, the previously unknown minimum Hamming distance of the quadratic residue code for the prime 199 has been evaluated. In addition, the highest minimum Hamming distance attainable by all binary cyclic codes of odd lengths from 129 to 189 has been determined, and as many as 901 new binary linear codes with a higher minimum Hamming distance than the previously best known linear codes have been found.
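The thesis's revolving-door, multi-threaded enumeration is what makes such computations feasible at scale; as a single-threaded baseline for small codes, the minimum distance can be found by exhausting all nonzero codewords:

```python
from itertools import product
import numpy as np

def min_hamming_distance(G):
    """Exhaustive minimum distance of a binary [n, k] code from its
    generator matrix (2^k - 1 codewords; feasible for small k only).
    The thesis's algorithms instead walk information patterns in
    revolving-door order, so successive codewords differ in only two
    rows of G, and split the enumeration across threads."""
    G = np.asarray(G)
    k, n = G.shape
    best = n
    for msg in product((0, 1), repeat=k):
        if any(msg):
            w = int(((np.array(msg) @ G) % 2).sum())
            best = min(best, w)
    return best

# [7,4] Hamming code: minimum distance 3.
G = [[1,0,0,0,0,1,1],
     [0,1,0,0,1,0,1],
     [0,0,1,0,1,1,0],
     [0,0,0,1,1,1,1]]
print(min_hamming_distance(G))  # 3
```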
It is shown that by exploiting the structure of circulant matrices, the number of codewords required to compute the minimum Hamming distance, and the number of codewords of a given Hamming weight, of binary double-circulant codes based on primes may be reduced. A means of independently verifying the exhaustively computed number of codewords of a given Hamming weight of these double-circulant codes is developed and, in conjunction with this, it is proved that some published results are incorrect and the correct weight spectra are presented. Moreover, it is shown that it is possible to estimate the minimum Hamming distance of this family of prime-based double-circulant codes.
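One such reduction can be illustrated directly: in a double-circulant code with generator [I | C], cyclically shifting the information vector cyclically shifts both halves of the codeword, so the weight is constant on each cyclic orbit and only one representative per orbit needs its weight computed. A sketch for tiny parameters (the thesis exploits the structure far more aggressively):

```python
import numpy as np

def weight_distribution(first_row):
    """Weight distribution of the binary double-circulant code [I | C],
    where C is the p x p circulant with the given first row. Only one
    information pattern per cyclic orbit has its weight computed; the
    orbit size supplies the multiplicity."""
    p = len(first_row)
    C = np.array([np.roll(first_row, i) for i in range(p)])
    counts, seen = {}, set()
    for m in range(2 ** p):
        if m in seen:
            continue
        bits = [(m >> i) & 1 for i in range(p)]
        orbit, cur = set(), bits[:]
        for _ in range(p):                      # collect the cyclic orbit
            orbit.add(sum(b << i for i, b in enumerate(cur)))
            cur = cur[1:] + cur[:1]
        seen.update(orbit)
        w = sum(bits) + int(((np.array(bits) @ C) % 2).sum())
        counts[w] = counts.get(w, 0) + len(orbit)
    return counts

print(weight_distribution([1, 1, 0, 1, 0]))  # tiny p = 5 example
```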
It is shown that linear codes may be efficiently decoded using the incremental-correlation Dorsch algorithm. By extending this algorithm, a list decoder is derived, and a novel CRC-less error detection mechanism that offers much better throughput and performance than the conventional CRC scheme is described. Using the same method, it is shown that the performance of the conventional CRC scheme may be considerably enhanced. Error detection is an integral part of an incremental-redundancy communications system, and it is shown that sequences of good error-correction codes suitable for use in incremental-redundancy communications systems may be obtained using Constructions X and XX. Examples are given and their performance is presented in comparison to conventional CRC schemes.
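For orientation on the Dorsch approach, here is a simplified, non-incremental sketch (the actual algorithm orders its test patterns by correlation and is considerably more efficient): re-encode the hard decisions on the k most reliable linearly independent positions, test low-weight flip patterns on those positions, and keep the candidate that best correlates with the received reliabilities.

```python
import numpy as np
from itertools import combinations

def dorsch_decode(G, llrs, max_flips=1):
    """Reliability-based re-encoding decoder (a Dorsch-style sketch).

    Diagonalises G on the k most reliable independent positions,
    re-encodes the hard decisions there, and tests all flip patterns
    of weight <= max_flips on those positions.
    """
    G = np.asarray(G); llrs = np.asarray(llrs, dtype=float)
    k, n = G.shape
    hard = (llrs < 0).astype(int)
    order = np.argsort(-np.abs(llrs))          # most reliable first
    Gp, basis, row = G.copy(), [], 0
    for col in order:                          # Gauss-Jordan over GF(2)
        pr = np.flatnonzero(Gp[row:, col]) + row
        if len(pr) == 0:
            continue                           # dependent column: skip
        Gp[[row, pr[0]]] = Gp[[pr[0], row]]
        for r in np.flatnonzero(Gp[:, col]):
            if r != row:
                Gp[r] = (Gp[r] + Gp[row]) % 2
        basis.append(col)
        row += 1
        if row == k:
            break
    best, best_score = None, -np.inf
    patterns = [()] + [c for w in range(1, max_flips + 1)
                       for c in combinations(range(k), w)]
    for pat in patterns:
        info = hard[basis].copy()
        for i in pat:
            info[i] ^= 1                       # perturb the reliable bits
        cand = (info @ Gp) % 2
        score = ((1 - 2 * cand) * llrs).sum()  # BPSK correlation metric
        if score > best_score:
            best, best_score = cand, score
    return best

# [7,4] Hamming code; noisy LLRs for the all-zero codeword (positive
# values favour bit 0). Position 1 is received in error.
G = [[1,0,0,0,0,1,1],
     [0,1,0,0,1,0,1],
     [0,0,1,0,1,1,0],
     [0,0,0,1,1,1,1]]
print(dorsch_decode(G, [2.1, -0.3, 1.7, 0.9, 1.2, 0.4, 2.5]))
```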