Binary Message Passing Decoding of Product-like Codes
We propose a novel binary message passing decoding algorithm for product-like
codes based on bounded distance decoding (BDD) of the component codes. The
algorithm, dubbed iterative BDD with scaled reliability (iBDD-SR), exploits the
channel reliabilities and is therefore soft in nature. However, the messages
exchanged by the component decoders are binary (hard) messages, which
significantly reduces the decoder data flow. The exchanged binary messages are
obtained by combining the channel reliability with the BDD decoder output
reliabilities, properly conveyed by a scaling factor applied to the BDD
decisions. We perform a density evolution analysis for generalized low-density
parity-check (GLDPC) code ensembles and spatially coupled GLDPC code ensembles,
from which the scaling factors of the iBDD-SR for product and staircase codes,
respectively, can be obtained. For the additive white Gaussian noise channel,
we show performance gains for product and staircase codes compared to
conventional iterative BDD (iBDD) with the same
decoder data flow. Furthermore, we show that iBDD-SR approaches the performance
of ideal iBDD that prevents miscorrections.
Comment: Accepted for publication in the IEEE Transactions on Communications
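The core iBDD-SR update described above can be sketched as follows: the binary message for a bit is the sign of the scaled BDD decision added to the channel log-likelihood ratio. This is a minimal illustration under assumed conventions (the function name is ours, and a BDD failure is represented by a decision of 0); the scaling factor `w` would come from the density evolution analysis mentioned in the abstract.

```python
def ibdd_sr_message(bdd_decision, channel_llr, w):
    """One iBDD-SR message update for a single bit (hedged sketch).

    bdd_decision: +1/-1 hard decision from the component BDD, or 0 when
                  BDD fails to decode (assumed convention)
    channel_llr:  channel log-likelihood ratio for the bit
    w:            scaling factor applied to the BDD decision
    Returns the binary (+1/-1) message passed to the other component decoder.
    """
    # Combine the scaled BDD decision with the channel reliability,
    # then quantize to a hard (binary) message.
    return 1 if w * bdd_decision + channel_llr >= 0 else -1
```

On a BDD failure (`bdd_decision == 0`) the rule falls back to the sign of the channel observation, which is one natural way to keep the exchanged messages binary.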
Binary Message Passing Decoding of Product Codes Based on Generalized Minimum Distance Decoding
We propose a binary message passing decoding algorithm for product codes
based on generalized minimum distance decoding (GMDD) of the component codes,
where the last stage of the GMDD makes a decision based on the Hamming distance
metric. The proposed algorithm closes half of the gap between conventional
iterative bounded distance decoding (iBDD) and turbo product decoding based on
the Chase-Pyndiah algorithm, at the expense of some increase in complexity.
Furthermore, the proposed algorithm entails only a limited increase in data
flow compared to iBDD.
Comment: Invited paper to the 53rd Annual Conference on Information Sciences
and Systems (CISS), Baltimore, MD, March 2019. arXiv admin note: text overlap
with arXiv:1806.1090
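The GMDD structure described above, with a Hamming-distance decision in the last stage, can be sketched as follows: erase an even number of the least reliable positions, run an errors-and-erasures decoder of the component code, and keep the candidate closest in Hamming distance to the hard-decision word. All names and the `eed_decode` interface (returns a candidate codeword or `None` on failure) are assumptions, and the toy component decoder is only for illustration.

```python
import numpy as np

def gmdd(hard_bits, reliabilities, eed_decode, d_min):
    """Sketch of GMDD with a Hamming-distance final decision."""
    order = np.argsort(reliabilities)        # least reliable positions first
    best, best_dist = None, None
    for n_erase in range(0, d_min, 2):       # erase 0, 2, 4, ... positions
        cand = eed_decode(hard_bits, order[:n_erase])
        if cand is not None:
            # Hamming distance between candidate and the hard-decision word
            dist = int(np.sum(np.asarray(cand) != np.asarray(hard_bits)))
            if best_dist is None or dist < best_dist:
                best, best_dist = cand, dist
    return best

def rep3_eed(hard_bits, erasures):
    """Toy errors-and-erasures decoder for the (3, 1) repetition code."""
    keep = [i for i in range(3) if i not in set(erasures)]
    ones = sum(hard_bits[i] for i in keep)
    if 2 * ones == len(keep):                # tie among non-erased bits: failure
        return None
    bit = 1 if 2 * ones > len(keep) else 0
    return [bit, bit, bit]
```

With the repetition-code stub, `gmdd([1, 0, 0], [0.1, 0.9, 0.8], rep3_eed, 3)` returns the all-zero codeword, since the unreliable leading bit is outvoted or erased.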
Approaching Capacity at High-Rates with Iterative Hard-Decision Decoding
A variety of low-density parity-check (LDPC) ensembles have now been observed
to approach capacity with message-passing decoding. However, all of them use
soft (i.e., non-binary) messages and a posteriori probability (APP) decoding of
their component codes. In this paper, we show that one can approach capacity at
high rates using iterative hard-decision decoding (HDD) of generalized product
codes. Specifically, a class of spatially-coupled GLDPC codes with BCH
component codes is considered, and it is observed that, in the high-rate
regime, they can approach capacity under the proposed iterative HDD. These
codes can be seen as generalized product codes and are closely related to
braided block codes. An iterative HDD algorithm is proposed that enables one to
analyze the performance of these codes via density evolution (DE).
Comment: 22 pages, this version accepted to the IEEE Transactions on
Information Theory
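The DE analysis of iterative HDD can be illustrated with a simplified scalar recursion for a GLDPC-like ensemble with t-error-correcting component codes over a binary symmetric channel. This is only a sketch of the general idea, not the paper's exact analysis: it tracks the probability that a bit message is in error and, like idealized BDD, ignores miscorrections.

```python
from math import comb

def de_recursion(p, n, t, iters=100):
    """Toy density-evolution recursion for iterative HDD (hedged sketch).

    p: BSC crossover probability; n: component code length;
    t: error-correcting radius of the component code.
    A component decoder is assumed to fail on a bit roughly when at least
    t of the other n-1 positions are in error.
    """
    x = p
    for _ in range(iters):
        # P(Binomial(n-1, x) >= t): too many companion errors for BDD
        fail = sum(comb(n - 1, k) * x**k * (1 - x)**(n - 1 - k)
                   for k in range(t, n))
        x = p * fail
    return x
```

Below a threshold the recursion drives the error probability to zero, while above it the iteration stalls at a nonzero fixed point, which is the qualitative behavior the DE analysis characterizes.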
Density Evolution for Deterministic Generalized Product Codes with Higher-Order Modulation
Generalized product codes (GPCs) are extensions of product codes (PCs) where
coded bits are protected by two component codes but not necessarily arranged in
a rectangular array. It has recently been shown that there exists a large class
of deterministic GPCs (including, e.g., irregular PCs, half-product codes,
staircase codes, and certain braided codes) for which the asymptotic
performance under iterative bounded-distance decoding over the binary erasure
channel (BEC) can be rigorously characterized in terms of a density evolution
analysis. In this paper, the analysis is extended to the case where
transmission takes place over parallel BECs with different erasure
probabilities. We use this model to predict the code performance in a coded
modulation setup with higher-order signal constellations. We also discuss the
design of the bit mapper that determines the allocation of the coded bits to
the modulation bits of the signal constellation.
Comment: invited and accepted paper for the special session "Recent Advances
in Coding for Higher Order Modulation" at the International Symposium on
Turbo Codes & Iterative Information Processing, Brest, France, 201
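A much-simplified version of the parallel-BEC analysis can be sketched as follows: the bit mapper assigns fractions of the coded bits to channels with different erasure probabilities, and a scalar recursion tracks the average erasure probability under iterative BDD, where a component decoder resolves up to d_min - 1 erasures. The names and the single-average simplification are ours; the paper's analysis is more refined.

```python
from math import comb

def de_parallel_bec(eps_list, fracs, n, d_min, iters=200):
    """DE sketch for BDD of a product-like code over parallel BECs.

    eps_list: erasure probability of each parallel BEC (e.g. the bit levels
              of a higher-order constellation)
    fracs:    fraction of coded bits mapped to each channel (bit-mapper choice)
    n, d_min: component code length and minimum distance.
    """
    eps_avg = sum(f * e for f, e in zip(fracs, eps_list))
    x = eps_avg
    for _ in range(iters):
        # Bit stays erased if its channel erased it and at least d_min - 1
        # of the other n - 1 positions are also erased (BDD fails).
        fail = sum(comb(n - 1, k) * x**k * (1 - x)**(n - 1 - k)
                   for k in range(d_min - 1, n))
        x = eps_avg * fail
    return x
```

Varying `fracs` for fixed `eps_list` mimics the bit-mapper design question raised in the abstract: different allocations of coded bits to bit levels shift the effective erasure profile seen by the decoder.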
Polytope of Correct (Linear Programming) Decoding and Low-Weight Pseudo-Codewords
We analyze Linear Programming (LP) decoding of graphical binary codes
operating over soft-output, symmetric and log-concave channels. We show that
the error-surface, separating domain of the correct decoding from domain of the
erroneous decoding, is a polytope. We formulate the problem of finding the
lowest-weight pseudo-codeword as a non-convex optimization (maximization of a
convex function) over a polytope, with the cost function defined by the channel
and the polytope defined by the structure of the code. This formulation
suggests new provably convergent heuristics for finding the lowest-weight
pseudo-codewords, improving in quality upon those previously discussed. The algorithm
performance is tested on the example of the Tanner [155, 64, 20] code over the
Additive White Gaussian Noise (AWGN) channel.
Comment: 6 pages, 2 figures, accepted for IEEE ISIT 201
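For intuition, LP decoding can be made concrete on the smallest case: for a single parity-check constraint, the relaxed polytope is the convex hull of the even-weight binary words, so the LP optimum (which lies at a vertex) can be found by enumerating those vertices. This toy sketch (names are illustrative) has no fractional vertices; the low-weight pseudo-codewords studied above arise only in the polytopes of codes with several interacting checks.

```python
from itertools import product

def lp_decode_spc(llrs):
    """LP decoding of a single parity-check code by vertex enumeration.

    Minimizes the channel cost sum_i llr_i * x_i over the convex hull of
    the even-weight words; since the optimum sits at a vertex, a finite
    search over those vertices suffices for this toy case.
    """
    n = len(llrs)
    best, best_cost = None, float("inf")
    for v in product((0, 1), repeat=n):
        if sum(v) % 2:                       # keep only even-weight vertices
            continue
        cost = sum(l * x for l, x in zip(llrs, v))
        if cost < best_cost:
            best, best_cost = v, cost
    return list(best)
```

For example, with LLRs `[2.0, -1.0, -3.0]` the minimizing even-weight word is `[0, 1, 1]`: flipping the two bits with negative LLRs while satisfying the parity constraint.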
Low-Floor Tanner Codes via Hamming-Node or RSCC-Node Doping
We study the design of structured Tanner codes with low error-rate floors on the AWGN channel. The design technique involves the “doping” of standard LDPC (proto-)graphs, by which we mean Hamming or recursive systematic convolutional (RSC) code constraints are used together with single-parity-check (SPC) constraints to construct a code’s protograph. We show that the doping of a “good” graph with Hamming or RSC codes is a pragmatic approach that frequently results in a code with a good threshold and very low error-rate floor. We focus on low-rate Tanner codes, in part because the design of low-rate, low-floor LDPC codes is particularly difficult. Lastly, we perform a simple complexity analysis of our Tanner codes and examine the performance of lower-complexity, suboptimal Hamming-node decoders.
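One quantitative effect of doping is on the design rate: replacing an SPC constraint (one parity check) with a Hamming or RSC constraint adds redundancy per constraint node. A small hedged helper (the function and its interface are ours, not the paper's) computes the design rate of a Tanner graph whose constraint nodes are component codes:

```python
def tanner_code_rate(n_vars, check_nodes):
    """Design rate of a Tanner code (assuming independent constraints).

    n_vars:      number of variable (coded bit) nodes in the graph
    check_nodes: list of (n, k) parameters of the component code at each
                 constraint node; each imposes n - k parity checks.
    """
    redundancy = sum(n - k for (n, k) in check_nodes)
    return 1 - redundancy / n_vars
```

For instance, a graph whose constraint nodes are all (5, 4) SPCs keeps a high rate, while doping in a Hamming(7, 4) node (3 parity checks instead of 1) lowers it, which is why doping pairs naturally with the low-rate designs targeted above.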