Distance Properties of Short LDPC Codes and their Impact on the BP, ML and Near-ML Decoding Performance
Parameters of LDPC codes, such as the minimum distance, stopping distance,
stopping redundancy, and girth of the Tanner graph, and their influence on the
frame error rate performance of BP, ML, and near-ML decoding over the BEC and
the AWGN channel, are studied. Both random and structured LDPC codes are
considered. In particular, the BP decoding is applied to the code parity-check
matrices with an increasing number of redundant rows, and the convergence of
the performance to that of the ML decoding is analyzed. A comparison of the
simulated BP, ML, and near-ML performance with the improved theoretical bounds
on the error probability based on the exact weight spectrum coefficients and
the exact stopping size spectrum coefficients is presented. It is observed that
decoding performance very close to that of ML decoding can be achieved
with a relatively small number of redundant rows for some codes, over both the
BEC and the AWGN channel.
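The effect of redundant parity-check rows on BP decoding over the BEC can be sketched in a few lines of Python. This is a toy illustration, not one of the codes studied in the paper: it uses the [8,4,4] extended Hamming code with redundant rows chosen by hand, and the fact that over the BEC, BP reduces to peeling, whose success depends only on the erasure positions.

```python
# Parity-check matrix of the [8,4,4] extended Hamming code (4 independent rows).
H4 = [
    [1, 0, 1, 0, 1, 0, 1, 0],
    [0, 1, 1, 0, 0, 1, 1, 0],
    [0, 0, 0, 1, 1, 1, 1, 0],
    [1, 1, 1, 1, 1, 1, 1, 1],
]

def add_rows(a, b):
    """Sum of two rows over GF(2)."""
    return [x ^ y for x, y in zip(a, b)]

# Redundant rows: sums of each of the first three rows with the all-ones row.
H7 = H4 + [add_rows(H4[i], H4[3]) for i in range(3)]

def peel(H, erased):
    """BP over the BEC = peeling: repeatedly pick a check with exactly one
    erased participant and solve for that symbol.  Returns True iff all
    erasures are resolved."""
    erased = set(erased)
    progress = True
    while erased and progress:
        progress = False
        for row in H:
            touched = [j for j in erased if row[j] == 1]
            if len(touched) == 1:      # degree-one check: resolve this symbol
                erased.discard(touched[0])
                progress = True
    return not erased

# {0, 1, 2} is a stopping set of H4, so peeling stalls ...
print(peel(H4, {0, 1, 2}))   # False
# ... but a redundant row in H7 breaks it, and peeling completes.
print(peel(H7, {0, 1, 2}))   # True
```

The same three erasures defeat the minimal 4-row matrix but not the redundant 7-row one, which is the convergence-toward-ML effect the abstract describes.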
Permutation Decoding and the Stopping Redundancy Hierarchy of Cyclic and Extended Cyclic Codes
We introduce the notion of the stopping redundancy hierarchy of a linear
block code as a measure of the trade-off between performance and complexity of
iterative decoding for the binary erasure channel. We derive lower and upper
bounds for the stopping redundancy hierarchy via Lovasz's Local Lemma and
Bonferroni-type inequalities, and specialize them for codes with cyclic
parity-check matrices. Based on the observed properties of parity-check
matrices with good stopping redundancy characteristics, we develop a novel
decoding technique, termed automorphism group decoding, that combines iterative
message passing and permutation decoding. We also present bounds on the
smallest number of permutations of an automorphism group decoder needed to
correct any set of erasures up to a prescribed size. Simulation results
demonstrate that for a large number of algebraic codes, the performance of the
new decoding method is close to that of maximum likelihood decoding. Comment: 40 pages, 6 figures, 10 tables, submitted to IEEE Transactions on
Information Theory
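The combination of message passing and permutation decoding can be sketched on the [7,4] cyclic Hamming code, a toy stand-in for the algebraic codes treated in the paper: cyclic shifts are automorphisms of a cyclic code, so when peeling stalls on a stopping set, the decoder can shift the erasure pattern, peel again, and map the result back. The particular matrix and erasure set below are illustrative choices, not taken from the paper.

```python
# Peeling over the BEC combined with permutation decoding, sketched on the
# [7,4] cyclic Hamming code; cyclic shifts belong to its automorphism group.
H = [
    [1, 0, 1, 1, 1, 0, 0],   # a dual codeword ...
    [0, 1, 0, 1, 1, 1, 0],   # ... and two of its cyclic shifts
    [0, 0, 1, 0, 1, 1, 1],
]
n = 7

def peel(H, erased):
    """Return True iff peeling resolves every erased position."""
    erased, progress = set(erased), True
    while erased and progress:
        progress = False
        for row in H:
            touched = [j for j in erased if row[j] == 1]
            if len(touched) == 1:
                erased.discard(touched[0])
                progress = True
    return not erased

def automorphism_group_decode(H, erased):
    """Try peeling under each cyclic shift of the coordinates; shifting the
    erasure pattern off a stopping set can let plain peeling succeed."""
    for s in range(n):
        if peel(H, {(j + s) % n for j in erased}):
            return s          # a shift that made the pattern resolvable
    return None

E = {2, 3, 4}                 # a stopping set of H, but not a codeword support
print(peel(H, E))             # False: plain peeling stalls
print(automorphism_group_decode(H, E))  # 2: shifting by two coordinates works
```

Since the erasure set is not the support of a codeword, ML decoding would succeed here; the permutations recover that performance without ever solving a linear system.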
Refined Upper Bounds on Stopping Redundancy of Binary Linear Codes
The $\ell$-th stopping redundancy $\rho_\ell(\mathcal{C})$ of the binary
$[n,k,d]$ code $\mathcal{C}$, $1 \le \ell \le d$, is defined as the minimum number of rows in
the parity-check matrix of $\mathcal{C}$, such that the smallest stopping set is
of size at least $\ell$. The stopping redundancy $\rho(\mathcal{C})$ is defined as
$\rho_d(\mathcal{C})$. In this work, we improve on the probabilistic analysis of
stopping redundancy, proposed by Han, Siegel and Vardy, which yields the best
bounds known today. In our approach, we judiciously select the first few rows
in the parity-check matrix, and then continue with the probabilistic method. By
using similar techniques, we improve also on the best known bounds on
$\rho_\ell(\mathcal{C})$, for $\ell < d$. Our approach is compared to the
existing methods by numerical computations. Comment: 5 pages; ITW 201
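For short codes the quantities in this abstract can be computed by brute force. The sketch below, on the [8,4,4] extended Hamming code (an illustrative choice, not a code from the paper), computes the stopping distance of a parity-check matrix and shows redundant rows raising it to the minimum distance $d$.

```python
from itertools import combinations

def stopping_distance(H):
    """Size of the smallest stopping set: the smallest nonempty S such that
    no row of H meets S in exactly one position."""
    n = len(H[0])
    for size in range(1, n + 1):
        for S in combinations(range(n), size):
            if all(sum(row[j] for j in S) != 1 for row in H):
                return size
    return None

# [8,4,4] extended Hamming code: 4 independent rows ...
H4 = [
    [1, 0, 1, 0, 1, 0, 1, 0],
    [0, 1, 1, 0, 0, 1, 1, 0],
    [0, 0, 0, 1, 1, 1, 1, 0],
    [1, 1, 1, 1, 1, 1, 1, 1],
]
# ... plus three redundant rows (sums of each of the first three rows with
# the all-ones row, i.e. their complements).
H7 = H4 + [[x ^ y for x, y in zip(H4[i], H4[3])] for i in range(3)]

print(stopping_distance(H4))  # 3: a stopping set smaller than d = 4 exists
print(stopping_distance(H7))  # 4: the redundant rows push it up to d
```

In the abstract's notation, the second computation certifies $\rho_4$ of this code is at most 7; the refined bounds in the paper are exactly about how few such rows suffice in general.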
The Trapping Redundancy of Linear Block Codes
We generalize the notion of the stopping redundancy in order to study the
smallest size of a trapping set in Tanner graphs of linear block codes. In this
context, we introduce the notion of the trapping redundancy of a code, which
quantifies the relationship between the number of redundant rows in any
parity-check matrix of a given code and the size of its smallest trapping set.
Trapping sets with certain parameter sizes are known to cause error-floors in
the performance curves of iterative belief propagation decoders, and it is
therefore important to identify decoding matrices that avoid such sets. Bounds
on the trapping redundancy are obtained using probabilistic and constructive
methods, and the analysis covers both general and elementary trapping sets.
Numerical values for these bounds are computed for the [2640,1320] Margulis
code and the class of projective geometry codes, and compared with some new
code-specific trapping set size estimates. Comment: 12 pages, 4 tables, 1 figure, accepted for publication in IEEE
Transactions on Information Theory
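The objects being counted here can be made concrete for a short code. In the usual convention, an $(a, b)$ trapping set is a set of $a$ variable nodes whose induced subgraph contains exactly $b$ odd-degree check nodes. The brute-force sketch below uses the [8,4,4] extended Hamming code as a toy example; the paper itself treats the [2640,1320] Margulis code and projective geometry codes, which are far beyond exhaustive search.

```python
from itertools import combinations

# [8,4,4] extended Hamming code parity-check matrix (toy example only).
H = [
    [1, 0, 1, 0, 1, 0, 1, 0],
    [0, 1, 1, 0, 0, 1, 1, 0],
    [0, 0, 0, 1, 1, 1, 1, 0],
    [1, 1, 1, 1, 1, 1, 1, 1],
]

def num_odd_checks(H, S):
    """b-parameter of the (a, b) trapping set S: number of check nodes with
    odd degree in the subgraph induced by the variable nodes in S."""
    return sum(1 for row in H if sum(row[j] for j in S) % 2 == 1)

def smallest_trapping_sets(H, b_max):
    """Smallest a such that some a-subset of variable nodes induces at most
    b_max odd-degree checks, plus all such sets.  Brute force, feasible
    only for short codes."""
    n = len(H[0])
    for a in range(1, n + 1):
        hits = [S for S in combinations(range(n), a)
                if num_odd_checks(H, S) <= b_max]
        if hits:
            return a, hits
    return None

print(num_odd_checks(H, (0, 1, 2)))   # 1: {0,1,2} is a (3, 1) trapping set
a, sets = smallest_trapping_sets(H, 0)
print(a)                              # 4: sets with b = 0 are codeword supports
```

The bounds in the paper replace this enumeration with probabilistic and constructive arguments relating the number of redundant rows to the smallest surviving trapping set.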
Feedback Communication Systems with Limitations on Incremental Redundancy
This paper explores feedback systems using incremental redundancy (IR) with
noiseless transmitter confirmation (NTC). For IR-NTC systems based on {\em
finite-length} codes (with blocklength $N$) and decoding attempts only at {\em
certain specified decoding times}, this paper presents the asymptotic expansion
achieved by random coding, provides rate-compatible sphere-packing (RCSP)
performance approximations, and presents simulation results of tail-biting
convolutional codes.
The information-theoretic analysis shows that values of $N$ relatively close
to the expected latency yield the same random-coding achievability expansion as
with $N = \infty$. However, the penalty introduced in the expansion by limiting
decoding times is linear in the interval between decoding times. For binary
symmetric channels, the RCSP approximation provides an efficiently-computed
approximation of performance that shows excellent agreement with a family of
rate-compatible, tail-biting convolutional codes in the short-latency regime.
For the additive white Gaussian noise channel, bounded-distance decoding
simplifies the computation of the marginal RCSP approximation and produces
results similar to those of an analysis based on maximum-likelihood decoding
for latencies greater than 200. The efficiency of the marginal RCSP approximation facilitates
optimization of the lengths of incremental transmissions when the number of
incremental transmissions is constrained to be small or the length of the
incremental transmissions is constrained to be uniform after the first
transmission. Finally, an RCSP-based decoding error trajectory is introduced
that provides target error rates for the design of rate-compatible code
families for use in feedback communication systems. Comment: 23 pages, 15 figures
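The latency penalty from restricting decoding times can be seen in a toy Monte-Carlo model. All parameters below ($k$, erasure rate, attempt interval) are illustrative, not from the paper: the sketch assumes an idealized incremental-redundancy scheme over a BEC in which decoding succeeds as soon as $k$ unerased symbols have arrived, while the decoder may only attempt decoding at multiples of an interval $I$.

```python
import random

def first_decodable_time(erased, k):
    """Earliest time t at which k unerased symbols have been received."""
    got = 0
    for t, e in enumerate(erased, start=1):
        if not e:
            got += 1
            if got == k:
                return t
    raise ValueError("stream too short")

def latency(t_star, interval):
    """First allowed decoding time >= t_star when attempts happen only at
    multiples of `interval` (round t_star up to the attempt grid)."""
    return -(-t_star // interval) * interval   # ceiling division

rng = random.Random(0)
k, eps, trials = 32, 0.5, 2000
pen = 0.0
for _ in range(trials):
    # one shared erasure realization per trial (True = symbol erased)
    stream = [rng.random() < eps for _ in range(400)]
    t_star = first_decodable_time(stream, k)
    pen += latency(t_star, 8) - latency(t_star, 1)
pen /= trials
# The average extra latency is linear in the attempt interval: between
# 0 and I - 1 symbols per trial, roughly (I - 1) / 2 on average.
print(0 <= pen <= 7)   # True
```

Pairing both attempt intervals against the same erasure stream makes the per-trial penalty exactly the rounding-up to the coarser grid, which mirrors the abstract's claim that the expansion penalty is linear in the interval between decoding times.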