Deriving Good LDPC Convolutional Codes from LDPC Block Codes
Low-density parity-check (LDPC) convolutional codes are capable of achieving
excellent performance with low encoding and decoding complexity. In this paper
we discuss several graph-cover-based methods for deriving families of
time-invariant and time-varying LDPC convolutional codes from LDPC block codes
and show how earlier proposed LDPC convolutional code constructions can be
presented within this framework. Some of the constructed convolutional codes
significantly outperform the underlying LDPC block codes. We investigate some
possible reasons for this "convolutional gain," and we also discuss the ---
mostly moderate --- decoder cost increase that is incurred by going from LDPC
block to LDPC convolutional codes.

Comment: Submitted to IEEE Transactions on Information Theory, April 2010;
revised August 2010, revised November 2010 (essentially final version).
(Besides many small changes, the first and second revised versions contain
corrected entries in Tables I and II.)
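The graph-cover derivation can be illustrated with a minimal "cut-and-unwrap" sketch (the function name and the toy matrix below are illustrative stand-ins, not taken from the paper): the block parity-check matrix is cut along a diagonal, and the part above the cut is delayed by one time period, yielding a time-invariant convolutional parity-check matrix H(D) = H0 + D*H1.

```python
import numpy as np

def unwrap(H):
    """Cut-and-unwrap sketch: split a block parity-check matrix H
    along its main diagonal into H0 (on/below the cut) and H1
    (strictly above the cut), so that H = H0 + H1.  The derived
    time-invariant LDPC convolutional code then has polynomial
    parity-check matrix H(D) = H0 + D*H1.  (Where exactly the cut
    is placed varies between constructions.)"""
    H0 = np.tril(H)        # part kept at time t
    H1 = np.triu(H, k=1)   # part delayed by one period
    return H0, H1

# toy parity-check matrix -- illustrative only, not from the paper
H = np.array([[1, 1, 0, 1],
              [0, 1, 1, 1],
              [1, 0, 1, 0]], dtype=int)
H0, H1 = unwrap(H)
assert np.array_equal(H0 + H1, H)   # the cut discards no edges
```

The sliding structure of H(D) is what lets the decoder operate on a window of the code sequence rather than the whole block.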
Woven Graph Codes: Asymptotic Performances and Examples
Constructions of woven graph codes based on constituent block and
convolutional codes are studied. It is shown that within the random ensemble of
such codes based on s-partite, s-uniform hypergraphs, where s depends
only on the code rate, there exist codes satisfying the Varshamov-Gilbert (VG)
and the Costello lower bound on the minimum distance and the free distance,
respectively. A connection between regular bipartite graphs and tailbiting
codes is shown. Some examples of woven graph codes are presented. Among them an
example of a rate woven graph code with
based on Heawood's bipartite graph and containing constituent rate
convolutional codes with overall constraint lengths is
given. An encoding procedure for woven graph codes with complexity proportional
to the number of constituent codes and their overall constraint length
is presented.

Comment: Submitted to IEEE Trans. Inform. Theory
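The Varshamov-Gilbert bound invoked above is easy to evaluate numerically. A minimal sketch (function names are my own): for binary codes, the asymptotic VG relative distance delta(R) is the root of h2(delta) = 1 - R on (0, 1/2], found here by bisection.

```python
import math

def h2(p):
    """Binary entropy function h2(p) in bits."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def gv_distance(rate, tol=1e-12):
    """Asymptotic Varshamov-Gilbert relative distance delta(R):
    the unique root of h2(delta) = 1 - R on (0, 1/2], located by
    bisection (h2 is increasing on this interval)."""
    lo, hi = 0.0, 0.5
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if h2(mid) < 1 - rate:
            lo = mid
        else:
            hi = mid
    return lo

delta = gv_distance(0.5)   # for rate 1/2, delta is roughly 0.11
```

The abstract's claim is that random woven graph codes in the ensemble meet this bound on minimum distance (and the Costello bound on free distance) despite their structured, graph-based construction.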
Distance Properties of Short LDPC Codes and their Impact on the BP, ML and Near-ML Decoding Performance
Parameters of LDPC codes, such as minimum distance, stopping distance,
stopping redundancy, girth of the Tanner graph, and their influence on the
frame error rate performance of the BP, ML and near-ML decoding over a BEC and
an AWGN channel are studied. Both random and structured LDPC codes are
considered. In particular, the BP decoding is applied to the code parity-check
matrices with an increasing number of redundant rows, and the convergence of
the performance to that of the ML decoding is analyzed. A comparison of the
simulated BP, ML, and near-ML performance with the improved theoretical bounds
on the error probability based on the exact weight spectrum coefficients and
the exact stopping size spectrum coefficients is presented. It is observed that
decoding performance very close to the ML decoding performance can be achieved
with a relatively small number of redundant rows for some codes, for both the
BEC and the AWGN channels.
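On the BEC, BP decoding reduces to the classical peeling procedure: any check node connected to exactly one erased bit resolves that bit, and decoding stalls exactly on stopping sets, which is why adding redundant parity-check rows can push BP performance toward ML. A minimal sketch (the [7,4] Hamming matrix is a toy stand-in, not one of the codes studied in the paper):

```python
import numpy as np

def peel(H, y):
    """Peeling (BP-on-BEC) decoder sketch.  y is the received word
    with erasures marked as -1.  Repeatedly find a check with exactly
    one erased bit and solve it; stop when no such check remains
    (i.e. when the remaining erasures form a stopping set)."""
    y = y.copy()
    progress = True
    while progress:
        progress = False
        for row in H:
            idx = np.flatnonzero(row)
            erased = [i for i in idx if y[i] == -1]
            if len(erased) == 1:                 # check solves one bit
                known = [i for i in idx if y[i] != -1]
                y[erased[0]] = np.bitwise_xor.reduce(y[known]) if known else 0
                progress = True
    return y

# [7,4] Hamming parity-check matrix (toy example)
H = np.array([[1, 1, 0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])
y = np.array([1, -1, -1, 0, 0, 0, 0])   # codeword 1110000 with bits 1,2 erased
decoded = peel(H, y)
```

Appending a redundant row (a GF(2) combination of existing rows) can give the peeler an extra degree-one check on configurations where the original matrix stalls, which is the mechanism behind the redundant-row experiments described above.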
Mathematical Programming Decoding of Binary Linear Codes: Theory and Algorithms
Mathematical programming is a branch of applied mathematics and has recently
been used to derive new decoding approaches, challenging established but often
heuristic algorithms based on iterative message passing. Concepts from
mathematical programming used in the context of decoding include linear,
integer, and nonlinear programming, network flows, notions of duality as well
as matroid and polyhedral theory. This survey article reviews and categorizes
decoding methods based on mathematical programming approaches for binary linear
codes over binary-input memoryless symmetric channels.

Comment: 17 pages, submitted to the IEEE Transactions on Information Theory.
Published July 201
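A minimal instance of the linear-programming approach is Feldman-style LP decoding: relax ML decoding to minimizing a linear cost over the fundamental polytope, described by box constraints plus one "forbidden set" inequality per check and per odd-size subset of its neighborhood. The sketch below (toy [7,4] Hamming code, hypothetical LLR values) enumerates all these inequalities explicitly, which is feasible only for short codes; avoiding this blow-up is one concern of the surveyed algorithms.

```python
from itertools import combinations
import numpy as np
from scipy.optimize import linprog

def lp_decode(H, llr):
    """LP decoding sketch: minimize <llr, x> over the fundamental
    polytope.  For each check j with neighborhood N(j) and each
    odd-size subset S of N(j), impose
        sum_{i in S} x_i - sum_{i in N(j)\\S} x_i <= |S| - 1,
    together with 0 <= x_i <= 1."""
    m, n = H.shape
    A, b = [], []
    for j in range(m):
        nbr = list(np.flatnonzero(H[j]))
        for k in range(1, len(nbr) + 1, 2):          # odd-size subsets
            for S in combinations(nbr, k):
                row = np.zeros(n)
                row[list(S)] = 1.0
                row[[i for i in nbr if i not in S]] = -1.0
                A.append(row)
                b.append(len(S) - 1)
    res = linprog(llr, A_ub=np.array(A), b_ub=np.array(b),
                  bounds=[(0, 1)] * n, method="highs")
    return res.x

H = np.array([[1, 1, 0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])
# hypothetical LLRs (positive favors bit 0); all-zero codeword sent
llr = np.array([2.0, 1.5, -0.5, 1.0, 2.0, 1.0, 1.5])
x = lp_decode(H, llr)   # here the LP optimum is the all-zero codeword
```

When the LP optimum is integral, as here, it is provably the ML codeword; fractional optima (pseudocodewords) are where LP and message-passing decoders lose to ML.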
Construction of Rate (n-1)/n Non-Binary LDPC Convolutional Codes via Difference Triangle Sets
This paper provides a construction of non-binary LDPC convolutional codes,
which generalizes the work of Robinson and Bernstein. The sets of integers
forming an -difference triangle set are used as supports of the
columns of rate (n-1)/n convolutional codes. If the field size is large
enough, the Tanner graph associated to the sliding parity-check matrix of the
code is free from 4- and 6-cycles not satisfying the full rank condition.
This is important for improving the performance of a code and avoiding the
presence of low-weight codewords and absorbing sets. The parameters of the
convolutional code are shown to be determined by the parameters of the
underlying difference triangle set. In particular, the free distance of the
code is related to and the degree of the code is linked to the "scope" of
the difference triangle set. Hence, the problem of finding families of
difference triangle sets with minimum scope is equivalent to finding
convolutional codes with small degree.

Comment: The paper was submitted to ISIT 202
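The defining property used throughout this abstract is easy to state concretely: a collection of integer sets is a difference triangle set when all within-set positive differences are distinct across the whole collection, and its scope is the largest element appearing. A minimal sketch (the example instance below is my own, verified by the check itself):

```python
from itertools import combinations

def is_dts(sets):
    """True iff all within-set positive differences are distinct
    across the whole collection (the DTS defining property)."""
    diffs = [b - a for s in sets
             for a, b in combinations(sorted(s), 2)]
    return len(diffs) == len(set(diffs))

def scope(sets):
    """Scope of the set system: its largest element."""
    return max(max(s) for s in sets)

T = [{0, 1, 3}, {0, 5, 11}]       # differences {1,2,3} and {5,6,11}
assert is_dts(T) and scope(T) == 11
assert not is_dts([{0, 1, 3}, {0, 2, 7}])   # difference 2 repeats
```

As the abstract indicates, using such sets as column supports in the sliding parity-check matrix is what rules out the problematic short cycles, so minimizing the scope of the DTS directly minimizes the degree of the resulting convolutional code.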