Relaxed Half-Stochastic Belief Propagation
Low-density parity-check codes are attractive for high throughput
applications because of their low decoding complexity per bit, but also because
all the codeword bits can be decoded in parallel. However, achieving this in a
circuit implementation is complicated by the number of wires required to
exchange messages between processing nodes. Decoding algorithms that exchange
binary messages are interesting for fully-parallel implementations because they
can reduce the number and the length of the wires, and increase logic density.
This paper introduces the Relaxed Half-Stochastic (RHS) decoding algorithm, a
binary message belief propagation (BP) algorithm that achieves a coding gain
comparable to the best known BP algorithms that use real-valued messages. We
derive the RHS algorithm by starting from the well-known Sum-Product algorithm,
and then derive a low-complexity version suitable for circuit implementation.
We present extensive simulation results on two standardized codes having
different rates and constructions, including low bit error rate results. These
simulations show that RHS can be an advantageous replacement for the existing
state-of-the-art decoding algorithms when targeting fully-parallel
implementations.
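The Sum-Product starting point mentioned above has a standard check-node update in the log-likelihood-ratio (LLR) domain, often called the tanh rule. As an illustration of the real-valued messages that binary-message algorithms such as RHS replace with single-bit messages (this sketch is not taken from the paper itself):

```python
import math

def check_node_update(llrs, exclude):
    """Sum-Product check-node update (tanh rule) in the LLR domain.

    Returns the extrinsic message to edge `exclude`, computed from the
    incoming LLRs on all other edges of the check node.
    """
    t = 1.0
    for i, L in enumerate(llrs):
        if i != exclude:
            t *= math.tanh(L / 2.0)
    # Clamp to avoid atanh(+/-1) when inputs are extremely confident.
    t = max(min(t, 1.0 - 1e-12), -1.0 + 1e-12)
    return 2.0 * math.atanh(t)
```

The output is the LLR of the parity of the other incoming bits, which is exactly the quantity a check node must report under belief propagation.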
Single-Scan Min-Sum Algorithms for Fast Decoding of LDPC Codes
Many implementations for decoding LDPC codes are based on the
(normalized/offset) min-sum algorithm due to its satisfactory performance and
simplicity in operations. Usually, each iteration of the min-sum algorithm
contains two scans, the horizontal scan and the vertical scan. This paper
presents a single-scan version of the min-sum algorithm to speed up the
decoding process. It can also reduce memory usage or wiring because it only
needs the addressing from check nodes to variable nodes while the original
min-sum algorithm requires that addressing plus the addressing from variable
nodes to check nodes. To cut down memory usage or wiring further, another
version of the single-scan min-sum algorithm is presented where the messages of
the algorithm are represented by single bit values instead of using fixed point
ones. The software implementation has shown that the single-scan min-sum
algorithm is more than twice as fast as the original min-sum algorithm.
Comment: Accepted by IEEE Information Theory Workshop, Chengdu, China, 2006
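The horizontal (check-node) scan of min-sum can itself be done in a single pass per check node by tracking the two smallest input magnitudes and the overall sign. The sketch below illustrates that standard trick, not the paper's exact single-scan formulation:

```python
def min_sum_check_update(llrs, alpha=1.0):
    """All outgoing min-sum check-node messages in one pass over the inputs.

    Tracks the smallest and second-smallest |LLR| plus the product of signs,
    so each extrinsic output needs no second loop over the inputs.
    `alpha` is an optional scaling factor (normalized min-sum).
    """
    min1 = min2 = float("inf")  # smallest and second-smallest magnitude
    argmin1 = -1
    sign_prod = 1.0
    for i, L in enumerate(llrs):
        m = abs(L)
        if m < min1:
            min2, min1, argmin1 = min1, m, i
        elif m < min2:
            min2 = m
        if L < 0:
            sign_prod = -sign_prod
    out = []
    for i, L in enumerate(llrs):
        mag = min2 if i == argmin1 else min1  # exclude edge i's own input
        s = sign_prod * (-1.0 if L < 0 else 1.0)  # sign of the other inputs
        out.append(alpha * s * mag)
    return out
```

Because the two minima and the sign product summarize everything the outgoing messages need, the per-check work is linear in the node degree.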
Fast Min-Sum Algorithms for Decoding of LDPC over GF(q)
In this paper, we present a fast min-sum algorithm for decoding LDPC codes
over GF(q). Our algorithm differs from the one presented by David Declercq
and Marc Fossorier at ISIT 2005 only in the way the horizontal scan of the
min-sum algorithm is sped up. Declercq and Fossorier's algorithm speeds up
the computation by reducing the number of configurations, while ours uses
dynamic programming instead. Compared with the configuration-reduction
algorithm, the dynamic-programming one is simpler at the design stage because
it has fewer parameters to tune. Furthermore, it does not suffer the
performance degradation caused by configuration reduction, because it searches
the whole configuration space efficiently through dynamic programming. Both
algorithms have the same level of complexity and use simple operations that
are suitable for hardware implementations.
Comment: Accepted by IEEE Information Theory Workshop, Chengdu, China, 2006
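The dynamic-programming idea for the horizontal scan can be sketched as a forward-backward pass of min-plus convolutions. The version below is an illustration under simplifying assumptions (symbols are added modulo a prime q rather than in a general GF(q), and messages are cost tables indexed by symbol value); it replaces the brute-force search over roughly q^(d-1) configurations per check node with about O(d*q^2) work:

```python
INF = float("inf")

def min_conv(f, g, q):
    """Min-plus convolution over Z_q: h[s] = min over a of f[a] + g[(s-a) mod q]."""
    return [min(f[a] + g[(s - a) % q] for a in range(q)) for s in range(q)]

def check_node_minsum_gfq(msgs, q):
    """Extrinsic min-sum check-node messages for the constraint sum_i x_i = 0 (mod q).

    msgs[i][a] is the incoming cost of edge i taking symbol a.
    Returns out[j][a] = minimum total cost of the other edges given x_j = a,
    computed with a forward/backward dynamic program instead of enumerating
    all configurations.
    """
    d = len(msgs)
    delta = [0.0] + [INF] * (q - 1)  # cost table of the empty sum (value 0)
    fwd = [delta]                    # fwd[k][s]: best cost of edges < k summing to s
    for k in range(d):
        fwd.append(min_conv(fwd[-1], msgs[k], q))
    bwd = [delta]                    # bwd[k][s]: best cost of the last k edges summing to s
    for k in range(d):
        bwd.append(min_conv(bwd[-1], msgs[d - 1 - k], q))
    out = []
    for j in range(d):
        rest = min_conv(fwd[j], bwd[d - 1 - j], q)   # combine all edges != j
        out.append([rest[(-a) % q] for a in range(q)])  # others must sum to -x_j
    return out
```

The forward and backward tables are shared by all d outgoing messages, which is where the speedup over per-edge enumeration comes from.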
Decomposition Methods for Large Scale LP Decoding
When binary linear error-correcting codes are used over symmetric channels, a
relaxed version of the maximum likelihood decoding problem can be stated as a
linear program (LP). This LP decoder can be used to decode error-correcting
codes at bit-error-rates comparable to state-of-the-art belief propagation (BP)
decoders, but with significantly stronger theoretical guarantees. However, LP
decoding when implemented with standard LP solvers does not easily scale to the
block lengths of modern error correcting codes. In this paper we draw on
decomposition methods from optimization theory, specifically the Alternating
Direction Method of Multipliers (ADMM), to develop efficient distributed
algorithms for LP decoding.
The key enabling technical result is a "two-slice" characterization of the
geometry of the parity polytope, which is the convex hull of all codewords of a
single parity check code. This new characterization simplifies the
representation of points in the polytope. Using this simplification, we develop
an efficient algorithm for Euclidean norm projection onto the parity polytope.
This projection is required by ADMM and allows us to use LP decoding, with all
its theoretical guarantees, to decode large-scale error correcting codes
efficiently.
We present numerical results for LDPC codes of length greater than 1000. The
waterfall region of LP decoding is seen to begin at a slightly higher
signal-to-noise ratio than for sum-product BP; however, unlike BP, LP decoding
exhibits no observed error floor. Our implementation of LP decoding using
ADMM executes as fast as our baseline sum-product BP decoder, is fully
parallelizable, and can be seen to implement a type of message passing with a
particularly simple schedule.
Comment: 35 pages, 11 figures. An early version of this work appeared at the
49th Annual Allerton Conference, September 2011. This version to appear in
IEEE Transactions on Information Theory.
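The projection subroutine at the heart of the ADMM decoder can be sketched as follows. This is an illustrative implementation of one known approach (clip to the unit cube, test the single most-violating odd-set facet, and, if it is violated, project via a coordinate reflection followed by a probability-simplex projection); it is a sketch under those assumptions, not the paper's exact two-slice routine:

```python
import numpy as np

def project_simplex(w):
    """Euclidean projection of w onto the probability simplex {y >= 0, sum(y) = 1}."""
    u = np.sort(w)[::-1]
    css = np.cumsum(u)
    ks = np.arange(1, len(w) + 1)
    rho = np.nonzero(u + (1.0 - css) / ks > 0)[0][-1]
    theta = (1.0 - css[rho]) / (rho + 1.0)
    return np.maximum(w + theta, 0.0)

def project_parity_polytope(v):
    """Euclidean projection onto the parity polytope: the convex hull of all
    even-weight binary vectors of length len(v), i.e. the codewords of a
    single parity check code."""
    v = np.asarray(v, dtype=float)
    z = np.clip(v, 0.0, 1.0)
    f = v > 0.5
    if f.sum() % 2 == 0:
        # Make |f| odd by toggling the coordinate closest to 1/2.
        i = np.argmin(np.abs(v - 0.5))
        f[i] = not f[i]
    # Candidate facet: sum_{i in f} z_i - sum_{i not in f} z_i <= |f| - 1.
    if z[f].sum() - z[~f].sum() <= f.sum() - 1:
        return z  # the cube projection is already inside the polytope
    # Otherwise project onto that facet: reflect the coordinates in f,
    # project onto the simplex, and reflect back (reflection preserves distance).
    w = np.where(f, 1.0 - v, v)
    y = project_simplex(w)
    return np.where(f, 1.0 - y, y)
```

For example, for length 3 the parity polytope is the hull of {000, 110, 101, 011}, and projecting (0.9, 0.9, 0.9) lands on the face where the coordinates sum to 2.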