A Simplified Min-Sum Decoding Algorithm for Non-Binary LDPC Codes
Non-binary low-density parity-check (LDPC) codes are robust to various channel impairments. However, with existing decoding algorithms, decoder implementations are expensive because of their excessive computational complexity and memory usage. Building on combinatorial optimization, we present an approximation method for the check-node processing. Simulation results demonstrate that our scheme incurs only a small performance loss over the additive white Gaussian noise channel and the independent Rayleigh fading channel. Furthermore, the proposed reduced-complexity realization provides significant hardware savings, so it yields a good performance-complexity tradeoff and can be efficiently implemented.
Comment: Partially presented at ICNC 2012, International Conference on Computing, Networking and Communications. Accepted by IEEE Transactions on Communications
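To make the complexity issue concrete, the following is a minimal sketch (not the paper's method) of the brute-force min-sum check-node update for a non-binary code, assuming GF(2^m) so that field addition reduces to XOR, and omitting the edge coefficients of the parity-check matrix. The exponential enumeration it performs is the cost that simplified check-node schemes try to avoid; all names are illustrative.

```python
import itertools
import numpy as np

def check_node_min_sum_gf(msgs_in, q):
    """Brute-force min-sum check-node update over GF(2^m).

    msgs_in: list of d arrays of shape (q,); msgs_in[i][a] is the cost
    (negative log-likelihood) that edge i carries symbol a. Returns the
    d outgoing message arrays. Field addition is XOR and edge
    coefficients are omitted, so the constraint is x_0 ^ ... ^ x_{d-1} = 0.
    The number of valid configurations grows as q**(d-1).
    """
    d = len(msgs_in)
    msgs_out = [np.full(q, np.inf) for _ in range(d)]
    # Enumerate every configuration of the d edge symbols.
    for conf in itertools.product(range(q), repeat=d):
        # Keep only configurations that satisfy the parity check.
        syndrome = 0
        for a in conf:
            syndrome ^= a
        if syndrome != 0:
            continue
        total = sum(msgs_in[i][conf[i]] for i in range(d))
        # The extrinsic message to edge i excludes its own contribution.
        for i in range(d):
            extrinsic = total - msgs_in[i][conf[i]]
            if extrinsic < msgs_out[i][conf[i]]:
                msgs_out[i][conf[i]] = extrinsic
    return msgs_out
```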
Complexity Comparison of Non-Binary LDPC Decoders
This paper presents a detailed complexity study of existing non-binary LDPC decoding algorithms in order to rigorously compare them from a hardware perspective. The Belief Propagation algorithm is considered first, along with its derivative versions in the frequency and logarithm domains. We then focus on the Extended Min-Sum algorithm and its recent simplified version. For each algorithm, the number of operations in an elementary step of the check and variable nodes is determined. Finally, we evaluate the benefit of applying the simplified Extended Min-Sum algorithm to a new family of non-binary LDPC codes designed in the framework of the DaVinci project.
Single-Scan Min-Sum Algorithms for Fast Decoding of LDPC Codes
Many implementations for decoding LDPC codes are based on the (normalized/offset) min-sum algorithm because of its satisfactory performance and simple operations. Usually, each iteration of the min-sum algorithm contains two scans: the horizontal scan and the vertical scan. This paper presents a single-scan version of the min-sum algorithm to speed up the decoding process. It also reduces memory usage or wiring, because it needs only the addressing from check nodes to variable nodes, whereas the original min-sum algorithm additionally requires the addressing from variable nodes to check nodes. To cut down memory usage or wiring further, another version of the single-scan min-sum algorithm is presented in which the messages are represented by single-bit values instead of fixed-point ones. A software implementation has shown that the single-scan min-sum algorithm is more than twice as fast as the original min-sum algorithm.
Comment: Accepted by IEEE Information Theory Workshop, Chengdu, China, 2006
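The sketch below illustrates the single-scan idea for a binary normalized min-sum decoder under stated assumptions: only the check-to-variable messages are stored, and each variable-to-check message is recovered on the fly from the running posterior LLRs, so one horizontal pass per iteration suffices. The names, the scaling factor 0.75, and the assumption that every check node has degree at least two are illustrative, not taken from the paper.

```python
import numpy as np

def single_scan_min_sum(H, llr_ch, max_iter=50):
    """Sketch of a single-scan normalized min-sum decoder for a binary
    LDPC code with 0/1 parity-check matrix H (numpy array).

    Only check-to-variable messages are stored; the variable-to-check
    message for edge (c, v) is recovered as the current posterior LLR of
    v minus the stored message from c, so each iteration needs a single
    pass over the check nodes.
    """
    m, n = H.shape
    edges = [(c, np.nonzero(H[c])[0]) for c in range(m)]
    c2v = {(c, v): 0.0 for c, vs in edges for v in vs}  # stored messages
    post = llr_ch.astype(float).copy()                  # posterior LLRs

    for _ in range(max_iter):
        for c, vs in edges:               # the single (horizontal) scan
            # Recover variable-to-check messages from the posterior.
            v2c = {v: post[v] - c2v[(c, v)] for v in vs}
            for v in vs:                  # assumes check degree >= 2
                others = [v2c[u] for u in vs if u != v]
                sign = np.prod(np.sign(others))
                mag = min(abs(x) for x in others)
                new = 0.75 * sign * mag   # 0.75: illustrative scaling
                post[v] += new - c2v[(c, v)]  # update posterior in place
                c2v[(c, v)] = new
        hard = (post < 0).astype(int)
        if not np.any((H @ hard) % 2):    # all checks satisfied: stop
            return hard
    return hard
```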
Deriving the Normalized Min-Sum Algorithm from Cooperative Optimization
The normalized min-sum algorithm can achieve near-optimal performance at decoding LDPC codes. However, understanding the mathematical principle underlying the algorithm remains a critical question. Traditionally, the normalized min-sum algorithm has been viewed as a good approximation to the sum-product algorithm, the best known algorithm for decoding LDPC codes and Turbo codes. This paper offers an alternative way to understand the normalized min-sum algorithm: it is derived directly from cooperative optimization, a newly discovered general method for global/combinatorial optimization. This approach provides another theoretical basis for the algorithm and offers new insights into its power and limitations. It also gives a general framework for designing new decoding algorithms.
Comment: Accepted by IEEE Information Theory Workshop, Chengdu, China, 2006
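A small numerical sketch of the traditional view mentioned above: the min-sum check-node rule overestimates the magnitude of the exact sum-product (tanh-rule) combination, and a normalization factor pulls it back toward the exact value. The inputs and the factor 0.75 are illustrative.

```python
import numpy as np

def box_plus(llrs):
    """Exact sum-product (tanh-rule) check-node combination of LLRs."""
    t = np.prod(np.tanh(np.asarray(llrs, dtype=float) / 2.0))
    t = np.clip(t, -0.999999, 0.999999)  # numerical safety
    return 2.0 * np.arctanh(t)

def min_sum(llrs, alpha=1.0):
    """Min-sum check-node combination, scaled by a factor alpha."""
    llrs = np.asarray(llrs, dtype=float)
    return alpha * np.prod(np.sign(llrs)) * np.min(np.abs(llrs))

llrs = [1.2, -0.8, 2.5]
print(box_plus(llrs))        # exact value: ~ -0.35
print(min_sum(llrs))         # min-sum overestimates the magnitude: -0.8
print(min_sum(llrs, 0.75))   # normalization shrinks it back: -0.6
```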
Fast Min-Sum Algorithms for Decoding of LDPC over GF(q)
In this paper, we present a fast min-sum algorithm for decoding LDPC codes over GF(q). Our algorithm differs from the one presented by David Declercq and Marc Fossorier at ISIT 2005 only in the way it speeds up the horizontal scan of the min-sum algorithm. Declercq and Fossorier's algorithm speeds up the computation by reducing the number of configurations, whereas ours uses dynamic programming instead. Compared with the configuration-reduction algorithm, the dynamic-programming one is simpler at the design stage because it has fewer parameters to tune. Furthermore, it does not suffer the performance degradation caused by configuration reduction, because it searches the whole configuration space efficiently through dynamic programming. Both algorithms have the same level of complexity and use simple operations that are suitable for hardware implementation.
Comment: Accepted by IEEE Information Theory Workshop, Chengdu, China, 2006
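A minimal sketch of dynamic programming applied to the horizontal scan, again assuming GF(2^m) (so field addition is XOR) and omitting edge coefficients; this shows the forward-backward idea, not necessarily the paper's exact formulation. Prefix and suffix minimizations replace the exhaustive search over configurations.

```python
import numpy as np

def check_node_min_sum_dp(msgs_in, q):
    """Forward-backward (dynamic programming) min-sum check-node update
    over GF(2^m). Cost drops from exponential in the check degree d to
    O(d * q**2) for the whole node, with no configuration pruning.
    """
    d = len(msgs_in)
    INF = np.inf
    # F[i][s]: min cost of edges 0..i-1 whose symbols XOR to s.
    # B[i][s]: min cost of edges i..d-1 whose symbols XOR to s.
    F = [np.full(q, INF) for _ in range(d + 1)]
    B = [np.full(q, INF) for _ in range(d + 1)]
    F[0][0] = 0.0
    B[d][0] = 0.0
    for i in range(d):                      # forward pass
        for s in range(q):
            for a in range(q):
                F[i + 1][s] = min(F[i + 1][s], F[i][s ^ a] + msgs_in[i][a])
    for i in range(d - 1, -1, -1):          # backward pass
        for s in range(q):
            for a in range(q):
                B[i][s] = min(B[i][s], msgs_in[i][a] + B[i + 1][s ^ a])
    # Combine: the extrinsic message to edge i at symbol a pairs every
    # prefix sum s with suffix sum s ^ a so the overall XOR is zero.
    msgs_out = []
    for i in range(d):
        out = np.full(q, INF)
        for a in range(q):
            for s in range(q):
                out[a] = min(out[a], F[i][s] + B[i + 1][s ^ a])
        msgs_out.append(out)
    return msgs_out
```

On small examples this returns the same messages as brute-force enumeration over all valid configurations, which is the sense in which a full search of the configuration space avoids pruning-induced degradation.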