Graph-Based Decoding in the Presence of ISI
We propose an approximation of maximum-likelihood detection in ISI channels
based on linear programming or message passing. We convert the detection
problem into a binary decoding problem, which can be easily combined with LDPC
decoding. We show that, for a certain class of channels and in the absence of
coding, the proposed technique yields the exact ML solution with complexity
that is not exponential in the channel memory length, while for some other
channels, the method has a non-diminishing probability of failure as the SNR
increases. Some analysis is provided for the error events of the proposed
technique under linear programming.
Comment: 25 pages, 8 figures. Submitted to IEEE Transactions on Information Theory.
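For intuition, the detection problem the abstract describes can be stated as follows: the transmitted bits pass through a channel with memory, and exact ML detection is a search over all bit sequences, exponential in the sequence length. A minimal pure-Python sketch (toy 2-tap channel and brute-force search, both illustrative assumptions, not the paper's method) makes the cost that the proposed LP/message-passing approximation avoids concrete:

```python
import itertools

def isi_output(bits, h):
    """Noiseless ISI channel output: y[n] = sum_k h[k] * x[n-k], x in {+1,-1}."""
    x = [1 if b else -1 for b in bits]
    return [sum(h[k] * x[n - k] for k in range(len(h)) if n - k >= 0)
            for n in range(len(x))]

def ml_detect(y, h, n):
    """Brute-force ML detection: the bit sequence whose noiseless output is
    closest to y in Euclidean distance. Exponential in n; the paper's point
    is to approximate this via LP or message passing instead."""
    best, best_d = None, float("inf")
    for bits in itertools.product([0, 1], repeat=n):
        z = isi_output(bits, h)
        d = sum((yi - zi) ** 2 for yi, zi in zip(y, z))
        if d < best_d:
            best, best_d = bits, d
    return best

h = [1.0, 0.5]              # toy 2-tap ISI channel (assumption)
tx = (1, 0, 1, 1, 0)
rx = isi_output(tx, h)      # noiseless receive for illustration
assert ml_detect(rx, h, len(tx)) == tx
```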
Efficient iterative decoding algorithms for turbo and low-density parity-check (LDPC) codes
EThOS - Electronic Theses Online Service, United Kingdom
Evaluation of a flexible SPA-based LDPC decoder using hardware-friendly approximation methods
Due to the computation-intensive nature of LDPC decoders, much research is directed at efficient implementation of their original algorithm, the sum-product algorithm (SPA). Since the "Min-Sum" approximation is essentially an overestimation of SPA, this thesis investigates more accurate, yet area-efficient, approximations of SPA in order to select an optimal one. In a general comparison of the main approximation methods (e.g. LUT, PWL, CRI), PWL showed the best area-efficiency. After studying different mathematical formulations of SPA, a Soft-XOR based formulation with a forward-backward scheme was chosen for hardware implementation. Its core function (Soft-XOR) was implemented with the CRI approximation, which achieved the highest efficiency compared to the other approximations. Using this core function, a flexible, pipelined, Soft-XOR based CNU (the check-node computational unit of LDPC decoders) with a forward-backward architecture was developed in 18nm CMOS. The implemented CNU's area and speed can easily be changed at instantiation. An SPA decoder based on the developed CNU was estimated to have an area of 1.6M equivalent gates and a throughput of 10Gb/s, at a frequency of 1.25GHz and for 10 iterations. The decoder targets the IEEE 802.11n Wi-Fi standard with a flooding schedule. The BER/SNR loss, compared to floating-point SPA, is 0.3dB for 10 iterations and less than 0.1dB for 20 iterations.
"You have to get lost before you can be found", a quote by Jeff Rasley, suits Low Density Parity Check (LDPC) codes very well. They were first invented by Gallager in 1962, but were largely lost during the evolution of telecommunication networks because of their high complexity and demanding computations, which the technology of the time could not handle. During the late 1990s, however, the success of turbo codes prompted the rediscovery of LDPC codes.
Recently, LDPC codes have attracted tremendous research interest in the scientific community, as today's technology is advanced enough to make LDPC decoders fully commercial. In a wireless network, information is not simply sent, but first encoded: in a sense, all the transmitted bits are tied together according to some mathematical rules. Therefore, if noise corrupts parts of the information in transit, the LDPC decoder at the receiver side can automatically detect and recover those parts from the others. Here, our main focus is on the decoder. For an actual hardware implementation of the decoder, some level of approximation of the ideal algorithm is always necessary, which reduces accuracy depending on the approximation. Ericsson is developing the next-generation wireless network for 5G and already possesses the "Min-Sum" approximation of the LDPC decoder. As current requirements demand more accurate decoders, the goal of this thesis is to evaluate a more accurate but more costly version of the LDPC decoder, as well as its flexibility. Thus, several candidates were selected and evaluated based on their complexity, cost, and error-correction accuracy. After several trade-offs, an approximation method was chosen and its cost was derived. With this data, a trade-off between accuracy and cost can be made, depending on the application.
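The gap the thesis's approximations address can be shown in a few lines: the exact SPA check-node operation on two LLRs is the "Soft-XOR" (boxplus), and the Min-Sum rule replaces it with a sign product and the minimum magnitude, which always overestimates the exact magnitude. A minimal pure-Python comparison (illustrative values only):

```python
import math

def boxplus(a, b):
    """Exact SPA pairwise check-node update ("Soft-XOR"):
    2 * atanh(tanh(a/2) * tanh(b/2))."""
    return 2.0 * math.atanh(math.tanh(a / 2.0) * math.tanh(b / 2.0))

def min_sum(a, b):
    """Min-Sum approximation: sign product times the smaller magnitude.
    Its magnitude always overestimates |boxplus(a, b)|, which is the
    inaccuracy the PWL/CRI approximations in the thesis try to close."""
    s = (1 if a >= 0 else -1) * (1 if b >= 0 else -1)
    return s * min(abs(a), abs(b))

for a, b in [(1.2, 3.4), (-0.8, 2.0), (0.5, -0.5)]:
    exact, approx = boxplus(a, b), min_sum(a, b)
    assert abs(approx) >= abs(exact)    # Min-Sum is an overestimation
```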
Decomposition Methods for Large Scale LP Decoding
When binary linear error-correcting codes are used over symmetric channels, a
relaxed version of the maximum likelihood decoding problem can be stated as a
linear program (LP). This LP decoder can be used to decode error-correcting
codes at bit-error-rates comparable to state-of-the-art belief propagation (BP)
decoders, but with significantly stronger theoretical guarantees. However, LP
decoding when implemented with standard LP solvers does not easily scale to the
block lengths of modern error correcting codes. In this paper we draw on
decomposition methods from optimization theory, specifically the Alternating
Directions Method of Multipliers (ADMM), to develop efficient distributed
algorithms for LP decoding.
The key enabling technical result is a "two-slice" characterization of the
geometry of the parity polytope, which is the convex hull of all codewords of a
single parity check code. This new characterization simplifies the
representation of points in the polytope. Using this simplification, we develop
an efficient algorithm for Euclidean norm projection onto the parity polytope.
This projection is required by ADMM and allows us to use LP decoding, with all
its theoretical guarantees, to decode large-scale error correcting codes
efficiently.
We present numerical results for LDPC codes of lengths more than 1000. The
waterfall region of LP decoding is seen to begin at a slightly higher
signal-to-noise ratio than for sum-product BP; however, unlike BP, no error
floor is observed for LP decoding. Our implementation of
LP decoding using ADMM executes as fast as our baseline sum-product BP decoder,
is fully parallelizable, and can be seen to implement a type of message-passing
with a particularly simple schedule.
Comment: 35 pages, 11 figures. An early version of this work appeared at the 49th Annual Allerton Conference, September 2011. This version to appear in IEEE Transactions on Information Theory.
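The problem that LP decoding relaxes can be written down concretely: exact ML decoding over a binary-input symmetric channel minimizes the correlation of the LLR vector with a codeword, a search over an exponentially large set; LP decoding replaces that set with the intersection of per-check parity polytopes. A brute-force sketch of the exact problem on a tiny hypothetical parity-check matrix (not from the paper):

```python
import itertools

# Tiny parity-check matrix (hypothetical example, not from the paper):
H = [(0, 1, 2), (2, 3, 4), (0, 3, 5)]   # each row lists the bits in one check
N = 6

def codewords(H, n):
    """All length-n binary vectors satisfying every parity check."""
    for x in itertools.product([0, 1], repeat=n):
        if all(sum(x[i] for i in row) % 2 == 0 for row in H):
            yield x

def ml_decode(llr):
    """Exact ML decoding: minimize sum_i llr[i]*x[i] over codewords.
    LP decoding keeps this linear objective but relaxes the exponentially
    large codeword set to a polytope, yielding a linear program; the
    paper's ADMM solver makes that LP scale to long codes."""
    return min(codewords(H, N),
               key=lambda x: sum(l * xi for l, xi in zip(llr, x)))

llr = [2.0, -1.5, 0.7, 3.0, -0.2, 1.1]   # channel LLRs (illustrative)
hat = ml_decode(llr)
assert all(sum(hat[i] for i in row) % 2 == 0 for row in H)
```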
Low Complexity Rate Compatible Puncturing Patterns Design for LDPC Codes
In contemporary digital communications design, two major challenges must be addressed: adaptability and flexibility. A system should be capable of flexible and efficient use of all available spectrum and should adapt to provide efficient support for a diverse set of service characteristics. These needs imply the necessity of limit-achieving and flexible channel coding techniques to improve system reliability. Low Density Parity Check (LDPC) codes fit such requirements well, since they are capacity-achieving. Moreover, through puncturing, which allows the coding rate to be adapted to different channel conditions with a single encoder/decoder pair, adaptability and flexibility can be obtained at low computational cost.
In this paper, the design of rate-compatible puncturing patterns for LDPC codes is addressed. We use a previously defined formal analysis of a class of punctured LDPC codes through their equivalent parity-check matrices. We introduce a new design criterion for the puncturing patterns using a simplified analysis of the belief-propagation decoding algorithm, i.e., considering a Gaussian approximation for message densities under density evolution, and a simple algorithmic method, recently defined by the authors, to estimate the threshold for regular and irregular LDPC codes on memoryless binary-input continuous-output Additive White Gaussian Noise (AWGN) channels.
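The rate-compatibility mechanism itself is simple arithmetic: puncturing withholds p of the n mother-code bits from transmission, raising the effective rate from k/n to k/(n - p), while one encoder/decoder pair serves every rate. A sketch with hypothetical code dimensions (the specific n, k, and puncturing counts below are illustrative, not from the paper):

```python
def punctured_rate(n, k, p):
    """Effective code rate after puncturing p of the n mother-code bits.
    The same encoder/decoder pair serves every rate; punctured positions
    are treated as erasures (LLR = 0) at the decoder input."""
    assert 0 <= p <= n - k, "puncturing beyond n - k would push the rate past 1"
    return k / (n - p)

# A rate-1/2 mother code and a rate-compatible family from nested
# puncturing patterns (hypothetical sizes):
n, k = 1296, 648
rates = [round(punctured_rate(n, k, p), 3) for p in (0, 216, 432, 518)]
# The rate climbs from 0.5 toward 1 as more bits are punctured:
assert rates[0] == 0.5 and rates == sorted(rates)
```

Rate-compatible design then amounts to choosing *which* positions to puncture at each rate, which is where the paper's density-evolution criterion enters.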
Low Density Graph Codes And Novel Optimization Strategies For Information Transfer Over Impaired Medium
Effective methods for information transfer over an imperfect medium are of great interest. This thesis addresses the following four topics involving low density graph codes and novel optimization strategies.
Firstly, we study the performance of a promising coding technique: low density generator matrix (LDGM) codes. LDGM codes provide satisfying performance while maintaining low encoding and decoding complexity. In the thesis, the performance of LDGM codes is derived for both majority-rule-based and sum-product iterative decoding algorithms. The ultimate performance of the coding scheme is revealed through distance spectrum analysis. We derive the distance spectra for both LDGM codes and concatenated LDGM codes. The results show that serially concatenated LDGM codes deliver extremely low error floors. This work provides valuable information for selecting the parameters of LDGM codes.
Secondly, we investigate network coding on relay-assisted wireless multiple access (WMA) networks. Network coding is an effective way to increase the robustness and traffic capacity of networks. Following the framework of network coding, we introduce new network codes for WMA networks. The codes are constructed from sparse graphs and can exploit the diversity available in both the time and space domains. The data integrity from relays can be compromised when the relays are deployed in open areas. For this, we propose a simple but robust security mechanism to verify data integrity.
Thirdly, we study the problem of bandwidth allocation for the transmission of multiple sources of data over a single communication medium. We aim to maximize overall user satisfaction and formulate an optimization problem. Using either the logarithmic or the exponential form of the satisfaction function, we derive closed-form optimal solutions and show that the optimal bandwidth allocation for each type of data is piecewise linear with respect to the total available bandwidth.
Fourthly, we consider an optimization strategy for recovery of the target spectrum in filter-array-based spectrometers. We model the spectrophotometric system as a communication system in which the information content of the target spectrum passes through distortive filters. By exploiting the non-negative nature of spectral content, a non-negative least-squares optimality criterion is found to be particularly effective. The concept is verified in a hardware implementation.
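For the bandwidth-allocation topic, the logarithmic case admits a short closed-form derivation consistent with the abstract's piecewise-linear claim: maximizing sum_i w_i*log(b_i) subject to sum_i b_i = B gives, by stationarity of the Lagrangian (w_i/b_i = lambda for all i), the proportional rule b_i = w_i*B/sum(w), linear in B. A sketch under the assumption of no per-source rate caps (caps would introduce the additional linear "pieces"):

```python
def allocate(weights, B):
    """Maximize sum_i w_i * log(b_i) subject to sum_i b_i = B, b_i > 0.
    Stationarity w_i / b_i = lambda for all i yields the closed form
    b_i = w_i * B / sum(w): each share is linear in the total bandwidth B."""
    total = sum(weights)
    return [w * B / total for w in weights]

b = allocate([3.0, 1.0, 1.0], 10.0)     # illustrative weights and budget
assert abs(sum(b) - 10.0) < 1e-9        # the budget is fully used
assert b[0] == 6.0                      # heavier-weighted source gets more
```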
Throughput-based Design for Polar Coded-Modulation
Typically, forward error correction (FEC) codes are designed based on the
minimization of the error rate for a given code rate. However, for applications
that incorporate hybrid automatic repeat request (HARQ) protocol and adaptive
modulation and coding, the throughput is a more important performance metric
than the error rate. Polar codes, a new class of FEC codes with simple rate
matching, can be optimized efficiently for maximization of the throughput. In
this paper, we aim to design HARQ schemes using multilevel polar
coded-modulation (MLPCM). Thus, we first develop a method to determine a
set-partitioning based bit-to-symbol mapping for high order QAM constellations.
We simplify the LLR estimation of set-partitioned QAM constellations for a
multistage decoder, and we introduce a set of algorithms to design
throughput-maximizing MLPCM for the successive cancellation decoding (SCD).
These codes are specifically useful for non-combining (NC) and Chase-combining
(CC) HARQ protocols. Furthermore, since optimized codes for SCD are not optimal
for SC list decoders (SCLD), we propose a rate matching algorithm to find the
best rate for SCLD while using the polar codes optimized for SCD. The resulting
codes provide throughput close to the capacity with low decoding complexity
when used with NC or CC HARQ
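Why throughput, rather than error rate, is the right objective under HARQ can be seen from a standard renewal-reward model (a textbook sketch, not the paper's exact objective): with code rate R, independent per-attempt frame error rate p (the non-combining case), and at most M transmissions per packet, throughput trades the raw rate against the expected number of attempts:

```python
def nc_harq_throughput(R, p, M):
    """Expected throughput (information bits per channel use) of
    non-combining HARQ: code rate R, per-attempt frame error rate p
    (independent across attempts under NC), at most M transmissions.
    Standard renewal-reward model, illustrative only."""
    p_succ = 1.0 - p ** M                  # packet eventually delivered
    e_tx = (sum(t * p ** (t - 1) * (1 - p) for t in range(1, M + 1))
            + M * p ** M)                  # expected transmissions used
    return R * p_succ / e_tx

# A higher raw rate is not always better once retransmissions are counted:
lo = nc_harq_throughput(R=0.5, p=0.01, M=4)   # robust code, few retries
hi = nc_harq_throughput(R=0.9, p=0.60, M=4)   # aggressive code, many retries
assert lo > hi
```

Optimizing such a metric over the polar-code rate, rather than minimizing p at a fixed R, is the design principle the abstract describes.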
On a Different Perspective and Approach to Implement Adaptive Normalized BP-based Decoding for LDPC Codes
In this paper, we propose an improved version of the min-sum algorithm for low density parity check (LDPC) code decoding, which we call the “adaptive normalized BP-based” algorithm. Our decoder provides a compromise between the belief propagation and min-sum algorithms by adding an exponent offset to each variable node’s intrinsic information in the check-node update equation. The extrinsic information from the min-sum decoder is then adjusted by applying a negative-power-of-two scale factor, which can easily be implemented by right-shifting the min-sum extrinsic information. The difference between our approach and other adaptive normalized min-sum decoders is that we select the normalization scale factor with a clear analytical approach based on underlying principles. Simulation results show that the proposed decoder outperforms the min-sum decoder and performs very close to the BP decoder, but with lower complexity.
Keywords: modified min-sum, belief propagation, sum product, min-sum, LDPC codes, iterative decoding
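The hardware appeal of a power-of-two scale factor is that normalization costs only a shift. A fixed-point sketch of a normalized min-sum check-node update, with the scale 2**-shift applied as an arithmetic right shift (shift=1, i.e. a factor of 0.5, is an assumed illustrative choice; the paper selects its factor analytically):

```python
def nms_check_update(msgs, shift=1):
    """Normalized min-sum check-node update on integer (fixed-point) LLRs.
    For each edge: sign product of the other incoming messages times the
    minimum of their magnitudes, then scaled by 2**-shift via a right
    shift. The shift value here is illustrative, not the paper's choice."""
    out = []
    for i in range(len(msgs)):
        others = [m for j, m in enumerate(msgs) if j != i]
        sign = 1
        for m in others:
            if m < 0:
                sign = -sign
        mag = min(abs(m) for m in others)   # min-sum magnitude
        out.append(sign * (mag >> shift))   # normalization as a shift
    return out

# Degree-3 check with integer LLRs: magnitudes are halved by the shift.
assert nms_check_update([6, -10, 4]) == [-2, 2, -3]
```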