
    Multilevel Decoders Surpassing Belief Propagation on the Binary Symmetric Channel

    In this paper, we propose a new class of quantized message-passing decoders for LDPC codes over the BSC. The messages take values (or levels) from a finite set. The update rules do not mimic belief propagation but are instead derived using knowledge of trapping sets. We show that the update rules can be derived to correct certain error patterns that are uncorrectable by algorithms such as BP and min-sum. In some cases, even with a small message set, these decoders can guarantee correction of a larger number of errors than BP and min-sum. We provide particularly good 3-bit decoders for 3-left-regular LDPC codes. They significantly outperform the BP and min-sum decoders, but more importantly, they achieve this at only a fraction of the complexity of the BP and min-sum decoders. Comment: 5 pages, in Proc. of 2010 IEEE International Symposium on Information Theory (ISIT).
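
    The paper's trapping-set-derived update rules are not spelled out in this abstract, but a minimal sketch of hard-decision (1-bit) message passing in the Gallager-B style illustrates the general shape of such quantized decoders. The majority-vote rules, function name, and parameters below are illustrative assumptions, not the multilevel rules proposed in the paper.

    import numpy as np

    def hard_decision_mp_decode(H, y, max_iter=50):
        """Illustrative 1-bit message-passing decoder (not the paper's rules).
        H: (m, n) binary parity-check matrix; y: length-n hard-decision word."""
        m, n = H.shape
        checks = [np.flatnonzero(H[i]) for i in range(m)]          # variables in check i
        var_checks = [np.flatnonzero(H[:, j]) for j in range(n)]   # checks on variable j
        # variable-to-check messages, initialised to the channel values
        v2c = {(j, i): int(y[j]) for j in range(n) for i in var_checks[j]}
        x_hat = np.array(y, dtype=int)
        for _ in range(max_iter):
            # check-to-variable messages: XOR of the other incoming messages
            c2v = {}
            for i in range(m):
                for j in checks[i]:
                    c2v[(i, j)] = sum(v2c[(k, i)] for k in checks[i] if k != j) % 2
            # variable-to-check messages and tentative decisions: majority votes
            for j in range(n):
                incoming = [c2v[(i, j)] for i in var_checks[j]]
                x_hat[j] = int(sum(incoming) + y[j] > (len(incoming) + 1) / 2.0)
                for i in var_checks[j]:
                    votes = [c2v[(k, j)] for k in var_checks[j] if k != i] + [int(y[j])]
                    v2c[(j, i)] = int(sum(votes) > len(votes) / 2.0)
            if not np.any(H.dot(x_hat) % 2):    # all parity checks satisfied
                break
        return x_hat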

    Decoding LDPC Codes with Probabilistic Local Maximum Likelihood Bit Flipping

    Communication channels are inherently noisy, making error correction coding a major topic of research for modern communication systems. Error correction coding adds redundancy to information transmitted over communication channels to enable detection and recovery of erroneous information. Low-density parity-check (LDPC) codes are a class of error-correcting codes that have been effective in maintaining the reliability of information transmitted over communication channels, and multiple algorithms have been developed to exploit the LDPC coding scheme and improve recovery of erroneous information. This work develops a matrix construction that stores the error probability statistics of a communication channel. Combined with the error-correcting capability of LDPC codes, this construction enables the development of the Probabilistic Local Maximum Likelihood Bit Flipping (PLMLBF) algorithm, which is the focus of this research work.
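
    As a rough illustration of the bit-flipping family this work builds on, the sketch below flips each bit with a probability that grows with its number of unsatisfied checks. The flip-probability table, function name, and parameters are hypothetical; the PLMLBF algorithm's channel-statistics matrix is not reproduced here.

    import numpy as np

    def probabilistic_bit_flip(H, y, flip_prob=(0.0, 0.1, 0.5, 0.9),
                               max_iter=100, seed=0):
        """Generic probabilistic bit-flipping sketch (not PLMLBF itself).
        H: (m, n) parity-check matrix; y: received hard decisions;
        flip_prob[u]: flip probability for a bit with u unsatisfied checks."""
        rng = np.random.default_rng(seed)
        x = np.array(y, dtype=int)
        for _ in range(max_iter):
            syndrome = H.dot(x) % 2
            if not syndrome.any():                 # valid codeword found
                break
            unsat = H.T.dot(syndrome)              # unsatisfied-check count per bit
            p = np.array([flip_prob[min(u, len(flip_prob) - 1)] for u in unsat])
            flips = rng.random(len(x)) < p         # flip each bit independently
            x = (x + flips) % 2
        return x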

    Error-Floors of the 802.3an LDPC Code for Noise Assisted Decoding

    In digital communication, information is sent as bits, which are corrupted by the noise present in the wired or wireless medium known as the channel. Low-Density Parity-Check (LDPC) codes are a family of error correction codes used in communication systems to detect and correct erroneous data at the receiver: data is encoded with an error-correcting code at the transmitter and decoded at the receiver. The Noisy Gradient Descent Bit-Flip (NGDBF) decoding algorithm is a recent algorithm with excellent decoding performance and relatively low implementation requirements. This dissertation aims to characterize the performance of the NGDBF algorithm. A simple improvement over NGDBF, called Re-decoded NGDBF (R-NGDBF), is proposed to enhance the performance of the NGDBF decoding algorithm, and a general method to estimate the decoding parameters of NGDBF is presented. The estimated parameters are then verified in a hardware implementation of the decoder to validate the accuracy of the estimation technique.
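
    A minimal sketch of the core NGDBF flip rule, as commonly described in the literature, is given below: each bit's inversion metric combines its channel correlation, the bipolar syndromes of its checks, and a Gaussian perturbation, and bits whose metric falls below a threshold are flipped. The parameter values (syndrome weight, threshold, noise scale) are placeholders, not the tuned values from this dissertation.

    import numpy as np

    def ngdbf_decode(H, y, w=0.75, theta=-0.6, noise_scale=0.7,
                     max_iter=300, seed=0):
        """H: (m, n) parity-check matrix; y: real-valued (bipolar) channel samples."""
        rng = np.random.default_rng(seed)
        x = np.where(np.asarray(y) >= 0, 1, -1)            # initial hard decisions
        for _ in range(max_iter):
            # bipolar syndrome of each check: +1 satisfied, -1 unsatisfied
            s = np.array([np.prod(x[np.flatnonzero(H[i])]) for i in range(len(H))])
            if np.all(s == 1):                             # all checks satisfied
                break
            # per-bit inversion metric: channel term + weighted syndrome sum + noise
            syn_sum = np.array([s[np.flatnonzero(H[:, k])].sum() for k in range(len(x))])
            e = x * y + w * syn_sum + rng.normal(0.0, noise_scale, size=len(x))
            x = np.where(e < theta, -x, x)                 # flip low-metric bits
        return (x < 0).astype(int)                         # map back to {0, 1}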

    On performance analysis and implementation issues of iterative decoding for graph based codes

    Long random-like codes have the potential to achieve good performance because of their excellent distance spectra. However, such codes remained useless in practical applications for lack of decoders offering good performance at acceptable complexity. The invention of turbo codes marked a milestone in channel coding theory: they achieve near-Shannon-limit performance using an elegant iterative decoding algorithm. This success stimulated intensive research on long compound codes sharing the same decoding mechanism, among them low-density parity-check (LDPC) codes and product codes, which deliver excellent performance. In this work, iterative decoding algorithms for LDPC codes and product codes are studied in the context of belief propagation. A large part of this work concerns LDPC codes. First, the concept of iterative decoding capacity is established in the context of density evolution, and two simulation-based methods approximating decoding capacity are applied to LDPC codes and their effectiveness evaluated. A suboptimal iterative decoder, the Max-Log-MAP algorithm, is also investigated; it has been studied intensively for turbo codes but appears to have been neglected for LDPC codes. The specific density evolution procedure for Max-Log-MAP decoding is developed, and the performance of LDPC codes with infinite block length is well predicted by it. Two implementation issues in iterative decoding of LDPC codes are studied: the design of a quantized decoder, and the influence of a mismatched signal-to-noise ratio (SNR) level on decoding performance. The theoretical capacities of the quantized LDPC decoder, under the Log-MAP and Max-Log-MAP algorithms, are derived through discretized density evolution. The results indicate that the key point in designing a quantized decoder is to pick a proper dynamic range: the quantization loss in terms of bit error rate (BER) performance can be kept remarkably low provided the dynamic range is chosen wisely. The decoding capacity under a fixed SNR offset is also obtained, and the robustness of LDPC codes of practical length is evaluated through simulations; the amount of SNR offset that can be tolerated is found to depend on the code length. The remaining part of this dissertation deals with iterative decoding of product codes. Two issues are investigated: improving BER performance by mitigating cycle effects, and a parallel decoding structure, which is conceptually preferable to serial decoding and yields lower decoding latency.
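
    Two of the implementation points above lend themselves to a short sketch, assuming a generic LDPC decoder rather than the exact one studied here: uniform LLR quantization with an explicitly chosen dynamic (clipping) range, and the min-sum check-node rule corresponding to the Max-Log-MAP approximation. Bit widths and the clipping range below are placeholder values, not the ones derived in the dissertation.

    import numpy as np

    def quantize_llr(llr, bits=6, dynamic_range=8.0):
        """Clip LLRs to +/- dynamic_range and map them to signed integer levels."""
        step = dynamic_range / (2 ** (bits - 1) - 1)
        clipped = np.clip(llr, -dynamic_range, dynamic_range)
        return np.round(clipped / step).astype(int)        # integer message levels

    def minsum_check_update(q_msgs):
        """Min-sum (Max-Log-MAP) check-node rule on quantized incoming messages:
        each outgoing magnitude is the minimum of the other magnitudes, and the
        sign is the product of the other signs (check degree >= 2 assumed)."""
        q_msgs = np.asarray(q_msgs)
        out = np.empty_like(q_msgs)
        for j in range(len(q_msgs)):
            others = np.delete(q_msgs, j)
            sign = np.prod(np.sign(others)) or 1            # treat zeros as positive
            out[j] = sign * np.min(np.abs(others))
        return out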

    Improve the Usability of Polar Codes: Code Construction, Performance Enhancement and Configurable Hardware

    Error-correcting codes (ECC) have been widely used for forward error correction (FEC) in modern communication systems to dramatically reduce the signal-to-noise ratio (SNR) needed to achieve a given bit error rate (BER). Newly invented polar codes have attracted much interest because of their capacity-achieving potential, efficient encoder and decoder implementations, and flexible architecture design space. This dissertation aims to improve the usability of polar codes by providing a practical code design method, new approaches to improve the performance of polar codes, and a configurable hardware design that adapts to various specifications. State-of-the-art polar codes are used to achieve extremely low error rates. In this work, high-performance FPGAs are used to prototype polar decoders and catch rare-case errors for error-correcting performance verification and error analysis. To discover the polarization characteristics and error patterns of polar codes, an FPGA emulation platform for belief-propagation (BP) decoding is built by a semi-automated construction flow. The FPGA-based emulation achieves significant speedup in large-scale experiments involving trillions of data frames, and the platform is a key enabler of this work. The frozen-set selection of polar codes, known as bit selection, is critical to their error-correcting performance. A simulation-based in-order bit selection method is developed that evaluates the error rate of each bit using Monte Carlo simulations; the frozen set is then selected based on the bit reliability ranking. The resulting code construction exhibits up to 1 dB of coding gain with respect to conventional bit selection. To further improve the coding gain of the BP decoder for low-error-rate applications, the decoding error mechanisms are studied and analyzed, and the errors are classified based on their distinct signatures. Error detection is enabled by low-cost CRC concatenation, and post-processing algorithms targeting each type of error are designed to mitigate the vast majority of the decoding errors. The post-processor incurs only a small implementation overhead, yet it improves the error-correcting performance by more than an order of magnitude. The regularity of the BP decoder structure offers many hardware architecture choices: silicon area, power consumption, throughput, and latency can be traded off to reach the optimal design points for practical use cases. A comprehensive design space exploration reveals several practical architectures at different design points, and the scalability of each architecture is evaluated based on the implementation candidates. For dynamic communication channels, such as wireless channels in the upcoming 5G applications, multiple codes of different lengths and code rates are needed to fit varying channel conditions. To minimize implementation cost, a universal decoder architecture is proposed to support multiple codes through hardware reuse. A 40nm length- and rate-configurable polar decoder ASIC is demonstrated to fit various communication environments and service requirements. PhD dissertation, Electrical Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies. https://deepblue.lib.umich.edu/bitstream/2027.42/140817/1/shuangsh_1.pd
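
    The simulation-based bit selection reduces, at its core, to ranking the synthetic bit channels by their empirically estimated error rates and freezing the least reliable ones. The sketch below shows only that ranking step; estimate_bit_error_counts is a hypothetical placeholder for the genie-aided Monte Carlo runs described above, not part of any real polar-code library.

    import numpy as np

    def select_frozen_set(bit_error_counts, n, k):
        """bit_error_counts[i]: errors observed on synthetic bit channel i over the
        Monte Carlo runs; n: code length; k: number of information bits."""
        order = np.argsort(bit_error_counts)        # most reliable channels first
        info_set = np.sort(order[:k])               # k most reliable bit channels
        frozen_set = np.sort(order[k:])             # the remaining n - k are frozen
        return info_set, frozen_set

    # Hypothetical usage, assuming per-bit error counts were collected beforehand:
    # counts = estimate_bit_error_counts(n=1024, snr_db=2.0, num_frames=10**6)
    # info_set, frozen_set = select_frozen_set(counts, n=1024, k=512)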

    On the Convergence Speed of Turbo Demodulation with Turbo Decoding

    Iterative processing is widely adopted nowadays in modern wireless receivers for advanced channel codes like turbo and LDPC codes. Extending this principle with an additional iterative feedback loop to the demapping function has proven to provide substantial error-performance gain. However, the adoption of iterative demodulation with turbo decoding is constrained by the additional implementation complexity it implies, which heavily impacts latency and power consumption. In this paper, we analyze the convergence speed of these two combined iterative processes in order to determine the exact number of iterations required at each level. Extrinsic information transfer (EXIT) charts are used for a thorough analysis at different modulation orders and code rates. An original iteration scheduling is proposed that saves two demapping iterations at a modest performance loss of less than 0.15 dB. Analyzing and normalizing the computational and memory-access complexity, which directly impact latency and power consumption, demonstrates the considerable gains of the proposed scheduling and the promise of the proposed analysis. Comment: Submitted to IEEE Transactions on Signal Processing on April 27, 201
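
    A hedged sketch of the kind of two-level iteration schedule discussed here is shown below: the schedule lists how many decoder iterations to run after each demapping pass, so demapping passes can be traded against decoder passes. The two stub functions stand in for a real soft demapper and turbo decoder, and the schedule values are examples, not those derived from the EXIT analysis in the paper.

    import numpy as np

    def soft_demap_stub(channel_obs, apriori_llr):
        # Placeholder: a real demapper would compute bit LLRs from the received
        # symbols and the decoder's a-priori feedback.
        return channel_obs + 0.1 * apriori_llr

    def turbo_iter_stub(llr):
        # Placeholder: a real turbo-decoder iteration would run two SISO passes.
        return 1.2 * llr

    def iterative_receiver(channel_obs, schedule=(2, 2, 1)):
        """schedule[t]: number of decoder iterations after demapping pass t."""
        channel_obs = np.asarray(channel_obs, dtype=float)
        extrinsic = np.zeros_like(channel_obs)
        for inner_iters in schedule:
            llr = soft_demap_stub(channel_obs, extrinsic)   # demapping pass
            for _ in range(inner_iters):
                llr = turbo_iter_stub(llr)                  # decoder iterations
            extrinsic = llr                                 # feedback to demapper
        return np.sign(extrinsic)                           # hard decisions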

    A Study on Low-Complexity Decoding Schemes for Low-Density Parity-Check Codes Using an Unreliable Path Search Technique

    Doctoral dissertation (Ph.D.), Department of Electrical and Computer Engineering, College of Engineering, Seoul National University Graduate School, August 2017. Advisor: Jong-Seon No. This dissertation contains the following contributions on low-complexity decoding schemes for LDPC codes: a two-stage decoding scheme for LDPC codes (a new stopping criterion and a new decoding scheme with unreliable path search), a parallel unreliable path search algorithm, and an analysis of the two-stage decoding scheme (validity and complexity). First, a new two-stage decoding scheme for low-density parity-check (LDPC) codes is proposed to lower the error floor. The proposed scheme consists of the conventional belief propagation (BP) decoding algorithm as the first-stage decoding and re-decodings with manipulated log-likelihood ratios (LLRs) of variable nodes as the second-stage decoding. In the first stage, an early stopping criterion is proposed for early detection of decoding failure, and a candidate set of variable nodes that may be partly included in small trapping sets is determined. In the second stage, the scores of the variable nodes in the candidate set are computed by the proposed unreliable path search algorithm, and the variable nodes are sorted in ascending order of score for the re-decoding trials. Each re-decoding trial runs the BP decoding algorithm with the manipulated LLR of one selected variable node from the candidate set at a time, using a second early stopping criterion. Secondly, a parallel unreliable path search algorithm is proposed for practical application of the unreliable path search. To reduce decoding delay and computational complexity, an efficient method for the search algorithm based on the parallel message-passing structure of LDPC decoding is proposed; it significantly reduces the additional complexity without extra hardware requirements. Finally, the validity and complexity analysis of the proposed unreliable path search algorithm is presented. The proposed algorithm finds the variable nodes in small trapping sets much faster than the previous random selection method, and it is verified that the additional complexity of the parallel unreliable path search algorithm is less than that of one iteration of the iterative decoder.
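
    A minimal sketch of the second-stage re-decoding loop, under stated assumptions, is given below: candidate variable nodes are tried in ascending order of their unreliable-path scores, and for each trial the selected node's channel LLR is saturated (here, towards the opposite hard decision, a common choice; the dissertation's exact manipulation rule is not reproduced) before BP is re-run. The bp_decode callable stands in for the actual BP decoder, and all names are ours.

    import numpy as np

    def second_stage_redecode(llr, candidates, scores, bp_decode, H,
                              sat_value=25.0, max_trials=None):
        """llr: channel LLRs; candidates/scores: variable nodes flagged in stage one
        and their unreliable-path scores; bp_decode(llr, H) -> (codeword, success)."""
        order = [v for _, v in sorted(zip(scores, candidates))]  # lowest score first
        trials = order if max_trials is None else order[:max_trials]
        for v in trials:
            trial_llr = np.array(llr, dtype=float)
            # Force the suspect variable node towards the opposite hard decision.
            trial_llr[v] = -sat_value if llr[v] >= 0 else sat_value
            codeword, success = bp_decode(trial_llr, H)
            if success:                        # early stop on a valid codeword
                return codeword, True
        return None, False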