
    A Fast Convergence Density Evolution Algorithm for Optimal Rate LDPC Codes in BEC

    We derive a new fast-converging Density Evolution (DE) algorithm for finding optimal-rate Low-Density Parity-Check (LDPC) codes used over the binary erasure channel (BEC). The fast convergence comes from a modified DE, a numerical method for analyzing the convergence behavior of iterative decoding of an LDPC code. We use the method of [16] to design an LDPC code with optimal rate for a given check-node degree distribution, erasure probability, and specified DE constraint. The convergence speed of the modified DE and the optimal rates it finds are compared with those obtained under the previous DE constraint. Comment: Draft of the final paper presented at the 7th International Symposium on Telecommunications (IST 2014).
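    Over the BEC, DE is especially simple: the message densities collapse to scalar erasure probabilities, and the update reduces to the one-dimensional recursion x ← ε·λ(1 − ρ(1 − x)). The sketch below iterates this standard recursion to test whether an ensemble decodes successfully at a given erasure probability; the regular (3,6) ensemble used here is illustrative and is not the paper's modified DE or its optimized distributions.

```python
# A minimal sketch of density evolution (DE) for an LDPC ensemble over the
# BEC; degree distributions and test values are illustrative.

def de_converges(lam, rho, eps, max_iter=1000, tol=1e-12):
    """Run the BEC DE recursion x <- eps * lambda(1 - rho(1 - x)).

    lam, rho: edge-perspective degree distributions as dicts {degree: fraction}.
    Returns True if the erasure probability converges to (almost) zero.
    """
    x = eps  # erasure probability of the initial variable-to-check messages
    for _ in range(max_iter):
        # check-node update: a check-to-variable edge stays erased unless
        # all other incoming edges are known
        y = 1.0 - sum(r * (1.0 - x) ** (d - 1) for d, r in rho.items())
        # variable-node update: an edge is erased iff the channel and all
        # other incoming edges are erased
        x_new = eps * sum(l * y ** (d - 1) for d, l in lam.items())
        if x_new < tol:
            return True
        x = x_new
    return False

# Example: the regular (3,6) ensemble, whose BEC threshold is about 0.4294.
lam = {3: 1.0}
rho = {6: 1.0}
print(de_converges(lam, rho, eps=0.42))  # True: below threshold
print(de_converges(lam, rho, eps=0.44))  # False: above threshold
```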

    Fourier Domain Decoding Algorithm of Non-Binary LDPC codes for Parallel Implementation

    For decoding non-binary low-density parity-check (LDPC) codes, logarithm-domain sum-product (Log-SP) algorithms were proposed to reduce the quantization effects of the SP algorithm used in conjunction with the FFT. Since the FFT is not applicable in the logarithm domain, the computations required at check nodes in Log-SP algorithms are intensive. What is worse, check nodes usually have higher degree than variable nodes. As a result, most of the decoding time is spent on check-node computations, which creates a bottleneck. In this paper, we propose a Log-SP algorithm in the Fourier domain. With this algorithm, the roles of variable nodes and check nodes are switched: the intensive computations are spread over the lower-degree variable nodes, which can be calculated efficiently in parallel. Furthermore, we develop a fast calculation method for the estimated bits and syndromes in the Fourier domain. Comment: To appear in IEICE Trans. Fundamentals, vol. E93-A, no. 11, November 2010.
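    Over GF(2^m) the Fourier transform of a length-2^m message vector is the Walsh-Hadamard transform, and the check-node convolution becomes a pointwise product in the transform domain. The sketch below shows this standard probability-domain mechanism, which the paper moves into the logarithm domain; the messages are illustrative, and the Log-SP formulation itself is not reproduced.

```python
# A minimal sketch of Fourier-domain check-node processing for non-binary
# LDPC codes over GF(2^m): convolution of incoming messages becomes a
# pointwise product after a Walsh-Hadamard transform (WHT).
import numpy as np

def wht(p):
    """Fast Walsh-Hadamard transform of a length-2^m vector."""
    p = p.copy()
    h = 1
    while h < len(p):
        for i in range(0, len(p), 2 * h):
            a, b = p[i:i + h].copy(), p[i + h:i + 2 * h].copy()
            p[i:i + h], p[i + h:i + 2 * h] = a + b, a - b
        h *= 2
    return p

def check_node_fourier(incoming):
    """Combine incoming probability messages at a check node."""
    q = len(incoming[0])
    out = np.ones(q)
    for msg in incoming:
        out *= wht(np.asarray(msg, dtype=float))
    # the inverse WHT equals the forward WHT up to a 1/q factor
    return wht(out) / q

# Example over GF(4): two messages entering a degree-3 check node.
m1 = np.array([0.7, 0.1, 0.1, 0.1])
m2 = np.array([0.4, 0.3, 0.2, 0.1])
print(check_node_fourier([m1, m2]))  # XOR-convolution of m1 and m2
```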

    Efficient implementation of linear programming decoding

    While linear programming (LP) decoding provides more flexibility for finite-length performance analysis than iterative message-passing (IMP) decoding, it is computationally more complex to implement in its original form, due to both the large size of the relaxed LP problem, and the inefficiency of using general-purpose LP solvers. This paper explores ideas for fast LP decoding of low-density parity-check (LDPC) codes. We first prove, by modifying the previously reported Adaptive LP decoding scheme to allow removal of unnecessary constraints, that LP decoding can be performed by solving a number of LP problems that contain at most one linear constraint derived from each of the parity-check constraints. By exploiting this property, we study a sparse interior-point implementation for solving this sequence of linear programs. Since the most complex part of each iteration of the interior-point algorithm is the solution of a (usually ill-conditioned) system of linear equations for finding the step direction, we propose a preconditioning algorithm to facilitate iterative solution of such systems. The proposed preconditioning algorithm is similar to the encoding procedure of LDPC codes, and we demonstrate its effectiveness via both analytical methods and computer simulation results.Comment: 44 pages, submitted to IEEE Transactions on Information Theory, Dec. 200
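    The core subroutine behind adaptive LP decoding is the search for a violated parity inequality: for a check with neighborhood N, the candidate cut takes V as the set of neighbors with x_i > 1/2 and, if |V| is even, toggles the neighbor whose value is closest to 1/2. The sketch below implements just this cut search under those standard rules; the surrounding LP solver, variable indexing, and numerical tolerance are assumptions.

```python
# A minimal sketch of the violated-constraint (cut) search used by adaptive
# LP decoding: given the current LP solution x and one parity check with
# neighborhood N, return the most violated parity inequality
#   sum_{i in V} x_i - sum_{i in N\V} x_i <= |V| - 1,  |V| odd,
# or None if this check contributes no violated constraint.

def most_violated_cut(x, N):
    V = [i for i in N if x[i] > 0.5]
    if len(V) % 2 == 0:
        # make |V| odd by toggling the member of N whose x_i is nearest 1/2
        pivot = min(N, key=lambda i: abs(x[i] - 0.5))
        if pivot in V:
            V.remove(pivot)
        else:
            V.append(pivot)
    lhs = sum(x[i] for i in V) - sum(x[i] for i in N if i not in V)
    if lhs > len(V) - 1 + 1e-9:  # violated beyond numerical tolerance
        return V                 # the constraint to add to the LP
    return None

# Example: a degree-4 check whose fractional solution violates one cut.
x = {0: 0.9, 1: 0.8, 2: 0.7, 3: 0.1}
print(most_violated_cut(x, [0, 1, 2, 3]))  # -> [0, 1, 2]
```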

    Optimizing the Bit-flipping Method for Decoding Low-density Parity-check Codes in Wireless Networks by Using the Artificial Spider Algorithm

    In this paper, the performance of Low-Density Parity-Check (LDPC) codes is improved and the complexity of hard-decision Bit-Flipping (BF) decoding is reduced by utilizing the Artificial Spider Algorithm (ASA). The ASA is used to solve the optimization problem of selecting the decoding thresholds. Two decoding thresholds are used to flip multiple bits in each iteration, which reduces the probability of the errors introduced each time bits are flipped, accelerates decoding convergence, and improves decoding performance. The resulting BF algorithm with a low-complexity optimizer requires only real-number operations before iteration and logical operations within each iteration. The ASA-based scheme outperforms the optimized decoding scheme that uses the Particle Swarm Optimization (PSO) algorithm. Simulation results show that the ASA-based algorithm for solving highly nonlinear unconstrained problems exhibits fast decoding convergence and excellent decoding performance, making it suitable for applications in broadband wireless networks.
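    As a rough illustration of the two-threshold idea, the sketch below flips every bit whose flipping metric (here, the number of unsatisfied checks it touches) reaches a high threshold T1, falling back to a lower threshold T2 when no bit qualifies. The metric, the threshold values, and the toy parity-check matrix are placeholders for the quantities the ASA would optimize, not the paper's.

```python
# A minimal sketch of a two-threshold multi-bit bit-flipping decoder.
import numpy as np

def bf_decode(H, y, T1, T2, max_iter=50):
    x = y.copy()
    for _ in range(max_iter):
        s = H @ x % 2                      # syndrome
        if not s.any():
            return x                       # valid codeword found
        # flipping metric: number of unsatisfied checks touching each bit
        metric = H.T @ s
        flip = metric >= T1
        if not flip.any():
            flip = metric >= T2            # fall back to the lower threshold
        x[flip] ^= 1                       # flip all selected bits at once
    return x

# Example with a toy parity-check matrix and a single bit error.
H = np.array([[1, 1, 0, 1, 0, 0],
              [0, 1, 1, 0, 1, 0],
              [1, 0, 1, 0, 0, 1]])
y = np.zeros(6, dtype=int)
y[1] ^= 1                                  # introduce one bit error
print(bf_decode(H, y, T1=2, T2=1))         # -> the all-zero codeword
```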

    Convergence Analysis of Iterative Threshold Decoding Process

    Error-correcting codes, in particular low-density parity-check (LDPC) codes, are present in all of today's telecom standards. The choice of a good code for a given network is essentially based on the decoding performance shown by bit error rate (BER) curves, an approach that requires significant simulation time, proportional to the code length. To overcome this problem, the EXIT chart was introduced as a fast technique to predict the performance of a particular class of codes called turbo codes. In this paper, we successfully apply EXIT charts to analyze the convergence behavior of iterative threshold decoding of one-step majority-logic decodable (OSMLD) codes. The iterative decoding process uses a soft-input soft-output threshold decoding algorithm as the component decoder. Simulation results for iterative decoding of simple and concatenated codes transmitted over a Gaussian channel show that the thresholds obtained are a good indicator of the BER curves.
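    A single point of an EXIT curve is measured from LLR samples: under the symmetry (consistency) condition and the all-zero codeword assumption, the mutual information between a bit and its LLR L is I = 1 − E[log2(1 + e^(−L))]. The sketch below estimates this quantity for the standard Gaussian a-priori model L ~ N(σ²/2, σ²); the model and the parameter sweep are the usual EXIT-chart conventions, not specifics of this paper.

```python
# A minimal sketch of measuring one EXIT-chart mutual-information value.
import numpy as np

def mutual_information(llrs):
    """Estimate I(X; L) from LLR samples conditioned on x = +1."""
    return 1.0 - np.mean(np.log2(1.0 + np.exp(-llrs)))

# A-priori LLRs with mutual information I_A are modeled as
# L_A ~ N(sigma^2 / 2, sigma^2); sweeping sigma sweeps I_A from 0 to 1.
rng = np.random.default_rng(0)
for sigma in (0.5, 1.0, 2.0, 4.0):
    llr = rng.normal(sigma**2 / 2, sigma, size=200_000)
    print(f"sigma={sigma}: I_A ~= {mutual_information(llr):.3f}")
```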

    New Algorithms for High-Throughput Decoding with Low-Density Parity-Check Codes using Fixed-Point SIMD Processors

    Most digital signal processors contain one or more functional units with a single-instruction, multiple-data (SIMD) architecture that supports saturating fixed-point arithmetic with two or more options for the arithmetic precision. The processors designed for the highest performance contain many such functional units connected through an on-chip network. The selection of the arithmetic precision provides a trade-off between the task-level throughput and the output quality of many signal-processing algorithms, and utilization of the interconnection network during execution introduces a latency that can also limit the algorithm's throughput. In this dissertation, we consider the turbo-decoding message-passing algorithm for iterative decoding of low-density parity-check codes and investigate its performance under parallel execution on a processor of interconnected functional units employing fast, low-precision fixed-point arithmetic. It is shown that the frequent occurrence of saturation when 8-bit signed arithmetic is used severely degrades the performance of the algorithm compared with decoding using higher-precision arithmetic. A technique of limiting the magnitude of certain intermediate variables of the algorithm, the extrinsic values, is proposed and shown to eliminate most occurrences of saturation, resulting in 8-bit decoding performance nearly equal to that achieved with higher-precision decoding. We show that the interconnection latency can have a significant detrimental effect on the throughput of the turbo-decoding message-passing algorithm, which is illustrated for a type of high-performance digital signal processor known as a stream processor. Two alternatives to the standard schedule of message-passing and parity-check operations are proposed for the algorithm. Both alternatives markedly reduce the interconnection latency, and both result in substantially greater throughput than the standard schedule with no increase in the probability of error.
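    The effect is easy to reproduce: 8-bit additions that wrap around corrupt results silently, saturating additions pin them to ±127, and clipping the extrinsic values in advance keeps sums away from the saturation region altogether. The sketch below contrasts the three behaviors; the clip limit of ±31 is an illustrative placeholder, not the magnitude limit chosen in the dissertation.

```python
# A minimal sketch contrasting wrap-around, saturating, and clipped 8-bit
# arithmetic for extrinsic values.
import numpy as np

def saturating_add_i8(a, b):
    """Add two int8 arrays with saturation, as a SIMD unit would."""
    wide = a.astype(np.int16) + b.astype(np.int16)
    return np.clip(wide, -128, 127).astype(np.int8)

a = np.array([100, -120, 30], dtype=np.int8)
b = np.array([50, -50, 40], dtype=np.int8)
print(a + b)                    # wrap-around: [-106, 86, 70] -- silently wrong
print(saturating_add_i8(a, b))  # saturating:  [127, -128, 70]

# Limiting extrinsic magnitudes keeps values far from the saturation
# region, so later sums of several extrinsics rarely saturate.
CLIP = 31  # illustrative placeholder for the proposed magnitude limit
extrinsic = np.array([90, -75, 12], dtype=np.int8)
print(np.clip(extrinsic, -CLIP, CLIP))  # [31, -31, 12]
```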

    Single-Scan Min-Sum Algorithms for Fast Decoding of LDPC Codes

    Many implementations of LDPC decoders are based on the (normalized/offset) min-sum algorithm due to its satisfactory performance and operational simplicity. Usually, each iteration of the min-sum algorithm contains two scans: the horizontal scan and the vertical scan. This paper presents a single-scan version of the min-sum algorithm to speed up the decoding process. It also reduces memory usage or wiring, because it needs only the addressing from check nodes to variable nodes, while the original min-sum algorithm requires that addressing plus the addressing from variable nodes to check nodes. To cut memory usage or wiring further, another version of the single-scan min-sum algorithm is presented in which the messages are represented by single-bit values instead of fixed-point ones. A software implementation has shown that the single-scan min-sum algorithm is more than twice as fast as the original min-sum algorithm. Comment: Accepted by IEEE Information Theory Workshop, Chengdu, China, 2006.
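    The single-scan structure rests on the standard min-sum compression: a check node's outgoing messages are fully determined by the two smallest incoming magnitudes, the position of the smallest, and the product of the incoming signs, so each check-to-variable message can be reconstructed on demand instead of stored per edge. The sketch below shows this compression and reconstruction; the scheduling details and the single-bit message variant from the paper are not shown.

```python
# A minimal sketch of the compressed check-message representation behind
# single-scan min-sum decoding.
import numpy as np

def compress_check(msgs):
    """msgs: variable-to-check LLRs arriving at one check node."""
    mags = np.abs(msgs)
    order = np.argsort(mags)
    min1_pos = int(order[0])                 # position of smallest magnitude
    min1, min2 = mags[order[0]], mags[order[1]]
    sign = np.prod(np.sign(msgs))            # product of all incoming signs
    return min1_pos, min1, min2, sign

def check_to_variable(compressed, signs, i):
    """Reconstruct the check-to-variable message to variable i."""
    min1_pos, min1, min2, sign = compressed
    mag = min2 if i == min1_pos else min1    # exclude i's own magnitude
    return sign * signs[i] * mag             # sign of the other inputs only

msgs = np.array([-1.5, 0.4, 2.0, -0.9])
c = compress_check(msgs)
signs = np.sign(msgs)
print([check_to_variable(c, signs, i) for i in range(4)])
# -> [-0.4, 0.9, -0.4, 0.4], matching the per-edge min-sum messages
```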