
    Statistical physics of low density parity check error correcting codes

    We study the performance of Low Density Parity Check (LDPC) error-correcting codes using the methods of statistical physics. LDPC codes are based on the generation of codewords as Boolean sums of the original message bits, selected by two randomly constructed sparse matrices. These codes can be mapped onto Ising spin models and studied using common methods of statistical physics. We examine various regular constructions and obtain insight into their theoretical and practical limitations. We also briefly report on results obtained for irregular code constructions, for codes with a non-binary alphabet, and on how a finite system size affects the error probability.
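    As a rough sketch of the encoding step described here (toy sizes and a hypothetical construction chosen purely for illustration; the matrix G and the sizes below are our own assumptions, not taken from the paper), each codeword bit is a Boolean sum of a few message bits, and the bit-to-spin map carries the code onto an Ising system:

```python
import numpy as np

rng = np.random.default_rng(0)

K, N = 4, 8        # message and codeword lengths (toy sizes)
ROW_WEIGHT = 3     # message bits XORed into each codeword bit

# Sparse random binary matrix: each codeword bit is a Boolean sum (XOR)
# of ROW_WEIGHT randomly chosen message bits.
G = np.zeros((N, K), dtype=int)
for i in range(N):
    G[i, rng.choice(K, size=ROW_WEIGHT, replace=False)] = 1

message = rng.integers(0, 2, size=K)
codeword = (G @ message) % 2       # Boolean sums = matrix product mod 2

# Mapping onto an Ising model: bit x in {0, 1} -> spin s = (-1)^x,
# so that XOR of bits becomes a product of +/-1 spins.
spins = (-1) ** codeword
print(message, codeword, spins)
```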

    The Statistical Physics of Regular Low-Density Parity-Check Error-Correcting Codes

    A variation of Gallager error-correcting codes is investigated using statistical mechanics. In codes of this type, a given message is encoded into a codeword which comprises Boolean sums of message bits selected by two randomly constructed sparse matrices. The similarity of these codes to Ising spin systems with random interactions makes it possible to assess their typical performance by analytical methods developed in the study of disordered systems. The typical-case solutions obtained via the replica method are consistent with those obtained in simulations using belief propagation (BP) decoding. We discuss the practical implications of the results obtained and suggest a computationally efficient construction for one of the more practical configurations. Comment: 35 pages, 4 figures.
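    Schematically, the correspondence mentioned here maps each parity check onto a multi-spin coupling; in the usual statistical-mechanics formulation (a standard identification, paraphrased here, not a quote from the paper), decoding amounts to studying a Hamiltonian of the form

```latex
\mathcal{H}(\mathbf{S}) \;=\; -\sum_{\mu} J_{\mu} \prod_{i \in \mathcal{L}(\mu)} S_{i},
\qquad S_i \in \{-1, +1\},
```

    where each set \(\mathcal{L}(\mu)\) collects the spins entering check \(\mu\) and the quenched couplings \(J_{\mu}\) encode the received noisy bits; finding the ground state corresponds to MAP decoding, while thermal averages at finite temperature yield marginal-posterior estimates.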

    Average and reliability error exponents in low-density parity-check codes

    We present a theoretical method for the direct evaluation of the average and reliability error exponents in low-density parity-check error-correcting codes using methods of statistical physics. Results for the binary symmetric channel are presented for codes of both finite and infinite connectivity.
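    In statistical-physics terms, the two exponents correspond, roughly, to annealed and quenched averages of the block error probability \(P_e\) over the code ensemble (a schematic summary of the standard definitions, not the paper's exact expressions):

```latex
E_{\mathrm{av}}(R) \;=\; -\lim_{N\to\infty} \frac{1}{N} \ln \big\langle P_e \big\rangle_{\mathcal{C}},
\qquad
E_{\mathrm{rel}}(R) \;=\; -\lim_{N\to\infty} \frac{1}{N} \big\langle \ln P_e \big\rangle_{\mathcal{C}},
```

    where \(\langle\cdot\rangle_{\mathcal{C}}\) denotes the average over codes of rate \(R\). By Jensen's inequality \(E_{\mathrm{rel}} \ge E_{\mathrm{av}}\): the reliability exponent is dominated by typical codes, while the average exponent can be dominated by rare bad codes.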

    Statistical Physics of Irregular Low-Density Parity-Check Codes

    Low-density parity-check codes with irregular constructions have recently been shown to outperform the most advanced error-correcting codes to date. In this paper we apply methods of statistical physics to study the typical properties of simple irregular codes. We use the replica method to find a phase transition which coincides with Shannon's coding bound when appropriate parameters are chosen. Decoding by belief propagation is also studied using statistical physics arguments; the theoretical solutions obtained are in good agreement with simulations. We compare the performance of irregular codes with that of regular codes and discuss the factors that contribute to the improvement in performance. Comment: 20 pages, 9 figures, revised version submitted to JP
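    For the binary symmetric channel, the coincidence with Shannon's bound can be made concrete: reliable communication at rate R is possible only below the flip rate p_c solving R = 1 - H_2(p_c). A minimal numerical check (our own illustration, not code from the paper):

```python
import numpy as np
from scipy.optimize import brentq

def H2(p):
    """Binary entropy in bits."""
    return -p * np.log2(p) - (1 - p) * np.log2(1 - p)

def shannon_flip_threshold(rate):
    """Largest BSC flip rate p at which reliable communication at the
    given code rate is possible: solves rate = 1 - H2(p) for p < 1/2."""
    return brentq(lambda p: 1 - H2(p) - rate, 1e-12, 0.5 - 1e-12)

print(shannon_flip_threshold(0.5))   # ~0.110 for rate-1/2 codes
```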

    Statistical Mechanics of Low-Density Parity Check Error-Correcting Codes over Galois Fields

    A variation of low-density parity-check (LDPC) error-correcting codes defined over Galois fields GF(q) is investigated using statistical physics. A code of this type is characterised by a sparse random parity-check matrix composed of C nonzero elements per column. We examine the dependence of the code performance on the value of q, for finite and infinite C values, both in terms of the thermodynamic transition point and the practical decoding phase characterised by the existence of a unique (ferromagnetic) solution. We find different q-dependencies in the cases C = 2 and C ≥ 3; the analytical solutions are in agreement with simulation results, providing a quantitative measure of the improvement in performance obtained using non-binary alphabets. Comment: 7 pages, 1 figure.
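    A minimal sketch of such a parity-check matrix over GF(q); we take q prime so that field arithmetic reduces to integers mod q (the sizes and the construction below are illustrative assumptions of ours, not the paper's):

```python
import numpy as np

rng = np.random.default_rng(1)

q = 5            # field size; for prime q, GF(q) arithmetic is mod-q
N, M = 12, 6     # codeword length, number of parity checks (toy sizes)
C = 3            # nonzero elements per column, as in the abstract

# Sparse parity-check matrix over GF(q): C nonzero entries per column.
H = np.zeros((M, N), dtype=int)
for j in range(N):
    rows = rng.choice(M, size=C, replace=False)
    H[rows, j] = rng.integers(1, q, size=C)   # nonzero field elements

def syndrome(H, x, q):
    """A word x is a codeword iff H x = 0 over GF(q)."""
    return H @ x % q

x = np.zeros(N, dtype=int)    # the all-zero word is always a codeword
print(syndrome(H, x, q))
```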

    Critical Noise Levels for LDPC decoding

    We determine the critical noise level for decoding low-density parity-check error-correcting codes based on the magnetization enumerator M, rather than on the weight enumerator W employed in the information theory literature. The interpretation of our method is appealingly simple, and the relation between different decoding schemes such as typical-pairs decoding, MAP, and finite-temperature (MPM) decoding becomes clear. In addition, our analysis provides an explanation for the difference in performance between MN and Gallager codes. Our results are more optimistic than those derived via the methods of information theory and are in excellent agreement with recent results from another statistical physics approach. Comment: 9 pages, 5 figures.
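    The relation between the decoding schemes can be stated compactly (the standard statistical-mechanics picture, paraphrased here): a bit estimate at inverse temperature \(\beta\) is

```latex
\hat{s}_i \;=\; \operatorname{sign}\big(\langle S_i \rangle_{\beta}\big),
```

    where \(\langle S_i \rangle_{\beta}\) is the thermal average under the posterior Gibbs measure. MAP decoding is recovered in the zero-temperature limit \(\beta \to \infty\), while MPM (finite-temperature) decoding, which minimizes the bit error rate, operates at the Nishimori temperature, i.e., with \(\beta\) matched to the true noise level.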

    Message passing algorithms for non-linear nodes and data compression

    The use of parity-check gates in information theory has proved to be very efficient. In particular, error-correcting codes based on parity checks over low-density graphs show excellent performance. Another basic issue of information theory, namely data compression, can be addressed in a similar way by a kind of dual approach. The theoretical performance of such a Parity Source Coder can attain the optimal limit predicted by general rate-distortion theory. However, in order to turn this approach into an efficient compression code (with fast encoding/decoding algorithms) one must depart from parity checks and use more general random gates. By taking advantage of analytical approaches from the statistical physics of disordered systems and SP-like message-passing algorithms, we construct a compressor based on low-density non-linear gates with very good theoretical and practical performance. Comment: 13 pages, European Conference on Complex Systems, Paris (Nov 2005).
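    The optimal limit referred to above is the rate-distortion function; for an unbiased binary source under Hamming distortion it has the closed form R(D) = 1 - H_2(D), evaluated below purely for illustration (a standard formula, not the paper's code):

```python
import numpy as np

def H2(p):
    """Binary entropy in bits, clipped away from 0 and 1 for stability."""
    p = np.clip(p, 1e-12, 1 - 1e-12)
    return -p * np.log2(p) - (1 - p) * np.log2(1 - p)

def rate_distortion_binary(D):
    """R(D) = 1 - H2(D) for an unbiased binary source under Hamming
    distortion, 0 <= D <= 1/2: the optimal compression limit."""
    return max(0.0, 1.0 - H2(D))

for D in (0.05, 0.1, 0.2):
    print(D, rate_distortion_binary(D))
```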

    Low density parity check codes: a statistical physics perspective

    Modern digital communication systems achieve reliable transmission by employing error-correction techniques that add redundancy to the transmitted message. Low-density parity-check codes work along the principles of the Hamming code, but with a very sparse parity-check matrix, allowing multiple errors to be corrected. The sparseness of the matrix allows the decoding process to be carried out by probability-propagation methods similar to those employed in Turbo codes. The relation between spin systems in statistical physics and digital error-correcting codes is based on a simple isomorphism between the additive Boolean group and the multiplicative binary group. Shannon proved general results on the natural limits of compression and error correction by setting up the framework known as information theory. Error-correcting codes are based on mapping the original space of words onto a higher-dimensional space in such a way that the typical distance between encoded words increases.
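    The isomorphism mentioned here is the map x ↦ (-1)^x, which sends Boolean addition (XOR) to multiplication of ±1 spins; a two-line exhaustive check (our own illustration):

```python
# The isomorphism behind the spin mapping: x in {0, 1} -> s = (-1)^x,
# which turns Boolean addition (XOR) into multiplication of +/-1 spins.
for x in (0, 1):
    for y in (0, 1):
        assert (-1) ** (x ^ y) == ((-1) ** x) * ((-1) ** y)
print("XOR of bits <-> product of spins: verified on all inputs")
```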

    Statistical Mechanics of Broadcast Channels Using Low Density Parity Check Codes

    We investigate the use of Gallager's low-density parity-check (LDPC) codes in a broadcast channel, one of the fundamental models in network information theory. Combining linear codes is a standard technique in practical network communication schemes and is known to provide better performance than simple timesharing methods when algebraic codes are used. The statistical-physics-based analysis shows that the practical performance of the suggested method, achieved by employing the belief propagation algorithm, is superior to that of LDPC-based timesharing codes, while the best performance, when received transmissions are optimally decoded, is bounded by the timesharing limit. Comment: 14 pages, 4 figures.
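    For intuition about the timesharing baseline (a hypothetical two-user setup of our own, with each user seeing a binary symmetric channel, not the paper's model): devoting a fraction λ of channel uses to one user and 1-λ to the other traces the straight line between the two single-user capacities.

```python
import numpy as np

def H2(p):
    """Binary entropy in bits."""
    p = np.clip(p, 1e-12, 1 - 1e-12)
    return -p * np.log2(p) - (1 - p) * np.log2(1 - p)

# Two users with different BSC flip rates; timesharing splits the
# channel uses between them, giving the rate pairs printed below.
p1, p2 = 0.05, 0.15
C1, C2 = 1 - H2(p1), 1 - H2(p2)

for lam in np.linspace(0, 1, 5):
    R1, R2 = lam * C1, (1 - lam) * C2
    print(f"lambda={lam:.2f}  R1={R1:.3f}  R2={R2:.3f}")
```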
