
    New Combinatorial Construction Techniques for Low-Density Parity-Check Codes and Systematic Repeat-Accumulate Codes

    This paper presents several new construction techniques for low-density parity-check (LDPC) and systematic repeat-accumulate (RA) codes. Based on specific classes of combinatorial designs, the improved code design focuses on high-rate structured codes with constant column weight 3 and higher. The proposed codes are efficiently encodable and exhibit good structural properties. Experimental results on decoding performance with the sum-product algorithm show that the novel codes offer substantial practical potential, for instance in high-speed applications in magnetic recording and optical communications channels. Comment: 10 pages; to appear in IEEE Transactions on Communications.
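
    As a rough, self-contained illustration of how a combinatorial design yields a structured parity-check matrix (a sketch only, not one of the constructions of the paper), the snippet below uses the Fano plane, a (7, 3, 1)-BIBD, as the point-block incidence matrix H; since any two blocks meet in exactly one point, H has constant column weight 3 and its Tanner graph contains no 4-cycles. The block list and names are illustrative.

    ```python
    import numpy as np

    # blocks (lines) of the Fano plane, a balanced incomplete block design with
    # parameters (v, k, lambda) = (7, 3, 1)
    blocks = [(0, 1, 2), (0, 3, 4), (0, 5, 6), (1, 3, 5),
              (1, 4, 6), (2, 3, 6), (2, 4, 5)]

    # point-block incidence matrix, used directly as the parity-check matrix H
    H = np.zeros((7, len(blocks)), dtype=int)
    for b, blk in enumerate(blocks):
        for point in blk:
            H[point, b] = 1

    # constant column weight 3 (block size) and row weight 3 (replication number)
    assert (H.sum(axis=0) == 3).all() and (H.sum(axis=1) == 3).all()

    # lambda = 1: two blocks never share two points, so no two columns overlap in
    # more than one row and the Tanner graph of H contains no 4-cycles
    gram = H.T @ H
    assert (gram - np.diag(np.diag(gram))).max() == 1
    print(H)
    ```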

    Mathematical Programming Decoding of Binary Linear Codes: Theory and Algorithms

    Mathematical programming is a branch of applied mathematics and has recently been used to derive new decoding approaches, challenging established but often heuristic algorithms based on iterative message passing. Concepts from mathematical programming used in the context of decoding include linear, integer, and nonlinear programming, network flows, notions of duality, as well as matroid and polyhedral theory. This survey article reviews and categorizes decoding methods based on mathematical programming approaches for binary linear codes over binary-input memoryless symmetric channels. Comment: 17 pages, submitted to the IEEE Transactions on Information Theory. Published July 201
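
    One representative mathematical-programming decoder is Feldman-style linear-programming decoding over a relaxation of the codeword polytope. The sketch below is a minimal illustration of that relaxation using scipy.optimize.linprog; the toy (7, 4) Hamming parity-check matrix and the LLR values are assumptions for demonstration, not taken from the survey.

    ```python
    import itertools
    import numpy as np
    from scipy.optimize import linprog

    def lp_decode(H, llr):
        """Feldman-style LP decoding over a relaxation of the codeword polytope.
        H is a binary parity-check matrix (m x n); llr are channel log-likelihood
        ratios with the convention that a positive value favours bit = 0."""
        m, n = H.shape
        A_ub, b_ub = [], []
        for j in range(m):
            nbrs = np.flatnonzero(H[j])
            # one "forbidden set" inequality per odd-sized subset S of N(j):
            #   sum_{i in S} x_i - sum_{i in N(j)\S} x_i <= |S| - 1
            for r in range(1, len(nbrs) + 1, 2):
                for S in itertools.combinations(nbrs, r):
                    row = np.zeros(n)
                    row[nbrs] = -1.0
                    row[list(S)] = 1.0
                    A_ub.append(row)
                    b_ub.append(len(S) - 1)
        res = linprog(c=llr, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
                      bounds=[(0.0, 1.0)] * n, method="highs")
        return res.x  # fractional entries indicate a pseudocodeword

    # toy example: a parity-check matrix of the (7, 4) Hamming code
    H = np.array([[1, 1, 0, 1, 1, 0, 0],
                  [1, 0, 1, 1, 0, 1, 0],
                  [0, 1, 1, 1, 0, 0, 1]])
    llr = np.array([2.1, -0.4, 1.3, 0.9, 1.7, -0.2, 0.8])
    print(np.round(lp_decode(H, llr), 3))
    ```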

    Structural Design and Analysis of Low-Density Parity-Check Codes and Systematic Repeat-Accumulate Codes

    The discovery of two fundamental error-correcting code families, known as turbo codes and low-density parity-check (LDPC) codes, has led to a revolution in coding theory and to a paradigm shift from traditional algebraic codes towards modern graph-based codes that can be decoded by iterative message passing algorithms. From then on, it has become a focal point of research to develop powerful LDPC and turbo-like codes. Besides the classical domain of randomly constructed codes, an alternative and competitive line of research is concerned with highly structured LDPC and turbo-like codes based on combinatorial designs. Such codes are typically characterized by high code rates already at small to moderate code lengths and good code properties such as the avoidance of harmful 4-cycles in the code's factor graph. Furthermore, their structure can usually be exploited for an efficient implementation; in particular, they can be encoded with low complexity as opposed to random-like codes. Hence, these codes are suitable for high-speed applications such as magnetic recording or optical communication. This thesis greatly contributes to the field of structured LDPC codes and systematic repeat-accumulate (sRA) codes as a subclass of turbo-like codes by presenting new combinatorial construction techniques and algebraic methods for an improved code design. More specifically, novel and infinite families of high-rate structured LDPC codes and sRA codes are presented based on balanced incomplete block designs (BIBDs), which form a subclass of combinatorial designs. Besides showing excellent error-correcting capabilities under iterative decoding, these codes can be implemented efficiently, since their inner structure enables low-complexity encoding and accelerated decoding algorithms. A further infinite series of structured LDPC codes is presented based on the notion of transversal designs, which form another subclass of combinatorial designs. With a proper configuration, these codes exhibit excellent decoding performance under iterative decoding, in particular with very low error floors. The approach for lowering these error floors is threefold. First, a thorough analysis of the decoding failures is carried out, resulting in an extensive classification of so-called stopping sets and absorbing sets. These combinatorial entities are known to be the main cause of decoding failures in the error-floor region over the binary erasure channel (BEC) and the additive white Gaussian noise (AWGN) channel, respectively. Second, the specific code structures are exploited in order to derive conditions for the avoidance of the most harmful stopping and absorbing sets. Third, powerful design strategies are derived for the identification of those code instances with the best error-floor performance. The resulting codes can additionally be encoded with low complexity and are thus ideally suited for practical high-speed applications. Further investigations are carried out on the infinite family of structured LDPC codes based on finite geometries. It is known that these codes perform very well under iterative decoding and that their encoding can be achieved with low complexity. By combining the latest findings in the fields of finite geometries and combinatorial designs, we generate new theoretical insights about the decoding failures of such codes under iterative decoding. These examinations finally help to identify the geometric codes with the most beneficial error-correcting capabilities over the BEC.
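
    To make the notion of a stopping set concrete (a small sketch only, using a toy cycle-code matrix rather than one of the thesis's BIBD-based codes), the snippet below brute-forces the smallest stopping sets of a parity-check matrix, i.e., non-empty column subsets in which no check node has exactly one neighbour.

    ```python
    import itertools
    import numpy as np

    def is_stopping_set(H, S):
        """S (a collection of column indices) is a stopping set of H if no check
        node (row) has exactly one neighbour inside S."""
        restricted_row_weights = H[:, sorted(S)].sum(axis=1)
        return not np.any(restricted_row_weights == 1)

    def smallest_stopping_sets(H, max_size):
        """Brute-force search for the non-empty stopping sets of minimum size."""
        n = H.shape[1]
        for size in range(1, max_size + 1):
            found = [S for S in itertools.combinations(range(n), size)
                     if is_stopping_set(H, S)]
            if found:
                return found
        return []

    # toy column-weight-2 parity-check matrix, purely for illustration
    H = np.array([[1, 1, 0, 0, 1, 0],
                  [0, 1, 1, 0, 0, 1],
                  [1, 0, 1, 1, 0, 0],
                  [0, 0, 0, 1, 1, 1]])
    print(smallest_stopping_sets(H, max_size=4))
    ```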

    Novel Code-Construction for (3, k) Regular Low Density Parity Check Codes

    Communication system links that do not have the ability to retransmit generally rely on forward error correction (FEC) techniques that make use of error-correcting codes (ECCs) to detect and correct errors caused by noise in the channel. There are several ECCs in the literature that are used for this purpose. Among them, low-density parity-check (LDPC) codes have become quite popular owing to the fact that they exhibit performance closest to the Shannon limit. This thesis proposes a novel code-construction method for constructing not only (3, k) regular but also irregular LDPC codes. The choice of designing (3, k) regular LDPC codes is made because they have low decoding complexity and a Hamming distance of at least 4. In this work, the proposed code construction consists of an information submatrix (Hinf) and an almost lower triangular parity submatrix (Hpar). The core design of the proposed code construction utilizes expanded deterministic base matrices in three stages. The deterministic base matrix of the parity part starts with a triple-diagonal matrix, while the deterministic base matrix of the information part is an all-ones matrix. The proposed matrix H is designed to generate various code rates (R) by maintaining the number of rows in H while only changing the number of columns in Hinf. All the codes designed and presented in this thesis have no rank deficiency, no pre-processing step in encoding, no singularity in the parity part (Hpar), no 4-cycles, and low encoding complexity of the order of N + g², where g² ≪ N. The proposed (3, k) regular codes are shown to perform within 1.44 dB of the Shannon limit at a bit error rate (BER) of 10⁻⁶ when the code rate is greater than R = 0.875. They have comparable BER and block error rate (BLER) performance with other techniques such as (3, k) regular quasi-cyclic (QC) and (3, k) regular random LDPC codes when code rates are at least R = 0.7. In addition, it is also shown that the proposed (3, 42) regular LDPC code performs as close as 0.97 dB from the Shannon limit at a BER of 10⁻⁶ with encoding complexity 1.0225N, for R = 0.928 and N = 14364, a result that no other published technique reaches.
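
    The general [Hinf | Hpar] idea with a lower-triangular parity part allows systematic encoding by simple substitution. The sketch below is a hypothetical illustration of that principle only; the triple_diagonal helper, the random Hinf, and the code parameters are assumptions, not the thesis's three-stage expansion procedure.

    ```python
    import numpy as np

    def triple_diagonal(m):
        """m x m lower-triangular parity part with ones on the main diagonal and
        on the two diagonals directly below it (column weight at most 3)."""
        Hpar = np.zeros((m, m), dtype=int)
        for i in range(m):
            for d in (0, 1, 2):
                if i - d >= 0:
                    Hpar[i, i - d] = 1
        return Hpar

    def encode(Hinf, Hpar, s):
        """Systematic encoding without pre-processing: solve Hpar p = Hinf s over
        GF(2) by forward substitution (Hpar is lower triangular), return [s | p]."""
        m = Hpar.shape[0]
        t = Hinf @ s % 2
        p = np.zeros(m, dtype=int)
        for i in range(m):
            p[i] = (t[i] + Hpar[i, :i] @ p[:i]) % 2
        return np.concatenate([s, p])

    # toy example: the random Hinf is only a placeholder for a structured part
    rng = np.random.default_rng(0)
    m, k = 6, 9
    Hinf = (rng.random((m, k)) < 0.3).astype(int)
    Hpar = triple_diagonal(m)
    H = np.hstack([Hinf, Hpar])
    c = encode(Hinf, Hpar, rng.integers(0, 2, size=k))
    assert not np.any(H @ c % 2)  # the codeword satisfies every check
    ```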

    Low-Density Parity-Check Codes From Transversal Designs With Improved Stopping Set Distributions

    This paper examines the construction of low-density parity-check (LDPC) codes from transversal designs based on sets of mutually orthogonal Latin squares (MOLS). By transferring the concept of configurations in combinatorial designs to the level of Latin squares, we thoroughly investigate the occurrence and avoidance of stopping sets for the arising codes. Stopping sets are known to determine the decoding performance over the binary erasure channel, and small stopping sets should therefore be avoided. Based on large sets of simple-structured MOLS, we derive powerful constraints for the choice of suitable subsets, leading to improved stopping set distributions for the corresponding codes. We focus on LDPC codes with column weight 4, but the results are also applicable to the construction of codes with higher column weights. Finally, we show that a subclass of the presented codes has a quasi-cyclic structure which allows low-complexity encoding. Comment: 11 pages; to appear in IEEE Transactions on Communications.
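
    A rough sketch of the underlying incidence construction (not the paper's subset-selection constraints): build p - 1 MOLS of prime order p from the field rule L_a(i, j) = a·i + j mod p, take two of them to form a transversal design, and use its point-block incidence matrix as a column-weight-4 parity-check matrix. Since two blocks share at most one point, the resulting code is 4-cycle-free. The function names and the choice p = 5 are illustrative.

    ```python
    import itertools
    import numpy as np

    def mols_from_prime(p):
        """p - 1 mutually orthogonal Latin squares of order p: L_a(i, j) = a*i + j mod p."""
        return [np.array([[(a * i + j) % p for j in range(p)] for i in range(p)])
                for a in range(1, p)]

    def td_parity_check(p, num_squares=2):
        """Point-block incidence matrix of the transversal design obtained from the
        first num_squares MOLS; every column (block) has weight num_squares + 2."""
        squares = mols_from_prime(p)[:num_squares]
        k = num_squares + 2                      # point groups: rows, columns, one per square
        H = np.zeros((k * p, p * p), dtype=int)
        for b, (i, j) in enumerate(itertools.product(range(p), range(p))):
            H[i, b] = 1                          # row group
            H[p + j, b] = 1                      # column group
            for a, L in enumerate(squares):
                H[(2 + a) * p + L[i, j], b] = 1  # symbol group of square a
        return H

    H = td_parity_check(p=5, num_squares=2)      # 20 x 25, column weight 4
    # any two blocks share at most one point, hence no 4-cycles in the Tanner graph
    gram = H.T @ H
    assert (gram - np.diag(np.diag(gram))).max() <= 1
    print(H.shape, H.sum(axis=0).min(), H.sum(axis=0).max())
    ```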

    Network flow algorithms for wireless networks and design and analysis of rate compatible LDPC codes

    While Shannon characterized the capacity of point-to-point channels back in 1948, characterizing the capacity of wireless networks has remained a challenging problem. The deterministic channel model proposed by Avestimehr et al. (2007) has been a promising approach for approximating the Gaussian channel capacity and has been widely studied recently. Motivated by this model, the first part of this thesis considers an improved combinatorial algorithm for finding the unicast capacity of wireless information flow on such deterministic networks. Our algorithm fully exploits the useful combinatorial features intrinsic to the problem, and our improvement applies to any size of finite field associated with the channel model. Compared with other related algorithms, the improved algorithm has very competitive complexity. In the second part of our work, we consider the design and analysis of rate-compatible LDPC codes. Rate-compatible LDPC codes are a family of nested codes operating at different code rates, all of which can be encoded and decoded using a single encoder and decoder pair. These properties make rate-compatible LDPC codes a good choice for changing channel conditions, as in wireless communications. Previous work on the design and analysis of LDPC codes targets a specific code rate, and no prior work is known on designing and analyzing rate-compatible LDPC codes so that the code performance at all code rates in the family is manageable and predictable. In this work, we propose algorithms for the design and analysis of rate-compatible LDPC codes with good performance that make the code performance at all code rates manageable and predictable. Our work is based on E2RC codes, but our design and analysis approaches apply more generally, not only to E2RC codes but also to other suitable scenarios such as the design of IRA codes. Most encouragingly, we obtain families of rate-compatible codes whose gaps to capacity are at most 0.3 dB across the range of rates when the maximum variable node degree is twenty, which is very promising compared with other existing results.
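
    As a minimal illustration of the rate-compatibility idea only (nested puncturing of a single mother code; the code parameters and puncturing order below are assumptions, not the E2RC-based design of the thesis):

    ```python
    def punctured_family(n, k, punct_order, steps):
        """Nested puncturing of one mother code: step t punctures the first t
        positions of punct_order, so every higher-rate pattern contains the
        lower-rate ones and all rates share a single encoder/decoder pair."""
        return [(k / (n - t), set(punct_order[:t])) for t in steps]

    n, k = 64, 32                        # hypothetical rate-1/2 mother code
    punct_order = list(range(k, n))      # hypothetical puncturing order over parity bits
    family = punctured_family(n, k, punct_order, steps=[0, 8, 16, 24])

    for (_, p1), (_, p2) in zip(family, family[1:]):
        assert p1 <= p2                  # nested property of rate-compatible codes
    for rate, punct in family:
        print(f"punctured {len(punct):2d} bits -> rate {rate:.3f}")
    ```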

    On the resolutions of cyclic Steiner triple systems with small parameters

    The paper presents useful invariants of resolutions of cyclic STS(v) with v ≤ 39, namely of all resolutions of cyclic STS(15), STS(21), and STS(27), of the resolutions with nontrivial automorphisms of cyclic STS(33), and of the resolutions with automorphisms of order 13 of cyclic STS(39).
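
    For concreteness, the sketch below generates one classical cyclic STS(15) from base blocks under the action of Z_15 and verifies the Steiner property; computing resolutions and their invariants, as the paper does, is beyond this small check, and the base blocks are a standard textbook choice rather than data from the paper.

    ```python
    from itertools import combinations
    from collections import Counter

    v = 15
    # classical base blocks of a cyclic STS(15); {0, 5, 10} generates a short orbit
    base_blocks = [(0, 1, 4), (0, 2, 9), (0, 5, 10)]

    # develop the base blocks cyclically modulo v
    blocks = {frozenset((x + s) % v for x in blk)
              for blk in base_blocks for s in range(v)}

    # Steiner property: 35 triples, and every pair of points lies in exactly one block
    pair_counts = Counter(frozenset(pair)
                          for blk in blocks for pair in combinations(sorted(blk), 2))
    assert len(blocks) == v * (v - 1) // 6
    assert len(pair_counts) == v * (v - 1) // 2
    assert all(count == 1 for count in pair_counts.values())
    print(f"cyclic STS({v}) with {len(blocks)} blocks")
    ```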