
    Design and Analysis of Time-Invariant SC-LDPC Convolutional Codes With Small Constraint Length

    In this paper, we deal with time-invariant spatially coupled low-density parity-check convolutional codes (SC-LDPC-CCs). Classic design approaches usually start from quasi-cyclic low-density parity-check (QC-LDPC) block codes and exploit suitable unwrapping procedures to obtain SC-LDPC-CCs. We show that directly designing the SC-LDPC-CC syndrome former matrix or, equivalently, the symbolic parity-check matrix, leads to codes with smaller syndrome former constraint lengths than the best solutions available in the literature. We provide theoretical lower bounds on the syndrome former constraint length for the most relevant families of SC-LDPC-CCs, under constraints on the minimum length of cycles in their Tanner graphs. We also propose new code design techniques that approach or achieve these theoretical limits. Comment: 30 pages, 5 figures, accepted for publication in IEEE Transactions on Communications
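
    The quantity being optimized above is the syndrome former constraint length. As a minimal sketch, assuming the common definitions in which the syndrome former memory m_h is the largest exponent appearing in the symbolic parity-check matrix H(x) and the constraint length is v_s = (m_h + 1)·c, with c the number of code symbols per time instant (the number of columns of H(x)), the Python fragment below computes both quantities for a made-up H(x). It only illustrates the definition, not the design technique of the paper.

```python
# Sketch: syndrome former memory and constraint length of a time-invariant
# SC-LDPC-CC, assuming m_h = largest exponent in H(x) and v_s = (m_h + 1) * c.
# H(x) is stored as a matrix of exponent lists: the entry [0, 2] means 1 + x^2.

def syndrome_former_params(Hx):
    """Return (memory m_h, constraint length v_s) for a symbolic matrix H(x)."""
    m_h = max(e for row in Hx for entry in row for e in entry)
    c = len(Hx[0])            # code symbols per time instant (columns of H(x))
    v_s = (m_h + 1) * c
    return m_h, v_s

# Hypothetical 1 x 3 symbolic parity-check matrix:
# H(x) = [1 + x^4, x + x^3, 1 + x^5]
Hx = [[[0, 4], [1, 3], [0, 5]]]

m_h, v_s = syndrome_former_params(Hx)
print(f"syndrome former memory m_h = {m_h}, constraint length v_s = {v_s}")
```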

    Compact QC-LDPC Block and SC-LDPC Convolutional Codes for Low-Latency Communications

    Low decoding latency and complexity are two important requirements of the channel codes used in many applications, like machine-to-machine communications. In this paper, we show how these requirements can be fulfilled by using some special quasi-cyclic low-density parity-check block codes and spatially coupled low-density parity-check convolutional codes that we denote as compact. They are defined by parity-check matrices designed according to a recent approach based on sequentially multiplied columns. This method makes it possible to obtain codes with girth up to 12. Many numerical examples of practical codes are provided. Comment: 5 pages, 1 figure, presented at IEEE PIMRC 201
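
    The codes referred to above are quasi-cyclic, i.e. their parity-check matrices are arrays of circulant permutation matrices (CPMs) obtained by lifting an exponent matrix. The sketch below shows only this generic lifting step for a made-up exponent matrix; it does not reproduce the sequentially-multiplied-columns design rule itself.

```python
import numpy as np

def cpm(shift, N):
    """N x N circulant permutation matrix: the identity cyclically shifted by `shift`."""
    return np.roll(np.eye(N, dtype=np.uint8), shift, axis=1)

def lift(exponents, N):
    """Expand an exponent matrix into a binary QC-LDPC parity-check matrix.
    An entry p >= 0 becomes the CPM with shift p; an entry -1 becomes the all-zero block."""
    blocks = [[cpm(p, N) if p >= 0 else np.zeros((N, N), dtype=np.uint8)
               for p in row] for row in exponents]
    return np.block(blocks)

# Hypothetical 2 x 4 exponent matrix with lifting size N = 7
P = [[0, 1, 2, 4],
     [0, 2, 5, 3]]
H = lift(P, N=7)
print(H.shape)   # (14, 28): a (2*7) x (4*7) binary parity-check matrix
```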

    Efficient Search of Compact QC-LDPC and SC-LDPC Convolutional Codes with Large Girth

    We propose a low-complexity method to find quasi-cyclic low-density parity-check block codes with girth 10 or 12 and shorter length than those designed through classical approaches. The method is extended to time-invariant spatially coupled low-density parity-check convolutional codes, making it possible to achieve small syndrome former constraint lengths. Several numerical examples are given to show its effectiveness. Comment: 4 pages, 3 figures, 1 table, accepted for publication in IEEE Communications Letters
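
    Girth conditions for quasi-cyclic codes are normally checked directly on the exponent matrix: for instance, a length-4 cycle exists if and only if p(i,k) − p(i,l) + p(j,l) − p(j,k) ≡ 0 (mod N) for some pair of rows i, j and pair of columns k, l. The sketch below implements only this standard 4-cycle test (i.e. it certifies girth ≥ 6) on a made-up exponent matrix; it is not the low-complexity search for girth 10 or 12 proposed in the letter.

```python
from itertools import combinations

def has_4_cycle(P, N):
    """Standard QC-LDPC test: a 4-cycle exists iff
    P[i][k] - P[i][l] + P[j][l] - P[j][k] == 0 (mod N) for some rows i<j, cols k<l.
    Entries of -1 denote all-zero blocks and never lie on a cycle."""
    rows, cols = len(P), len(P[0])
    for i, j in combinations(range(rows), 2):
        for k, l in combinations(range(cols), 2):
            if -1 in (P[i][k], P[i][l], P[j][k], P[j][l]):
                continue
            if (P[i][k] - P[i][l] + P[j][l] - P[j][k]) % N == 0:
                return True
    return False

# Hypothetical 2 x 4 exponent matrix, lifting size N = 7
P = [[0, 1, 2, 4],
     [0, 2, 5, 3]]
print("girth >= 6:", not has_4_cycle(P, N=7))   # -> True for this example
```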

    On generalized LDPC codes for ultra reliable communication

    Ultra reliable low latency communication (URLLC) is an important feature of future mobile communication systems, which will require high data rates, large system capacity and massive device connectivity [11]. To meet such stringent requirements, many error-correcting codes (ECCs) are being investigated: turbo codes, low-density parity-check (LDPC) codes, polar codes and convolutional codes [70, 92, 38], among many others. In this work, we present generalized low-density parity-check (GLDPC) codes as a promising candidate for URLLC. Our proposal is based on a novel class of GLDPC code ensembles, for which new analysis tools are proposed. We analyze the trade-off between coding rate and asymptotic performance of a class of GLDPC codes constructed by including a certain fraction of generalized constraint (GC) nodes in the graph.

    To incorporate both bounded distance (BD) and maximum likelihood (ML) decoding at GC nodes into our analysis without resorting to multi-edge-type degree distributions (DDs), we propose the probabilistic peeling decoding (P-PD) algorithm, which models the decoding step at every GC node as an instance of a Bernoulli random variable with a successful decoding probability that depends on both the GC block code and its decoding algorithm. The asymptotic performance of P-PD over the binary erasure channel (BEC) can be efficiently predicted using standard techniques for LDPC codes, such as density evolution (DE) or the differential equation method. We demonstrate that the simulated P-PD performance accurately predicts the actual performance of the GLDPC code under ML decoding at GC nodes. We illustrate our analysis for GLDPC code ensembles with regular and irregular DDs.

    This design methodology is applied to construct practical codes for URLLC. To this end, we incorporate into our analysis the use of quasi-cyclic (QC) structures, to mitigate the code error floor and facilitate very-large-scale integration (VLSI) implementation. Furthermore, for the additive white Gaussian noise (AWGN) channel, we analyze the complexity and performance of the message-passing decoder with various update rules (including standard full-precision sum-product and min-sum algorithms) and quantization schemes. The block error rate (BLER) performance of the proposed GLDPC codes, combined with a complementary outer code, is shown to outperform a variety of state-of-the-art codes for URLLC, including LDPC codes, polar codes, turbo codes and convolutional codes, at comparable complexity.

    Official Doctoral Programme in Multimedia and Communications. Committee: President: Juan José Murillo Fuentes; Secretary: Matilde Pilar Sánchez Fernández; Member: Javier Valls Coquilla
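
    The asymptotic analysis mentioned above relies on density evolution over the BEC. As a baseline, the sketch below runs the standard DE recursion for a (dv, dc)-regular LDPC ensemble, x_{l+1} = ε(1 − (1 − x_l)^(dc−1))^(dv−1), and estimates the decoding threshold by bisection. The P-PD extension, in which a fraction of the check nodes are GC nodes whose decoding is modeled as a Bernoulli trial, is not reproduced here, and all parameters are illustrative.

```python
def bec_de_converges(eps, dv, dc, iters=5000, tol=1e-9):
    """Standard density evolution for a (dv, dc)-regular LDPC ensemble on the BEC.
    Returns True if the erasure probability x_l is driven to (almost) zero."""
    x = eps
    for _ in range(iters):
        x = eps * (1.0 - (1.0 - x) ** (dc - 1)) ** (dv - 1)
        if x < tol:
            return True
    return False

def bec_threshold(dv, dc, steps=40):
    """Bisection for the largest channel erasure probability at which DE succeeds."""
    lo, hi = 0.0, 1.0
    for _ in range(steps):
        mid = 0.5 * (lo + hi)
        if bec_de_converges(mid, dv, dc):
            lo = mid
        else:
            hi = mid
    return lo

# (3, 6)-regular ensemble: the known BEC threshold is roughly 0.4294
print(f"estimated BEC threshold for (3,6): {bec_threshold(3, 6):.4f}")
```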

    Design and Analysis of Graph-based Codes Using Algebraic Lifts and Decoding Networks

    Error-correcting codes seek to address the problem of transmitting information efficiently and reliably across noisy channels. Among the most competitive codes developed in the last 70 years are low-density parity-check (LDPC) codes, a class of codes whose structure may be represented by sparse bipartite graphs. In addition to having the potential to be capacity-approaching, LDPC codes offer the significant practical advantage of low-complexity graph-based decoding algorithms. Graphical substructures called trapping sets, absorbing sets, and stopping sets characterize failure of these algorithms at high signal-to-noise ratios. This dissertation focuses on code design for and analysis of iterative graph-based message-passing decoders. The main contributions of this work include the following: the unification of spatially-coupled LDPC (SC-LDPC) code constructions under a single algebraic graph lift framework and the analysis of SC-LDPC code construction techniques from the perspective of removing harmful trapping and absorbing sets; analysis of the stopping and absorbing set parameters of hypergraph codes and finite geometry LDPC (FG-LDPC) codes; the introduction of multidimensional decoding networks that encode the behavior of hard-decision message-passing decoders; and the presentation of a novel Iteration Search Algorithm, a list decoder designed to improve the performance of hard-decision decoders. Adviser: Christine A. Kelle
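
    As a point of reference for the hard-decision message-passing decoders analyzed above, the sketch below implements a plain bit-flipping decoder (flip the bits involved in the largest number of unsatisfied parity checks), whose failures are exactly the kind of events that trapping and absorbing sets describe. The parity-check matrix is that of the (7,4) Hamming code, used purely as a tiny example; this is not the Iteration Search Algorithm from the dissertation.

```python
import numpy as np

def bit_flip_decode(H, y, max_iters=50):
    """Simple hard-decision bit-flipping decoding: in each iteration, flip the
    bit(s) involved in the largest number of unsatisfied parity checks."""
    x = y.copy()
    for _ in range(max_iters):
        syndrome = (H @ x) % 2
        if not syndrome.any():
            return x, True                       # valid codeword reached
        unsat = H.T @ syndrome                   # unsatisfied checks per bit
        x = (x + (unsat == unsat.max()).astype(np.uint8)) % 2
    return x, False                              # failure, e.g. trapped or oscillating

# (7,4) Hamming parity-check matrix (illustrative only, not an LDPC code);
# the all-zero codeword with a single flipped bit is corrected immediately.
H = np.array([[1, 0, 1, 0, 1, 0, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]], dtype=np.uint8)
y = np.zeros(7, dtype=np.uint8)
y[6] = 1                                         # single-bit error
print(bit_flip_decode(H, y))                     # -> (all-zero word, True)
```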

    New Identification and Decoding Techniques for Low-Density Parity-Check Codes

    Error-correction coding schemes are indispensable in today's high-capacity, high-data-rate communication systems. Among the various channel coding schemes, low-density parity-check (LDPC) codes, introduced by Robert G. Gallager, are prominent due to their capacity-approaching and superior error-correcting properties. There is no hard constraint on the code rate of LDPC codes, so it is natural to incorporate LDPC codes with various code rates and codeword lengths into adaptive modulation and coding (AMC) systems, which change the encoder and the modulator adaptively to improve the system throughput. In conventional AMC systems, a dedicated control channel is assigned to coordinate the encoder/decoder changes. A question then arises: can the AMC system still work when such a control channel is absent? This work gives a positive answer to this question by investigating various scenarios consisting of different modulation schemes, such as quadrature amplitude modulation (QAM) and frequency-shift keying (FSK), and different channels, such as additive white Gaussian noise (AWGN) channels and fading channels.

    On the other hand, LDPC decoding is usually carried out by iterative belief-propagation (BP) algorithms. As LDPC codes become prevalent in advanced communication and storage systems, low-complexity LDPC decoding algorithms are favored in practical applications. In the conventional BP decoding algorithm, the stopping criterion is to check whether all the parity checks are satisfied. This single rule may not be able to identify undecodable blocks; as a result, decoding time and power consumption are wasted on executing unnecessary iterations. In this work, we propose a new stopping criterion that identifies undecodable blocks at an early stage of the iterative decoding process. Furthermore, in the conventional BP decoding algorithm, the variable (check) nodes are updated in parallel. It is known that the number of iterations can be reduced by serial scheduling algorithms. Informed dynamic scheduling (IDS) algorithms have been proposed in the existing literature to further reduce the number of iterations; however, the computational complexity involved in finding the node to update in the existing IDS algorithms cannot be neglected. In this work, we propose a new efficient IDS scheme which provides a better performance-complexity trade-off than the existing IDS schemes.

    In addition, the iterative decoding threshold (IDT), which is used to compare LDPC codes, is investigated in this work. A family of LDPC codes, called LDPC convolutional codes, has drawn a lot of attention from researchers in recent years due to the threshold saturation phenomenon. The IDT of an LDPC convolutional code may be computationally demanding to obtain when the termination length grows to the thousands or even approaches infinity, especially for AWGN channels. In this work, we propose a fast IDT estimation algorithm which greatly reduces the complexity of the IDT calculation for LDPC convolutional codes with arbitrarily large termination length (including infinity)
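
    The conventional stopping criterion mentioned above simply checks, after each iteration, whether the current hard decision satisfies all parity checks. The sketch below shows that rule inside a compact flooding min-sum decoder with made-up channel LLRs and, again, the (7,4) Hamming parity-check matrix as a tiny example; it implements only the conventional criterion, not the proposed early detection of undecodable blocks, the IDS scheduling, or the IDT estimation algorithm.

```python
import numpy as np

def min_sum_decode(H, llr, max_iters=50):
    """Flooding min-sum decoding with the conventional stopping criterion:
    stop as soon as the hard decision x satisfies H x = 0 (mod 2).
    H: binary parity-check matrix, llr: channel LLRs (positive favors bit 0)."""
    m, n = H.shape
    mask = H.astype(bool)
    Q = np.where(mask, llr, 0.0)                      # variable-to-check messages
    for it in range(1, max_iters + 1):
        # Check-node update (min-sum): extrinsic sign product and minimum
        # magnitude, i.e. excluding the destination edge in each row.
        sgn = np.where(Q < 0, -1.0, 1.0)
        row_sign = np.prod(np.where(mask, sgn, 1.0), axis=1, keepdims=True)
        mag = np.where(mask, np.abs(Q), np.inf)
        argmin1 = mag.argmin(axis=1)
        min1 = mag[np.arange(m), argmin1][:, None]
        mag_wo = mag.copy()
        mag_wo[np.arange(m), argmin1] = np.inf
        min2 = mag_wo.min(axis=1, keepdims=True)
        ext_min = np.where(np.arange(n)[None, :] == argmin1[:, None], min2, min1)
        R = np.where(mask, (row_sign / np.where(mask, sgn, 1.0)) * ext_min, 0.0)
        # Variable-node update, hard decision and conventional stopping test.
        total = llr + R.sum(axis=0)
        Q = np.where(mask, total - R, 0.0)
        x = (total < 0).astype(np.uint8)
        if not ((H @ x) % 2).any():                   # all parity checks satisfied
            return x, True, it
    return x, False, max_iters

# (7,4) Hamming parity-check matrix; all-zero codeword sent, bit 6 received
# unreliably with the wrong sign.
H = np.array([[1, 0, 1, 0, 1, 0, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]], dtype=np.uint8)
llr = np.array([2.1, 1.8, 2.5, 1.9, 2.2, 2.0, -0.5])
x, ok, iters = min_sum_decode(H, llr)
print(x, ok, iters)    # expected: all-zero word recovered after a single iteration
```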