13 research outputs found

    GLDPC-Staircase AL-FEC codes: A Fundamental study and New results

    This paper provides fundamentals in the design and analysis of Generalized Low Density Parity Check (GLDPC)-Staircase codes over the erasure channel. These codes are constructed by extending an LDPC-Staircase code (the base code) with Reed-Solomon (RS) codes (the outer codes) in order to benefit from more powerful decoders. In addition to the LDPC-Staircase repair symbols, the GLDPC-Staircase coding scheme adds extra-repair symbols that can be produced on demand and in large quantities, which provides small-rate capabilities. These codes are therefore extremely flexible: they can be tuned to behave like fixed-rate LDPC-Staircase codes at one extreme, like a single RS code at the other extreme, or like small-rate codes in between. Concerning the code design, we show that RS codes with a "quasi"-Hankel matrix-based construction fulfill the desired structural properties, and that a hybrid Iterative/Reed-Solomon/Maximum Likelihood (IT/RS/ML) decoding is feasible, achieving Maximum Likelihood (ML) correction capabilities at lower complexity. Concerning performance analysis, we detail an asymptotic analysis method based on density evolution (DE), EXtrinsic Information Transfer (EXIT) functions and the area theorem. Based on several asymptotic and finite-length results, and after selecting the optimal internal parameters, we demonstrate that GLDPC-Staircase codes feature excellent erasure recovery capabilities, close to those of ideal codes, both with large and very small objects. From this point of view they outperform LDPC-Staircase and Raptor codes, and achieve correction capabilities close to those of RaptorQ codes. These results make GLDPC-Staircase codes a universal Application-Layer FEC (AL-FEC) solution for many situations that require erasure protection, such as media streaming or file multicast transmission.
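
    The hybrid IT/RS/ML decoder described above can be pictured as a staged fallback: cheap iterative (peeling) decoding first, then algebraic decoding of the stronger component codes, and full ML decoding only on what remains. The sketch below is a minimal, hypothetical illustration of that control flow for a plain binary parity-check matrix over the erasure channel; it implements only the IT stage and the ML stage (Gaussian elimination over GF(2)), omits the RS stage, and uses placeholder function names, so it should not be read as the authors' decoder.

        # Toy staged erasure decoding (IT stage, then ML stage); the RS stage is omitted.
        # H: binary parity-check matrix (numpy int array), symbols: GF(2) values,
        # erased: boolean mask marking the unknown positions.
        import numpy as np

        def peel(H, symbols, erased):
            """IT stage: solve any check equation that contains exactly one erasure."""
            progress = True
            while progress and erased.any():
                progress = False
                for row in H:
                    unknown = np.flatnonzero((row == 1) & erased)
                    if len(unknown) == 1:
                        known = (row == 1) & ~erased
                        symbols[unknown[0]] = symbols[known].sum() % 2
                        erased[unknown[0]] = False
                        progress = True
            return symbols, erased

        def ml_gauss(H, symbols, erased):
            """ML stage: solve H_E x_E = H_K x_K over GF(2) by Gaussian elimination."""
            E = np.flatnonzero(erased)
            A = H[:, E].copy() % 2
            b = (H[:, ~erased] @ symbols[~erased]) % 2
            for j in range(len(E)):                      # pivot column j, pivot row j
                pivots = np.flatnonzero(A[j:, j]) + j
                if len(pivots) == 0:
                    return None                          # rank deficient: decoding failure
                A[[j, pivots[0]]] = A[[pivots[0], j]]
                b[[j, pivots[0]]] = b[[pivots[0], j]]
                for r in range(A.shape[0]):
                    if r != j and A[r, j]:
                        A[r] ^= A[j]
                        b[r] ^= b[j]
            symbols[E] = b[:len(E)]
            return symbols

        def hybrid_decode(H, symbols, erased):
            """Run the cheap IT stage first; fall back to ML only if erasures remain."""
            symbols, erased = peel(H, symbols.copy(), erased.copy())
            return symbols if not erased.any() else ml_gauss(H, symbols, erased)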

    Low-rate coding using incremental redundancy for GLDPC codes

    In this paper we propose a low-rate coding method suited for application-layer forward error correction. Depending on channel conditions, the coding scheme we propose can switch from a fixed-rate LDPC code to various low-rate GLDPC codes. The source symbols are first encoded using a staircase or triangular LDPC code. If additional symbols are needed, the encoder is switched to GLDPC mode and extra-repair symbols are produced on demand. In order to ensure small overheads, we consider irregular distributions of extra-repair symbols optimized by density evolution techniques. We also show that increasing the number of extra-repair symbols improves the successful decoding probability, which becomes very close to 1 for sufficiently many extra-repair symbols.
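
    As background on the density-evolution optimization mentioned above: for the simpler case of a regular (dv, dc) LDPC ensemble on the binary erasure channel, density evolution reduces to a one-dimensional recursion, and the BP decoding threshold is the largest erasure probability for which that recursion converges to zero. The snippet below is a sketch of that standard textbook recursion, not the (more involved) GLDPC density evolution used in the paper; the degree parameters are illustrative placeholders.

        # Standard BEC density evolution for a regular (dv, dc) LDPC ensemble:
        #   x_{l+1} = eps * (1 - (1 - x_l)^(dc-1))^(dv-1)
        # The BP threshold is the largest erasure probability eps for which the
        # recursion is driven to (numerically) zero.

        def converges(eps, dv, dc, iters=2000, tol=1e-12):
            x = eps
            for _ in range(iters):
                x = eps * (1.0 - (1.0 - x) ** (dc - 1)) ** (dv - 1)
                if x < tol:
                    return True
            return False

        def bp_threshold(dv, dc, lo=0.0, hi=1.0, steps=40):
            """Bisection on eps; returns an estimate of the BP decoding threshold."""
            for _ in range(steps):
                mid = 0.5 * (lo + hi)
                lo, hi = (mid, hi) if converges(mid, dv, dc) else (lo, mid)
            return lo

        print(bp_threshold(3, 6))   # the (3,6) ensemble's known BP threshold is approx. 0.4294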

    Design of Small Rate, Close to Ideal, GLDPC-Staircase AL-FEC Codes for the Erasure Channel

    This work introduces Generalized Low Density Parity Check (GLDPC)-Staircase codes for the erasure channel, which are constructed by extending LDPC-Staircase codes with Reed-Solomon (RS) codes based on "quasi"-Hankel matrices. This construction has several key benefits: in addition to the LDPC-Staircase repair symbols, it adds extra-repair symbols that can be produced on demand and in large quantities, which provides small-rate capabilities. Additionally, by selecting the best internal parameters of the GLDPC graph and under hybrid Iterative/Reed-Solomon/Maximum Likelihood decoding, GLDPC-Staircase codes feature a very small decoding overhead and a low error floor. These excellent erasure recovery capabilities, close to those of ideal MDS codes, are obtained both with large and very small objects, whereas LDPC codes, by comparison, are only known to be asymptotically good. These properties make GLDPC-Staircase codes an excellent AL-FEC solution for many situations that require erasure protection, such as media streaming.
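
    To make the "small rate capabilities" concrete, one can write the overall code rate as a function of how many extra-repair symbols are produced. The expression below is an assumption based on the construction described in the abstract, using my own notation (k source symbols, n_L LDPC-Staircase repair symbols, e extra-repair symbols), not a formula quoted from the paper:

        r \;=\; \frac{k}{\,k + n_L + e\,}, \qquad
        e = 0 \;\Rightarrow\; r = \frac{k}{k + n_L} \ \text{(the LDPC-Staircase base rate)}, \qquad
        r \to 0 \ \text{as } e \to \infty .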

    Refined Reliability Combining for Binary Message Passing Decoding of Product Codes

    We propose a novel soft-aided iterative decoding algorithm for product codes (PCs). The proposed algorithm, named iterative bounded distance decoding with combined reliability (iBDD-CR), enhances the conventional iterative bounded distance decoding (iBDD) of PCs by exploiting some level of soft information. In particular, iBDD-CR can be seen as a modification of iBDD where the hard decisions of the row and column decoders are made based on a reliability estimate of the BDD outputs. The reliability estimates are derived using extrinsic message passing for generalized low-density parity-check (GLDPC) ensembles, which encompass PCs. We perform a density evolution analysis of iBDD-CR for the GLDPC ensemble for transmission over the additive white Gaussian noise channel. We consider both binary transmission and bit-interleaved coded modulation with quadrature amplitude modulation. We show that iBDD-CR achieves performance gains of up to 0.51 dB compared to iBDD with the same internal decoder data flow. This makes the algorithm an attractive solution for very high-throughput applications such as fiber-optic communications.
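
    The following toy function sketches the general soft-aided combining idea that iBDD-CR builds on: the binary output of a component bounded-distance decoder is mapped to +/-1, weighted by a reliability estimate, and combined with the channel LLR before the new hard decision is taken. The weight w here is a free placeholder, and the exact reliability-combining rule of iBDD-CR (derived via density evolution in the paper) is not reproduced, so this is only an illustrative analogue.

        import numpy as np

        def soft_aided_decision(bdd_bits, bdd_success, channel_llr, w):
            """Toy combining rule (not the exact iBDD-CR update from the paper):
            map BDD output bits to +/-1, scale by a reliability weight w, add the
            channel LLR, and take the sign as the new hard decision. Where BDD
            failed, fall back to the sign of the channel LLR alone."""
            bdd_bipolar = 1 - 2 * bdd_bits.astype(int)        # bit 0 -> +1, bit 1 -> -1
            combined = np.where(bdd_success, w * bdd_bipolar + channel_llr, channel_llr)
            return (combined < 0).astype(int)                  # back to bits {0, 1}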

    On hard-decision forward error correction with application to high-throughput fiber-optic communications

    The advent of the Internet has not only changed communication methods significantly, but also the lifestyle of human beings. The number of Internet users has grown exponentially in the last decade, exceeding 3.4 billion in 2016. Fiber links serve as the Internet backbone; hence, the fast growth of the Internet and the sheer number of new applications are strongly driven by advances in optical communications. The emergence of coherent optical systems has led to a more efficient use of the available spectrum compared to traditional on-off keying transmission, and has made it possible to increase the supported data rates. To achieve high spectral efficiencies and improve the transmission reach, coding in combination with higher-order modulation, a scheme known as coded modulation (CM), has become indispensable in fiber-optic communications. In recent years, graph-based codes such as low-density parity-check codes with soft-decision decoding (SDD) have been adopted for long-haul coherent optical systems. SDD yields very high net coding gains, but at the expense of a relatively high decoding complexity, which brings implementation challenges at very high data rates. Hard-decision decoding (HDD) is an appealing alternative that reduces the decoding complexity. This motivates the focus of this thesis on forward error correction (FEC) with HDD for high-throughput, low-power fiber-optic communications. In this thesis, we start by studying the performance bounds of HDD. In particular, we derive achievable information rates (AIRs) for CM with HDD for both bit-wise and symbol-wise decoding, and show that bit-wise HDD yields significantly higher AIRs. We also design nonbinary staircase codes using density evolution. Finite-length simulation results for binary and nonbinary staircase codes corroborate the conclusions arising from the AIR analysis, i.e., that for HDD binary codes are preferable. Then, we consider probabilistic shaping. In particular, we extend the probabilistic amplitude shaping (PAS) scheme recently introduced by Böcherer et al. to HDD based on staircase codes. Finally, we focus on new decoding algorithms for product-like codes to close the gap between HDD and SDD while keeping the decoding complexity low. In particular, we propose three novel decoding algorithms for product-like codes based on assisting the HDD with some level of soft information. The proposed algorithms provide a clear performance-complexity tradeoff. In particular, we show that up to roughly half of the gap between SDD and HDD can be closed with a limited complexity increase with respect to HDD.
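
    As a rough illustration of the bit-wise AIR analysis mentioned above: if, after hard detection, each bit level of the modulation is modeled as an independent binary symmetric channel, an achievable rate is simply the sum of the BSC capacities of the levels. The snippet below computes that textbook-style estimate under this independence assumption; it is not the exact AIR expressions derived in the thesis, and the example crossover probabilities are made up.

        from math import log2

        def h2(p):
            """Binary entropy function."""
            return 0.0 if p in (0.0, 1.0) else -p * log2(p) - (1 - p) * log2(1 - p)

        def bitwise_hdd_air(bit_error_probs):
            """Achievable rate (bits/symbol) when each bit level is modeled as a BSC
            with the given crossover probability: sum_i (1 - h2(p_i))."""
            return sum(1.0 - h2(p) for p in bit_error_probs)

        # Example: a 16-QAM-like label with four bit levels of differing reliability
        print(bitwise_hdd_air([0.02, 0.02, 0.08, 0.08]))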

    Approaching Capacity at High-Rates with Iterative Hard-Decision Decoding

    A variety of low-density parity-check (LDPC) ensembles have now been observed to approach capacity with message-passing decoding. However, all of them use soft (i.e., non-binary) messages and a posteriori probability (APP) decoding of their component codes. In this paper, we show that one can approach capacity at high rates using iterative hard-decision decoding (HDD) of generalized product codes. Specifically, a class of spatially-coupled GLDPC codes with BCH component codes is considered, and it is observed that, in the high-rate regime, they can approach capacity under the proposed iterative HDD. These codes can be seen as generalized product codes and are closely related to braided block codes. An iterative HDD algorithm is proposed that enables one to analyze the performance of these codes via density evolution (DE). Comment: 22 pages; this version accepted to the IEEE Transactions on Information Theory.
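
    The iterative HDD referred to above alternates hard-decision decoding of the component codes along the two dimensions of a product-like array. The sketch below shows that generic row/column iteration for an ordinary product code, using a (7,4) Hamming code as a stand-in component code so the example is self-contained; the spatially-coupled, BCH-based construction and the density evolution analysis from the paper are not reproduced here.

        import numpy as np

        # Parity-check matrix of the (7,4) Hamming code, used as a stand-in component
        # code (single-error bounded-distance decoding) to keep the sketch self-contained.
        H = np.array([[1, 0, 1, 0, 1, 0, 1],
                      [0, 1, 1, 0, 0, 1, 1],
                      [0, 0, 0, 1, 1, 1, 1]])

        def bdd_decode(word):
            """Hard-decision component decoding: compute the syndrome and flip the bit
            whose column of H matches it (Hamming codes are perfect, so this always returns)."""
            s = (H @ word) % 2
            if s.any():
                word = word.copy()
                word[np.flatnonzero((H.T == s).all(axis=1))[0]] ^= 1
            return word

        def iterative_hdd(received, max_iters=10):
            """Generic iterative HDD of a product code: alternately apply the component
            hard-decision decoder to every row, then to every column, of the array.
            For the (7,4) x (7,4) product code above, `received` is a 7 x 7 binary array."""
            array = received.copy()
            for _ in range(max_iters):
                before = array.copy()
                for i in range(array.shape[0]):          # decode all rows
                    array[i, :] = bdd_decode(array[i, :])
                for j in range(array.shape[1]):          # decode all columns
                    array[:, j] = bdd_decode(array[:, j])
                if np.array_equal(array, before):        # no change: converged
                    break
            return array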

    Generalized Spatially-Coupled Parallel Concatenated Codes With Partial Repetition

    A new class of spatially coupled turbo-like codes (SC-TCs), dubbed generalized spatially coupled parallel concatenated codes (GSC-PCCs), is introduced. These codes are constructed by applying spatial coupling to parallel concatenated codes (PCCs) in which a fraction of the information bits is repeated q times. GSC-PCCs can be seen as a generalization of the original spatially coupled parallel concatenated codes proposed by Moloudi et al. [2]. To characterize the asymptotic performance of GSC-PCCs, we derive the corresponding density evolution equations and compute their decoding thresholds. The threshold saturation effect is observed and proven. Most importantly, we rigorously prove that the rate-R GSC-PCC ensemble with 2-state convolutional component codes achieves at least a fraction 1 - R/(R+q) of the capacity of the binary erasure channel (BEC) for repetition factor q ≥ 2, and that this multiplicative gap vanishes as q tends to infinity. To the best of our knowledge, this is the first class of SC-TCs proven to be capacity-achieving. Further, the connection between the strength of the component codes, the decoding thresholds of GSC-PCCs, and the repetition factor is established. The superiority of the proposed codes at finite blocklength is exemplified by comparing their error performance with that of existing SC-TCs via computer simulations.
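
    One way to read the capacity statement above in formulas, under the usual convention that "achieving a fraction of capacity" means the rate is at least that fraction of the BEC capacity at the ensemble's BP threshold (the paper's theorem should be consulted for the exact form):

        \frac{R}{1 - \varepsilon_{\mathrm{BP}}} \;\ge\; 1 - \frac{R}{R+q} \;=\; \frac{q}{R+q},
        \qquad\text{equivalently}\qquad
        \varepsilon_{\mathrm{BP}} \;\ge\; 1 - R - \frac{R^2}{q},

    so both the multiplicative gap R/(R+q) and the corresponding additive gap R^2/q to the Shannon limit 1 - R vanish as q grows.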

    Spatially Coupled Turbo-Like Codes

    The focus of this thesis is on proposing and analyzing a powerful class of codes on graphs with trellis constraints that can simultaneously approach capacity and achieve a very low error floor. In particular, we propose the concept of spatial coupling for turbo-like code (SC-TC) ensembles and investigate the impact of coupling on the performance of these codes. The main elements of this study can be summarized by the following four major topics. First, we considered the spatial coupling of parallel concatenated codes (PCCs), serially concatenated codes (SCCs), and hybrid concatenated codes (HCCs). We also proposed two extensions of braided convolutional codes (BCCs) to higher coupling memories. Second, we investigated the impact of coupling on the asymptotic behavior of the proposed ensembles in terms of their decoding thresholds. For that purpose, we derived the exact density evolution (DE) equations of the proposed SC-TC ensembles over the binary erasure channel. Using the DE equations, we found the thresholds of the coupled and uncoupled ensembles under belief propagation (BP) decoding for a wide range of rates. We also computed the maximum a-posteriori (MAP) thresholds of the underlying uncoupled ensembles. Our numerical results confirm that TCs have excellent MAP thresholds, and that for a large enough coupling memory, the BP threshold of an SC-TC ensemble improves to the MAP threshold of the underlying TC ensemble. This phenomenon is called threshold saturation, and we proved its occurrence for SC-TCs using a proof technique based on the potential function of the ensembles. Third, we investigated and discussed the performance of SC-TCs in the finite-length regime. We proved that, under certain conditions, the minimum distance of an SC-TC ensemble is larger than or equal to that of its underlying uncoupled ensemble. Based on this fact, we performed a weight enumerator (WE) analysis of the underlying uncoupled ensembles to investigate the error floor performance of the SC-TC ensembles. We computed bounds on the error rate performance and minimum distance of the TC ensembles. These bounds indicate a very low error floor for the SCC, HCC, and BCC ensembles, and show that for the HCC and BCC ensembles the minimum distance grows linearly with the input block length. The results from the DE and WE analyses demonstrate that the performance of TCs benefits from spatial coupling in both the waterfall and error floor regions. While uncoupled TC ensembles with close-to-capacity performance exhibit a high error floor, our results show that SC-TCs can simultaneously approach capacity and achieve a very low error floor. Fourth, we proposed a unified ensemble of TCs that includes all the considered TC classes. We showed that for each of the original classes of TCs, it is possible to find an equivalent ensemble by proper selection of the design parameters in the unified ensemble. This unified ensemble not only helps us to understand the connections and trade-offs between the TC ensembles, but can also be considered a bridge between TCs and generalized low-density parity-check codes.
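
    The threshold saturation result described above can be summarized compactly. Writing epsilon_BP^SC(m) for the BP threshold of the coupled ensemble with coupling memory m, and epsilon_BP and epsilon_MAP for the BP and MAP thresholds of the underlying uncoupled ensemble (this notation follows the standard BEC convention, not necessarily the thesis's own), the statement reads:

        \varepsilon_{\mathrm{BP}} \;\le\; \varepsilon_{\mathrm{BP}}^{\mathrm{SC}}(m)
        \;\xrightarrow{\;m \to \infty\;}\; \varepsilon_{\mathrm{MAP}} .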