
    Irregular Turbo Codes in Block-Fading Channels

    We study irregular binary turbo codes over non-ergodic block-fading channels. We first propose an extension of channel multiplexers initially designed for regular turbo codes. We then show that, using these multiplexers, irregular turbo codes that exhibit a small decoding threshold over the ergodic Gaussian-noise channel perform very close to the outage probability on block-fading channels, from both density evolution and finite-length perspectives. Comment: to be presented at the IEEE International Symposium on Information Theory, 201
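    The outage probability that serves as the benchmark here is the probability that the mutual information accumulated over the fading blocks falls below the code rate. The sketch below estimates it by Monte Carlo for a Rayleigh block-fading channel; the block count, rate and SNR values are illustrative placeholders, not parameters from the paper.

```python
import numpy as np

def outage_probability(snr_db, rate, n_blocks=2, n_trials=200_000, rng=None):
    """Monte Carlo estimate of the outage probability of a block-fading
    channel with n_blocks independent Rayleigh-fading blocks: outage occurs
    when the mutual information averaged over the blocks falls below the
    target rate (in bits per channel use)."""
    rng = rng or np.random.default_rng(0)
    snr = 10 ** (snr_db / 10)
    # |h|^2 is exponentially distributed with unit mean under Rayleigh fading.
    gains = rng.exponential(scale=1.0, size=(n_trials, n_blocks))
    mutual_info = np.mean(np.log2(1.0 + snr * gains), axis=1)
    return np.mean(mutual_info < rate)

# Illustrative rate-1/2 code over a 2-block fading channel.
for snr_db in (5, 10, 15, 20):
    print(snr_db, "dB:", outage_probability(snr_db, rate=0.5))
```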

    Design of rate-compatible structured LDPC codes for hybrid ARQ applications

    In this paper, families of rate-compatible protograph-based LDPC codes suitable for incremental-redundancy hybrid ARQ applications are constructed. A systematic technique to construct low-rate base codes from a higher-rate code is presented. The base codes are designed to be robust against erasures while maintaining good performance on error channels. A progressive node puncturing algorithm is devised to construct a family of higher-rate codes from the base code, and its performance is compared to that of other puncturing schemes. Using the techniques in this paper, one can construct a rate-compatible family of codes with rates ranging from 0.1 to 0.9 that are within 1 dB of the channel capacity and have good error floors.
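    As a rough illustration of node puncturing (the paper's own progressive algorithm is not reproduced here), the sketch below greedily punctures variable nodes of a parity-check matrix, preferring nodes whose check neighbours are not yet adjacent to another punctured node, so that each punctured bit keeps a "clean" check through which it can be recovered.

```python
import numpy as np

def progressive_puncture(H, target_rate):
    """Greedy node-puncturing sketch on a parity-check matrix H (checks x
    variables). This is an illustrative heuristic, not the algorithm of the
    paper: at each step, the variable node with the most check neighbours
    not yet touching a punctured node is punctured."""
    m, n = H.shape
    k = n - m                                    # assumes H has full row rank
    n_punct = max(int(round(n - k / target_rate)), 0)
    punctured, dirty_checks = set(), set()
    for _ in range(n_punct):
        best, best_score = None, -1
        for v in range(n):
            if v in punctured:
                continue
            checks = np.flatnonzero(H[:, v])
            score = sum(1 for c in checks if c not in dirty_checks)
            if score > best_score:
                best, best_score = v, score
        punctured.add(best)
        dirty_checks.update(np.flatnonzero(H[:, best]))
    return sorted(punctured)

# Toy usage with a random sparse parity-check matrix (illustration only).
rng = np.random.default_rng(1)
H = (rng.random((50, 100)) < 0.06).astype(np.uint8)
print(progressive_puncture(H, target_rate=0.75)[:10])
```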

    Performance Study of a Class of Irregular Near Capacity Achieving LDPC Codes

    This paper investigates the performance of a class of irregular low-density parity-check (LDPC) codes through a recently published low-complexity upper bound on their belief-propagation decoding thresholds. Their performance analysis is also carried out through a recently published algorithmic method, presented in the 2017 paper by Babich et al. In particular, the class considered is characterized by variable node degree distributions $\lambda(x)$ with minimum degree $i_1 > 2$: since in this case $\lambda'(0) = \lambda_2 = 0$, the class is useful for designing LDPC codes whose minimum distance grows linearly with the block length with probability 1, as shown in the 2006 paper by Di et al. Unfortunately, these codes cannot achieve capacity under iterative decoding, since achieving capacity requires $\lambda_2 \neq 0$; in that latter case, however, the block error probability may converge to a constant, as shown in the aforementioned paper.
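    For context, the belief-propagation threshold of an ensemble with given edge-perspective degree distributions $\lambda(x)$ and $\rho(x)$ can be computed exactly on the binary erasure channel, the simplest setting for density evolution. The sketch below does this by bisection; the paper itself works with a bound valid for more general channels, and the degree distributions used here (a regular ensemble with $\lambda_2 = 0$) are illustrative.

```python
import numpy as np

def bp_threshold_bec(lambda_coeffs, rho_coeffs, tol=1e-5):
    """BP threshold of an LDPC ensemble over the binary erasure channel,
    given the edge-perspective degree distributions as coefficient lists
    (lambda_coeffs[i] is the coefficient of x**i). Bisection on the channel
    erasure probability eps, running the density-evolution recursion
        x <- eps * lambda(1 - rho(1 - x)).
    Slightly conservative very close to the threshold, where the recursion
    converges slowly."""
    lam = np.polynomial.Polynomial(lambda_coeffs)
    rho = np.polynomial.Polynomial(rho_coeffs)

    def converges(eps):
        x = eps
        for _ in range(5000):
            x = eps * lam(1.0 - rho(1.0 - x))
            if x < 1e-12:
                return True
        return False

    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if converges(mid) else (lo, mid)
    return lo

# Illustrative ensemble with lambda_2 = 0: lambda(x) = x^2, rho(x) = x^5
# (the (3,6)-regular ensemble); the threshold is roughly 0.429.
print(bp_threshold_bec([0, 0, 1], [0, 0, 0, 0, 0, 1]))
```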

    Sparse graph-based coding schemes for continuous phase modulations

    The use of continuous phase modulation (CPM) is attractive when the channel exhibits strong non-linearity and when the spectral support is limited; this is particularly the case for the uplink, where the satellite carries one amplifier per carrier, and for downlinks where the terminal equipment operates very close to the saturation region. Numerous studies have addressed this issue, but the proposed solutions use iterative CPM demodulation/decoding concatenated with convolutional or block error-correcting codes; the use of LDPC codes has not yet been introduced. In particular, to our knowledge, no work has been done on the optimization of sparse graph-based codes adapted to the context described here. In this study, we perform the asymptotic analysis and the design of turbo-CPM systems based on the optimization of sparse graph-based codes. An analysis of the corresponding receiver is also carried out.
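    The asymptotic analysis mentioned here is of the EXIT-chart kind: the code profile is matched to the transfer curve of the CPM demodulator. As a hedged, generic illustration (not the CPM-specific analysis of this study), the sketch below evaluates the extrinsic-information transfer function of an LDPC variable node under the usual consistent-Gaussian approximation, with the J-function estimated by Monte Carlo; the degree and channel parameter are placeholders.

```python
import numpy as np

RNG = np.random.default_rng(0)

def J(sigma, n_samples=100_000):
    """Mutual information between a BPSK bit and a consistent Gaussian LLR
    of standard deviation sigma (mean sigma**2 / 2), estimated by Monte
    Carlo: J(sigma) = 1 - E[log2(1 + exp(-L))]."""
    if sigma <= 0:
        return 0.0
    L = RNG.normal(loc=sigma ** 2 / 2, scale=sigma, size=n_samples)
    return 1.0 - np.mean(np.log2(1.0 + np.exp(-L)))

def J_inv(I, lo=1e-3, hi=40.0):
    """Numerical inverse of J by bisection (J is increasing in sigma)."""
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if J(mid, 20_000) < I else (lo, mid)
    return 0.5 * (lo + hi)

def variable_node_exit(I_A, dv, sigma_ch):
    """EXIT function of a degree-dv variable node: the extrinsic LLR is the
    sum of (dv - 1) a-priori LLRs and the channel LLR, so the 'sigmas' of
    the consistent Gaussians add in quadrature."""
    return J(np.sqrt((dv - 1) * J_inv(I_A) ** 2 + sigma_ch ** 2))

# Illustrative curve for degree-3 variable nodes at channel LLR sigma = 1.5.
for I_A in (0.1, 0.3, 0.5, 0.7, 0.9):
    print(round(I_A, 1), round(variable_node_exit(I_A, dv=3, sigma_ch=1.5), 3))
```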

    Design of serially-concatenated LDGM codes

    [Abstract] Since Shannon demonstrated in 1948 the feasibility of achieving an arbitrarily low error probability in a communications system provided that the transmission rate is kept below a certain limit, one of the greatest challenges in the realm of digital communications and, more specifically, in the channel coding field, has been finding codes that approach this limit as closely as possible with reasonable encoding and decoding complexity. However, it was not until 1993, when Berrou et al. presented the turbo codes, that a coding scheme capable of performing at less than 1 dB from Shannon's limit with an extremely low error probability was found. The idea on which these codes are based is the iterative decoding of concatenated components that exchange information about the transmitted bits, which is known as the "turbo principle". The generalization of this idea led in 1995 to the rediscovery of LDPC (Low-Density Parity-Check) codes, proposed for the first time by Gallager in the 1960s. LDPC codes are linear block codes with a sparse parity-check matrix that are able to surpass the performance of turbo codes with a smaller decoding complexity. However, because the generator matrix of a general LDPC code is not sparse, its encoding complexity can be excessively high. LDGM (Low-Density Generator Matrix) codes, a particular case of LDPC codes, have a sparse generator matrix, thanks to which they present a lower encoding complexity. However, except for very high-rate codes, LDGM codes are "bad", i.e., they have a non-zero error probability that is independent of the code block length. More recently, IRA (Irregular Repeat-Accumulate) codes, consisting of the serial concatenation of an LDGM code and an accumulator, have been proposed. IRA codes are able to approach the performance of LDPC codes with an encoding complexity similar to that of LDGM codes.

    In this thesis we explore an alternative to IRA codes consisting of the serial concatenation of two LDGM codes, a scheme that we denote SCLDGM (Serially-Concatenated Low-Density Generator Matrix). The basic premise of SCLDGM codes is that an inner code with rate close to the desired transmission rate fixes most of the errors, while an outer code with rate close to one corrects the few errors that remain after decoding the inner code. For any of these schemes to perform as close as possible to the capacity limit, it is necessary to determine the code parameters that best fit the channel over which the transmission will take place. The two techniques most commonly used in the literature to optimize LDPC codes are density evolution (DE) and EXtrinsic Information Transfer (EXIT) charts, which have been employed to obtain optimized codes that perform within a few tenths of a decibel of the AWGN channel capacity. However, no optimization techniques have been presented for SCLDGM codes, which so far have been designed heuristically and therefore perform far from IRA and LDPC codes.

    Another important advance of recent years is the use of multiple antennas at the transmitter and the receiver, known as MIMO (Multiple-Input Multiple-Output) systems. Telatar showed that the channel capacity of such systems scales linearly with the minimum of the numbers of transmit and receive antennas, which enables spectral efficiencies far greater than those of systems with a single transmit and a single receive antenna (Single-Input Single-Output, or SISO, systems). This important advantage has attracted a lot of attention from the research community and has led many of the new standards, such as WiMAX 802.16e and WiFi 802.11n, as well as future 4G systems, to be based on MIMO. The main problem of MIMO systems is the high complexity of optimum detection, which grows exponentially with the number of transmit antennas and the number of modulation levels. Several suboptimum algorithms have been proposed to reduce this complexity, most notably the SIC-MMSE (Soft-Interference-Cancellation Minimum Mean Square Error) and sphere detectors. Another major issue is the high complexity of channel estimation, due to the large number of coefficients that determine the channel. Techniques such as Maximum-Likelihood Expectation-Maximization (ML-EM) have been successfully applied to estimate MIMO channels but, as in the case of detection, their complexity becomes very high when the number of transmit antennas or the size of the constellation increases.

    The main objective of this work is the study and optimization of SCLDGM codes in SISO and MIMO channels. To this end, we propose an optimization method for SCLDGM codes based on EXIT charts that allows these codes to exceed the performance of the IRA codes existing in the literature and to approach the performance of LDPC codes, with the advantage over the latter of a lower encoding complexity. We also propose optimized SCLDGM codes for both sphere and SIC-MMSE suboptimal MIMO detectors, constituting a system that is capable of approaching the capacity limits of MIMO channels with low-complexity encoding, detection and decoding. We analyze the BICM (Bit-Interleaved Coded Modulation) scheme and the concatenation of SCLDGM codes with Space-Time Codes (STC) in ergodic and quasi-static MIMO channels. Furthermore, we explore the combination of these codes with different channel estimation algorithms that take advantage of the low complexity of the suboptimum detectors to reduce the complexity of the estimation process while keeping a small gap to the capacity limit. Finally, we propose coding schemes for low rates involving the serial concatenation of several LDGM codes, reducing the complexity of recently proposed schemes based on Hadamard codes.
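    The SCLDGM construction itself is easy to sketch: a systematic LDGM encoder is just multiplication by a sparse parity part, and two of them are applied in series. The snippet below shows the encoding chain under stated assumptions: the sparse matrices are drawn at random with a fixed column weight as placeholders for the degree profiles that the thesis optimizes via EXIT charts, and the rates are illustrative.

```python
import numpy as np

RNG = np.random.default_rng(0)

def sparse_parity_part(k, m, col_weight=3):
    """Random sparse k x m binary matrix with col_weight ones per column --
    a placeholder for the optimized LDGM degree profile of the thesis."""
    P = np.zeros((k, m), dtype=np.uint8)
    for j in range(m):
        rows = RNG.choice(k, size=min(col_weight, k), replace=False)
        P[rows, j] = 1
    return P

def ldgm_encode(u, P):
    """Systematic LDGM encoding: codeword = [u | u P] over GF(2)."""
    return np.concatenate([u, (u @ P) % 2])

# SCLDGM: an outer LDGM code with rate close to 1 followed by an inner LDGM
# code whose rate is close to the overall target rate (here about 1/2).
k = 1000
P_outer = sparse_parity_part(k, 50)            # outer rate 1000/1050 ~ 0.95
P_inner = sparse_parity_part(k + 50, 1050)     # inner rate 1050/2100 = 0.50

u = RNG.integers(0, 2, size=k, dtype=np.uint8)
outer_cw = ldgm_encode(u, P_outer)             # length 1050
codeword = ldgm_encode(outer_cw, P_inner)      # length 2100
print(codeword.size, "coded bits, overall rate", k / codeword.size)
```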

    Concatenated Polar Codes and Joint Source-Channel Decoding

    In this dissertation, we mainly address two issues: (1) improving the finite-length performance of capacity-achieving polar codes, and (2) using polar codes to efficiently exploit source redundancy to improve the reliability of data storage systems. In the first part of the dissertation, we propose interleaved concatenation schemes of polar codes with outer binary BCH and convolutional codes to improve the finite-length performance of polar codes. For asymptotically long block lengths, we show that our schemes achieve an exponential error decay rate, which is much larger than the sub-exponential decay rate of stand-alone polar codes. In practice, we show by simulation that our schemes outperform stand-alone polar codes decoded with successive cancellation or belief propagation decoding, and that the performance of concatenated polar and convolutional codes can be comparable to that of stand-alone polar codes with list decoding in the high signal-to-noise ratio regime. In addition, we show that the proposed concatenation schemes require lower memory and decoding complexity than belief propagation and list decoding of polar codes. With the proposed schemes, polar codes are able to strike a good balance between performance, memory and decoding complexity.

    The second part of the dissertation is devoted to improving the decoding performance of polar codes when there is leftover redundancy after source compression. We focus on language-based sources and propose a joint source-channel decoding scheme for polar codes. We show that if the language decoder is modeled as an erasure-correcting outer block code, the rate of the inner polar code can be improved while still guaranteeing a vanishing probability of error. The improved rate depends on the frozen-bit distribution of polar codes, and we provide a formal proof of the convergence of that distribution. Both a lower bound and an analysis of the maximum improved rate are provided. To compare with the non-iterative joint list decoding scheme for polar codes, we study a joint iterative decoding scheme with graph codes. In particular, irregular repeat-accumulate codes are exploited because of their low encoding/decoding complexity and capacity-achieving property on the binary erasure channel. We show how to design optimal irregular repeat-accumulate codes for different models of the language decoder, and we show that our scheme achieves improved decoding thresholds. A comparison of joint polar decoding and joint irregular repeat-accumulate decoding is given.
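    A hedged sketch of the inner building block of such a concatenation: a plain polar encoder with frozen bits, preceded by a simple row-column interleaver standing in for the interleaving between outer (BCH or convolutional) codewords and inner polar blocks. The frozen set, block length and interleaver shape are arbitrary toy choices, not the design of the dissertation.

```python
import numpy as np

def polar_transform(u):
    """Polar transform x = u F^{tensor n} over GF(2), with F = [[1, 0], [1, 1]],
    computed by the standard butterfly recursion (the bit-reversal permutation
    is omitted, as in one common convention)."""
    x = np.asarray(u, dtype=np.uint8).copy()
    n = x.size
    assert n & (n - 1) == 0, "block length must be a power of two"
    step = 1
    while step < n:
        for start in range(0, n, 2 * step):
            x[start:start + step] ^= x[start + step:start + 2 * step]
        step *= 2
    return x

def polar_encode(info_bits, frozen_mask):
    """Place information bits on the non-frozen positions (frozen bits are 0)
    and apply the polar transform."""
    u = np.zeros(frozen_mask.size, dtype=np.uint8)
    u[~frozen_mask] = info_bits
    return polar_transform(u)

def row_column_interleave(bits, n_rows):
    """Write row-wise, read column-wise -- a stand-in for the interleaver
    between outer codewords and inner polar blocks."""
    return np.asarray(bits).reshape(n_rows, -1).T.flatten()

# Toy example: N = 8 polar code with 4 arbitrarily chosen frozen positions.
frozen = np.array([1, 1, 1, 0, 1, 0, 0, 0], dtype=bool)
outer_codeword = np.array([1, 0, 1, 1], dtype=np.uint8)   # from some outer code
interleaved = row_column_interleave(outer_codeword, n_rows=2)
print(polar_encode(interleaved, frozen))
```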