
    Progressive Differences Convolutional Low-Density Parity-Check Codes

    We present a new family of low-density parity-check (LDPC) convolutional codes that can be designed using ordered sets of progressive differences. We study their properties and define a subset of codes in this class that have some desirable features, such as fixed minimum distance and Tanner graphs without short cycles. The design approach we propose ensures that these properties are guaranteed independently of the code rate. This makes these codes of interest in many practical applications, particularly when high-rate codes are needed to save bandwidth. We provide some examples of coded transmission schemes exploiting this new class of codes. Comment: 8 pages, 2 figures. Accepted for publication in IEEE Communications Letters. Copyright transferred to IEEE.
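
    As an aside, the "short cycles" mentioned above are length-four cycles in the Tanner graph, which are known to degrade iterative decoding. The sketch below is a generic brute-force check for such cycles in a small binary parity-check matrix; it only illustrates the property, not the progressive-differences construction proposed in the paper, and the matrix H_bad is a made-up toy example.

```python
# Minimal sketch: detect length-4 cycles in the Tanner graph of a binary
# parity-check matrix H. A 4-cycle exists iff two columns of H share a 1
# in two or more common rows. Generic illustration, not the paper's
# progressive-differences construction.
from itertools import combinations

def has_four_cycle(H):
    """H: list of rows, each a list of 0/1 entries."""
    n_cols = len(H[0])
    col_supports = [{r for r, row in enumerate(H) if row[c]} for c in range(n_cols)]
    return any(len(col_supports[i] & col_supports[j]) >= 2
               for i, j in combinations(range(n_cols), 2))

# Toy example: columns 0 and 1 overlap in rows 0 and 1, giving a 4-cycle.
H_bad = [[1, 1, 0, 0],
         [1, 1, 1, 0],
         [0, 0, 1, 1]]
print(has_four_cycle(H_bad))  # True
```

    For large sparse matrices one would exploit the sparsity structure instead of testing all column pairs, but the pairwise column-support test above is exactly the condition for a 4-cycle.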

    Concatenated Turbo/LDPC codes for deep space communications: performance and implementation

    Deep space communications require error correction codes able to reach extremely low bit error rates, possibly with a steep waterfall region and without an error floor. Several schemes have been proposed in the literature to achieve these goals. Most of them rely on the concatenation of different codes, which leads to high hardware implementation complexity and poor resource sharing. This work proposes a scheme based on the concatenation of non-custom LDPC and turbo codes that achieves excellent error correction performance. Moreover, since both LDPC and turbo codes can be decoded with the BCJR algorithm, our preliminary results show that an efficient hardware architecture with high resource reuse can be designed.

    Good Concatenated Code Ensembles for the Binary Erasure Channel

    In this work, we give good concatenated code ensembles for the binary erasure channel (BEC). In particular, we consider repeat multiple-accumulate (RMA) code ensembles formed by the serial concatenation of a repetition code with multiple accumulators, and the hybrid concatenated code (HCC) ensembles recently introduced by Koller et al. (5th Int. Symp. on Turbo Codes & Rel. Topics, Lausanne, Switzerland), consisting of an outer multiple parallel concatenated code serially concatenated with an inner accumulator. We introduce stopping sets for iterative constituent-code-oriented decoding using maximum a posteriori erasure correction in the constituent codes. We then analyze the asymptotic stopping set distribution for RMA and HCC ensembles and show that their stopping distance h_min, defined as the size of the smallest nonempty stopping set, asymptotically grows linearly with the block length. Thus, these code ensembles are good for the BEC. It is shown that for RMA code ensembles, contrary to the asymptotic minimum distance d_min, whose growth rate coefficient increases with the number of accumulate codes, the h_min growth rate coefficient diminishes with the number of accumulators. We also consider random puncturing of RMA code ensembles and show that for sufficiently high code rates, the asymptotic h_min does not grow linearly with the block length, contrary to the asymptotic d_min, whose growth rate coefficient approaches the Gilbert-Varshamov bound as the rate increases. Finally, we give iterative decoding thresholds for the different code ensembles to compare their convergence properties. Comment: To appear in IEEE Journal on Selected Areas in Communications, special issue on Capacity Approaching Codes.
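
    For readers unfamiliar with the terminology, the toy sketch below enumerates stopping sets of a small parity-check matrix by brute force and reports the stopping distance h_min. It only illustrates the definition used above (no check node may have exactly one neighbour in the set), not the ensemble analysis of RMA or HCC codes; the (7,4) Hamming-style matrix is an arbitrary example.

```python
# Minimal sketch: brute-force the stopping distance h_min of a small binary
# parity-check matrix. A stopping set is a nonempty set S of variable nodes
# such that no check node has exactly one neighbour in S; on the BEC, erasing
# exactly the positions in S stalls iterative erasure decoding.
from itertools import combinations

def is_stopping_set(H, S):
    return all(sum(row[v] for v in S) != 1 for row in H)

def stopping_distance(H):
    n = len(H[0])
    for size in range(1, n + 1):
        for S in combinations(range(n), size):
            if is_stopping_set(H, S):
                return size, S
    return None

# Toy (7,4) Hamming-style parity-check matrix (columns are the 7 nonzero triples).
H = [[1, 1, 0, 1, 1, 0, 0],
     [1, 0, 1, 1, 0, 1, 0],
     [0, 1, 1, 1, 0, 0, 1]]
print(stopping_distance(H))  # (3, (0, 1, 2))
```

    For this matrix the smallest nonempty stopping set is {0, 1, 2}, so h_min = 3; the ensemble analysis in the paper asks how such h_min values scale with the block length rather than computing them exhaustively.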

    Self-concatenated code design and its application in power-efficient cooperative communications

    In this tutorial, we focus on the design of binary self-concatenated coding schemes with the aid of EXtrinsic Information Transfer (EXIT) charts and union bound analysis. The design methodology of future iteratively decoded self-concatenated coding aided cooperative communication schemes is presented. In doing so, we identify the most important milestones in the areas of channel coding, concatenated coding schemes and cooperative communication systems to date, and suggest future research directions.
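
    The union bound mentioned above can be illustrated in a few lines of code. The sketch below evaluates the standard union bound on the block error rate of a linear block code over the BPSK/AWGN channel from its weight enumerator; the (7,4) Hamming code spectrum is used purely as a stand-in, so this is a textbook illustration rather than the self-concatenated code analysis of the tutorial.

```python
# Minimal sketch: union bound on the block error rate over BPSK/AWGN,
#   P_block <= sum_d A_d * Q(sqrt(2 * d * R * Eb/N0)),
# illustrated with the (7,4) Hamming code (A_3 = 7, A_4 = 7, A_7 = 1, R = 4/7).
import math

def Q(x):
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def union_bound_block(weight_spectrum, rate, ebno_db):
    ebno = 10.0 ** (ebno_db / 10.0)
    return sum(A_d * Q(math.sqrt(2.0 * d * rate * ebno))
               for d, A_d in weight_spectrum.items())

hamming_7_4 = {3: 7, 4: 7, 7: 1}
for ebno_db in (2, 4, 6, 8):
    print(ebno_db, union_bound_block(hamming_7_4, 4 / 7, ebno_db))
```

    Fed with the (average) weight enumerator of a concatenated ensemble, the same machinery yields error-floor estimates that complement the EXIT-chart convergence analysis.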

    Trapping Set Enumerators for Repeat Multiple Accumulate Code Ensembles

    The serial concatenation of a repetition code with two or more accumulators has the advantage of a simple encoder structure. Furthermore, the resulting ensemble is asymptotically good and exhibits a minimum distance growing linearly with block length. However, in practice these codes cannot be decoded by a maximum likelihood decoder, and iterative decoding schemes must be employed. For low-density parity-check codes, the notion of trapping sets has been introduced to estimate the performance of these codes under iterative message-passing decoding. In this paper, we present a closed-form finite-length ensemble trapping set enumerator for repeat multiple accumulate codes by creating a trellis representation of trapping sets. We also obtain the asymptotic expressions when the block length tends to infinity and evaluate them numerically. Comment: 5 pages, to appear in Proc. IEEE ISIT, June 2009.
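
    To make the definition concrete, the sketch below brute-forces the (a, b) trapping-set spectrum of a small parity-check matrix: a set of a variable nodes is an (a, b) trapping set if exactly b check nodes see an odd number of its members. This exhaustive toy enumeration is only an illustration of the definition, not the trellis-based ensemble enumerator derived in the paper, and the matrix H is an arbitrary example.

```python
# Minimal sketch: brute-force the (a, b) trapping-set spectrum of a small
# binary parity-check matrix. An (a, b) trapping set is a set of a variable
# nodes whose induced subgraph contains exactly b odd-degree check nodes.
from collections import Counter
from itertools import combinations

def trapping_set_spectrum(H, max_a):
    n = len(H[0])
    spectrum = Counter()
    for a in range(1, max_a + 1):
        for S in combinations(range(n), a):
            b = sum(1 for row in H if sum(row[v] for v in S) % 2 == 1)
            spectrum[(a, b)] += 1
    return spectrum

H = [[1, 1, 0, 1, 1, 0, 0],
     [1, 0, 1, 1, 0, 1, 0],
     [0, 1, 1, 1, 0, 0, 1]]
print(trapping_set_spectrum(H, max_a=3))
```

    Exhaustive counting like this is only feasible for tiny codes; the point of the paper's trellis representation is to obtain the ensemble-average spectrum in closed form.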

    VLSI single-chip (255,223) Reed-Solomon encoder with interleaver

    The invention relates to a concatenated Reed-Solomon/convolutional encoding system, consisting of a Reed-Solomon outer code and a convolutional inner code, for downlink telemetry in space missions, and more particularly to a Reed-Solomon encoder with programmable interleaving of the information symbols and code correction symbols to combat error bursts at the output of the Viterbi decoder.
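
    The role of the interleaver can be sketched briefly: with interleaving depth I, the symbols of I Reed-Solomon codewords are written row-wise and transmitted column-wise, so a burst of B consecutive symbol errors from the inner Viterbi decoder is spread over roughly ceil(B / I) symbols per codeword. The code below is a generic software illustration of such a depth-I block interleaver under that assumption, not the patented single-chip hardware design; the function names and the depth I = 5 are chosen for the example.

```python
# Minimal sketch: a depth-I block interleaver for (255, 223) Reed-Solomon
# codewords. I codewords are written row-wise into an I x 255 array and read
# out column-wise, spreading a burst of channel/decoder errors across the
# codewords. Generic illustration only.

N = 255  # RS codeword length in symbols

def interleave(codewords):
    """codewords: list of I lists, each of length N. Returns column-wise readout."""
    depth = len(codewords)
    return [codewords[i][j] for j in range(N) for i in range(depth)]

def deinterleave(symbols, depth):
    """Inverse operation: recover the original list of codewords."""
    return [symbols[i::depth] for i in range(depth)]

# Round-trip check with depth I = 5 and dummy symbol values.
cws = [[(i * N + j) % 256 for j in range(N)] for i in range(5)]
assert deinterleave(interleave(cws), 5) == cws
```

    Since a (255, 223) Reed-Solomon code corrects up to 16 symbol errors per codeword, increasing the interleaving depth raises the burst length the concatenated system can tolerate.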

    Analysis and Design of Tuned Turbo Codes

    It has been widely observed that there exists a fundamental trade-off between the minimum (Hamming) distance properties and the iterative decoding convergence behavior of turbo-like codes. While capacity-achieving code ensembles are typically asymptotically bad, in the sense that their minimum distance does not grow linearly with block length, and therefore exhibit an error floor at moderate-to-high signal-to-noise ratios, asymptotically good codes usually converge further away from channel capacity. In this paper, we introduce the concept of tuned turbo codes, a family of asymptotically good hybrid concatenated code ensembles, where asymptotic minimum distance growth rates, convergence thresholds, and code rates can be traded off using two tuning parameters, λ and μ. Decreasing λ reduces the asymptotic minimum distance growth rate in exchange for improved iterative decoding convergence behavior, while increasing λ raises the asymptotic minimum distance growth rate at the expense of worse convergence behavior; thus the code performance can be tuned to fit the desired application. By decreasing μ, a similar tuning behavior can be achieved for higher-rate code ensembles. Comment: Accepted for publication in IEEE Transactions on Information Theory.