642 research outputs found

    Constructions of Generalized Concatenated Codes and Their Trellis-Based Decoding Complexity

    In this correspondence, constructions of generalized concatenated (GC) codes with good rates and distances are presented. Some of the proposed GC codes have simpler trellis complexity than Euclidean geometry (EG), Reed–Muller (RM), or Bose–Chaudhuri–Hocquenghem (BCH) codes of approximately the same rates and minimum distances, and in addition can be decoded with trellis-based multistage decoding up to their minimum distances. Several codes of the same length, dimension, and minimum distance as the best linear codes known are constructed.

    The trellis complexity of convolutional codes

    Convolutional codes have a natural, regular trellis structure that facilitates the implementation of Viterbi's algorithm. Linear block codes also have a natural, though not in general regular, “minimal” trellis structure, which allows them to be decoded with a Viterbi-like algorithm. In both cases, the complexity of an unenhanced Viterbi decoding algorithm can be accurately estimated by the number of trellis edge symbols per encoded bit. It would therefore appear that we are in a good position to make a fair comparison of the Viterbi decoding complexity of block and convolutional codes. Unfortunately, however, this comparison is somewhat muddled by the fact that some convolutional codes, the punctured convolutional codes, are known to have trellis representations which are significantly less complex than the conventional trellis. In other words, the conventional trellis representation for a convolutional code may not be the “minimal” trellis representation. Thus, ironically, we seem to know more about the minimal trellis representation for block codes than for convolutional codes. We provide a remedy by developing a theory of minimal trellises for convolutional codes. This allows us to make a direct performance-complexity comparison for block and convolutional codes. A by-product of our work is an algorithm for choosing, from among all generator matrices for a given convolutional code, what we call a trellis-canonical generator matrix, from which the minimal trellis for the code can be directly constructed. Another by-product is that in the new theory, punctured convolutional codes no longer appear as a special class, but simply as high-rate convolutional codes whose trellis complexity is unexpectedly small.
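
    The complexity measure quoted here, edge symbols in the minimal trellis, is easiest to see on a small block code. The sketch below is a generic illustration under stated assumptions, not the paper's trellis-canonical construction for convolutional codes: it reduces a binary generator matrix to minimal-span (trellis-oriented) form, reads off the state and edge profile of the minimal bit-level trellis, and normalises the edge count per encoded bit. The [7,4] Hamming matrix and the normalisation by the block length are illustrative choices.

```python
import numpy as np

def span(row):
    """Indices of the leftmost and rightmost nonzero entries of a row."""
    idx = np.flatnonzero(row)
    return idx[0], idx[-1]

def minimal_span_form(G):
    """Reduce a full-rank binary generator matrix over GF(2) until every row
    start and every row end is unique (a trellis-oriented generator matrix)."""
    G = np.array(G, dtype=int) % 2
    k, _ = G.shape
    changed = True
    while changed:
        changed = False
        for i in range(k):
            for j in range(i + 1, k):
                si, ei = span(G[i])
                sj, ej = span(G[j])
                if si == sj:
                    # Clear the shared leading 1 from the row that ends later.
                    t, o = (i, j) if ei >= ej else (j, i)
                elif ei == ej:
                    # Clear the shared trailing 1 from the row that starts earlier.
                    t, o = (i, j) if si <= sj else (j, i)
                else:
                    continue
                G[t] = (G[t] + G[o]) % 2
                changed = True
    return G

def minimal_trellis_profile(G):
    """State and edge counts of the minimal bit-level trellis of the code."""
    G = minimal_span_form(G)
    _, n = G.shape
    spans = [span(r) for r in G]
    states = [2 ** sum(1 for s, e in spans if s < t <= e) for t in range(n + 1)]
    edges = [2 ** sum(1 for s, e in spans if s <= t <= e) for t in range(n)]
    return states, edges

# Illustrative example: the [7,4] Hamming code.
G = [[1, 0, 0, 0, 1, 1, 0],
     [0, 1, 0, 0, 0, 1, 1],
     [0, 0, 1, 0, 1, 1, 1],
     [0, 0, 0, 1, 1, 0, 1]]
states, edges = minimal_trellis_profile(G)
print("state profile      :", states)
print("total edge symbols :", sum(edges))
print("edge symbols / bit :", sum(edges) / len(edges))  # normalising by n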

    Near-Capacity Turbo Trellis Coded Modulation Design

    Bandwidth-efficient parallel-concatenated Turbo Trellis Coded Modulation (TTCM) schemes were designed for communicating over uncorrelated Rayleigh fading channels. A symbol-based union bound was derived for analysing the error floor of the proposed TTCM schemes. A pair of In-phase (I) and Quadrature-phase (Q) interleavers were employed for interleaving the I and Q components of the TTCM coded symbols, in order to attain an increased diversity gain. The decoding convergence of the IQ-TTCM schemes was analysed using symbol-based EXtrinsic Information Transfer (EXIT) charts. The best TTCM component codes were selected with the aid of both the symbol-based union bound and non-binary EXIT charts for the sake of designing capacity-approaching IQ-TTCM schemes in the context of 8PSK, 16QAM and 32QAM signal sets. It will be shown that our TTCM design is capable of approaching the channel capacity within 0.5 dB at a throughput of 4 bit/s/Hz, when communicating over uncorrelated Rayleigh fading channels using 32QAM.
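
    The EXIT-chart analysis mentioned above rests on measuring the mutual information between the transmitted bits and the LLRs exchanged between constituent decoders. The sketch below shows the binary form of that measurement (the thesis itself uses symbol-based, non-binary EXIT charts): a priori LLRs are drawn from the consistent Gaussian model for a chosen sigma_A and their mutual information is estimated with the usual time-average; in a full EXIT analysis these LLRs would be fed to a constituent TTCM decoder, which is omitted here. All parameters are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def apriori_llrs(bits, sigma_a, rng):
    """Consistent Gaussian a priori LLR model: L = (sigma^2 / 2) * x + N(0, sigma^2)."""
    x = 1 - 2 * bits                        # map bit 0 -> +1, bit 1 -> -1
    return 0.5 * sigma_a ** 2 * x + sigma_a * rng.standard_normal(bits.shape)

def mutual_information(bits, llrs):
    """Time-average estimate of I(X; L); assumes the LLRs are consistent."""
    x = 1 - 2 * bits
    return 1.0 - np.mean(np.logaddexp(0.0, -x * llrs)) / np.log(2)

bits = rng.integers(0, 2, size=200_000)
for sigma_a in (0.5, 1.0, 2.0, 4.0):
    la = apriori_llrs(bits, sigma_a, rng)
    # In an EXIT measurement, `la` would now drive a constituent decoder and the
    # same estimator would be applied to its extrinsic output to obtain I_E.
    print(f"sigma_A = {sigma_a:3.1f}  ->  measured I_A = {mutual_information(bits, la):.3f}")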

    Trellises and Trellis-Based Decoding Algorithms for Linear Block Codes

    In a coded communication system with equiprobable signaling, maximum likelihood decoding (MLD) minimizes the word error probability and delivers the most likely codeword associated with the corresponding received sequence. This decoding has two drawbacks. First, minimization of the word error probability is not equivalent to minimization of the bit error probability. Therefore, MLD becomes suboptimum with respect to the bit error probability. Second, MLD delivers a hard-decision estimate of the received sequence, so that information is lost between the input and output of the ML decoder. This information is important in coded schemes where the decoded sequence is further processed, such as concatenated coding schemes and multistage or iterative decoding schemes. In this chapter, we first present a decoding algorithm which both minimizes the bit error probability and provides the corresponding soft information at the output of the decoder. This algorithm is referred to as the MAP (maximum a posteriori probability) decoding algorithm.
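
    To make the distinction between MLD and bit-wise MAP decoding concrete, the sketch below computes the a posteriori bit probabilities by brute force over the codebook of a small code. It is not the chapter's trellis-based MAP algorithm, merely an illustration of the quantities such a decoder delivers: per-bit log-likelihood ratios as soft output, and hard decisions that minimise the bit error probability. The [7,4] Hamming code, BPSK mapping, and noise level are illustrative assumptions.

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(1)

# Illustrative [7,4] Hamming generator matrix.
G = np.array([[1, 0, 0, 0, 1, 1, 0],
              [0, 1, 0, 0, 0, 1, 1],
              [0, 0, 1, 0, 1, 1, 1],
              [0, 0, 0, 1, 1, 0, 1]])

def codebook(G):
    """All codewords of the binary linear block code generated by G."""
    k, _ = G.shape
    msgs = np.array(list(product([0, 1], repeat=k)))
    return (msgs @ G) % 2

def map_bit_llrs(r, codewords, noise_var):
    """A posteriori LLRs log P(c_i = 0 | r) / P(c_i = 1 | r) for equiprobable
    codewords sent with BPSK (0 -> +1, 1 -> -1) over an AWGN channel."""
    x = 1 - 2 * codewords
    loglik = -np.sum((r - x) ** 2, axis=1) / (2 * noise_var)   # per-codeword log-likelihood
    w = np.exp(loglik - loglik.max())                          # unnormalised posteriors
    p1 = np.clip((w[:, None] * codewords).sum(axis=0) / w.sum(), 1e-12, 1 - 1e-12)
    return np.log((1 - p1) / p1)

cw = codebook(G)
tx = 1 - 2 * cw[5]                              # transmit an arbitrary codeword
rx = tx + 0.8 * rng.standard_normal(tx.shape)   # noisy BPSK observation
llrs = map_bit_llrs(rx, cw, noise_var=0.64)
print("soft output (LLRs)  :", np.round(llrs, 2))
print("MAP bit decisions   :", (llrs < 0).astype(int))
print("transmitted codeword:", cw[5])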

    Self-concatenated code design and its application in power-efficient cooperative communications

    In this tutorial, we have focused on the design of binary self-concatenated coding schemes with the help of EXtrinsic Information Transfer (EXIT) charts and union bound analysis. The design methodology of future iteratively decoded self-concatenated-code-aided cooperative communication schemes is presented. In doing so, we identify the most important milestones to date in the areas of channel coding, concatenated coding schemes and cooperative communication systems, and suggest future research directions.