    The trellis complexity of convolutional codes

    Convolutional codes have a natural, regular trellis structure that facilitates the implementation of Viterbi's algorithm. Linear block codes also have a natural, though not in general a regular, "minimal" trellis structure, which allows them to be decoded with a Viterbi-like algorithm. In both cases, the complexity of an unenhanced Viterbi decoding algorithm can be accurately estimated by the number of trellis edge symbols per encoded bit. It would therefore appear that we are in a good position to make a fair comparison of the Viterbi decoding complexity of block and convolutional codes. Unfortunately, however, this comparison is somewhat muddled by the fact that some convolutional codes, the punctured convolutional codes, are known to have trellis representations which are significantly less complex than the conventional trellis. In other words, the conventional trellis representation for a convolutional code may not be the "minimal" trellis representation. Thus, ironically, we seem to know more about the minimal trellis representation for block codes than for convolutional codes. We provide a remedy by developing a theory of minimal trellises for convolutional codes. This allows us to make a direct performance-complexity comparison for block and convolutional codes. A by-product of our work is an algorithm for choosing, from among all generator matrices for a given convolutional code, what we call a trellis-canonical generator matrix, from which the minimal trellis for the code can be directly constructed. Another by-product is that in the new theory, punctured convolutional codes no longer appear as a special class, but simply as high-rate convolutional codes whose trellis complexity is unexpectedly small.
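
    As a concrete instance of the trellis decoding discussed above, the sketch below implements hard-decision Viterbi decoding on the conventional 4-state trellis of the familiar rate-1/2, memory-2 convolutional code with octal generators (7, 5); each trellis section here has 8 edges, each labeled with 2 code symbols, which is the kind of edge-symbol count the abstract uses as its complexity measure. The code choice, function names, and example message are illustrative assumptions, not taken from the paper.

        # Hard-decision Viterbi decoding on the conventional 4-state trellis
        # of the rate-1/2, memory-2 code with octal generators (7, 5).
        # Illustrative sketch; none of this is the paper's own code.

        def encode(bits):
            """Encode with g1 = 1 + D + D^2 (7 octal), g2 = 1 + D^2 (5 octal)."""
            state, out = 0, []                     # state = two most recent inputs
            for b in bits:
                out.append((b ^ (state >> 1) ^ (state & 1),   # g1 output
                            b ^ (state & 1)))                 # g2 output
                state = (b << 1) | (state >> 1)               # shift the register
            return out

        def viterbi(received):
            """Minimize Hamming distance over all paths through the trellis."""
            INF = float("inf")
            metric = [0, INF, INF, INF]            # start in the all-zero state
            paths = [[], [], [], []]
            for r in received:
                new_metric, new_paths = [INF] * 4, [None] * 4
                for s in range(4):                 # extend each survivor path
                    if metric[s] == INF:
                        continue
                    for b in (0, 1):               # hypothesized input bit
                        edge = (b ^ (s >> 1) ^ (s & 1), b ^ (s & 1))
                        ns = (b << 1) | (s >> 1)   # next state
                        m = metric[s] + (edge[0] != r[0]) + (edge[1] != r[1])
                        if m < new_metric[ns]:
                            new_metric[ns], new_paths[ns] = m, paths[s] + [b]
                metric, paths = new_metric, new_paths
            return paths[min(range(4), key=metric.__getitem__)]

        msg = [1, 0, 1, 1, 0]
        rx = encode(msg)
        rx[2] = (rx[2][0] ^ 1, rx[2][1])           # flip one channel bit
        print(viterbi(rx))                         # recovers [1, 0, 1, 1, 0]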

    Convolutional Codes in Rank Metric with Application to Random Network Coding

    Random network coding has recently attracted attention as a technique to disseminate information in a network. This paper considers a non-coherent multi-shot network, where the unknown and time-variant network is used several times. In order to create dependencies between the different shots, particular convolutional codes in rank metric are used. These codes are so-called (partial) unit memory ((P)UM) codes, i.e., convolutional codes with memory one. First, distance measures for convolutional codes in rank metric are shown, and two constructions of (P)UM codes in rank metric based on the generator matrices of maximum rank distance codes are presented. Second, an efficient error-erasure decoding algorithm for these codes is presented; its guaranteed decoding radius is derived and its complexity is bounded. Finally, it is shown how to apply these codes for error correction in random linear and affine network coding.
    Comment: presented in part at Netcod 2012; submitted to IEEE Transactions on Information Theory.
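
    The rank metric the abstract works in measures the distance between two m x n matrices over a finite field by the rank of their difference, so an error pattern that corrupts many symbols but has low rank is still "small". The sketch below computes this distance over GF(2); the helper names and example matrices are my own illustrative assumptions, not the paper's constructions (which work over extension fields).

        # Rank distance over GF(2): d_R(A, B) = rank(A - B), where subtraction
        # is XOR. Rows are packed into integer bitmasks. Illustrative sketch.

        def gf2_rank(rows):
            """Rank over GF(2) of a matrix given as a list of row bitmasks."""
            rank, rows = 0, list(rows)
            while rows:
                pivot = rows.pop()
                if pivot == 0:
                    continue                       # row already eliminated
                rank += 1
                low = pivot & -pivot               # lowest set bit of the pivot
                rows = [r ^ pivot if r & low else r for r in rows]
            return rank

        def rank_distance(A, B):
            return gf2_rank([a ^ b for a, b in zip(A, B)])

        # A rank-1 error can corrupt every entry of a 3 x 3 matrix yet stays
        # "small" in the rank metric; this is why the metric suits network
        # coding, where the channel applies an unknown linear transformation.
        A = [0b101, 0b011, 0b110]
        E = [0b111, 0b111, 0b111]                  # all nine bits corrupted
        B = [a ^ e for a, e in zip(A, E)]
        print(rank_distance(A, B))                 # -> 1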

    Convolutional and tail-biting quantum error-correcting codes

    Rate-(n-2)/n unrestricted and CSS-type quantum convolutional codes with up to 4096 states and minimum distances up to 10 are constructed as stabilizer codes from classical self-orthogonal rate-1/n F_4-linear and binary linear convolutional codes, respectively. These codes generally have higher rate and lower decoding complexity than comparable quantum block codes or previous quantum convolutional codes. Rate-(n-2)/n block stabilizer codes with the same rate and error-correction capability, and essentially the same decoding algorithms, are derived from these convolutional codes via tail-biting.
    Comment: 30 pages; submitted to IEEE Transactions on Information Theory; minor revisions after first round of review.
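
    Tail-biting, the wrap-around step used above to turn a convolutional code into a block code of the same rate, is easiest to see classically: the block code's generator matrix consists of cyclic shifts of the convolutional generators, with taps that run off the end wrapping around to the start. The sketch below builds such a matrix; the rate-1/2 (7, 5) code and all names are illustrative assumptions, not the paper's self-orthogonal F_4-linear or binary codes.

        # Generator matrix of a classical tail-biting block code. Illustrative
        # sketch; the paper's quantum construction layers the stabilizer
        # formalism on top of this classical wrap-around.

        def tailbiting_generator(gens, memory, N):
            """Wrap the delay taps of the convolutional generators mod N."""
            n = len(gens)                      # code symbols per information bit
            G = []
            for t in range(N):                 # one row per information bit
                row = [0] * (n * N)
                for d in range(memory + 1):    # a tap at delay d wraps mod N
                    col = ((t + d) % N) * n
                    for j, g in enumerate(gens):
                        row[col + j] ^= (g >> (memory - d)) & 1
                G.append(row)
            return G

        # Rate-1/2, memory-2 generators (7, 5) octal at block length N = 6
        # give a [12, 6] binary code whose rows are cyclic shifts of 11 10 11.
        for row in tailbiting_generator([0b111, 0b101], memory=2, N=6):
            print("".join(map(str, row)))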