
    On complexity of trellis structure of linear block codes

    The trellis structure of linear block codes (LBCs) is discussed. The state and branch complexities of a trellis diagram (TD) for an LBC are investigated. The TD with the minimum number of states is said to be minimal. The branch complexity of a minimal TD for an LBC is expressed in terms of the dimensions of specific subcodes of the given code. Upper and lower bounds are then derived on the number of states of a minimal TD for an LBC, and it is shown that a cyclic (or shortened cyclic) code is the worst in terms of state complexity among LBCs of the same length and dimension. Furthermore, it is shown that the structural complexity of a minimal TD for an LBC depends on the order of its bit positions; an appropriate permutation of the bit positions of a code may therefore result in an equivalent code with a much simpler minimal TD. A Boolean polynomial representation of the codewords of an LBC is also considered. This representation aids the study of the trellis structure of the code and is applied to construct its minimal TD. In particular, the construction of minimal trellises for Reed-Muller codes, and for the extended and permuted binary primitive BCH codes that contain Reed-Muller codes as subcodes, is emphasized. Finally, the structural complexity of minimal trellises for the extended and permuted double-error-correcting BCH codes is analyzed. It is shown that these codes have relatively simple trellis structure and hence can be decoded with the Viterbi decoding algorithm.
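
    As an illustration of the state-complexity measure discussed above, the following Python sketch (not taken from the paper) computes the state dimensions of a minimal trellis directly from a generator matrix, using the standard identity that the state dimension at boundary i equals rank(G restricted to the first i columns) plus rank(G restricted to the remaining columns) minus k; the (7,4) Hamming code in its cyclic order is used as an illustrative example.

    # A minimal sketch (assumed, not from the paper) of the state-complexity
    # profile of a minimal trellis for a binary linear block code, using
    #   s_i = rank(G[:, :i]) + rank(G[:, i:]) - k,
    # which follows from the dimensions of the past and future subcodes.

    def gf2_rank(rows):
        """Rank over GF(2) of a matrix given as a list of 0/1 row lists."""
        rows = [r[:] for r in rows if any(r)]
        rank, ncols = 0, (len(rows[0]) if rows else 0)
        for col in range(ncols):
            pivot = next((i for i in range(rank, len(rows)) if rows[i][col]), None)
            if pivot is None:
                continue
            rows[rank], rows[pivot] = rows[pivot], rows[rank]
            for i in range(len(rows)):
                if i != rank and rows[i][col]:
                    rows[i] = [a ^ b for a, b in zip(rows[i], rows[rank])]
            rank += 1
        return rank

    def state_profile(G):
        """State dimensions s_0..s_n of the minimal trellis for the code generated by G."""
        k, n = len(G), len(G[0])
        profile = []
        for i in range(n + 1):
            past = gf2_rank([row[:i] for row in G]) if i else 0
            future = gf2_rank([row[i:] for row in G]) if i < n else 0
            profile.append(past + future - k)
        return profile

    # Example: the (7,4) Hamming code in cyclic form; as the abstract notes,
    # the maximum state dimension depends on the chosen bit ordering.
    G_hamming = [
        [1, 1, 0, 1, 0, 0, 0],
        [0, 1, 1, 0, 1, 0, 0],
        [0, 0, 1, 1, 0, 1, 0],
        [0, 0, 0, 1, 1, 0, 1],
    ]
    print(state_profile(G_hamming))   # -> [0, 1, 2, 3, 3, 2, 1, 0]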

    Trellises and Trellis-Based Decoding Algorithms for Linear Block Codes

    A code trellis is a graphical representation of a code, block or convolutional, in which every path represents a codeword (or a code sequence for a convolutional code). This representation makes it possible to implement Maximum Likelihood Decoding (MLD) of a code with reduced decoding complexity. The best-known trellis-based MLD algorithm is the Viterbi algorithm. The trellis representation was first introduced and used for convolutional codes [23]. This representation, together with the Viterbi decoding algorithm, has resulted in a wide range of applications of convolutional codes for error control in digital communications over the last two decades. Research on the trellis structure of block codes, by contrast, remained inactive for many years. There are two major reasons for this inactive period of research. First, most coding theorists at that time believed that block codes did not have simple trellis structure like convolutional codes, and that maximum likelihood decoding of linear block codes using the Viterbi algorithm was practically impossible except for very short block codes. Second, since almost all linear block codes are constructed algebraically or based on finite geometries, it was the belief of many coding theorists that algebraic decoding was the only way to decode these codes. These two reasons seriously hindered the development of efficient soft-decision decoding methods for linear block codes and their applications to error control in digital communications. This led to a general belief that block codes are inferior to convolutional codes and hence not useful. Chapter 2 gives a brief review of linear block codes. The goal is to provide the essential background material for the development of trellis structure and trellis-based decoding algorithms for linear block codes in the later chapters. Chapters 3 through 6 present the fundamental concepts, finite-state machine model, state space formulation, basic structural properties, state labeling, construction procedures, complexity, minimality, and sectionalization of trellises. Chapter 7 discusses trellis decomposition and subtrellises for low-weight codewords. Chapter 8 first presents well-known methods for constructing long, powerful codes from short component codes or component codes of smaller dimensions, and then provides methods for constructing their trellises, including the Shannon and Cartesian product techniques. Chapter 9 deals with convolutional codes, puncturing, zero-tail termination, and tail-biting. Chapters 10 through 13 present various trellis-based decoding algorithms, old and new. Chapter 10 first discusses the application of the well-known Viterbi decoding algorithm to linear block codes, optimum sectionalization of a code trellis to minimize computational complexity, and design issues for IC (integrated circuit) implementation of a Viterbi decoder. It then presents a new decoding algorithm for convolutional codes, named the Differential Trellis Decoding (DTD) algorithm. Chapter 12 presents a suboptimum reliability-based iterative decoding algorithm with a low-weight trellis search for the most likely codeword. This decoding algorithm provides a good trade-off between error performance and decoding complexity. All the decoding algorithms presented in Chapters 10 through 12 are devised to minimize word error probability. Chapter 13 presents decoding algorithms that minimize bit error probability and provide the corresponding soft (reliability) information at the output of the decoder. The decoding algorithms presented are the MAP (maximum a posteriori probability) decoding algorithm and the Soft-Output Viterbi Algorithm (SOVA). Finally, the minimization of bit error probability in trellis-based MLD is discussed.
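
    To make the Viterbi-on-a-block-code idea above concrete, here is a minimal, hypothetical Python sketch (my own illustration, not the book's algorithm) of soft-decision Viterbi decoding on the syndrome ("Wolf") trellis defined by a parity-check matrix; the (7,4) Hamming code and the LLR values in the usage example are assumptions.

    # States at depth i are the partial syndromes accumulated over the first i bits;
    # only paths that end in the all-zero syndrome correspond to codewords.

    def viterbi_block_decode(H, llr):
        """Soft-decision MLD for the code with parity-check matrix H (list of rows).
        llr[j] > 0 favours bit 0 at position j; returns the most likely codeword."""
        n = len(H[0])
        # Column j of H packed into an integer, used to update the partial syndrome.
        col = [sum(H[i][j] << i for i in range(len(H))) for j in range(n)]
        # survivors: partial syndrome -> (path metric to minimise, decoded prefix)
        survivors = {0: (0.0, [])}
        for j in range(n):
            nxt = {}
            for state, (metric, path) in survivors.items():
                for bit in (0, 1):
                    new_state = state ^ (col[j] if bit else 0)
                    # Penalty whenever the hypothesised bit disagrees with the
                    # hard decision implied by the sign of the received LLR.
                    new_metric = metric + (abs(llr[j]) if (llr[j] > 0) == bool(bit) else 0.0)
                    if new_state not in nxt or new_metric < nxt[new_state][0]:
                        nxt[new_state] = (new_metric, path + [bit])
            survivors = nxt
        return survivors[0][1]          # best path ending in the zero syndrome

    # Illustrative example: the (7,4) Hamming code with one weak, flipped position.
    H = [[1, 0, 1, 1, 1, 0, 0],
         [1, 1, 0, 1, 0, 1, 0],
         [1, 1, 1, 0, 0, 0, 1]]
    received_llr = [2.1, 1.8, -0.3, 2.5, 1.9, 2.2, 2.0]   # bit 2 is weakly "1"
    print(viterbi_block_decode(H, received_llr))           # -> all-zero codeword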

    Constructions of Generalized Concatenated Codes and Their Trellis-Based Decoding Complexity

    In this correspondence, constructions of generalized concatenated (GC) codes with good rates and distances are presented. Some of the proposed GC codes have simpler trellis complexity than Euclidean geometry (EG), Reed–Muller (RM), or Bose–Chaudhuri–Hocquenghem (BCH) codes of approximately the same rates and minimum distances and, in addition, can be decoded with trellis-based multistage decoding up to their minimum distances. Several codes with the same length, dimension, and minimum distance as the best linear codes known are constructed.

    Trellis decoding complexity of linear block codes

    In this partially tutorial paper, we examine minimal trellis representations of linear block codes and analyze several measures of trellis complexity: maximum state and edge dimensions, total span length, and total vertices, edges, and mergers. We obtain bounds on these complexities as extensions of well-known dimension/length profile (DLP) bounds. Codes meeting these bounds minimize all the complexity measures simultaneously; conversely, a code attaining the bound for total span length, vertices, or edges must likewise attain it for all the others. We define a notion of “uniform” optimality that embraces different domains of optimization, such as different permutations of a code or different codes with the same parameters, and we give examples of uniformly optimal codes and permutations. We also give conditions that identify certain cases in which no code or permutation can meet the bounds. In addition to the DLP-based bounds, we derive new inequalities relating one complexity measure to another, which can be used in conjunction with known bounds on one measure to imply bounds on the others. As an application, we infer new bounds on maximum state and edge complexity and on total vertices and edges from bounds on span lengths.
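
    The complexity measures listed above can be read off a trellis-oriented (minimal-span) generator matrix. The Python sketch below (my own illustration, not code from the paper) performs a greedy reduction to that form and then tallies maximum state and edge dimensions, total span length, and total vertices, edges, and mergers, assuming a full-rank binary generator matrix.

    def span(row):
        ones = [j for j, b in enumerate(row) if b]
        return ones[0], ones[-1]

    def trellis_oriented_form(G):
        """Greedy row additions until all span starts and all span ends are distinct."""
        G = [row[:] for row in G]
        changed = True
        while changed:
            changed = False
            for a in range(len(G)):
                for b in range(len(G)):
                    if a == b:
                        continue
                    sa, ea = span(G[a]); sb, eb = span(G[b])
                    # Adding row a into row b strictly shortens row b's span here,
                    # so the loop terminates (total span length keeps decreasing).
                    if (sa == sb and ea <= eb) or (ea == eb and sa >= sb):
                        G[b] = [x ^ y for x, y in zip(G[a], G[b])]
                        changed = True
        return G

    def complexity_measures(G):
        n = len(G[0])
        spans = [span(row) for row in trellis_oriented_form(G)]
        # State dimension at boundary i and edge dimension in section i.
        s = [sum(lo < i <= hi for lo, hi in spans) for i in range(n + 1)]
        e = [sum(lo <= i <= hi for lo, hi in spans) for i in range(n)]
        vertices = sum(2 ** x for x in s)
        edges = sum(2 ** x for x in e)
        return {
            "max_state_dim": max(s),
            "max_edge_dim": max(e),
            "total_span_length": sum(hi - lo + 1 for lo, hi in spans),
            "total_vertices": vertices,
            "total_edges": edges,
            "total_mergers": edges - vertices + 1,   # for a connected single-root trellis
        }

    # Illustrative usage: the (7,4) Hamming code in cyclic bit order.
    G = [[1, 1, 0, 1, 0, 0, 0], [0, 1, 1, 0, 1, 0, 0],
         [0, 0, 1, 1, 0, 1, 0], [0, 0, 0, 1, 1, 0, 1]]
    print(complexity_measures(G))
    # -> max_state_dim 3, total_span_length 16, total_vertices 30, total_edges 44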

    The trellis complexity of convolutional codes

    Get PDF
    Convolutional codes have a natural, regular trellis structure that facilitates the implementation of Viterbi's algorithm. Linear block codes also have a natural, though not in general regular, “minimal” trellis structure, which allows them to be decoded with a Viterbi-like algorithm. In both cases, the complexity of an unenhanced Viterbi decoding algorithm can be accurately estimated by the number of trellis edge symbols per encoded bit. It would therefore appear that we are in a good position to make a fair comparison of the Viterbi decoding complexity of block and convolutional codes. Unfortunately, this comparison is muddled by the fact that some convolutional codes, the punctured convolutional codes, are known to have trellis representations which are significantly less complex than the conventional trellis. In other words, the conventional trellis representation for a convolutional code may not be the “minimal” trellis representation. Ironically, then, we seem to know more about the minimal trellis representation for block codes than for convolutional codes. We provide a remedy by developing a theory of minimal trellises for convolutional codes. This allows us to make a direct performance-complexity comparison for block and convolutional codes. A by-product of our work is an algorithm for choosing, from among all generator matrices for a given convolutional code, what we call a trellis-canonical generator matrix, from which the minimal trellis for the code can be constructed directly. Another by-product is that in the new theory, punctured convolutional codes no longer appear as a special class, but simply as high-rate convolutional codes whose trellis complexity is unexpectedly small.
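
    As a rough illustration of the complexity measure used above, the following sketch (illustrative only, not from the paper) counts trellis edge symbols per encoded information bit for the conventional trellis of a rate k/n convolutional code with total memory nu; the specific codes in the usage example are assumptions chosen for comparison.

    # Conventional trellis of a rate k/n encoder with total memory nu:
    # 2**nu states per depth, 2**k outgoing edges per state, n code symbols
    # per edge, and k information bits per trellis section.

    def edge_symbols_per_bit(k: int, n: int, nu: int) -> float:
        """Trellis edge symbols per encoded information bit (conventional trellis)."""
        edges_per_section = 2 ** (nu + k)
        return edges_per_section * n / k

    # A rate-1/2, memory-6 code versus a conventional rate-3/4, memory-6 trellis
    # (parameters are illustrative, not taken from the paper):
    print(edge_symbols_per_bit(1, 2, 6))   # 256.0 edge symbols per bit
    print(edge_symbols_per_bit(3, 4, 6))   # ~683 edge symbols per bit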