
    Trellises and Trellis-Based Decoding Algorithms for Linear Block Codes

    A code trellis is a graphical representation of a code, block or convolutional, in which every path represents a codeword (or a code sequence for a convolutional code). This representation makes it possible to implement Maximum Likelihood Decoding (MLD) of a code with reduced decoding complexity. The best-known trellis-based MLD algorithm is the Viterbi algorithm. The trellis representation was first introduced and used for convolutional codes [23]. This representation, together with the Viterbi decoding algorithm, has resulted in a wide range of applications of convolutional codes for error control in digital communications over the last two decades. The trellis structure of linear block codes, in contrast, went largely unexplored during that time. There are two major reasons for this inactive period of research. First, most coding theorists at the time believed that block codes did not have a simple trellis structure like convolutional codes, and that maximum likelihood decoding of linear block codes using the Viterbi algorithm was practically impossible, except for very short block codes. Second, since almost all linear block codes are constructed algebraically or based on finite geometries, many coding theorists believed that algebraic decoding was the only way to decode these codes. These two views seriously hindered the development of efficient soft-decision decoding methods for linear block codes and their applications to error control in digital communications, and led to a general belief that block codes were inferior to convolutional codes and hence not useful.
    Chapter 2 gives a brief review of linear block codes; the goal is to provide the essential background material for the development of trellis structure and trellis-based decoding algorithms for linear block codes in the later chapters. Chapters 3 through 6 present the fundamental concepts, finite-state machine model, state space formulation, basic structural properties, state labeling, construction procedures, complexity, minimality, and sectionalization of trellises. Chapter 7 discusses trellis decomposition and subtrellises for low-weight codewords. Chapter 8 first presents well-known methods for constructing long, powerful codes from short component codes or component codes of smaller dimensions, and then provides methods for constructing their trellises, including the Shannon and Cartesian product techniques. Chapter 9 deals with convolutional codes, puncturing, zero-tail termination, and tail-biting.
    Chapters 10 through 13 present various trellis-based decoding algorithms, old and new. Chapter 10 first discusses the application of the well-known Viterbi decoding algorithm to linear block codes, optimum sectionalization of a code trellis to minimize computational complexity, and design issues for IC (integrated circuit) implementation of a Viterbi decoder; it then presents a new decoding algorithm for convolutional codes, named the Differential Trellis Decoding (DTD) algorithm. Chapter 12 presents a suboptimum reliability-based iterative decoding algorithm with a low-weight trellis search for the most likely codeword; this algorithm provides a good trade-off between error performance and decoding complexity. All the decoding algorithms presented in Chapters 10 through 12 are devised to minimize word error probability. Chapter 13 presents decoding algorithms that minimize bit error probability and provide the corresponding soft (reliability) information at the output of the decoder. The decoding algorithms presented are the MAP (maximum a posteriori probability) decoding algorithm and the Soft-Output Viterbi Algorithm (SOVA). Finally, the minimization of bit error probability in trellis-based MLD is discussed.
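    As a concrete illustration of trellis-based MLD for block codes, the following sketch (not taken from the book; the choice of the (7,4) Hamming code, the BPSK correlation metric, and all function names are illustrative assumptions) builds the syndrome (Wolf/BCJR) trellis of a short linear block code, where states are partial syndromes, and runs the Viterbi algorithm over it, checking the result against brute-force ML decoding.

    import numpy as np
    from itertools import product

    # Parity-check matrix of the (7,4) Hamming code (illustrative choice).
    H = np.array([[1, 1, 0, 1, 1, 0, 0],
                  [1, 0, 1, 1, 0, 1, 0],
                  [0, 1, 1, 1, 0, 0, 1]])

    def viterbi_block_decode(r, H):
        """ML decoding on the syndrome trellis of a linear block code.
        States at depth t are partial syndromes; only paths ending in the
        all-zero syndrome are codewords.  r holds soft BPSK observations
        (+1 ~ bit 0, -1 ~ bit 1); the path metric is the correlation."""
        m, n = H.shape
        survivors = {(0,) * m: (0.0, [])}          # state -> (metric, path)
        for t in range(n):
            col = H[:, t]
            new = {}
            for state, (metric, path) in survivors.items():
                for bit in (0, 1):
                    nxt = tuple(int(b) for b in (np.array(state) + bit * col) % 2)
                    met = metric + (1 - 2 * bit) * r[t]
                    if nxt not in new or met > new[nxt][0]:
                        new[nxt] = (met, path + [bit])
            survivors = new
        return np.array(survivors[(0,) * m][1])    # best zero-syndrome path

    def brute_force_ml(r, H):
        """Exhaustive ML reference used to verify the trellis decoder."""
        best, best_met = None, -np.inf
        for bits in product((0, 1), repeat=H.shape[1]):
            c = np.array(bits)
            if np.any(H @ c % 2):                  # not a codeword
                continue
            met = np.sum((1 - 2 * c) * r)
            if met > best_met:
                best, best_met = c, met
        return best

    rng = np.random.default_rng(0)
    c = np.zeros(7, dtype=int)                     # transmit the all-zero codeword
    r = (1 - 2 * c) + 0.8 * rng.standard_normal(7) # noisy BPSK observation
    assert np.array_equal(viterbi_block_decode(r, H), brute_force_ml(r, H))

    The number of trellis states is at most 2^(n-k), which is why the approach was long considered impractical for all but very short block codes.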

    Belief Propagation Decoding of Polar Codes on Permuted Factor Graphs

    We show that the performance of iterative belief propagation (BP) decoding of polar codes can be enhanced by decoding over different carefully chosen factor graph realizations. With a genie-aided stopping condition, it can achieve the successive cancellation list (SCL) decoding performance, which has already been shown to achieve the maximum likelihood (ML) bound provided that the list size is sufficiently large. The proposed decoder is based on different realizations of the polar code factor graph with randomly permuted stages during decoding. Additionally, a different way of visualizing the polar code factor graph is presented, facilitating the analysis of the underlying factor graph and the comparison of different graph permutations. In our proposed decoder, a high rate Cyclic Redundancy Check (CRC) code is concatenated with a polar code and used as an iteration stopping criterion (i.e., genie) to even outperform the SCL decoder of the plain polar code (without the CRC-aid). Although our permuted factor graph-based decoder does not outperform the SCL-CRC decoder, it achieves, to the best of our knowledge, the best performance of all iterative polar decoders presented thus far.
    Comment: in IEEE Wireless Commun. and Networking Conf. (WCNC), April 201
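    A minimal sketch (not from the paper; N, the stage-ordering convention, and the function names are assumptions made for illustration) of the structural fact such a decoder exploits: the log2(N) XOR stages of the polar factor graph commute, so every permutation of the stages realizes the same polar transform, i.e. the same code, while giving BP a different graph over which to pass messages.

    import numpy as np
    from itertools import permutations

    def polar_stage(x, i):
        """One stage of the polar factor graph: XOR butterflies between
        positions whose indices differ only in bit i (GF(2) arithmetic)."""
        y = x.copy()
        step = 1 << i
        for j in range(len(x)):
            if not (j & step):                 # upper input of the butterfly
                y[j] = (y[j] + y[j + step]) % 2
        return y

    def polar_transform(u, stage_order):
        """Apply the log2(N) stages in the given order."""
        x = u.copy()
        for i in stage_order:
            x = polar_stage(x, i)
        return x

    N, n = 8, 3
    rng = np.random.default_rng(1)
    u = rng.integers(0, 2, N)
    reference = polar_transform(u, range(n))
    # Every stage permutation yields the same codeword: the permuted factor
    # graphs all represent the same polar code but define different
    # message-passing structures for BP.
    for perm in permutations(range(n)):
        assert np.array_equal(polar_transform(u, perm), reference)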

    Low-Floor Tanner Codes via Hamming-Node or RSCC-Node Doping

    We study the design of structured Tanner codes with low error-rate floors on the AWGN channel. The design technique involves the “doping” of standard LDPC (proto-)graphs, by which we mean Hamming or recursive systematic convolutional (RSC) code constraints are used together with single-parity-check (SPC) constraints to construct a code’s protograph. We show that the doping of a “good” graph with Hamming or RSC codes is a pragmatic approach that frequently results in a code with a good threshold and very low error-rate floor. We focus on low-rate Tanner codes, in part because the design of low-rate, low-floor LDPC codes is particularly difficult. Lastly, we perform a simple complexity analysis of our Tanner codes and examine the performance of lower-complexity, suboptimal Hamming-node decoders.
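    As a rough illustration of node doping in a Tanner (generalized LDPC) graph, the toy sketch below (not the paper's construction; the graph connections and all names are hypothetical) contrasts an SPC constraint node with a Hamming(7,4) constraint node and checks a candidate word against a small doped graph.

    import numpy as np

    # Parity-check matrix of the (7,4) Hamming code used at the "doped" node.
    H_HAM = np.array([[1, 1, 0, 1, 1, 0, 0],
                      [1, 0, 1, 1, 0, 1, 0],
                      [0, 1, 1, 1, 0, 0, 1]])

    def spc_ok(bits):
        """Single-parity-check node: attached bits must have even parity."""
        return np.sum(bits) % 2 == 0

    def hamming_ok(bits):
        """Hamming constraint node: the 7 attached bits must form a codeword
        of the (7,4) Hamming code (all three syndrome bits vanish)."""
        return not np.any(H_HAM @ bits % 2)

    def satisfies_tanner_graph(word, constraints):
        """A Tanner-code word must satisfy every constraint node; each node is
        a pair (check function, indices of the attached variable nodes)."""
        return all(check(word[idx]) for check, idx in constraints)

    # Toy doped graph on 10 variable nodes: two SPC nodes plus one Hamming
    # node (hypothetical connections chosen only to illustrate the structure).
    constraints = [
        (spc_ok,     np.array([0, 1, 2, 3])),
        (spc_ok,     np.array([4, 5, 6, 7])),
        (hamming_ok, np.array([2, 3, 4, 5, 6, 8, 9])),
    ]

    word = np.zeros(10, dtype=int)       # the all-zero word satisfies all nodes
    assert satisfies_tanner_graph(word, constraints)
    word[0] = 1                          # one flip violates the first SPC node
    assert not satisfies_tanner_graph(word, constraints)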