
    Trellis decoding complexity of linear block codes

    In this partially tutorial paper, we examine minimal trellis representations of linear block codes and analyze several measures of trellis complexity: maximum state and edge dimensions, total span length, and total vertices, edges and mergers. We obtain bounds on these complexities as extensions of well-known dimension/length profile (DLP) bounds. Codes meeting these bounds minimize all the complexity measures simultaneously; conversely, a code attaining the bound for total span length, vertices, or edges must likewise attain it for all the others. We define a notion of “uniform” optimality that embraces different domains of optimization, such as different permutations of a code or different codes with the same parameters, and we give examples of uniformly optimal codes and permutations. We also give some conditions that identify certain cases when no code or permutation can meet the bounds. In addition to DLP-based bounds, we derive new inequalities relating one complexity measure to another, which can be used in conjunction with known bounds on one measure to imply bounds on the others. As an application, we infer new bounds on maximum state and edge complexity and on total vertices and edges from bounds on span lengths.
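    To make the complexity measures concrete, here is a minimal sketch (my own illustration, not code from the paper) of how the state-complexity profile, and from it the total number of vertices, is read off a generator matrix in minimal-span (trellis-oriented) form: the state dimension at depth i is the number of rows whose span straddles i. The [7,4] Hamming matrix below is only a test case.

        # A minimal sketch (not from the paper): state dimension s_i of the
        # minimal trellis at depth i = number of generator rows whose span
        # straddles i, once the matrix is in minimal-span form.

        def row_span(row):
            """(first, last+1) positions of the nonzero entries of a row."""
            support = [j for j, bit in enumerate(row) if bit]
            return (support[0], support[-1] + 1)

        def state_profile(rows, n):
            """State dimensions s_0, ..., s_n of the minimal trellis,
            assuming rows form a minimal-span generator matrix."""
            spans = [row_span(r) for r in rows]
            return [sum(1 for a, b in spans if a < i < b) for i in range(n + 1)]

        # Cyclic [7,4] Hamming code, generator polynomial 1 + x + x^3;
        # the cyclic-shift rows happen to be in minimal-span form already.
        G = [[1, 1, 0, 1, 0, 0, 0],
             [0, 1, 1, 0, 1, 0, 0],
             [0, 0, 1, 1, 0, 1, 0],
             [0, 0, 0, 1, 1, 0, 1]]
        s = state_profile(G, 7)
        print(s)                       # [0, 1, 2, 3, 3, 2, 1, 0]
        print(sum(2 ** d for d in s))  # total vertices: 30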

    Permutation Decoding and the Stopping Redundancy Hierarchy of Cyclic and Extended Cyclic Codes

    We introduce the notion of the stopping redundancy hierarchy of a linear block code as a measure of the trade-off between performance and complexity of iterative decoding for the binary erasure channel. We derive lower and upper bounds for the stopping redundancy hierarchy via Lovász's Local Lemma and Bonferroni-type inequalities, and specialize them for codes with cyclic parity-check matrices. Based on the observed properties of parity-check matrices with good stopping redundancy characteristics, we develop a novel decoding technique, termed automorphism group decoding, that combines iterative message passing and permutation decoding. We also present bounds on the smallest number of permutations of an automorphism group decoder needed to correct any set of erasures up to a prescribed size. Simulation results demonstrate that for a large number of algebraic codes, the performance of the new decoding method is close to that of maximum likelihood decoding.
    Comment: 40 pages, 6 figures, 10 tables, submitted to IEEE Transactions on Information Theory
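    As a rough sketch of the idea (my own illustration, not the authors' implementation): run the standard erasure-peeling decoder, and when it stalls on a stopping set, permute the received word by a code automorphism and try again. Here `automorphisms` is assumed to be an iterable of coordinate permutations, e.g. the n cyclic shifts for a cyclic code, for which the same parity-check matrix remains valid after permutation.

        def peel(H, y):
            """Iterative erasure decoding; y holds 0/1/None (None = erased).
            Resolve any erasure that is the only one in some check; repeat."""
            y = list(y)
            progress = True
            while progress and None in y:
                progress = False
                for row in H:
                    erased = [j for j, h in enumerate(row) if h and y[j] is None]
                    if len(erased) == 1:
                        j = erased[0]
                        y[j] = sum(y[k] for k, h in enumerate(row)
                                   if h and k != j) % 2
                        progress = True
            return y

        def automorphism_decode(H, y, automorphisms):
            n = len(y)
            for pi in automorphisms:       # pi maps new position -> old
                z = peel(H, [y[pi[j]] for j in range(n)])
                if None not in z:          # peeling finished: un-permute
                    out = [0] * n
                    for j in range(n):
                        out[pi[j]] = z[j]
                    return out
            return None                    # all tried permutations stalled

    The bounds in the paper concern how few such permutations suffice to guarantee correction of all erasure patterns up to a prescribed size.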

    Shortened Array Codes of Large Girth

    One approach to designing structured low-density parity-check (LDPC) codes with large girth is to shorten codes with small girth in such a manner that the deleted columns of the parity-check matrix contain all the variables involved in short cycles. This approach is especially effective if the parity-check matrix of a code is a matrix composed of blocks of circulant permutation matrices, as is the case for the class of codes known as array codes. We show how to shorten array codes by deleting certain columns of their parity-check matrices so as to increase their girth. The shortening approach is based on the observation that for array codes, and in fact for a slightly more general class of LDPC codes, the cycles in the corresponding Tanner graph are governed by certain homogeneous linear equations with integer coefficients. Consequently, we can selectively eliminate cycles from an array code by only retaining those columns from the parity-check matrix of the original code that are indexed by integer sequences that do not contain solutions to the equations governing those cycles. We provide Ramsey-theoretic estimates for the maximum number of columns that can be retained from the original parity-check matrix with the property that the sequence of their indices avoids solutions to various types of cycle-governing equations. This translates to estimates of the rate penalty incurred in shortening a code to eliminate cycles. Simulation results show that for the codes considered, shortening them to increase the girth can lead to significant gains in signal-to-noise ratio in the case of communication over an additive white Gaussian noise channel.
    Comment: 16 pages; 8 figures; to appear in IEEE Transactions on Information Theory, Aug 200
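    To illustrate the retention criterion, here is a small sketch (my own illustration; I take the three-term arithmetic-progression equation a + c = 2b as a stand-in for one type of cycle-governing equation, and greedy selection as a stand-in for the paper's constructions):

        # Greedily keep indices in {0, ..., p-1} with no nontrivial solution
        # to a + c = 2*b (a three-term arithmetic progression), used here as
        # a representative cycle-governing equation.  The kept indices select
        # which columns of the original parity-check matrix survive.

        def greedy_retained_indices(p):
            kept = []
            for i in range(p):
                endpoint = any(a + i == 2 * b for a in kept for b in kept)
                midpoint = any(a + b == 2 * i for a in kept for b in kept
                               if a != b)
                if not (endpoint or midpoint):
                    kept.append(i)
            return kept

        print(greedy_retained_indices(20))  # [0, 1, 3, 4, 9, 10, 12, 13]

    Denser solution-free sets retain more columns and hence incur a smaller rate penalty, which is where the Ramsey-theoretic estimates mentioned in the abstract enter.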

    On the Combinatorial Version of the Slepian-Wolf Problem

    We study the following combinatorial version of the Slepian-Wolf coding scheme. Two isolated Senders are given binary strings X and Y respectively; the length of each string is equal to n, and the Hamming distance between the strings is at most αn. The Senders compress their strings and communicate the results to the Receiver. Then the Receiver must reconstruct both strings X and Y. The aim is to minimise the lengths of the transmitted messages. For an asymmetric variant of this problem (where one of the Senders transmits the input string to the Receiver without compression) with deterministic encoding, a nontrivial lower bound was found by A. Orlitsky and K. Viswanathan. In our paper we prove a new lower bound for schemes with syndrome coding, where at least one of the Senders uses linear encoding of the input string. For the combinatorial Slepian-Wolf problem with randomized encoding, the theoretical optimum of communication complexity was recently found by the first author, though effective protocols with optimal message lengths remained unknown. We close this gap and present a polynomial-time randomized protocol that achieves the optimal communication complexity.
    Comment: 20 pages, 14 figures. Accepted to IEEE Transactions on Information Theory (June 2018)
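    The syndrome-coding scheme named above has a compact textbook form; here is a toy sketch (my own illustration, using the [7,4] Hamming code and brute-force decoding in place of an efficient decoder): Sender A transmits X in full, Sender B transmits only the syndrome of Y under a code whose decoding radius covers the promised distance, and the Receiver decodes the difference pattern.

        import itertools

        def syndrome(H, v):
            return tuple(sum(h * x for h, x in zip(row, v)) % 2 for row in H)

        def recover_Y(H, X, syn_Y, max_dist):
            """Find e with weight <= max_dist and H*e = H*X + syn_Y (mod 2);
            then Y = X xor e.  Brute force, so toy lengths only."""
            n = len(X)
            target = tuple((a + b) % 2 for a, b in zip(syndrome(H, X), syn_Y))
            for w in range(max_dist + 1):
                for supp in itertools.combinations(range(n), w):
                    e = [1 if j in supp else 0 for j in range(n)]
                    if syndrome(H, e) == target:
                        return [x ^ b for x, b in zip(X, e)]
            return None

        # [7,4] Hamming parity checks: any single substitution is decodable,
        # so strings at Hamming distance <= 1 cost only 3 syndrome bits.
        H = [[1, 0, 1, 0, 1, 0, 1],
             [0, 1, 1, 0, 0, 1, 1],
             [0, 0, 0, 1, 1, 1, 1]]
        X = [1, 0, 1, 1, 0, 0, 1]
        Y = [1, 0, 1, 0, 0, 0, 1]   # differs from X in one position
        assert recover_Y(H, X, syndrome(H, Y), 1) == Y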