
    Mathematical Programming Decoding of Binary Linear Codes: Theory and Algorithms

    Mathematical programming is a branch of applied mathematics and has recently been used to derive new decoding approaches, challenging established but often heuristic algorithms based on iterative message passing. Concepts from mathematical programming used in the context of decoding include linear, integer, and nonlinear programming, network flows, notions of duality as well as matroid and polyhedral theory. This survey article reviews and categorizes decoding methods based on mathematical programming approaches for binary linear codes over binary-input memoryless symmetric channels.
    Comment: 17 pages, submitted to the IEEE Transactions on Information Theory. Published July 201
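    The linear-programming thread of this survey builds on relaxations of maximum-likelihood decoding. As an illustration only (not reproduced from the article), the Python sketch below sets up the standard LP relaxation over the local codeword polytopes of each check for a small code; the (7,4) Hamming parity-check matrix, the LLR values, and the name lp_decode are assumptions chosen for the example.

import numpy as np
from itertools import combinations
from scipy.optimize import linprog

# Parity-check matrix of the (7,4) Hamming code (illustrative choice).
H = np.array([[1, 1, 0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])
n = H.shape[1]

def lp_decode(llr):
    """Minimize llr^T x over the intersection of the local codeword polytopes,
    i.e. the standard LP relaxation of maximum-likelihood decoding."""
    A_ub, b_ub = [], []
    for row in H:
        nbrs = np.flatnonzero(row)
        # For every odd-size subset S of a check's neighbourhood:
        #   sum_{i in S} x_i - sum_{i in N\S} x_i <= |S| - 1.
        for size in range(1, len(nbrs) + 1, 2):
            for S in combinations(nbrs, size):
                a = np.zeros(n)
                a[list(S)] = 1.0
                rest = [i for i in nbrs if i not in S]
                if rest:
                    a[rest] = -1.0
                A_ub.append(a)
                b_ub.append(len(S) - 1)
    res = linprog(llr, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
                  bounds=[(0.0, 1.0)] * n, method="highs")
    return res.x

# Hypothetical channel LLRs (positive values favour bit 0).
llr = np.array([2.1, -1.3, 0.4, 1.7, -0.2, 0.9, 1.1])
print(np.round(lp_decode(llr), 3))

    An integral LP optimum is guaranteed to be the ML codeword, while a fractional optimum is a pseudocodeword; characterizing and tightening this relaxation is one of the themes such surveys cover.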

    Simultaneous Code/Error-Trellis Reduction for Convolutional Codes Using Shifted Code/Error-Subsequences

    In this paper, we show that the code-trellis and the error-trellis for a convolutional code can be reduced simultaneously, if reduction is possible. Assume that the error-trellis can be reduced using shifted error-subsequences. In this case, if the same shifts occur in the subsequences of each code path, then the code-trellis can also be reduced. First, we obtain pairs of transformations which generate identical shifts both in the subsequences of the code path and in those of the error path. Next, by applying these transformations to the generator matrix and the parity-check matrix, we show that reduction of these matrices, when possible, is accomplished simultaneously. Moreover, it is shown that the two associated trellises are also reduced simultaneously.
    Comment: 5 pages, submitted to the 2011 IEEE International Symposium on Information Theory.
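    For readers who have not worked with code-trellises, the Python sketch below enumerates one trellis section of a standard rate-1/2, memory-2 convolutional code (generators 7 and 5 in octal). It is background illustration only: the code choice and function name are assumptions, and it does not implement the paper's shifted-subsequence transformations.

# One trellis section of the rate-1/2, memory-2 convolutional code with
# generator polynomials g1 = 1 + D + D^2 (7 octal) and g2 = 1 + D^2 (5 octal).
def trellis_section():
    edges = []
    for state in range(4):               # state = (s1, s2), s1 = most recent input
        s1, s2 = (state >> 1) & 1, state & 1
        for u in (0, 1):                 # information bit entering the encoder
            v1 = u ^ s1 ^ s2             # output of g1 = 111
            v2 = u ^ s2                  # output of g2 = 101
            next_state = (u << 1) | s1   # shift-register update
            edges.append((state, u, (v1, v2), next_state))
    return edges

for edge in trellis_section():
    print(edge)                          # (state, input, (outputs), next state)

    The paper's reduction acts on trellises of this kind, and on the dual error-trellis, shrinking the state space when the shift structure described in the abstract is present.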

    Variations of the McEliece Cryptosystem

    Two variations of the McEliece cryptosystem are presented. The first is based on a relaxation of the column permutation in the classical McEliece scrambling process. This is done in such a way that the Hamming weight of the error added in the encryption process can be controlled, so that efficient decryption remains possible. The second variation is based on the use of spatially coupled moderate-density parity-check codes as secret codes. These codes are known for their excellent error-correction performance and allow for a relatively small key size in the cryptosystem. For both variants, the security with respect to known attacks is discussed.
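    To fix notation for the classical scheme that both variants modify, the following Python sketch runs a toy McEliece-style round trip over GF(2), using a (7,4) Hamming code as the secret code and a single intentional error. The matrices, seed, and helper names are illustrative assumptions; a practical instantiation would use a much larger code (e.g., Goppa or, as above, spatially coupled MDPC) and a correspondingly larger error weight.

import numpy as np

rng = np.random.default_rng(1)

# Secret code: systematic (7,4) Hamming code, G = [I | A], H = [A^T | I].
A = np.array([[1, 1, 0],
              [1, 0, 1],
              [0, 1, 1],
              [1, 1, 1]])
G = np.concatenate([np.eye(4, dtype=int), A], axis=1)    # 4 x 7 generator
H = np.concatenate([A.T, np.eye(3, dtype=int)], axis=1)  # 3 x 7 parity check
k, n = 4, 7

def gf2_inv(M):
    """Invert a square binary matrix over GF(2) by Gauss-Jordan elimination."""
    m = M.shape[0]
    W = np.concatenate([M % 2, np.eye(m, dtype=int)], axis=1)
    for col in range(m):
        pivot = next(r for r in range(col, m) if W[r, col])  # raises if singular
        W[[col, pivot]] = W[[pivot, col]]
        for r in range(m):
            if r != col and W[r, col]:
                W[r] = (W[r] + W[col]) % 2
    return W[:, m:]

# Key generation: invertible scrambler S, permutation P, public key Gpub = S G P.
while True:
    S = rng.integers(0, 2, size=(k, k))
    try:
        S_inv = gf2_inv(S)
        break
    except StopIteration:                  # S was singular; draw again
        continue
P = np.eye(n, dtype=int)[rng.permutation(n)]
Gpub = (S @ G @ P) % 2

def encrypt(msg):
    e = np.zeros(n, dtype=int)
    e[rng.integers(n)] = 1                 # one error: the Hamming code corrects t = 1
    return (msg @ Gpub + e) % 2

def decrypt(c):
    y = (c @ P.T) % 2                      # undo the permutation
    s = (y @ H.T) % 2                      # syndrome of the permuted error
    if s.any():                            # a single error matches a column of H
        y[np.flatnonzero((H.T == s).all(axis=1))[0]] ^= 1
    return (y[:k] @ S_inv) % 2             # systematic codeword: y[:k] = msg S

msg = np.array([1, 0, 1, 1])
assert (decrypt(encrypt(msg)) == msg).all()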

    Convolutional and tail-biting quantum error-correcting codes

    Rate-(n-2)/n unrestricted and CSS-type quantum convolutional codes with up to 4096 states and minimum distances up to 10 are constructed as stabilizer codes from classical self-orthogonal rate-1/n F_4-linear and binary linear convolutional codes, respectively. These codes generally have higher rate and less decoding complexity than comparable quantum block codes or previous quantum convolutional codes. Rate-(n-2)/n block stabilizer codes with the same rate and error-correction capability and essentially the same decoding algorithms are derived from these convolutional codes via tail-biting.
    Comment: 30 pages. Submitted to IEEE Transactions on Information Theory. Minor revisions after first round of review.
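    For context on the rates quoted above: in the standard CSS/stabilizer construction (general background, not a result specific to this paper), a classical self-orthogonal code C contained in its dual C^⊥, of length n and dimension k, yields a quantum code

\[
[[\, n,\ n - 2k,\ d \,]], \qquad d \;\ge\; \min\{\, \mathrm{wt}(c) : c \in \mathcal{C}^{\perp} \setminus \mathcal{C} \,\},
\]

    so a rate-1/n self-orthogonal convolutional code (one information bit per block of n) gives the rate-(n-2)/n quantum convolutional codes constructed here; for the F_4-linear case the same parameter count holds with respect to the Hermitian dual.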

    Distance Properties of Short LDPC Codes and their Impact on the BP, ML and Near-ML Decoding Performance

    Parameters of LDPC codes, such as minimum distance, stopping distance, stopping redundancy, girth of the Tanner graph, and their influence on the frame error rate performance of BP, ML, and near-ML decoding over a BEC and an AWGN channel are studied. Both random and structured LDPC codes are considered. In particular, BP decoding is applied to code parity-check matrices with an increasing number of redundant rows, and the convergence of the performance to that of ML decoding is analyzed. A comparison of the simulated BP, ML, and near-ML performance with improved theoretical bounds on the error probability, based on the exact weight spectrum coefficients and the exact stopping size spectrum coefficients, is presented. It is observed that decoding performance very close to the ML decoding performance can be achieved with a relatively small number of redundant rows for some codes, for both the BEC and the AWGN channels.
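    Over the BEC, BP decoding reduces to iterative peeling, which makes the role of redundant parity-check rows easy to see. The Python sketch below is a minimal illustration under assumed inputs (the (7,4) Hamming parity-check matrix, the chosen redundant row, and the erasure positions are not taken from the paper): the erasure pattern is ML-recoverable, yet the original three rows stall on it because it is a stopping set, while appending one redundant row lets peeling finish.

import numpy as np

def peel_decode(H, y):
    """BP decoding over the BEC reduces to peeling: y holds known bits (0/1)
    and None for erasures; any check with exactly one erased bit resolves it."""
    x = list(y)
    progress = True
    while progress:
        progress = False
        for row in H:
            idx = np.flatnonzero(row)
            erased = [i for i in idx if x[i] is None]
            if len(erased) == 1:
                # Parity fixes the lone erased bit from the known ones.
                x[erased[0]] = sum(x[i] for i in idx if x[i] is not None) % 2
                progress = True
    return x  # any remaining None means BP is stuck on a stopping set

# Parity-check matrix of the (7,4) Hamming code, plus one redundant row
# (the GF(2) sum of the first two rows).
H = np.array([[1, 1, 0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])
H_red = np.vstack([H, (H[0] + H[1]) % 2])

y = [0] * 7                    # transmit the all-zero codeword
for i in (0, 1, 3):            # the channel erases these three positions
    y[i] = None

print(peel_decode(H, y))       # stalls: {0, 1, 3} is a stopping set of H
print(peel_decode(H_red, y))   # the redundant row breaks the stopping set

    This is the effect studied in the abstract: adding redundant rows to the parity-check matrix can close part of the gap between BP and ML performance on the BEC.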