
    Learned Belief-Propagation Decoding with Simple Scaling and SNR Adaptation

    We consider the weighted belief-propagation (WBP) decoder recently proposed by Nachmani et al., where different weights are introduced for each Tanner-graph edge and optimized using machine-learning techniques. Our focus is on simple-scaling models that use the same weights across certain edges to reduce the storage and computational burden. The main contribution is to show that simple scaling with few parameters often achieves the same gain as the full parameterization. Moreover, several training improvements for WBP are proposed. For example, it is shown that minimizing the average binary cross-entropy is suboptimal in general in terms of bit error rate (BER), and a new "soft-BER" loss is proposed which can lead to better performance. We also investigate parameter adapter networks (PANs) that learn the relation between the signal-to-noise ratio and the WBP parameters. As an example, for the (32,16) Reed-Muller code with a highly redundant parity-check matrix, training a PAN with the soft-BER loss gives near-maximum-likelihood performance assuming simple scaling with only three parameters.
    Comment: 5 pages, 5 figures, submitted to ISIT 2019
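
    A minimal sketch of the simple-scaling idea (illustrative, not the authors' implementation): a min-sum decoder in which every check-to-variable message is damped by a single scalar weight w instead of one trained weight per Tanner-graph edge. The matrix H, the weight value, and the iteration count below are assumptions for the example.

```python
# Simple-scaling min-sum BP: one shared weight w for all
# check-to-variable messages (illustrative sketch).
import numpy as np

def scaled_min_sum(H, llr_ch, w=0.8, iters=10):
    m, n = H.shape
    C2V = np.zeros((m, n))                    # check-to-variable messages
    for _ in range(iters):
        # Variable-to-check: channel LLR plus extrinsic incoming messages.
        total = llr_ch + C2V.sum(axis=0)
        V2C = np.where(H == 1, total - C2V, 0.0)
        # Check-to-variable: min-sum update, scaled by the single weight w.
        for i in range(m):
            idx = np.flatnonzero(H[i])
            msgs = V2C[i, idx]
            sgn = np.prod(np.sign(msgs + 1e-12))
            for k, j in enumerate(idx):
                others = np.delete(np.abs(msgs), k)
                C2V[i, j] = w * sgn * np.sign(msgs[k] + 1e-12) * others.min()
        hard = (llr_ch + C2V.sum(axis=0) < 0).astype(int)
        if not ((H @ hard) % 2).any():        # all parity checks satisfied
            return hard
    return hard
```

    Making w trainable (or, as in the paper, letting an SNR-dependent adapter network output it) turns this into the few-parameter model the abstract describes.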

    Permutation Decoding and the Stopping Redundancy Hierarchy of Cyclic and Extended Cyclic Codes

    We introduce the notion of the stopping redundancy hierarchy of a linear block code as a measure of the trade-off between performance and complexity of iterative decoding for the binary erasure channel. We derive lower and upper bounds for the stopping redundancy hierarchy via Lovász's Local Lemma and Bonferroni-type inequalities, and specialize them for codes with cyclic parity-check matrices. Based on the observed properties of parity-check matrices with good stopping redundancy characteristics, we develop a novel decoding technique, termed automorphism group decoding, that combines iterative message passing and permutation decoding. We also present bounds on the smallest number of permutations of an automorphism group decoder needed to correct any set of erasures up to a prescribed size. Simulation results demonstrate that for a large number of algebraic codes, the performance of the new decoding method is close to that of maximum-likelihood decoding.
    Comment: 40 pages, 6 figures, 10 tables, submitted to IEEE Transactions on Information Theory
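
    An illustrative sketch of the combination described above (not the paper's exact algorithm): peeling-style erasure decoding that, when it stalls on a stopping set, retries after permuting the coordinates by a code automorphism. The permutation set perms is an assumed input; for cyclic codes, cyclic shifts are natural candidates, and including the identity first recovers plain iterative decoding.

```python
# Automorphism-group-aided erasure decoding (illustrative sketch).
import numpy as np

def peel(H, y):
    """Iterative BEC decoding; y holds 0/1 values with None for erasures."""
    y, progress = list(y), True
    while progress:
        progress = False
        for row in H:
            idx = np.flatnonzero(row)
            unknown = [j for j in idx if y[j] is None]
            if len(unknown) == 1:             # one erasure in this check: solve it
                j = unknown[0]
                y[j] = sum(y[k] for k in idx if k != j) % 2
                progress = True
    return y

def automorphism_decode(H, y, perms):
    n = len(y)
    for p in perms:                           # p permutes code coordinates
        z = peel(H, [y[p[j]] for j in range(n)])
        if None not in z:
            out = [0] * n
            for j, v in enumerate(z):
                out[p[j]] = v                 # undo the permutation
            return out
    return None                               # every permutation stalled
```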

    On the Decoding Complexity of Cyclic Codes Up to the BCH Bound

    The standard algebraic decoding algorithm for $[n,k,d]$ cyclic codes up to the BCH bound $t$ is very efficient and practical for relatively small $n$, while it becomes impractical for large $n$ as its computational complexity is $O(nt)$. The aim of this paper is to show how to make this algebraic decoding computationally more efficient: in the case of binary codes, for example, the complexity of the syndrome computation drops from $O(nt)$ to $O(t\sqrt{n})$, and that of the error location from $O(nt)$ to at most $\max\{O(t\sqrt{n}), O(t^2\log(t)\log(n))\}$.
    Comment: accepted for publication in Proceedings ISIT 2011. IEEE copyright
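
    One standard route to the stated syndrome saving, sketched here under assumed notation (the paper's exact method may differ), is a Paterson-Stockmeyer-style split of the binary received polynomial $r(x)$ with block size $B = \lceil\sqrt{n}\rceil$:

```latex
% Sketch: a syndrome S = r(\alpha) evaluated block-wise,
% with assumed block size B = \lceil\sqrt{n}\rceil.
S \;=\; r(\alpha)
  \;=\; \sum_{u=0}^{\lceil n/B\rceil - 1}
        \Big( \sum_{v=0}^{B-1} r_{uB+v}\,\alpha^{v} \Big)\,(\alpha^{B})^{u}
```

    Since $r(x)$ is binary, each inner sum is an XOR of precomputed powers $\alpha^0,\dots,\alpha^{B-1}$ and costs no field multiplications; the outer Horner evaluation in $\alpha^B$ then needs only $O(\sqrt{n})$ multiplications per syndrome, i.e. $O(t\sqrt{n})$ for $t$ syndromes. Over GF($2^m$) the even-indexed syndromes also come for free via $S_{2i} = S_i^2$.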

    Decoding Cyclic Codes up to a New Bound on the Minimum Distance

    A new lower bound on the minimum distance of q-ary cyclic codes is proposed. This bound improves upon the Bose-Chaudhuri-Hocquenghem (BCH) bound and, for some codes, upon the Hartmann-Tzeng (HT) bound. Several Boston bounds are special cases of our bound. For some classes of codes the bound on the minimum distance is refined. Furthermore, a quadratic-time decoding algorithm up to this new bound is developed. The determination of the error locations is based on the Euclidean Algorithm and a modified Chien search. The error evaluation is done by solving a generalization of Forney's formula.
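
    A minimal sketch of the Chien-search step in its standard form (the paper uses a modified version; the field GF($2^4$) with primitive polynomial $x^4+x+1$ is an assumed example): position $i$ is an error location exactly when the error-locator polynomial $\Lambda(x)$ vanishes at $\alpha^{-i}$.

```python
# Chien search over GF(2^4), alpha = 2, primitive polynomial x^4 + x + 1.
EXP, LOG = [0] * 30, [0] * 16
x = 1
for i in range(15):
    EXP[i] = EXP[i + 15] = x                 # EXP doubled to skip a mod
    LOG[x] = i
    x <<= 1
    if x & 0x10:                             # reduce modulo x^4 + x + 1
        x ^= 0x13

def gf_mul(a, b):
    return 0 if a == 0 or b == 0 else EXP[LOG[a] + LOG[b]]

def chien_search(lam, n=15):
    """Return positions i with Lambda(alpha^{-i}) == 0; lam[k] is the
    coefficient of x^k as a field element."""
    errs = []
    for i in range(n):
        point = EXP[(-i) % 15]               # alpha^{-i}
        acc, xp = 0, 1
        for c in lam:                        # evaluate Lambda at the point
            acc ^= gf_mul(c, xp)
            xp = gf_mul(xp, point)
        if acc == 0:
            errs.append(i)
    return errs
```

    Once the Euclidean Algorithm has produced $\Lambda(x)$, a call such as chien_search([1, 7, 3]) (illustrative coefficients) lists the error positions; the error values then follow from the generalized Forney formula mentioned above.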

    Rewriting Flash Memories by Message Passing

    This paper constructs WOM codes that combine rewriting and error correction for mitigating the reliability and endurance problems in flash memory. We consider a rewriting model that is of practical interest to flash applications, where only the second write uses WOM codes. Our WOM code construction is based on binary erasure quantization with LDGM codes, where the rewriting uses message passing and has the potential to share efficient hardware implementations with LDPC codes in practice. We show that the coding scheme achieves the capacity of the rewriting model. Extensive simulations show that the rewriting performance of our scheme compares favorably with that of polar WOM codes in the rate region where high rewriting success probability is desired. We further augment our coding schemes with error-correction capability. By drawing a connection to the conjugate code pairs studied in the context of quantum error correction, we develop a general framework for constructing error-correcting WOM codes. Under this framework, we give an explicit construction of WOM codes whose codewords are contained in BCH codes.
    Comment: Submitted to ISIT 2015
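
    As a self-contained illustration of the write-once constraint the scheme above must respect (cells may only flip from 0 to 1 between writes), here is the classic two-write Rivest-Shamir WOM code storing 2 bits in 3 cells; it is not the paper's construction.

```python
# Rivest-Shamir WOM code: two writes of 2 bits into 3 write-once cells.
FIRST = {(0, 0): (0, 0, 0), (0, 1): (1, 0, 0),
         (1, 0): (0, 1, 0), (1, 1): (0, 0, 1)}
SECOND = {m: tuple(1 - b for b in c) for m, c in FIRST.items()}

def read(cells):
    table = FIRST if sum(cells) <= 1 else SECOND
    return next(m for m, c in table.items() if c == cells)

def write(msg, cells=None):
    if cells is None:
        return FIRST[msg]                     # first write
    if read(cells) == msg:
        return cells                          # message unchanged, write nothing
    new = SECOND[msg]
    assert all(b >= a for a, b in zip(cells, new))  # only 0 -> 1 flips
    return new
```

    For example, write((1, 0)) programs (0, 1, 0), and a later write((0, 1), (0, 1, 0)) returns (0, 1, 1); read recovers each message from the cell weight.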