
    Two Theorems in List Decoding

    We prove the following results concerning the list decoding of error-correcting codes: (i) We show that for \textit{any} code with relative distance $\delta$ (over a large enough alphabet), the following holds for \textit{random errors}: with high probability, for a $\rho \le \delta - \epsilon$ fraction of random errors (for any $\epsilon > 0$), the received word will have only the transmitted codeword in a Hamming ball of radius $\rho$ around it. Thus, for random errors, one can uniquely correct twice the number of errors that are uniquely correctable from worst-case errors, for any code. A variant of our result also gives a simple algorithm to decode Reed-Solomon codes from random errors that, to the best of our knowledge, runs faster than known algorithms for certain ranges of parameters. (ii) We show that concatenated codes can achieve the list decoding capacity for erasures. A similar result for worst-case errors was proven by Guruswami and Rudra (SODA 08), although their result does not directly imply ours. Our results show that a subset of the random ensemble of codes considered by Guruswami and Rudra also achieves the list decoding capacity for erasures. Our proofs employ simple counting and probabilistic arguments. Comment: 19 pages, 0 figures.
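    To make the "twice the worst-case radius" comparison concrete, the following is our own illustrative calculation (not part of the abstract), assuming only the standard fact that a rate-$R$ Reed-Solomon code has relative distance $\delta = 1 - R$:

        \[
          \underbrace{\tfrac{\delta}{2} = \tfrac{1-R}{2}}_{\text{worst-case unique decoding radius}}
          \qquad\text{vs.}\qquad
          \underbrace{\rho \le \delta - \epsilon = 1 - R - \epsilon}_{\text{random-error radius given by (i)}}
        \]

    For a rate-$1/2$ Reed-Solomon code this means recovering from nearly a $1/2$ fraction of random errors, versus the $1/4$ fraction guaranteed against worst-case errors.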

    On the Construction and Decoding of Concatenated Polar Codes

    A scheme for concatenating the recently invented polar codes with interleaved block codes is considered. By concatenating binary polar codes with interleaved Reed-Solomon codes, we prove that the proposed concatenation scheme captures the capacity-achieving property of polar codes, while having a significantly better error-decay rate. We show that for any $\epsilon > 0$ and total frame length $N$, the parameters of the scheme can be set such that the frame error probability is less than $2^{-N^{1-\epsilon}}$, while the scheme is still capacity achieving. This improves upon $2^{-N^{0.5-\epsilon}}$, the frame error probability of Arikan's polar codes. We also propose decoding algorithms for concatenated polar codes, which significantly improve the error-rate performance at finite block lengths while preserving the low decoding complexity.
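    For a rough sense of the gap between the two error exponents (our own back-of-the-envelope numbers, ignoring the $\epsilon$ correction), take a frame length of $N = 2^{20}$:

        \[
          2^{-N^{0.5}} = 2^{-2^{10}} = 2^{-1024}
          \qquad\text{versus}\qquad
          2^{-N} = 2^{-2^{20}} \approx 2^{-10^{6}},
        \]

    i.e. the exponent of the concatenated scheme grows essentially linearly in $N$ rather than as $\sqrt{N}$.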

    Concatenated Polar Codes and Joint Source-Channel Decoding

    In this dissertation, we mainly address two issues: 1. improving the finite-length performance of capacity-achieving polar codes; 2. using polar codes to efficiently exploit source redundancy to improve the reliability of data storage systems. In the first part of the dissertation, we propose interleaved concatenation schemes of polar codes with outer binary BCH and convolutional codes to improve the finite-length performance of polar codes. For asymptotically long blocklengths, we show that our schemes achieve an exponential error-decay rate, which is much larger than the sub-exponential decay rate of stand-alone polar codes. In practice, we show by simulation that our schemes outperform stand-alone polar codes decoded with successive cancellation or belief propagation decoding. The performance of concatenated polar and convolutional codes can be comparable to that of stand-alone polar codes with list decoding in the high signal-to-noise ratio regime. In addition, we show that the proposed concatenation schemes require lower memory and decoding complexity than belief propagation and list decoding of polar codes. With the proposed schemes, polar codes are able to strike a good balance between performance, memory, and decoding complexity. The second part of the dissertation is devoted to improving the decoding performance of polar codes when there is leftover redundancy after source compression. We focus on language-based sources and propose a joint source-channel decoding scheme for polar codes. We show that if the language decoder is modeled as an erasure-correcting outer block code, the rate of the inner polar code can be improved while still guaranteeing a vanishing probability of error. The improved rate depends on the frozen-bit distribution of polar codes, and we provide a formal proof of the convergence of that distribution. Both a lower bound and a maximum-improved-rate analysis are provided. To compare with the non-iterative joint list decoding scheme for polar codes, we study a joint iterative decoding scheme with graph codes. In particular, irregular repeat-accumulate codes are exploited because of their low encoding/decoding complexity and capacity-achieving property for the binary erasure channel. We propose how to design optimal irregular repeat-accumulate codes with different models of the language decoder. We show that our scheme achieves improved decoding thresholds. A comparison of joint polar decoding and joint irregular repeat-accumulate decoding is given.
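    The interleaving idea behind these concatenation schemes can be sketched in a few lines of code. The snippet below is our own toy illustration, not the dissertation's actual construction: the outer and inner "encoders" are hypothetical placeholders (a 3x repetition code and an identity map) standing in for the BCH/convolutional outer codes and polar inner codes, and only the block-interleaving structure is meant to be faithful.

        from typing import List

        def outer_encode(bits: List[int]) -> List[int]:
            """Toy outer encoder: 3x repetition (placeholder for a BCH or convolutional code)."""
            return [b for b in bits for _ in range(3)]

        def inner_encode(bits: List[int]) -> List[int]:
            """Toy inner encoder: identity map (placeholder for a polar encoder)."""
            return list(bits)

        def interleaved_concatenation(messages: List[List[int]]) -> List[List[int]]:
            """Encode each message with the outer code, block-interleave the outer
            codewords, then protect each interleaved column with its own inner codeword."""
            outer_words = [outer_encode(m) for m in messages]  # rows of the interleaver
            n_out = len(outer_words[0])
            assert all(len(w) == n_out for w in outer_words), "outer codewords must have equal length"
            # Column i collects symbol i of every outer codeword, so an inner-block
            # failure erases at most one symbol in each outer codeword.
            columns = [[w[i] for w in outer_words] for i in range(n_out)]
            return [inner_encode(col) for col in columns]

        msgs = [[1, 0], [0, 1], [1, 1]]
        frame = interleaved_concatenation(msgs)
        print(frame)  # six inner blocks, each carrying one symbol from every outer codeword

    The standard rationale for this structure is that residual errors of the inner decoder are dispersed across many outer codewords, so a modest amount of outer-code redundancy suffices to clean them up.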