19,958 research outputs found

    Finite-Length Scaling of Polar Codes

    Consider a binary-input memoryless output-symmetric channel $W$. Such a channel has a capacity, call it $I(W)$, and for any $R < I(W)$ and strictly positive constant $P_{\rm e}$ we know that we can construct a coding scheme that allows transmission at rate $R$ with an error probability not exceeding $P_{\rm e}$. Assume now that we let the rate $R$ tend to $I(W)$ and we ask how we have to "scale" the blocklength $N$ in order to keep the error probability fixed to $P_{\rm e}$. We refer to this as the "finite-length scaling" behavior. This question was addressed by Strassen as well as Polyanskiy, Poor and Verdu, and the result is that $N$ must grow at least as the square of the reciprocal of $I(W)-R$. Polar codes are optimal in the sense that they achieve capacity. In this paper, we ask to what degree they are also optimal in terms of their finite-length behavior. Our approach is based on analyzing the dynamics of the un-polarized channels. The main results of this paper can be summarized as follows. Consider the sum of the Bhattacharyya parameters of the sub-channels chosen (by the polar coding scheme) to transmit information. If we require this sum to be smaller than a given value $P_{\rm e} > 0$, then the required blocklength $N$ scales in terms of the rate $R < I(W)$ as $N \geq \frac{\alpha}{(I(W)-R)^{\underline{\mu}}}$, where $\alpha$ is a positive constant that depends on $P_{\rm e}$ and $I(W)$, and $\underline{\mu} = 3.579$. Also, we show that, with the same requirement on the sum of Bhattacharyya parameters, the blocklength scales in terms of the rate like $N \leq \frac{\beta}{(I(W)-R)^{\overline{\mu}}}$, where $\beta$ is a constant that depends on $P_{\rm e}$ and $I(W)$, and $\overline{\mu} = 6$.
    Comment: In IEEE Transactions on Information Theory, 201
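
    As a rough numerical companion to the scaling law quoted above, the sketch below tracks the exact Bhattacharyya (erasure) parameters of the polarized sub-channels for the special case of a binary erasure channel, and reports how the rate achievable under the sum-of-Bhattacharyya criterion approaches capacity as the blocklength grows. The restriction to the BEC, the target value Pe = 1e-3, and all function names are assumptions made for this illustration; it is a minimal sketch, not the analysis of the paper.

        # Minimal sketch, assuming a BEC(eps): there the Bhattacharyya parameter of a
        # sub-channel equals its erasure probability and follows the exact recursion
        # Z -> 2Z - Z^2 (minus branch) and Z -> Z^2 (plus branch).

        def bhattacharyya_bec(eps, n):
            """Return the 2^n sub-channel Bhattacharyya parameters of BEC(eps)."""
            z = [eps]
            for _ in range(n):
                z = [v for zi in z for v in (2 * zi - zi * zi, zi * zi)]
            return z

        def best_rate(eps, n, pe=1e-3):
            """Largest rate whose selected sub-channels have Bhattacharyya sum <= pe."""
            total, k = 0.0, 0
            for zi in sorted(bhattacharyya_bec(eps, n)):
                if total + zi > pe:
                    break
                total, k = total + zi, k + 1
            return k / (1 << n)

        if __name__ == "__main__":
            eps = 0.5                      # I(W) = 1 - eps = 0.5 for the BEC
            for n in range(8, 21, 2):
                gap = (1 - eps) - best_rate(eps, n)
                print(f"N = 2^{n:2d}: gap to capacity = {gap:.5f}")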

    On the Finite Length Scaling of Ternary Polar Codes

    The polarization process of polar codes over a ternary alphabet is studied. Recently it has been shown that the blocklength of polar codes with prime alphabet size scales polynomially with respect to the inverse of the gap between the code rate and the channel capacity. However, except for the binary case, the degree of the polynomial in the bound is extremely large. In this work, it is shown that a polynomial of much lower degree can be computed numerically for the ternary case. Similar results are conjectured for the general case of prime alphabet size.
    Comment: Submitted to ISIT 201
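
    Purely as an illustration of the polarization process being studied, the sketch below specializes the ternary polar transform to a ternary erasure channel, where each polarization step again produces erasure channels (with erasure probabilities 2e - e^2 and e^2, exactly as in the binary case), and measures how the fraction of nearly noiseless sub-channels approaches capacity. The choice of an erasure channel and the numerical thresholds are assumptions for this sketch; it does not reproduce the paper's numerical computation of the scaling exponent.

        # Minimal sketch: polarization of a ternary (q = 3) erasure channel under the
        # standard 2x2 kernel over GF(3). For an erasure channel with erasure
        # probability e, the synthesized channels are erasure channels with
        # probabilities 2e - e^2 ("minus") and e^2 ("plus").
        import math

        def polarize_erasure(eps, n):
            e = [eps]
            for _ in range(n):
                e = [v for ei in e for v in (2 * ei - ei * ei, ei * ei)]
            return e

        def achievable_rate(eps, n, q=3, threshold=1e-6):
            """Fraction of nearly noiseless sub-channels, in bits per channel use."""
            e = polarize_erasure(eps, n)
            return sum(ei < threshold for ei in e) / len(e) * math.log2(q)

        if __name__ == "__main__":
            eps = 0.4
            capacity = (1 - eps) * math.log2(3)   # ternary erasure channel capacity
            for n in range(6, 19, 3):
                print(f"N = 2^{n:2d}: gap = {capacity - achievable_rate(eps, n):.5f}")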

    From Polar to Reed-Muller Codes: a Technique to Improve the Finite-Length Performance

    We explore the relationship between polar and RM codes and we describe a coding scheme which improves upon the performance of the standard polar code at practical block lengths. Our starting point is the experimental observation that RM codes have a smaller error probability than polar codes under MAP decoding. This motivates us to introduce a family of codes that "interpolates" between RM and polar codes, call this family ${\mathcal C}_{\rm inter} = \{C_{\alpha} : \alpha \in [0, 1]\}$, where $C_{\alpha} \big|_{\alpha = 1}$ is the original polar code and $C_{\alpha} \big|_{\alpha = 0}$ is an RM code. Based on numerical observations, we remark that the error probability under MAP decoding is an increasing function of $\alpha$. MAP decoding has in general exponential complexity, but empirically the performance of polar codes at finite block lengths is boosted by moving along the family ${\mathcal C}_{\rm inter}$ even under low-complexity decoding schemes such as belief propagation or successive cancellation list decoding. We demonstrate the performance gain via numerical simulations for transmission over the erasure channel as well as the Gaussian channel.
    Comment: 8 pages, 7 figures, in IEEE Transactions on Communications, 2014 and in ISIT'1
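
    To make the idea of an interpolating family concrete, the sketch below builds a toy family on the BEC whose information set is chosen by blending the polar criterion (small Bhattacharyya parameter) with the RM criterion (large weight of the associated generator row, which equals 2 raised to the number of "plus" polarization steps). This simplified rank-mixing interpolation, the BEC assumption, and the parameters are inventions for illustration only; they are not claimed to be the construction used in the paper.

        # Toy interpolation between a polar code (alpha = 1) and an RM-like code
        # (alpha = 0) on a BEC(eps): rank sub-channels by a convex combination of
        # their Bhattacharyya rank and their generator-row-weight rank, keep the K best.

        def bhattacharyya_bec(eps, n):
            z = [eps]
            for _ in range(n):
                z = [v for zi in z for v in (2 * zi - zi * zi, zi * zi)]
            return z

        def information_set(eps, n, K, alpha):
            N = 1 << n
            z = bhattacharyya_bec(eps, n)
            w = [1 << bin(i).count("1") for i in range(N)]   # associated row weights
            z_rank = {i: r for r, i in enumerate(sorted(range(N), key=lambda i: z[i]))}
            w_rank = {i: r for r, i in enumerate(sorted(range(N), key=lambda i: -w[i]))}
            score = lambda i: alpha * z_rank[i] + (1 - alpha) * w_rank[i]
            return sorted(sorted(range(N), key=score)[:K])

        if __name__ == "__main__":
            n, K, eps = 7, 64, 0.5        # blocklength 128, rate 1/2
            for alpha in (0.0, 0.5, 1.0):
                A = information_set(eps, n, K, alpha)
                print(f"alpha = {alpha}: information set starts with {A[:8]}")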

    How to Achieve the Capacity of Asymmetric Channels

    We survey coding techniques that enable reliable transmission at rates that approach the capacity of an arbitrary discrete memoryless channel. In particular, we take the point of view of modern coding theory and discuss how recent advances in coding for symmetric channels help provide more efficient solutions for the asymmetric case. We consider, in more detail, three basic coding paradigms. The first one is Gallager's scheme, which consists of concatenating a linear code with a non-linear mapping so that the input distribution can be appropriately shaped. We explicitly show that both polar codes and spatially coupled codes can be employed in this scenario. Furthermore, we derive a scaling law between the gap to capacity, the cardinality of the input and output alphabets, and the required size of the mapper. The second one is an integrated scheme in which the code is used both for source coding, in order to create codewords distributed according to the capacity-achieving input distribution, and for channel coding, in order to provide error protection. Such a technique has been recently introduced by Honda and Yamamoto in the context of polar codes, and we show how to apply it also to the design of sparse graph codes. The third paradigm is based on an idea of Böcherer and Mathar, and separates the two tasks of source coding and channel coding by a chaining construction that binds together several codewords. We present conditions for the source code and the channel code, and we describe how to combine any source code with any channel code that fulfill those conditions, in order to provide capacity-achieving schemes for asymmetric channels. In particular, we show that polar codes, spatially coupled codes, and homophonic codes are suitable as basic building blocks of the proposed coding strategy.
    Comment: 32 pages, 4 figures, presented in part at Allerton'14 and published in IEEE Trans. Inform. Theory
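
    The first paradigm above admits a small self-contained illustration: uniform coded bits are pushed through a fixed many-to-one mapper so that the induced channel-input distribution approximates a desired biased distribution, and with m bits per input the mapper can only realize probabilities that are multiples of 2^-m, which is the intuition behind the trade-off between gap to capacity and mapper size mentioned in the abstract. The sketch below uses assumed toy parameters (3 bits per input, target P(X = 1) = 3/8); it only conveys the shaping idea and is not the scheme analyzed in the paper.

        # Toy Gallager-style mapper: blocks of M uniform bits are mapped to one
        # channel input so that the induced distribution approximates a biased target.
        # Assumed for illustration: M = 3 and target P(X = 1) = 3/8.
        import itertools
        from collections import Counter

        M = 3
        ONE_PATTERNS = {(0, 0, 1), (0, 1, 0), (1, 0, 0)}   # 3 of the 8 patterns map to 1

        def gallager_map(bits):
            """Map a bit sequence (length a multiple of M) to channel inputs."""
            assert len(bits) % M == 0
            return [1 if tuple(bits[i:i + M]) in ONE_PATTERNS else 0
                    for i in range(0, len(bits), M)]

        if __name__ == "__main__":
            # Induced input distribution over all equally likely 3-bit blocks.
            counts = Counter(gallager_map(list(b))[0]
                             for b in itertools.product((0, 1), repeat=M))
            print("induced P(X = 1) =", counts[1] / 2 ** M)    # 0.375, i.e. 3/8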

    Partitioned List Decoding of Polar Codes: Analysis and Improvement of Finite Length Performance

    Polar codes represent one of the major recent breakthroughs in coding theory and, because of their attractive features, they have been selected for the upcoming 5G standard. As such, a lot of attention has been devoted to the development of decoding algorithms with good error performance and efficient hardware implementation. One of the leading candidates in this regard is successive-cancellation list (SCL) decoding. However, its hardware implementation requires a large amount of memory. Recently, a partitioned SCL (PSCL) decoder has been proposed to significantly reduce the memory consumption. In this paper, we examine the paradigm of PSCL decoding from both theoretical and practical standpoints: (i) by changing the construction of the code, we are able to improve the performance at no additional computational, latency, or memory cost; (ii) we present an optimal scheme to allocate cyclic redundancy checks (CRCs); and (iii) we provide an upper bound on the list size that allows MAP performance.
    Comment: 2017 IEEE Global Communications Conference (GLOBECOM)
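
    Because the scheme hinges on how CRCs are allocated, the sketch below isolates the generic selection step of CRC-aided list decoding: among the candidates surviving in the list, keep the most likely one that passes its CRC. The candidate list, the 8-bit CRC polynomial, and all helper names are hypothetical inputs chosen for this illustration; the code is not the PSCL decoder of the paper.

        # Generic CRC-aided selection step used by SCL-type decoders (illustrative only).
        # Candidates are (path_metric, bits) pairs from some list decoder; lower metric
        # is better; an 8-bit CRC (polynomial 0x07) is appended to the information bits.

        def crc8(bits, poly=0x07):
            """Bit-serial CRC-8 over a list of 0/1 values, returned MSB first."""
            reg = 0
            for b in bits:
                reg ^= b << 7
                if reg & 0x80:
                    reg = ((reg << 1) ^ poly) & 0xFF
                else:
                    reg = (reg << 1) & 0xFF
            return [(reg >> (7 - i)) & 1 for i in range(8)]

        def attach_crc(info_bits):
            return info_bits + crc8(info_bits)

        def select_candidate(candidates):
            """Best-metric candidate whose trailing 8 bits match the CRC of the rest."""
            for metric, bits in sorted(candidates, key=lambda c: c[0]):
                if bits[-8:] == crc8(bits[:-8]):
                    return metric, bits
            return min(candidates, key=lambda c: c[0])   # no candidate passes the CRC

        if __name__ == "__main__":
            tx = attach_crc([1, 0, 1, 1, 0])
            corrupted = tx[:]
            corrupted[2] ^= 1                 # better metric, but fails the CRC
            candidates = [(0.9, corrupted), (1.4, tx)]
            print("selected:", select_candidate(candidates))   # returns (1.4, tx)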