
    From Polar to Reed-Muller Codes: a Technique to Improve the Finite-Length Performance

    We explore the relationship between polar and RM codes and describe a coding scheme which improves upon the performance of the standard polar code at practical block lengths. Our starting point is the experimental observation that RM codes have a smaller error probability than polar codes under MAP decoding. This motivates us to introduce a family of codes that "interpolates" between RM and polar codes, which we call ${\mathcal C}_{\rm inter} = \{C_{\alpha} : \alpha \in [0, 1]\}$, where $C_{\alpha}\big|_{\alpha = 1}$ is the original polar code and $C_{\alpha}\big|_{\alpha = 0}$ is an RM code. Based on numerical observations, we remark that the error probability under MAP decoding is an increasing function of $\alpha$. MAP decoding has in general exponential complexity, but empirically the performance of polar codes at finite block lengths is boosted by moving along the family ${\mathcal C}_{\rm inter}$ even under low-complexity decoding schemes such as belief propagation or successive cancellation list decoding. We demonstrate the performance gain via numerical simulations for transmission over the erasure channel as well as the Gaussian channel.
    Comment: 8 pages, 7 figures; in IEEE Transactions on Communications, 2014 and in ISIT'1
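    As a rough illustration of the idea, the following Python sketch builds an "interpolating" information set on the BEC by trading a Bhattacharyya-based reliability ranking (the polar criterion) against a generator-row-weight ranking (the RM criterion). The function names, the BEC design channel, and the linear rank-mixing rule are illustrative assumptions, not the authors' exact construction.

def bec_bhattacharyya(n, eps):
    """Bhattacharyya parameters of the 2**n synthetic channels of a BEC(eps)."""
    z = [eps]
    for _ in range(n):
        nxt = []
        for zi in z:
            nxt.append(2 * zi - zi * zi)  # W^- : less reliable split
            nxt.append(zi * zi)           # W^+ : more reliable split
        z = nxt
    return z

def info_set(n, k, alpha, eps=0.5):
    """Information set of the interpolated code C_alpha (illustrative mixing rule).

    alpha = 1.0 reproduces a plain polar design (most reliable synthetic channels);
    alpha = 0.0 reproduces the RM choice (largest row weight, i.e. largest popcount
    of the index, with ties among equal-weight rows broken arbitrarily).
    Intermediate values trade one ranking against the other.
    """
    N = 1 << n
    z = bec_bhattacharyya(n, eps)
    polar_rank = {i: r for r, i in enumerate(sorted(range(N), key=lambda i: z[i]))}
    rm_rank = {i: r for r, i in enumerate(sorted(range(N), key=lambda i: -bin(i).count("1")))}

    def mixed(i):
        return alpha * polar_rank[i] + (1 - alpha) * rm_rank[i]

    return sorted(sorted(range(N), key=mixed)[:k])

if __name__ == "__main__":
    for a in (0.0, 0.5, 1.0):
        print(f"alpha = {a}: info set = {info_set(4, 8, a)}")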

    Successive Cancellation Inactivation Decoding for Modified Reed-Muller and eBCH Codes

    A successive cancellation (SC) decoder with inactivations is proposed as an efficient implementation of SC list (SCL) decoding over the binary erasure channel. The proposed decoder assigns a dummy variable to an information bit whenever it is erased during SC decoding and continues decoding. Inactivated bits are resolved using information gathered from decoding frozen bits. This decoder leverages the structure of the Hadamard matrix, but can be applied to any linear code by representing it as a polar code with dynamic frozen bits. SCL decoders are partially characterized using density evolution to compute the average number of inactivations required to achieve the maximum a posteriori (MAP) decoding performance. The proposed measure quantifies the performance vs. complexity trade-off and provides new insight into the dynamics of the number of paths in SCL decoding. The technique is applied to analyze Reed-Muller (RM) codes with dynamic frozen bits. It is shown that these modified RM codes perform close to extended BCH codes.
    Comment: Accepted at the 2020 ISI
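    The sketch below illustrates the inactivation idea on the BEC: bit estimates are tracked as affine GF(2) expressions in dummy variables, an erased information bit triggers an inactivation, and non-trivial estimates arriving at frozen bits are collected as linear constraints on the dummies. Only static frozen bits (frozen to 0) are handled and back-substitution of the solved dummies is omitted; all names are illustrative, and this is a simplified reading of the proposed decoder rather than a faithful implementation.

import random

def xor(a, b):
    """XOR of two affine GF(2) expressions (constant, frozenset of dummy-variable ids)."""
    if a is None or b is None:   # None models a propagated channel erasure
        return None
    return (a[0] ^ b[0], a[1] ^ b[1])

def sc_inactivation(y, frozen):
    """SC decoding over the BEC with inactivations (illustrative sketch).

    y      : channel outputs, each either (bit, frozenset()) or None for an erasure
    frozen : indices frozen to 0 (static frozen bits only, for simplicity)
    Returns (constraints gathered at frozen bits, number of inactivations).
    """
    state = {"var": 0, "bit": 0, "constraints": []}

    def decode(ch):
        m = len(ch)
        if m == 1:
            i, est = state["bit"], ch[0]
            state["bit"] += 1
            if i in frozen:
                u = (0, frozenset())
                if est is not None and est[1]:
                    state["constraints"].append(est)   # estimate must equal 0: equation in the dummies
            elif est is not None:
                u = est                                # information bit decided directly
            else:
                u = (0, frozenset([state["var"]]))     # inactivation: introduce a dummy variable
                state["var"] += 1
            return [u]
        half = m // 2
        left = [xor(ch[j], ch[j + half]) for j in range(half)]             # check-node (f) step
        p1 = decode(left)                                                  # partial sums of first half
        right = [ch[j + half] if ch[j + half] is not None
                 else xor(ch[j], p1[j]) for j in range(half)]              # variable-node (g) step
        p2 = decode(right)
        return [xor(a, b) for a, b in zip(p1, p2)] + p2                    # re-encode partial sums

    decode(list(y))
    return state["constraints"], state["var"]

if __name__ == "__main__":
    random.seed(0)
    N, eps = 8, 0.4
    frozen = {0, 1, 2, 4}   # RM(1, 3): freeze the low-weight indices
    # transmit the all-zero codeword and erase each position independently
    y = [None if random.random() < eps else (0, frozenset()) for _ in range(N)]
    eqs, inactivations = sc_inactivation(y, frozen)
    print(f"{inactivations} inactivation(s), {len(eqs)} constraint(s) collected")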

    Sublinear Latency for Simplified Successive Cancellation Decoding of Polar Codes

    This work analyzes the latency of the simplified successive cancellation (SSC) decoding scheme for polar codes proposed by Alamdar-Yazdi and Kschischang. It is shown that, unlike conventional successive cancellation decoding, where latency is linear in the block length, the latency of SSC decoding is sublinear. More specifically, the latency of SSC decoding is $O(N^{1-1/\mu})$, where $N$ is the block length and $\mu$ is the scaling exponent of the channel, which captures the speed of convergence of the rate to capacity. Numerical results demonstrate the tightness of the bound and show that most of the latency reduction arises from the parallel decoding of subcodes of rate $0$ or $1$.
    Comment: 20 pages, 6 figures; presented in part at ISIT 2020 and accepted in IEEE Transactions on Wireless Communication
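    A toy latency model in Python, shown below, makes the source of the speed-up concrete: rate-0 and rate-1 sub-blocks are resolved in a single step, so only the unpruned part of the decoding tree contributes cycles. The one-cycle-per-stage accounting and the weight-based frozen set in the example are simplifying assumptions for illustration, not the timing model used in the paper.

def ssc_latency(frozen, lo, hi):
    """Toy clock-cycle count for fully-parallel SSC decoding of sub-block [lo, hi).

    Rate-0 and rate-1 sub-blocks are decoded in one shot instead of being
    traversed bit by bit; every f, g and combine stage of an unpruned node is
    charged one cycle (a simplification, not the paper's exact timing model).
    """
    size = hi - lo
    frozen_count = sum(1 for i in range(lo, hi) if i in frozen)
    if frozen_count == size:      # rate-0 node: output already known
        return 0
    if frozen_count == 0:         # rate-1 node: one stage of hard decisions
        return 1
    mid = lo + size // 2
    return 1 + ssc_latency(frozen, lo, mid) + 1 + ssc_latency(frozen, mid, hi) + 1

if __name__ == "__main__":
    n = 10
    N = 1 << n
    # rough RM-style design: freeze the N//2 indices of smallest Hamming weight
    frozen = set(sorted(range(N), key=lambda i: bin(i).count("1"))[: N // 2])
    print("SSC latency:", ssc_latency(frozen, 0, N), "cycles;",
          "conventional SC:", 2 * N - 2, "cycles")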

    Parallelism versus Latency in Simplified Successive-Cancellation Decoding of Polar Codes

    This paper characterizes the latency of the simplified successive-cancellation (SSC) decoding scheme for polar codes under hardware resource constraints. In particular, when the number of processing elements $P$ that can perform SSC decoding operations in parallel is limited, as is the case in practice, the latency of SSC decoding is $O\left(N^{1-1/\mu}+\frac{N}{P}\log_2\log_2\frac{N}{P}\right)$, where $N$ is the block length of the code and $\mu$ is the scaling exponent of the channel. Three direct consequences of this bound are presented. First, in a fully-parallel implementation where $P=\frac{N}{2}$, the latency of SSC decoding is $O\left(N^{1-1/\mu}\right)$, which is sublinear in the block length. This recovers a result from our earlier work. Second, in a fully-serial implementation where $P=1$, the latency of SSC decoding scales as $O\left(N\log_2\log_2 N\right)$. The multiplicative constant is also calculated: we show that the latency of SSC decoding when $P=1$ is given by $\left(2+o(1)\right) N\log_2\log_2 N$. Third, in a semi-parallel implementation, the smallest $P$ that gives the same latency as that of the fully-parallel implementation is $P=N^{1/\mu}$. The tightness of our bound on SSC decoding latency and the applicability of the foregoing results are validated through extensive simulations.
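    The same toy model used above can be extended with a resource constraint: a stage that has to compute m values with P processing elements is charged ceil(m/P) cycles. The sketch below (with an illustrative weight-based frozen set, not the paper's exact hardware model) shows how the latency grows as P shrinks.

from math import ceil

def ssc_latency_p(frozen, lo, hi, P):
    """Toy latency model for SSC decoding with P processing elements.

    Each f, g or combine stage over m values costs ceil(m / P) cycles; a rate-0
    node costs nothing and a rate-1 node costs one stage of hard decisions.
    This illustrates the resource constraint, not the exact hardware model.
    """
    size = hi - lo
    frozen_count = sum(1 for i in range(lo, hi) if i in frozen)
    if frozen_count == size:
        return 0
    if frozen_count == 0:
        return ceil(size / P)
    half = size // 2
    stage = ceil(half / P)        # cost of one f, g or combine stage at this node
    mid = lo + half
    return (stage + ssc_latency_p(frozen, lo, mid, P)
            + stage + ssc_latency_p(frozen, mid, hi, P) + stage)

if __name__ == "__main__":
    n = 12
    N = 1 << n
    frozen = set(sorted(range(N), key=lambda i: bin(i).count("1"))[: N // 2])
    for P in (1, 16, 64, N // 2):   # fully-serial, semi-parallel and fully-parallel settings
        print(f"P = {P:>4}: latency = {ssc_latency_p(frozen, 0, N, P)} cycles")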