661 research outputs found

    A Gated Hypernet Decoder for Polar Codes

    Hypernetworks were recently shown to improve the performance of message passing algorithms for decoding error correcting codes. In this work, we demonstrate how hypernetworks can be applied to decode polar codes by employing a new formalization of the polar belief propagation decoding scheme. We show that our method improves on previous neural polar decoders and, at large SNRs, achieves the same bit-error-rate performance as the successive cancellation list method, which is known to be better than any belief propagation decoder and very close to the maximum likelihood decoder. Comment: Accepted to ICASSP 202
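
    As a rough illustration of the idea above (not the paper's architecture; the module name, layer sizes, and gating form are assumptions), a small hypernetwork can emit gated multiplicative weights that rescale the messages of each BP iteration:

```python
import torch
import torch.nn as nn

class GatedHyperWeights(nn.Module):
    """Illustrative gated hypernetwork: maps per-message absolute LLRs to
    multiplicative weights for the next belief-propagation iteration."""
    def __init__(self, n_msgs, hidden=32):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(n_msgs, hidden), nn.ReLU(),
                                  nn.Linear(hidden, n_msgs))
        self.gate = nn.Sequential(nn.Linear(n_msgs, n_msgs), nn.Sigmoid())

    def forward(self, abs_llrs):
        # The gate passes or suppresses each generated weight: values near 1
        # keep the plain BP update, values near 0 damp unreliable messages.
        return 1.0 + self.gate(abs_llrs) * torch.tanh(self.body(abs_llrs))

# Usage inside a BP iteration (msgs: batch x n_msgs tensor of LLR messages):
# w = GatedHyperWeights(n_msgs=msgs.shape[1])(msgs.abs())
# msgs = w * msgs   # weighted messages fed to the next BP stage
```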

    Learning to Denoise and Decode: A Novel Residual Neural Network Decoder for Polar Codes

    Polar codes have been adopted as the control channel coding scheme in the fifth generation new radio (5G NR) standard due to their capacity-achieving property. Traditional polar decoding algorithms such as successive cancellation (SC) suffer from high latency because of their sequential decoding nature. The neural network decoder (NND) has proven to be a candidate polar decoder since it is capable of one-shot decoding and parallel computing. However, the bit-error-rate (BER) performance of NND is still inferior to that of the SC algorithm. In this paper, we propose a residual neural network decoder (RNND) for polar codes. Different from previous works, which directly use a neural network to decode the symbols received from the channel, the proposed RNND introduces a denoising module based on residual learning before the NND. The proposed residual learning denoiser is able to remove a considerable amount of noise from the received signals. Numerical results show that our proposed RNND outperforms the traditional NND in terms of BER performance at comparable latency. Comment: 6 pages, 9 figures
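
    A minimal sketch of the denoise-then-decode pipeline the abstract describes, assuming PyTorch; the layer widths and module names are illustrative only:

```python
import torch
import torch.nn as nn

class ResidualDenoiser(nn.Module):
    """Illustrative residual-learning denoiser: the network estimates the noise
    and subtracts it from the received signal (layer sizes are assumptions)."""
    def __init__(self, n):
        super().__init__()
        self.noise_est = nn.Sequential(
            nn.Linear(n, 4 * n), nn.ReLU(),
            nn.Linear(4 * n, n))

    def forward(self, y):
        return y - self.noise_est(y)  # denoised signal = received - estimated noise

class NND(nn.Module):
    """Illustrative one-shot neural decoder mapping n channel values to k bits."""
    def __init__(self, n, k):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n, 8 * n), nn.ReLU(),
            nn.Linear(8 * n, k), nn.Sigmoid())

    def forward(self, y):
        return self.net(y)

# Pipeline as described in the abstract: denoise first, then decode.
# bits_hat = NND(n=16, k=8)(ResidualDenoiser(n=16)(y))
```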

    Convolutional Neural Network-aided Bit-flipping for Belief Propagation Decoding of Polar Codes

    Known for their capacity-achieving ability, polar codes have been selected as the control channel coding scheme for 5G communications. To satisfy the need for high throughput and low latency, belief propagation (BP) is chosen as the decoding algorithm. However, in general, the error performance of BP is worse than that of enhanced successive cancellation (SC). Recently, critical-set bit-flipping (CS-BF) has been applied to BP decoding to lower the error rate. However, its trial-and-error process results in even longer latency. In this work, we propose a convolutional neural network-assisted bit-flipping (CNN-BF) mechanism to further enhance BP decoding of polar codes. With carefully designed input data and model architecture, our proposed CNN-BF achieves much higher prediction accuracy and better error-correction capability than CS-BF, with only half the latency. It also achieves a lower block error rate (BLER) than CRC-aided SC list (CA-SCL) decoding. Comment: 5 pages, 6 figures
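
    A hedged sketch of how a 1-D CNN could score flip positions from the soft outputs of a failed BP attempt; the feature layout, channel count, and layer sizes are assumptions, not the paper's design:

```python
import torch
import torch.nn as nn

class FlipPositionCNN(nn.Module):
    """Illustrative CNN that scores which bit to flip when BP decoding fails.
    Input: per-bit features from the failed BP attempt (e.g. absolute LLRs),
    shaped (batch, channels, N)."""
    def __init__(self, n, in_ch=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(in_ch, 16, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(16, 16, kernel_size=5, padding=2), nn.ReLU())
        self.head = nn.Linear(16 * n, n)   # one flip score per bit position

    def forward(self, x):
        f = self.features(x).flatten(start_dim=1)
        return self.head(f)                # argmax gives the bit to flip first

# flip_idx = FlipPositionCNN(n=64)(bp_features).argmax(dim=-1)
```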

    Learning to Flip Successive Cancellation Decoding of Polar Codes with LSTM Networks

    The key to successive cancellation (SC) flip decoding of polar codes is to accurately identify the first error bit. The optimal flipping strategy is considered difficult to obtain due to the lack of an analytical solution. As an alternative, we propose a deep-learning-aided SC flip algorithm. Specifically, before each SC decoding attempt, a long short-term memory (LSTM) network is used to either (i) locate the first error bit, or (ii) undo a previous 'wrong' flip. In each SC attempt, the sequence of log-likelihood ratios (LLRs) derived in the previous SC attempt is exploited to decide which action to take. Accordingly, a two-stage training method for the LSTM network is proposed: learn to locate first error bits in the first stage, and then to undo 'wrong' flips in the second stage. Simulation results show that the proposed approach identifies error bits more accurately and achieves better performance than the state-of-the-art SC flip algorithms. Comment: 5 pages, 7 figures
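
    A small sketch, assuming PyTorch, of an LSTM that reads the LLR sequence of the last SC attempt and chooses between flipping a bit and undoing the previous flip; the hidden size and the single-head output layout are my assumptions:

```python
import torch
import torch.nn as nn

class SCFlipLSTM(nn.Module):
    """Illustrative LSTM acting on the LLR sequence of a failed SC attempt.
    It outputs one score per information bit ("flip here") plus one extra
    score for the "undo the previous flip" action."""
    def __init__(self, k, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, k + 1)   # k flip positions + 1 undo action

    def forward(self, llrs):                    # llrs: (batch, k) from last SC pass
        out, _ = self.lstm(llrs.unsqueeze(-1))
        return self.head(out[:, -1])            # decision from the final LSTM state

# action = SCFlipLSTM(k=32)(llr_sequence).argmax(dim=-1)
# action < k  -> flip that bit and rerun SC;  action == k -> undo the previous flip
```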

    Low-complexity Recurrent Neural Network-based Polar Decoder with Weight Quantization Mechanism

    Polar codes have drawn much attention and been adopted in 5G New Radio (NR) due to their capacity-achieving performance. Recently, as the emerging deep learning (DL) technique has achieved breakthroughs in many fields, neural network decoders have been proposed to obtain faster convergence and better performance than belief propagation (BP) decoding. However, neural networks are memory-intensive, which hinders the deployment of DL in communication systems. In this work, a low-complexity recurrent neural network (RNN) polar decoder with codebook-based weight quantization is proposed. Our test results show that we can effectively reduce the memory overhead by 98% and alleviate the computational complexity with only a slight performance loss. Comment: 5 pages, accepted by the 2019 International Conference on Acoustics, Speech, and Signal Processing (ICASSP)
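
    A minimal sketch of codebook-based weight quantization in the spirit of the abstract (a k-means-style shared codebook; the codebook size and initialization are assumptions):

```python
import numpy as np

def codebook_quantize(weights, n_codewords=16, iters=20):
    """Illustrative codebook (k-means) weight quantization: every weight is
    replaced by the nearest of n_codewords shared values, so only small
    integer indices plus the codebook need to be stored."""
    w = weights.ravel()
    # Initialize the codebook from evenly spaced percentiles of the weights.
    codebook = np.percentile(w, np.linspace(0, 100, n_codewords))
    for _ in range(iters):
        idx = np.abs(w[:, None] - codebook[None, :]).argmin(axis=1)
        for c in range(n_codewords):
            if np.any(idx == c):
                codebook[c] = w[idx == c].mean()
    return codebook[idx].reshape(weights.shape), codebook, idx

# quantized, codebook, idx = codebook_quantize(layer_weights, n_codewords=16)
```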

    DNC-Aided SCL-Flip Decoding of Polar Codes

    Successive-cancellation list (SCL) decoding of polar codes has been adopted for 5G. However, its performance is not very satisfactory at moderate code lengths. Heuristic or deep-learning-aided (DL-aided) flip algorithms have been developed to tackle this problem. The key to successful flip decoding is to accurately identify the error bit positions. In this work, we propose a new flip algorithm with the help of a differentiable neural computer (DNC). New state and action encodings are developed for better DNC training and inference efficiency. The proposed method consists of two phases: i) a flip DNC (F-DNC) is exploited to rank the most likely flip positions for multi-bit flipping; ii) if decoding still fails, a flip-validate DNC (FV-DNC) is used to re-select error bit positions for successive flip decoding trials. Supervised training methods are designed accordingly for the two DNCs. Simulation results show that the proposed DNC-aided SCL-Flip (DNC-SCLF) decoding achieves up to 0.34 dB coding gain or a 54.2% reduction in the average number of decoding attempts compared to prior works. Comment: Submitted to Globecom 202
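
    The two-phase procedure can be sketched as a plain control loop; everything below (the callables scl_decode, crc_ok, f_dnc_rank, fv_dnc_rank and the trial counts) is a placeholder, not the paper's implementation:

```python
def dnc_sclf_decode(llrs, scl_decode, crc_ok, f_dnc_rank, fv_dnc_rank,
                    n_flip=3, max_trials=10):
    """Illustrative control flow only: scl_decode, crc_ok, f_dnc_rank and
    fv_dnc_rank stand in for the SCL decoder, the CRC check, and the two
    DNC models; the trial counts are arbitrary."""
    cand = scl_decode(llrs, flips=[])
    if crc_ok(cand):
        return cand                                  # plain SCL already succeeds
    # Phase i): F-DNC ranks likely error positions; flip several of them at once.
    ranked = f_dnc_rank(llrs)
    cand = scl_decode(llrs, flips=list(ranked[:n_flip]))
    if crc_ok(cand):
        return cand
    # Phase ii): FV-DNC re-selects positions for successive single-flip trials.
    for pos in fv_dnc_rank(llrs, ranked)[:max_trials]:
        cand = scl_decode(llrs, flips=[pos])
        if crc_ok(cand):
            return cand
    return cand                                      # best effort after all trials
```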

    Convolutional Polar Codes

    Arikan's Polar codes attracted much attention as the first efficiently decodable and capacity-achieving codes. Furthermore, Polar codes exhibit an exponentially decreasing block error probability with an asymptotic error exponent upper bounded by 1/2. Since their discovery, many attempts have been made to improve the error exponent and the finite block-length performance, while keeping the block-structured kernel. Recently, two of us introduced a new family of efficiently decodable error-correction codes based on a recently discovered efficiently contractible tensor network family in quantum many-body physics, called branching MERA. These codes, called branching MERA codes, include Polar codes and also extend them in a non-trivial way by substituting the block-structured kernel with a convolutional structure. Here, we perform an in-depth study of a particular example that can be thought of as a direct extension of Arikan's Polar code, which we therefore name Convolutional Polar codes. We prove that these codes polarize and exponentially suppress the channel's error probability, with an asymptotic error exponent log_2(3)/2, which is provably better than that of Polar codes under successive cancellation decoding. We also perform finite block-size numerical simulations which display improved error-correcting capability with only a minor impact on decoding complexity. Comment: Subsumes arXiv:1312.457
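
    For reference, the error-exponent comparison in the abstract can be written out explicitly (β is just notation for the exponent and N is the block length):

```latex
P_e \le 2^{-N^{\beta}} \quad \text{for any } \beta < \beta^{*} \text{ and sufficiently large } N,
\qquad
\beta^{*}_{\text{Polar}} = \tfrac{1}{2} = 0.5,
\qquad
\beta^{*}_{\text{Conv.\ Polar}} = \tfrac{\log_2 3}{2} \approx 0.792 .
```

    Since log_2(3)/2 ≈ 0.792 exceeds 1/2, the block error probability of Convolutional Polar codes decays strictly faster in the block length than that of Polar codes under successive cancellation decoding.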

    Reinforcement Learning for Nested Polar Code Construction

    In this paper, we model nested polar code construction as a Markov decision process (MDP) and tackle it with advanced reinforcement learning (RL) techniques. First, an MDP environment with state, action, and reward is defined in the context of polar coding. Specifically, a state represents the construction of an (N,K) polar code, an action specifies its reduction to an (N,K-1) subcode, and the reward is the decoding performance. A neural network architecture consisting of both policy and value networks is proposed to generate actions based on the observed states, aiming at maximizing the overall reward. A loss function is defined to trade off between exploitation and exploration. To further improve learning efficiency and quality, an 'integrated learning' paradigm is proposed. It first employs a genetic algorithm to generate a population of (sub-)optimal polar codes for each (N,K), and then uses them as prior knowledge to refine the policy in RL. Such a paradigm is shown to accelerate the training process and converge to better performance. Simulation results show that the proposed learning-based polar constructions achieve comparable, or even better, performance than the state of the art under successive cancellation list (SCL) decoders. Last but not least, this is achieved without exploiting any expert knowledge from polar coding theory in the learning algorithms. Comment: 8 pages, 10 figures, propose a multi-stage genetic algorithm
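
    A hedged sketch of the nesting loop implied by the MDP formulation; `policy` and `evaluate` stand in for the policy/value networks and the reward computation and are not the paper's models:

```python
import numpy as np

def build_nested_constructions(N, policy, evaluate):
    """Illustrative nesting loop: a state is the information set of an (N, K)
    polar code, an action freezes one more bit to reach the (N, K-1) subcode,
    and the reward is the decoding performance of that subcode."""
    info_set = set(range(N))                  # start from the trivial (N, N) code
    constructions, rewards = {N: sorted(info_set)}, {}
    for K in range(N, 1, -1):
        candidates = sorted(info_set)
        scores = policy(candidates)           # one score per still-unfrozen bit
        info_set.remove(candidates[int(np.argmax(scores))])   # (N,K) -> (N,K-1)
        constructions[K - 1] = sorted(info_set)
        rewards[K - 1] = evaluate(constructions[K - 1])  # e.g. simulated BLER
    return constructions, rewards
```

    Because every (N,K-1) code is obtained by freezing one bit of the (N,K) code, the resulting family is nested by construction, which is the property the MDP formulation exploits.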

    Neural Belief Propagation Decoding of CRC-Polar Concatenated Codes

    Polar codes are the first class of error-correcting codes that provably achieve the channel capacity at infinite code length. They were selected for use in the fifth generation of cellular mobile communications (5G). In practical scenarios such as 5G, a cyclic redundancy check (CRC) is concatenated with polar codes to improve their finite-length performance. This is mostly beneficial for sequential successive-cancellation list decoders. However, for parallel iterative belief propagation (BP) decoders, the CRC is only used as an early stopping criterion, with incremental error-correction performance improvement. In this paper, we first propose a CRC-polar BP (CPBP) decoder that exchanges extrinsic information between the factor graph of the polar code and that of the CRC. We then propose a neural CPBP (NCPBP) algorithm which improves the CPBP decoder by introducing trainable normalizing weights on the concatenated factor graph. Our results on a 5G polar code of length 128 show that, at a frame error rate of 10^(-5) and with a maximum of 30 iterations, the error-correction performance of CPBP and NCPBP is approximately 0.25 dB and 0.5 dB better than that of the conventional CRC-aided BP decoder, respectively, while introducing almost no latency overhead.
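
    A small sketch, assuming PyTorch, of a check-node update with trainable normalizing weights, one common way "trainable weights on the factor graph" is realized in neural BP decoders; the min-sum form and the shapes are assumptions, not the paper's exact update:

```python
import torch
import torch.nn as nn

class WeightedMinSumCheck(nn.Module):
    """Illustrative check-node update with one trainable normalizing weight
    per edge of the factor graph."""
    def __init__(self, n_edges):
        super().__init__()
        self.alpha = nn.Parameter(torch.ones(n_edges))   # learned normalization

    def forward(self, llr_in):           # llr_in: (batch, n_edges) incoming LLRs
        # Total sign product times the own sign gives the product of the other
        # edges' signs (signs are +/-1 for non-zero LLRs).
        sign = torch.prod(torch.sign(llr_in), dim=-1, keepdim=True) * torch.sign(llr_in)
        mag = llr_in.abs()
        # Minimum magnitude over the *other* edges: use the overall minimum,
        # except at the position attaining it, where the second minimum applies.
        sorted_mag, _ = torch.sort(mag, dim=-1)
        min1, min2 = sorted_mag[..., :1], sorted_mag[..., 1:2]
        other_min = torch.where(mag == min1, min2, min1)
        return self.alpha * sign * other_min
```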

    Low-Complexity LSTM-Assisted Bit-Flipping Algorithm for Successive Cancellation List Polar Decoder

    Polar codes have attracted much attention in the past decade due to their capacity-achieving performance. Higher decoding capability is required for 5G and beyond 5G (B5G). Although cyclic redundancy check (CRC)-assisted successive cancellation list bit-flipping (CA-SCLF) decoders have been developed to obtain better performance, the solution to the error-bit-correction (bit-flipping) problem is still imperfect and hard to design. In this work, we leverage expert knowledge in communication systems and adopt deep learning (DL) techniques to obtain a better solution. A low-complexity long short-term memory network (LSTM)-assisted CA-SCLF decoder is proposed to further improve the performance of the conventional CA-SCLF while avoiding complexity and memory overhead. Our test results show that we can effectively improve the BLER performance by 0.11 dB compared to prior work and reduce the complexity and memory overhead of the network by over 30%. Comment: 5 pages, 5 figures