Low-complexity Recurrent Neural Network-based Polar Decoder with Weight Quantization Mechanism
Polar codes have drawn much attention and been adopted in 5G New Radio (NR)
due to their capacity-achieving performance. Recently, as the emerging deep
learning (DL) technique has achieved breakthroughs in many fields, neural
network decoders have been proposed to obtain faster convergence and better
performance than belief propagation (BP) decoding. However, neural networks
are memory-intensive, which hinders the deployment of DL in communication
systems. In this work, a low-complexity recurrent neural network (RNN) polar
decoder with codebook-based weight quantization is proposed. Our test results
show that we can effectively reduce the memory overhead by 98% and alleviate
computational complexity with only a slight performance loss.
Comment: 5 pages, accepted by the 2019 International Conference on Acoustics,
Speech, and Signal Processing (ICASSP)
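The abstract does not spell out the quantization scheme, but codebook-based
weight quantization is commonly realized as 1-D k-means clustering over the
trained weights, with each weight stored as a short index into a shared
table. A minimal sketch under that assumption (the 16-entry codebook, the
initialization, and the function names are illustrative, not the authors'
exact design):

```python
import numpy as np

def codebook_quantize(weights, n_codewords=16, n_iters=20):
    """Quantize a weight array to a small shared codebook via 1-D k-means.

    Returns the codebook and, per weight, the index of its nearest
    codeword; storing indices instead of floats is where the memory
    saving comes from.
    """
    flat = weights.ravel()
    # Initialize codewords evenly over the observed weight range.
    codebook = np.linspace(flat.min(), flat.max(), n_codewords)
    for _ in range(n_iters):
        # Assign each weight to its nearest codeword.
        idx = np.abs(flat[:, None] - codebook[None, :]).argmin(axis=1)
        # Move each codeword to the mean of its assigned weights.
        for k in range(n_codewords):
            if np.any(idx == k):
                codebook[k] = flat[idx == k].mean()
    idx = np.abs(flat[:, None] - codebook[None, :]).argmin(axis=1)
    return codebook, idx.reshape(weights.shape)

rng = np.random.default_rng(0)
w = rng.normal(size=(64, 64)).astype(np.float32)
codebook, idx = codebook_quantize(w)
w_hat = codebook[idx]              # dequantized weights
print(np.mean((w - w_hat) ** 2))   # small reconstruction error
```

With a 16-entry codebook, each 32-bit weight is replaced by a 4-bit index,
roughly an 8x saving per layer before anything else; the 98% figure in the
abstract would additionally depend on the paper's specific design.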
Deepcode: Feedback Codes via Deep Learning
The design of codes for communicating reliably over a statistically
well-defined channel is an important endeavor involving deep mathematical research
and wide-ranging practical applications. In this work, we present the first
family of codes obtained via deep learning, which significantly beats
state-of-the-art codes designed over several decades of research. The
communication channel under consideration is the Gaussian noise channel with
feedback, whose study was initiated by Shannon; feedback is known theoretically
to improve reliability of communication, but no practical codes that do so have
ever been successfully constructed.
We break this logjam by integrating information theoretic insights
harmoniously with recurrent-neural-network based encoders and decoders to
create novel codes that outperform known codes by 3 orders of magnitude in
reliability. We also demonstrate several desirable properties of the codes: (a)
generalization to larger block lengths, (b) composability with known codes, (c)
adaptation to practical constraints. This result also has broader ramifications
for coding theory: even when the channel has a clear mathematical model, deep
learning methodologies, when combined with channel-specific
information-theoretic insights, can potentially beat state-of-the-art codes
constructed over decades of mathematical research.
Comment: 24 pages, 20 figures
A Gated Hypernet Decoder for Polar Codes
Hypernetworks were recently shown to improve the performance of message
passing algorithms for decoding error correcting codes. In this work, we
demonstrate how hypernetworks can be applied to decode polar codes by employing
a new formalization of the polar belief propagation decoding scheme. We
demonstrate that our method improves on the previous results of neural polar
decoders and achieves, for large SNRs, the same bit-error-rate performance as
the successive cancellation list method, which is known to be better than any
belief propagation decoder and very close to the maximum likelihood decoder.
Comment: Accepted to ICASSP 202
Convolutional Neural Network-aided Bit-flipping for Belief Propagation Decoding of Polar Codes
Known for their capacity-achieving abilities, polar codes have been selected
as the control channel coding scheme for 5G communications. To satisfy the
needs of high throughput and low latency, belief propagation (BP) is chosen as
the decoding algorithm. However, in general, the error performance of BP is
worse than that of enhanced successive cancellation (SC). Recently,
critical-set bit-flipping (CS-BF) has been applied to BP decoding to lower
the error rate. However, its trial-and-error process results in even longer
latency. In this work, we propose a convolutional neural network-assisted
bit-flipping (CNN-BF) mechanism to further enhance BP decoding of polar
codes. With carefully designed input data and model architecture, our
proposed CNN-BF can achieve much higher prediction accuracy and better
error-correction capability than CS-BF, but with only half the latency. It
also achieves a lower block error rate (BLER) than CRC-aided SC list
(CA-SCL) decoding.
Comment: 5 pages, 6 figures
Neural Network-based Equalizer by Utilizing Coding Gain in Advance
Recently, deep learning has been exploited in many fields with revolutionary
breakthroughs. In light of this, deep learning-assisted communication systems
have also attracted much attention in recent years and have the potential to
break down the conventional design rules for communication systems. In this
work, we propose two kinds of neural network-based equalizers to exploit the
different characteristics of convolutional neural networks and recurrent
neural networks. An equalizer in the conventional block-based design may
destroy the code structure and reduce the coding gain available to the
decoder. On the contrary, our proposed approach not only compensates for
channel fading but also exploits the code structure by utilizing coding gain
in advance, which can effectively increase the overall coding gain by more
than 1.5 dB.
Comment: 5 pages, 4 figures, accepted by the 2019 Seventh IEEE Global
Conference on Signal and Information Processing
Unsupervised Learning for Neural Network-based Polar Decoder via Syndrome Loss
With the rapid growth of deep learning in many fields, machine
learning-assisted communication systems have attracted much research and many
eye-catching initial results. At the present stage, most methods still demand
massive labeled data for supervised learning. However, obtaining labeled data
is often not feasible in practical applications, which may result in severe
performance degradation due to channel variations. To overcome this
constraint, syndrome loss has been proposed to penalize non-valid decoded
codewords and enable unsupervised learning for neural network-based decoders.
However, it cannot be applied to polar decoders directly. In this work, by
exploiting the nature of polar codes, we propose a modified syndrome loss.
Simulation results demonstrate that domain-specific knowledge of the code
structure can enable unsupervised learning for neural network-based polar
decoders.
Comment: 4 pages, 6 figures
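For context, the generic syndrome loss for linear codes (which, as the
abstract notes, cannot be applied to polar decoders directly, motivating
their modification) can be sketched as a differentiable penalty on soft
parity checks: each check is a product of tanh(LLR/2) terms, near +1 when
satisfied and near -1 when violated. The function name and the toy
parity-check matrix below are illustrative:

```python
import numpy as np

def soft_syndrome_loss(llrs, H):
    """Differentiable syndrome penalty for a linear code.

    llrs : (n,) soft decoder outputs (log-likelihood ratios).
    H    : (m, n) binary parity-check matrix.

    Each soft check is the product of tanh(llr/2) over its bits, so
    (1 - check) penalizes non-valid codewords without needing the
    transmitted labels.
    """
    t = np.tanh(llrs / 2.0)
    checks = np.array([np.prod(t[H[j] == 1]) for j in range(H.shape[0])])
    return np.mean(1.0 - checks)

# Tiny example with two checks: x0 = x1 and x1 = x2.
H = np.array([[1, 1, 0],
              [0, 1, 1]])
valid = np.array([4.0, 5.0, 4.5])     # confident, consistent signs
invalid = np.array([4.0, -5.0, 4.5])  # middle bit flipped
print(soft_syndrome_loss(valid, H))    # near 0
print(soft_syndrome_loss(invalid, H))  # near 2
```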
Low-Complexity LSTM-Assisted Bit-Flipping Algorithm for Successive Cancellation List Polar Decoder
Polar codes have attracted much attention in the past decade due to their
capacity-achieving performance. Higher decoding capability is required for 5G
and beyond-5G (B5G) systems. Although cyclic redundancy check (CRC)-assisted
successive cancellation list bit-flipping (CA-SCLF) decoders have been
developed to obtain better performance, the error-bit-correction
(bit-flipping) problem remains imperfectly solved and hard to design for. In
this work, we leverage expert knowledge in communication systems and adopt
deep learning (DL) techniques to obtain a better solution. A low-complexity
long short-term memory network (LSTM)-assisted CA-SCLF decoder is proposed to
further improve the performance of conventional CA-SCLF while avoiding
complexity and memory overhead. Our test results show that we can effectively
improve the BLER performance by 0.11 dB compared to prior work and reduce the
network's complexity and memory overhead by over 30%.
Comment: 5 pages, 5 figures
Realizing Neural Decoder at the Edge with Ensembled BNN
In this work, we propose extreme compression techniques, such as binarization
and ternarization, for neural decoders such as TurboAE. These methods reduce
memory and computation by a factor of 64, with performance better than that
of 1-bit or 2-bit quantized neural decoders. However, because of the limited
representation capability of binary and ternary networks, the performance is
not as good as that of the real-valued decoder. To close this gap, we further
propose to ensemble four such weak performers for deployment at the edge,
achieving performance similar to the real-valued network. These ensemble
decoders give 16x and 64x savings in memory and computation, respectively,
and help achieve performance similar to the real-valued TurboAE.
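The binarization and ternarization mentioned above can be sketched in the
XNOR-Net / ternary-weight style, where each weight tensor is replaced by a
sign (or sign-and-zero) pattern plus one real scale. The threshold ratio and
function names are illustrative assumptions, not the paper's exact scheme:

```python
import numpy as np

def binarize(W):
    """XNOR-Net-style binarization: W ~ alpha * sign(W).

    Storing the sign matrix (1 bit per weight) plus one float scale per
    tensor replaces 32-bit weights; multiplications reduce to sign
    flips and additions.
    """
    alpha = np.mean(np.abs(W))
    B = np.where(W >= 0, 1.0, -1.0)
    return alpha, B

def ternarize(W, thresh_ratio=0.7):
    """Ternarization: small weights snap to 0 (2 bits per weight).

    The 0.7 * mean|W| threshold follows common ternary-weight practice
    and is an illustrative choice, not the paper's setting.
    """
    delta = thresh_ratio * np.mean(np.abs(W))
    T = np.where(W > delta, 1.0, np.where(W < -delta, -1.0, 0.0))
    mask = T != 0
    alpha = np.abs(W[mask]).mean() if mask.any() else 0.0
    return alpha, T

rng = np.random.default_rng(1)
W = rng.normal(size=(8, 8))
a, B = binarize(W)
print(np.mean((W - a * B) ** 2) < np.var(W))  # scaled signs beat zero info
```

Ensembling then amounts to averaging the soft outputs of several such
compressed decoders trained from different initializations.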
Towards Hardware Implementation of Neural Network-based Communication Algorithms
There is recent interest in neural network (NN)-based communication
algorithms, which have been shown to achieve (beyond) state-of-the-art
performance for a variety of problems or to reduce implementation complexity.
However, most work on this topic is simulation-based, and implementation on
specialized hardware for fast inference, such as field-programmable gate
arrays (FPGAs), is widely ignored. In particular, for practical use, NN
weights should be quantized and inference carried out in fixed-point rather
than the floating-point arithmetic widely used in consumer-class computers
and graphics processing units (GPUs). Moving to such representations enables
higher inference rates and complexity reductions, at the cost of precision
loss. We demonstrate that it is possible to implement NN-based algorithms in
fixed-point arithmetic with quantized weights at negligible performance loss
and with hardware complexity compatible with practical systems, such as FPGAs
and application-specific integrated circuits (ASICs).
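A minimal sketch of the fixed-point weight representation described above,
assuming a signed Qm.n format (here Q2.6, i.e. 8 bits total); the bit-widths
are illustrative, not the ones used in the paper:

```python
import numpy as np

def to_fixed_point(x, int_bits=2, frac_bits=6):
    """Round to a signed Qm.n fixed-point grid (here Q2.6: 8 bits).

    Values are scaled by 2**frac_bits, rounded to integers, saturated
    to the signed range, and rescaled; on hardware, inference then
    runs on the integer representation alone.
    """
    scale = 2 ** frac_bits
    total_bits = int_bits + frac_bits
    lo, hi = -(2 ** (total_bits - 1)), 2 ** (total_bits - 1) - 1
    q = np.clip(np.round(x * scale), lo, hi)
    return q / scale

w = np.array([0.1, -1.337, 3.9, -2.0])
# Quantization step is 2**-6 = 0.015625; 3.9 saturates to 127/64.
print(to_fixed_point(w))
```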
Neural Network-Aided BCJR Algorithm for Joint Symbol Detection and Channel Decoding
Recently, deep learning-assisted communication systems have achieved many
eye-catching results and attracted more and more researchers to this emerging
field. Instead of completely replacing the functional blocks of communication
systems with neural networks, a hybrid BCJRNet symbol detection scheme has
been proposed to combine the advantages of the BCJR algorithm and neural
networks. However, its separate block design not only degrades system
performance but also results in additional hardware complexity. In this work,
we propose a BCJR receiver for joint symbol detection and channel decoding.
It can simultaneously utilize the trellis diagram and channel state
information (CSI) for a more accurate calculation of branch probabilities,
and thus achieves a global optimum with a 2.3 dB gain over the separate block
design. Furthermore, a dedicated neural network model is proposed to replace
the channel-model-based computation of the BCJR receiver, which avoids the
requirement of perfect CSI and is more robust under CSI uncertainty, with a
1.0 dB gain.
Comment: 6 pages, 6 figures, accepted by the 2020 IEEE International Workshop
on Signal Processing Systems (SiPS)
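For reference, the channel-model-based branch probability that such a neural
model would replace can be sketched for an AWGN channel with BPSK branch
labels (uniform priors assumed; all names below are illustrative):

```python
import numpy as np

def branch_probability(y, x, sigma):
    """Channel-model-based branch metric of the BCJR recursion over AWGN:
    gamma is proportional to p(y | x) = N(y; x, sigma^2).

    Computing this requires explicit CSI (here, the noise level sigma);
    a learned replacement avoids that requirement.
    """
    return (np.exp(-(y - x) ** 2 / (2 * sigma ** 2))
            / (np.sqrt(2 * np.pi) * sigma))

# One trellis step: received sample vs. the two BPSK branch labels.
y, sigma = 0.8, 0.5
g_plus = branch_probability(y, +1.0, sigma)   # branch with x = +1
g_minus = branch_probability(y, -1.0, sigma)  # branch with x = -1
print(g_plus > g_minus)  # y = 0.8 favors the +1 branch -> True
```

In a full BCJR pass these branch metrics feed the forward/backward
recursions over the trellis; only the metric computation is sketched here.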