Performance Evaluation of Channel Decoding With Deep Neural Networks
With the demand for high data rates and low latency in fifth generation (5G)
systems, the deep neural network decoder (NND) has become a promising
candidate due to its capability for one-shot decoding and parallel computing.
In this paper, three types of NND, i.e., the multi-layer perceptron (MLP),
the convolutional neural network (CNN) and the recurrent neural network (RNN),
are proposed with the same parameter magnitude. The performance of these deep
neural networks is evaluated through extensive simulation. Numerical results
show that the RNN has the best decoding performance, yet at the price of the
highest computational overhead. Moreover, we find that a saturation length
exists for each type of neural network, caused by their restricted learning
abilities.
Comment: 6 pages, 11 figures, LaTeX; typos corrected; IEEE ICC 2018, to appear
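The abstract does not give the architectures' exact hyperparameters, so the layer sizes below are illustrative assumptions. The sketch shows the two ideas the abstract leans on: matching parameter magnitude across architectures (by counting parameters) and one-shot decoding (a single forward pass from channel LLRs to hard bits, with no iterations):

```python
import numpy as np

def mlp_param_count(layer_sizes):
    """Total weights + biases of a fully connected network."""
    return sum(n_in * n_out + n_out
               for n_in, n_out in zip(layer_sizes[:-1], layer_sizes[1:]))

def rnn_param_count(input_size, hidden_size, output_size):
    """Single-layer vanilla RNN: input-to-hidden, hidden-to-hidden (plus
    bias), and a linear readout from the final hidden state."""
    return (input_size * hidden_size + hidden_size * hidden_size + hidden_size
            + hidden_size * output_size + output_size)

def mlp_decode_one_shot(llr, weights, biases):
    """One-shot decoding: one forward pass maps channel LLRs to hard bit
    decisions -- no iterations, so many codewords can run in parallel."""
    h = llr
    for W, b in zip(weights[:-1], biases[:-1]):
        h = np.maximum(0.0, h @ W + b)                 # ReLU hidden layers
    logits = h @ weights[-1] + biases[-1]
    return (1.0 / (1.0 + np.exp(-logits)) > 0.5).astype(int)  # sigmoid -> bits
```

For example, an MLP with (hypothetical) layer sizes [16, 128, 64, 8] has 10,952 parameters, while a vanilla RNN with hidden size 64 over the same 16-to-8 mapping has 5,704; comparing such counts is one way to hold "parameter magnitude" fixed across architectures.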
Non-Linear Digital Self-Interference Cancellation for In-Band Full-Duplex Radios Using Neural Networks
Full-duplex systems require very strong self-interference cancellation in
order to operate correctly, and a significant part of the self-interference
signal is due to non-linear effects created by various transceiver impairments.
As such, linear cancellation alone is usually not sufficient and sophisticated
non-linear cancellation algorithms have been proposed in the literature. In
this work, we investigate the use of a neural network as an alternative to the
traditional non-linear cancellation method that is based on polynomial basis
functions. Measurement results from a full-duplex testbed demonstrate that a
small and simple feed-forward neural network canceler works exceptionally well,
as it can match the performance of the polynomial non-linear canceler with
significantly lower computational complexity.
Comment: Presented at the IEEE International Workshop on Signal Processing Advances in Wireless Communications (SPAWC) 201
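The two cancellers being compared can be sketched as follows. The polynomial baseline and the network shapes here are generic illustrations, not the testbed's actual models: a memoryless odd-order polynomial basis on one side, and a single-hidden-layer feed-forward network mapping real/imaginary features of the transmitted samples to an estimate of the self-interference on the other:

```python
import numpy as np

def polynomial_canceller(x, coeffs):
    """Memoryless polynomial SI model: odd-order basis functions
    x * |x|^(2p), one complex coefficient per order p."""
    return sum(c * x * np.abs(x) ** (2 * p) for p, c in enumerate(coeffs))

def nn_canceller(feats, W1, b1, W2, b2):
    """Small feed-forward canceller: real/imag features of the transmitted
    samples in, (re, im) of the estimated self-interference out."""
    h = np.maximum(0.0, feats @ W1 + b1)   # one hidden ReLU layer
    return h @ W2 + b2                     # linear output layer
```

The complexity argument in the abstract comes down to multiplication counts: the polynomial canceller's cost grows with the number of basis functions (and memory taps, in the full model), while the small network's cost is fixed by its layer widths.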
Near Maximum Likelihood Decoding with Deep Learning
A novel and efficient neural decoder algorithm is proposed. The proposed
decoder is based on the neural belief propagation algorithm and the code's
automorphism group. By combining neural belief propagation with permutations
from the automorphism group, we achieve near-maximum-likelihood performance
for high-density parity-check (HDPC) codes. Moreover, the proposed decoder
significantly reduces decoding complexity compared to our earlier work on the
topic. We also investigate the training process and show how it can be accelerated.
Numerical evaluations of the Hessian and the condition number show why the
learning process is accelerated. We demonstrate the decoding algorithm for
various linear block codes of length up to 63 bits.
Comment: The paper will be presented at IZS 201
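The permutation idea can be sketched independently of the neural decoder itself. Assuming a cyclic code (whose automorphism group contains all cyclic shifts), the wrapper below runs a base decoder on several permuted versions of the received LLRs, un-permutes each candidate, and keeps the one that correlates best with the channel observation; the base decoder here is a plain hard-decision stand-in, not the paper's neural BP:

```python
import numpy as np

def decode_with_permutations(llr, base_decoder, perms):
    """Run the base decoder on permuted versions of the received LLRs
    (permutations drawn from the code's automorphism group) and keep the
    candidate codeword that best matches the channel observation."""
    best, best_metric = None, -np.inf
    for perm in perms:
        inv = np.argsort(perm)                 # inverse permutation
        cand = base_decoder(llr[perm])[inv]    # permute, decode, undo
        metric = np.sum((1 - 2 * cand) * llr)  # soft correlation metric
        if metric > best_metric:
            best, best_metric = cand, metric
    return best

# Hard-decision stand-in for the neural BP decoder (illustration only).
hard_decision = lambda l: (l < 0).astype(int)
```

Because every permutation maps codewords to codewords, each permuted decoding attempt is a legitimate decoding of the same transmission, and taking the best of several attempts is what pushes performance toward maximum likelihood.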
Rate Compatible LDPC Neural Decoding Network: A Multi-Task Learning Approach
Deep learning based decoding networks have shown significant improvement in
decoding LDPC codes, but the neural decoders are limited by rate-matching
operations such as puncturing or extending, thus needing to train multiple
decoders with different code rates for a variety of channel conditions. In this
correspondence, we propose a Multi-Task Learning based rate-compatible LDPC
decoding network, which utilizes the structure of raptor-like LDPC codes and can
deal with multiple code rates. In the proposed network, different portions of
parameters are activated to deal with distinct code rates, which leads to
parameter sharing among tasks. Numerical experiments demonstrate the
effectiveness of the proposed method: training the specially designed network
under multiple code rates makes a single decoder compatible with all of those
rates without sacrificing frame error rate performance.
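The parameter-sharing mechanism can be sketched with a binary mask per task. The masks and sizes below are illustrative assumptions, not the paper's actual network: one shared weight matrix serves all supported code rates, and each rate activates only its own subset of entries, so the subsets overlap and the overlapping weights are shared across tasks:

```python
import numpy as np

def rate_compatible_layer(x, W_shared, rate_masks, rate_id):
    """Multi-task sketch: one shared weight matrix; each supported code
    rate activates a subset of it via a fixed binary mask, so lower-rate
    tasks reuse (share) parameters with higher-rate ones."""
    return np.maximum(0.0, x @ (W_shared * rate_masks[rate_id]))
```

In training, batches from different code rates each update only their active sub-network, which is what lets one set of parameters cover the whole rate family instead of one trained decoder per rate.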