Learning End-to-End Channel Coding with Diffusion Models
It is a known problem that deep-learning-based end-to-end (E2E) channel
coding systems depend on a known and differentiable channel model, because the
learning process relies on gradient-descent optimization. This poses the
challenge of approximating or generating the channel, or its derivative, from
samples obtained by pilot signaling in real-world scenarios. Currently,
there are two prevalent methods to solve this problem. One is to generate the
channel via a generative adversarial network (GAN), and the other is to, in
essence, approximate the gradient via reinforcement learning methods. Other
approaches include score-based methods, variational autoencoders, and
mutual-information-based methods. In this paper, we focus on generative models
and, in particular, on a new promising method called diffusion models, which
have shown higher generation quality in image-based tasks. We will show
that diffusion models can be used in wireless E2E scenarios and that they perform
as well as Wasserstein GANs, while having a more stable training procedure and
better generalization ability in testing.

Comment: 6 pages, WSA/SCC 202
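Below is a minimal sketch, not the paper's implementation, of how a conditional diffusion model could be trained as a differentiable channel surrogate from pilot samples, assuming a DDPM-style noise-prediction objective and a toy AWGN channel; the names (ChannelDiffusion, n_steps, the linear beta schedule) and all hyperparameters are illustrative assumptions.

```python
import torch
import torch.nn as nn

class ChannelDiffusion(nn.Module):
    """Predicts the noise added to a received sample y_t, conditioned on the
    transmitted symbol x and the diffusion step t (DDPM-style)."""
    def __init__(self, dim=2, n_steps=100, hidden=128):
        super().__init__()
        self.n_steps = n_steps
        # Linear beta schedule and the cumulative products used for noising.
        betas = torch.linspace(1e-4, 0.02, n_steps)
        alphas_bar = torch.cumprod(1.0 - betas, dim=0)
        self.register_buffer("sqrt_ab", alphas_bar.sqrt())
        self.register_buffer("sqrt_1mab", (1.0 - alphas_bar).sqrt())
        self.net = nn.Sequential(
            nn.Linear(2 * dim + 1, hidden), nn.SiLU(),
            nn.Linear(hidden, hidden), nn.SiLU(),
            nn.Linear(hidden, dim),
        )

    def forward(self, y_t, x, t):
        # Condition on the channel input x and a scaled step index.
        t_emb = t.float().unsqueeze(-1) / self.n_steps
        return self.net(torch.cat([y_t, x, t_emb], dim=-1))

def diffusion_loss(model, x, y):
    """Noise-prediction loss on channel outputs y, conditioned on inputs x."""
    t = torch.randint(0, model.n_steps, (y.shape[0],), device=y.device)
    eps = torch.randn_like(y)
    y_t = model.sqrt_ab[t, None] * y + model.sqrt_1mab[t, None] * eps
    return nn.functional.mse_loss(model(y_t, x, t), eps)

# Toy usage with pilot pairs from an AWGN channel the learner does not know.
x = torch.randn(256, 2)               # transmitted constellation points
y = x + 0.1 * torch.randn_like(x)     # observed channel outputs
model = ChannelDiffusion()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(200):
    opt.zero_grad()
    diffusion_loss(model, x, y).backward()
    opt.step()
```

Once such a model is trained, its reverse (denoising) sampler is differentiable, so transmitter gradients can in principle be backpropagated through generated channel realizations instead of through the unknown physical channel.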
Concatenated Classic and Neural (CCN) Codes: ConcatenatedAE
Small neural networks (NNs) used for error correction were shown to improve
on classic channel codes and to address channel model changes. We extend the
code dimension of any such structure by using the same NN under one-hot
encoding multiple times and serially concatenating it with an outer classic code.
We design NNs with the same network parameters, where each Reed-Solomon
codeword symbol is an input to a different NN. Significant improvements in
block error probability, compared to the small neural code, are illustrated for an
additive Gaussian noise channel, as well as robustness to channel model changes.

Comment: 6 pages, IEEE WCNC 202
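As a rough illustration of the concatenation idea, the sketch below (again an assumption-laden toy, not the authors' code) reuses one small inner neural encoder/decoder with shared parameters across every symbol of an outer codeword; the outer Reed-Solomon stage is abstracted to integer symbols over GF(2^M), and M, N_CHANNEL_USES, and the RS(15, k) length are illustrative.

```python
import torch
import torch.nn as nn

M = 4                # bits per outer-code symbol -> 2^M one-hot classes
N_CHANNEL_USES = 8   # real channel uses per inner neural codeword

class InnerEncoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2 ** M, 64), nn.ReLU(), nn.Linear(64, N_CHANNEL_USES))
    def forward(self, one_hot):
        z = self.net(one_hot)
        # Normalize each inner codeword to satisfy an average power constraint.
        return z / z.norm(dim=-1, keepdim=True) * N_CHANNEL_USES ** 0.5

class InnerDecoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(N_CHANNEL_USES, 64), nn.ReLU(), nn.Linear(64, 2 ** M))
    def forward(self, y):
        return self.net(y)   # logits over the 2^M symbol values

enc, dec = InnerEncoder(), InnerDecoder()

# One (hypothetical) outer codeword of RS symbols; the SAME enc/dec is applied
# to every symbol position, so the inner network stays small.
outer_codeword = torch.randint(0, 2 ** M, (15,))             # e.g., RS(15, k)
one_hot = nn.functional.one_hot(outer_codeword, 2 ** M).float()
tx = enc(one_hot)                                            # (15, 8) channel symbols
rx = tx + 0.3 * torch.randn_like(tx)                         # AWGN channel
logits = dec(rx)
symbol_llrs = logits.log_softmax(dim=-1)   # soft info for the outer decoder
hard_decisions = logits.argmax(dim=-1)
```

In a full concatenated scheme, the soft symbol information (symbol_llrs here) would feed an outer Reed-Solomon decoder; only the shared inner network is learned.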