Deep Learning-Based Quantization of L-Values for Gray-Coded Modulation
In this work, a deep learning-based quantization scheme for log-likelihood
ratio (L-value) storage is introduced. We analyze the dependency between the
average magnitude of different L-values from the same quadrature amplitude
modulation (QAM) symbol and show that they follow a consistent ordering. Based
on this, we design a deep autoencoder that jointly compresses and separately
reconstructs each L-value, enabling a weighted loss function that more
accurately reconstructs low-magnitude inputs. Our method is competitive with
state-of-the-art maximum mutual information quantization schemes, reducing the
required memory footprint by a factor of up to two, with a performance loss
smaller than 0.1 dB at fewer than two effective bits per L-value, or smaller
than 0.04 dB at 2.25 effective bits. We experimentally show that our proposed
method is a universal compression scheme in the sense that, after training on
an LDPC-coded Rayleigh fading scenario, we can reuse the same network without
further training on other channel models and codes while preserving the same
performance benefits.
Comment: Submitted to IEEE Globecom 201
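The weighted reconstruction loss described above can be sketched as follows. The inverse-magnitude weighting is an illustrative assumption chosen to emphasize low-magnitude (unreliable) L-values, not necessarily the paper's exact scheme:

```python
import numpy as np

def weighted_mse(llr_true, llr_pred, eps=1e-3):
    """Weighted reconstruction loss for L-values.

    Weights inversely proportional to |L-value| emphasize accurate
    reconstruction of low-magnitude inputs, where decoding decisions
    are least reliable. The specific weighting is a sketch, not the
    paper's exact loss.
    """
    w = 1.0 / (np.abs(llr_true) + eps)   # larger weight for small |L|
    w = w / w.sum()                      # normalize to sum to one
    return float(np.sum(w * (llr_true - llr_pred) ** 2))
```

With this weighting, a fixed reconstruction error on a near-zero L-value costs more than the same error on a high-confidence one, steering the autoencoder's capacity toward the inputs that matter most for decoding.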
Deep Log-Likelihood Ratio Quantization
In this work, a deep learning-based method for log-likelihood ratio (LLR)
lossy compression and quantization is proposed, with emphasis on a single-input
single-output uncorrelated fading communication setting. A deep autoencoder
network is trained to compress, quantize and reconstruct the bit log-likelihood
ratios corresponding to a single transmitted symbol. Specifically, the encoder
maps to a latent space whose dimension equals the number of sufficient
statistics required to recover the inputs (three in this case), while the
decoder reconstructs the inputs from a noisy version of the latent
representation, the noise serving to model quantization effects in a
differentiable way.
Simulation results show that, when applied to a standard rate-1/2 low-density
parity-check (LDPC) code, a compression factor of nearly three is achieved at
finite precision when storing an entire codeword, with an incurred performance
loss lower than 0.1 dB compared to straightforward scalar quantization of the
log-likelihood ratios.
Comment: Accepted for publication at EUSIPCO 2019. Camera-ready version
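A minimal sketch of the noisy-latent training trick described above, assuming a 64-QAM setting with six bit LLRs per symbol, a toy linear encoder, a tanh-bounded latent, and uniform scalar quantization; the encoder weights, bit width, and noise model are all illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(llrs, w_enc):
    # Toy linear encoder mapping per-symbol bit LLRs to a 3-dim latent,
    # matching the number of sufficient statistics cited in the abstract.
    # tanh bounds the latent to (-1, 1) so a uniform quantizer applies.
    return np.tanh(llrs @ w_enc)

def noisy_latent(z, n_bits=4):
    # Training-time surrogate: additive uniform noise of up to half a
    # quantization step keeps the pipeline differentiable while
    # mimicking the rounding error of a real quantizer.
    step = 2.0 / (2 ** n_bits)
    return z + rng.uniform(-step / 2, step / 2, size=z.shape)

def quantize(z, n_bits=4):
    # Inference-time uniform scalar quantization of the latent.
    step = 2.0 / (2 ** n_bits)
    return np.round(z / step) * step
```

At inference, `quantize` replaces `noisy_latent`. Storing a 3-dim latent at a few bits per component instead of six finite-precision LLRs gives the kind of roughly threefold storage reduction the abstract reports, though the exact bit widths here are an assumption.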