What Can Machine Learning Teach Us about Communications?
Rapid improvements in machine learning over the past decade are beginning to
have far-reaching effects. For communications, engineers with limited domain
expertise can now use off-the-shelf learning packages to design
high-performance systems based on simulations. Prior to the current revolution
in machine learning, the majority of communication engineers were quite aware
that system parameters (such as filter coefficients) could be learned using
stochastic gradient descent. It was not at all clear, however, that more
complicated parts of the system architecture could be learned as well. In this
paper, we discuss the application of machine-learning techniques to two
communications problems and focus on what can be learned from the resulting
systems. We were pleasantly surprised that the observed gains in one example
have a simple explanation that only became clear in hindsight. In essence, deep
learning discovered a simple and effective strategy that had not been
considered earlier.
Comment: 5 pages, 4 figures, paper presented at ITW 2018, corrected version and updated reference list
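The abstract's observation that system parameters such as filter coefficients can be learned by stochastic gradient descent is easy to make concrete. Below is a minimal sketch, with every value (channel taps, step size, filter length) assumed purely for illustration: a least-mean-squares adaptive equalizer, which is exactly SGD on the squared decision error.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup (all values assumed for illustration): BPSK symbols pass
# through a short dispersive channel, and we learn an FIR equalizer's
# coefficients by stochastic gradient descent on the squared error
# (the classic LMS algorithm).
channel = np.array([0.8, 0.4, 0.2])            # assumed channel impulse response
symbols = rng.choice([-1.0, 1.0], size=5000)   # transmitted BPSK symbols
received = np.convolve(symbols, channel)[: len(symbols)]
received += 0.01 * rng.standard_normal(len(symbols))

num_taps, mu = 7, 0.01        # equalizer length and SGD step size
delay = num_taps // 2         # decision delay
w = np.zeros(num_taps)        # filter coefficients to be learned

for n in range(num_taps, len(symbols)):
    x = received[n - num_taps + 1 : n + 1][::-1]  # newest sample first
    error = symbols[n - delay] - w @ x            # desired minus equalizer output
    w += mu * error * x                           # SGD step on squared error

# Residual mean-squared error of the trained equalizer over the last 500 symbols.
mse = np.mean([(symbols[n - delay] - w @ received[n - num_taps + 1 : n + 1][::-1]) ** 2
               for n in range(len(symbols) - 500, len(symbols))])
```

After a few hundred symbols the equalizer settles and the residual error becomes small; this is the "learnable parameters" baseline that the paper contrasts with learning more complicated parts of the architecture.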
Learned Belief-Propagation Decoding with Simple Scaling and SNR Adaptation
We consider the weighted belief-propagation (WBP) decoder recently proposed
by Nachmani et al. where different weights are introduced for each Tanner graph
edge and optimized using machine learning techniques. Our focus is on
simple-scaling models that use the same weights across certain edges to reduce
the storage and computational burden. The main contribution is to show that
simple scaling with few parameters often achieves the same gain as the full
parameterization. Moreover, several training improvements for WBP are proposed.
For example, it is shown that minimizing average binary cross-entropy is
suboptimal in general in terms of bit error rate (BER) and a new "soft-BER"
loss is proposed which can lead to better performance. We also investigate
parameter adapter networks (PANs) that learn the relation between the
signal-to-noise ratio and the WBP parameters. As an example, for the (32,16)
Reed-Muller code with a highly redundant parity-check matrix, training a PAN
with soft-BER loss gives near-maximum-likelihood performance assuming simple
scaling with only three parameters.
Comment: 5 pages, 5 figures, submitted to ISIT 2019
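To illustrate what a simple-scaling model means here, the following is a hedged sketch (not the authors' code): min-sum belief propagation for the (7,4) Hamming code in which every check-to-variable message is multiplied by one shared weight `alpha`, rather than a separate learned weight per Tanner-graph edge as in full WBP. The value of `alpha` and the toy LLRs are assumptions for the example.

```python
import numpy as np

# Sketch of "simple scaling": min-sum belief propagation where every
# check-to-variable message is scaled by a single shared weight alpha,
# instead of one learned weight per Tanner-graph edge.
H = np.array([[1, 1, 0, 1, 1, 0, 0],   # parity-check matrix of the (7,4) Hamming code
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])

def decode(llr, alpha=0.8, iters=10):
    m, n = H.shape
    v2c = np.tile(llr, (m, 1)) * H               # variable-to-check messages
    for _ in range(iters):
        c2v = np.zeros_like(v2c, dtype=float)
        for i in range(m):
            idx = np.flatnonzero(H[i])
            for j in idx:
                others = [k for k in idx if k != j]
                sign = np.prod(np.sign(v2c[i, others]))
                mag = np.min(np.abs(v2c[i, others]))
                c2v[i, j] = alpha * sign * mag   # one shared scaling weight
        total = llr + c2v.sum(axis=0)            # posterior LLRs
        v2c = (total - c2v) * H                  # exclude each edge's own message
    return (total < 0).astype(int)               # hard decisions

# All-zero codeword over an AWGN-like channel: positive LLRs mean bit 0;
# flip one LLR's sign to emulate a channel error.
llr = np.full(7, 2.0)
llr[3] = -1.0
corrected = decode(llr)
```

In the paper, `alpha` (or a handful of such shared parameters) is what gets trained, and a parameter adapter network can map the SNR to these values; the point of the abstract is that this tiny parameterization often matches the fully edge-weighted decoder.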
Model-Based Machine Learning for Joint Digital Backpropagation and PMD Compensation
We propose a model-based machine-learning approach for
polarization-multiplexed systems by parameterizing the split-step method for
the Manakov-PMD equation. The approach performs hardware-friendly digital backpropagation (DBP) and distributed PMD compensation with performance close to the PMD-free case.
Comment: 3 pages, 2 figures
Revisiting Efficient Multi-Step Nonlinearity Compensation with Machine Learning: An Experimental Demonstration
Efficient nonlinearity compensation in fiber-optic communication systems is considered a key element to go beyond the "capacity crunch". One guiding
principle for previous work on the design of practical nonlinearity
compensation schemes is that fewer steps lead to better systems. In this paper,
we challenge this assumption and show how to carefully design multi-step
approaches that provide better performance-complexity trade-offs than their
few-step counterparts. We consider the recently proposed learned digital
backpropagation (LDBP) approach, where the linear steps in the split-step
method are re-interpreted as general linear functions, similar to the weight
matrices in a deep neural network. Our main contribution lies in an
experimental demonstration of this approach for a 25 Gbaud single-channel
optical transmission system. It is shown how LDBP can be integrated into a
coherent receiver DSP chain and successfully trained in the presence of various
hardware impairments. Our results show that LDBP with limited complexity can
achieve better performance than standard DBP by using very short, but jointly
optimized, finite-impulse response filters in each step. This paper also
provides an overview of recently proposed extensions of LDBP and we comment on
potentially interesting avenues for future work.Comment: 10 pages, 5 figures. Author version of a paper published in the
Journal of Lightwave Technology. OSA/IEEE copyright may appl
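The structure described here, where the linear steps of the split-step method become general linear functions, can be pictured as alternating short FIR filters and memoryless nonlinear phase rotations. The toy forward pass below (all parameters assumed; no training shown) illustrates the signal flow only: in LDBP the FIR taps would be optimized jointly, like weight matrices in a deep network.

```python
import numpy as np

# Toy LDBP forward pass under assumed parameters: alternating linear FIR
# steps (the trainable "weights") and nonlinear steps that undo a Kerr
# phase rotation proportional to the instantaneous power |x|^2.
num_steps, taps_per_step = 4, 5
gamma_dz = 0.01  # assumed nonlinear phase coefficient per step

def ldbp(rx, filters):
    x = rx.astype(complex)
    for h in filters:                       # one FIR step + one nonlinear step
        x = np.convolve(x, h, mode="same")  # short, jointly optimizable FIR
        x = x * np.exp(-1j * gamma_dz * np.abs(x) ** 2)  # inverse Kerr rotation
    return x

# Initialize each FIR as an identity filter (center tap 1); training would
# adjust these taps, e.g. by gradient descent on an SNR-based loss.
filters = [np.eye(1, taps_per_step, taps_per_step // 2).ravel().astype(complex)
           for _ in range(num_steps)]
rx = np.exp(1j * np.linspace(0, np.pi, 64))  # unit-power test waveform
out = ldbp(rx, filters)
```

With identity filters the model reduces to a pure per-step phase rotation; the abstract's finding is that replacing these identities with very short, jointly optimized filters beats standard DBP at limited complexity.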
Model-Based Machine Learning for Joint Digital Backpropagation and PMD Compensation
In this paper, we propose a model-based machine-learning approach for
dual-polarization systems by parameterizing the split-step Fourier method for
the Manakov-PMD equation. The resulting method combines hardware-friendly
time-domain nonlinearity mitigation via the recently proposed learned digital
backpropagation (LDBP) with distributed compensation of polarization-mode
dispersion (PMD). We refer to the resulting approach as LDBP-PMD. We train
LDBP-PMD on multiple PMD realizations and show that it converges within 1% of
its peak dB performance after 428 training iterations on average, yielding a peak effective signal-to-noise ratio only 0.30 dB below the PMD-free case.
Similar to state-of-the-art lumped PMD compensation algorithms in practical
systems, our approach does not assume any knowledge about the particular PMD
realization along the link, nor any knowledge about the total accumulated PMD.
This is a significant improvement compared to prior work on distributed PMD
compensation, where knowledge about the accumulated PMD is typically assumed.
We also compare different parameterization choices in terms of performance,
complexity, and convergence behavior. Lastly, we demonstrate that the learned
models can be successfully retrained after an abrupt change of the PMD
realization along the fiber.
Comment: 10 pages, 11 figures, to appear in the IEEE/OSA Journal of Lightwave Technology
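The distributed structure can be pictured with a toy model, assumed purely for illustration and omitting dispersion and nonlinearity: the link applies an unknown 2x2 unitary polarization rotation per section, and the receiver undoes them section by section. In the paper those inverse rotations are learned jointly with the DBP steps; here we simply apply the exact inverses to show the per-section structure.

```python
import numpy as np

rng = np.random.default_rng(1)

# A 2x2 unitary polarization rotation parameterized by two angles.
def unitary(theta, phi):
    return np.array([[np.cos(theta), np.exp(1j * phi) * np.sin(theta)],
                     [-np.exp(-1j * phi) * np.sin(theta), np.cos(theta)]])

# Channel: each fiber section applies an unknown random rotation to the
# dual-polarization signal (2 rows = two polarizations).
num_sections = 8
link = [unitary(rng.uniform(0, np.pi), rng.uniform(0, 2 * np.pi))
        for _ in range(num_sections)]

tx = rng.standard_normal((2, 100)) + 1j * rng.standard_normal((2, 100))
rx = tx.copy()
for U in link:
    rx = U @ rx                   # per-section polarization rotation

# Receiver: distributed compensation, one inverse rotation per section.
est = rx.copy()
for U in reversed(link):
    est = U.conj().T @ est        # unitary inverse = conjugate transpose
```

The interleaving of these rotations with the DBP steps is what makes the compensation "distributed" rather than lumped at the receiver, and the learned version needs no knowledge of the actual PMD realization.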