38 research outputs found
ASIC Implementation of Time-Domain Digital Backpropagation with Deep-Learned Chromatic Dispersion Filters
We consider time-domain digital backpropagation with chromatic dispersion
filters jointly optimized and quantized using machine-learning techniques.
Compared to the baseline implementations, we show improved BER performance and
>40% power dissipation reductions in 28-nm CMOS.
Comment: 3 pages, 3 figures, updated reference list, added one sentence in the result section for clarity
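As a concrete aside on what "jointly optimized and quantized" can mean for the filters above, a minimal sketch of uniform fixed-point tap quantization is shown below. The function name, tap values, and bit width are illustrative assumptions, not the paper's design.

```python
import numpy as np

def quantize_taps(taps, n_bits):
    """Uniformly quantize complex FIR taps to n_bits per real component.

    Illustrative assumption only: the paper jointly optimizes and quantizes
    its chromatic-dispersion filters; this shows just the quantization grid.
    """
    scale = np.max(np.abs(np.concatenate([taps.real, taps.imag])))
    levels = 2 ** (n_bits - 1) - 1          # symmetric signed grid
    q = lambda v: np.round(v / scale * levels) / levels * scale
    return q(taps.real) + 1j * q(taps.imag)

# Hypothetical short CD-like tap vector, quantized to 6 bits per component.
taps = np.array([0.05 - 0.1j, -0.2 + 0.3j, 0.9 + 0.0j, -0.2 - 0.3j, 0.05 + 0.1j])
q_taps = quantize_taps(taps, n_bits=6)
max_err = float(np.max(np.abs(q_taps - taps)))
```

The quantization error shrinks with the bit width, which is the knob that trades BER performance against power dissipation in a fixed-point ASIC datapath.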
Revisiting Multi-Step Nonlinearity Compensation with Machine Learning
For the efficient compensation of fiber nonlinearity, one of the guiding
principles appears to be: fewer steps are better and more efficient. We
challenge this assumption and show that carefully designed multi-step
approaches can lead to better performance-complexity trade-offs than their
few-step counterparts.
Comment: 4 pages, 3 figures. This is a preprint of a paper submitted to the 2019 European Conference on Optical Communication
Revisiting Efficient Multi-Step Nonlinearity Compensation with Machine Learning: An Experimental Demonstration
Efficient nonlinearity compensation in fiber-optic communication systems is
considered a key element to go beyond the "capacity crunch". One guiding
principle for previous work on the design of practical nonlinearity
compensation schemes is that fewer steps lead to better systems. In this paper,
we challenge this assumption and show how to carefully design multi-step
approaches that provide better performance-complexity trade-offs than their
few-step counterparts. We consider the recently proposed learned digital
backpropagation (LDBP) approach, where the linear steps in the split-step
method are re-interpreted as general linear functions, similar to the weight
matrices in a deep neural network. Our main contribution lies in an
experimental demonstration of this approach for a 25 Gbaud single-channel
optical transmission system. It is shown how LDBP can be integrated into a
coherent receiver DSP chain and successfully trained in the presence of various
hardware impairments. Our results show that LDBP with limited complexity can
achieve better performance than standard DBP by using very short, but jointly
optimized, finite-impulse response filters in each step. This paper also
provides an overview of recently proposed extensions of LDBP and we comment on
potentially interesting avenues for future work.
Comment: 10 pages, 5 figures. Author version of a paper published in the Journal of Lightwave Technology. OSA/IEEE copyright may apply
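The structural idea of LDBP described above, with the linear steps of the split-step method re-interpreted as general learnable linear functions, can be sketched as follows. This is a hedged illustration, not the authors' implementation: the function name, step count, filter length, and initialization are assumptions, and in LDBP the taps would be trained jointly rather than fixed.

```python
import numpy as np

def ldbp_forward(signal, fir_taps_per_step, gamma_eff):
    """One LDBP-style pass: alternate a short FIR filter (the learnable
    linear step, playing the role of a weight matrix) with a memoryless
    nonlinear phase rotation in each step."""
    x = signal
    for taps in fir_taps_per_step:
        x = np.convolve(x, taps, mode="same")             # linear (CD) step
        x = x * np.exp(-1j * gamma_eff * np.abs(x) ** 2)  # nonlinear step
    return x

# Toy usage: 3 steps of 5-tap filters initialized to the identity, with the
# nonlinear coefficient set to zero, so the chain passes the signal through.
rng = np.random.default_rng(1)
steps = []
for _ in range(3):
    taps = np.zeros(5, dtype=complex)
    taps[2] = 1.0                                         # identity filter
    steps.append(taps)
tx = rng.standard_normal(256) + 1j * rng.standard_normal(256)
rx = ldbp_forward(tx, steps, gamma_eff=0.0)
```

The point of the paper is that keeping the per-step filters very short but optimizing all of them jointly beats standard DBP at limited complexity.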
What Can Machine Learning Teach Us about Communications?
Rapid improvements in machine learning over the past decade are beginning to
have far-reaching effects. For communications, engineers with limited domain
expertise can now use off-the-shelf learning packages to design
high-performance systems based on simulations. Prior to the current revolution
in machine learning, the majority of communication engineers were quite aware
that system parameters (such as filter coefficients) could be learned using
stochastic gradient descent. It was not at all clear, however, that more
complicated parts of the system architecture could be learned as well. In this
paper, we discuss the application of machine-learning techniques to two
communications problems and focus on what can be learned from the resulting
systems. We were pleasantly surprised that the observed gains in one example
have a simple explanation that only became clear in hindsight. In essence, deep
learning discovered a simple and effective strategy that had not been
considered earlier.
Comment: 5 pages, 4 figures, paper presented at ITW 2018, corrected version and updated reference list
Model-Based Machine Learning for Joint Digital Backpropagation and PMD Compensation
We propose a model-based machine-learning approach for
polarization-multiplexed systems by parameterizing the split-step method for
the Manakov-PMD equation. This approach performs hardware-friendly DBP and
distributed PMD compensation with performance close to the PMD-free case.
Comment: 3 pages, 2 figures
FPGA Implementation of Multi-Layer Machine Learning Equalizer with On-Chip Training
We design and implement an adaptive machine learning equalizer that alternates multiple linear and nonlinear computational layers on an FPGA. On-chip training via gradient backpropagation is shown to allow for real-time adaptation to time-varying channel impairments.
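A floating-point software analogue of the structure described above, alternating linear and nonlinear layers trained by gradient backpropagation, might look like the sketch below. The layer sizes, learning rate, and toy regression target are assumptions for illustration; the actual design runs in fixed point on the FPGA.

```python
import numpy as np

rng = np.random.default_rng(0)

def forward(x, W1, W2):
    """Two-layer equalizer: linear layer, tanh nonlinearity, linear layer."""
    h = np.tanh(W1 @ x)
    return W2 @ h, h

def eval_mse(W1, W2, X):
    return float(np.mean([(forward(x, W1, W2)[0][0] - np.tanh(x[0])) ** 2
                          for x in X]))

# Toy task (assumed): regress a memoryless nonlinearity of the first sample
# in a 4-sample input window.
W1 = 0.1 * rng.standard_normal((8, 4))
W2 = 0.1 * rng.standard_normal((1, 8))
mu = 0.05
X_eval = rng.standard_normal((200, 4))
mse_before = eval_mse(W1, W2, X_eval)

for _ in range(5000):
    x = rng.standard_normal(4)
    y, h = forward(x, W1, W2)
    e = y - np.tanh(x[0])                  # output error
    gW2 = np.outer(e, h)                   # backprop: output layer gradient
    gh = W2.T @ e                          # backprop into hidden activations
    gW1 = np.outer(gh * (1 - h ** 2), x)   # through the tanh nonlinearity
    W2 -= mu * gW2
    W1 -= mu * gW1

mse_after = eval_mse(W1, W2, X_eval)
```

Running the same update loop continuously on-chip is what allows the equalizer to track time-varying channel impairments in real time.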
Model-Based Machine Learning for Joint Digital Backpropagation and PMD Compensation
In this paper, we propose a model-based machine-learning approach for
dual-polarization systems by parameterizing the split-step Fourier method for
the Manakov-PMD equation. The resulting method combines hardware-friendly
time-domain nonlinearity mitigation via the recently proposed learned digital
backpropagation (LDBP) with distributed compensation of polarization-mode
dispersion (PMD). We refer to the resulting approach as LDBP-PMD. We train
LDBP-PMD on multiple PMD realizations and show that it converges within 1% of
its peak dB performance after 428 training iterations on average, yielding a
peak effective signal-to-noise ratio only 0.30 dB below the PMD-free case.
Similar to state-of-the-art lumped PMD compensation algorithms in practical
systems, our approach does not assume any knowledge about the particular PMD
realization along the link, nor any knowledge about the total accumulated PMD.
This is a significant improvement compared to prior work on distributed PMD
compensation, where knowledge about the accumulated PMD is typically assumed.
We also compare different parameterization choices in terms of performance,
complexity, and convergence behavior. Lastly, we demonstrate that the learned
models can be successfully retrained after an abrupt change of the PMD
realization along the fiber.
Comment: 10 pages, 11 figures, to appear in the IEEE/OSA Journal of Lightwave Technology
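The parameterization idea behind LDBP-PMD, one trainable polarization rotation per split-step section so that PMD is compensated in a distributed fashion, can be illustrated with a single section. The angle parameterization below is one common choice of 2x2 unitary; the paper compares several choices, and the names here are assumptions.

```python
import numpy as np

def pmd_section(x_pol, y_pol, theta, phi):
    """Apply one trainable 2x2 unitary (Jones) rotation to a dual-polarization
    signal; theta and phi would be learned jointly with the LDBP filters."""
    c, s = np.cos(theta), np.sin(theta)
    j = np.exp(1j * phi)
    x_out = c * x_pol + s * j * y_pol
    y_out = -s * np.conj(j) * x_pol + c * y_pol
    return x_out, y_out

# Usage: the rotation is unitary, so applying the section with (-theta, phi)
# undoes it exactly, and the total power in both polarizations is preserved.
rng = np.random.default_rng(2)
x = rng.standard_normal(16) + 1j * rng.standard_normal(16)
y = rng.standard_normal(16) + 1j * rng.standard_normal(16)
xo, yo = pmd_section(x, y, theta=0.3, phi=0.7)
xr, yr = pmd_section(xo, yo, theta=-0.3, phi=0.7)
```

Because each section's angles are free parameters, training can absorb an unknown PMD realization without assuming knowledge of the accumulated PMD, and retraining after an abrupt change amounts to re-fitting these angles.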