
    Harnessing machine learning for fiber-induced nonlinearity mitigation in long-haul coherent optical OFDM

    Coherent optical orthogonal frequency division multiplexing (CO-OFDM) has attracted considerable interest in optical fiber communications due to its simplified digital signal processing (DSP) units, high spectral efficiency, flexibility, and tolerance to linear impairments. However, CO-OFDM's high peak-to-average power ratio makes it highly vulnerable to fiber-induced nonlinearities. DSP-based machine learning has been considered a promising approach for fiber nonlinearity compensation without sacrificing computational complexity. In this paper, we review the existing machine learning approaches for CO-OFDM in a common framework and survey the progress in this area, with a focus on practical aspects and comparison with benchmark DSP solutions.
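    As a toy illustration of the class of techniques this review covers, the sketch below trains a k-nearest-neighbours classifier to recover symbols from a constellation warped by a power-dependent phase rotation, a crude stand-in for fiber nonlinearity that a fixed rectangular slicer handles poorly. All data and parameter values are illustrative assumptions, not taken from the paper.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

# Synthetic 16QAM constellation distorted by a power-dependent phase
# rotation (toy nonlinearity) plus additive Gaussian noise.
rng = np.random.default_rng(1)
levels = np.array([-3.0, -1.0, 1.0, 3.0])
const = (levels[:, None] + 1j * levels[None, :]).ravel() / np.sqrt(10)
labels = rng.integers(0, 16, 6000)
tx = const[labels]
rx = tx * np.exp(1j * 0.4 * np.abs(tx) ** 2) + 0.08 * (
    rng.standard_normal(6000) + 1j * rng.standard_normal(6000))

# A supervised classifier learns the warped decision regions from pilot
# symbols; outer, strongly rotated points would defeat a fixed slicer.
X = np.column_stack([rx.real, rx.imag])
knn = KNeighborsClassifier(n_neighbors=15).fit(X[:4000], labels[:4000])
print("accuracy on held-out symbols:", knn.score(X[4000:], labels[4000:]))
```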

    Machine learning for fiber nonlinearity mitigation in long-haul coherent optical transmission systems

    Fiber nonlinearities arising from the Kerr effect are considered major constraints on increasing the transmission capacity of current optical transmission systems. Digital nonlinearity compensation techniques such as digital backpropagation can perform well but require substantial computing resources. Machine learning can provide low-complexity alternatives, especially for high-dimensional classification problems. Recently, several supervised and unsupervised machine learning techniques have been investigated in the field of fiber nonlinearity mitigation. This paper offers a brief review of the principles, performance, and complexity of these machine learning approaches in the application of nonlinearity mitigation.
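    For reference, digital backpropagation, the benchmark mentioned above, inverts the fiber channel by split-step propagation with negated dispersion and Kerr parameters. The sketch below is a minimal single-channel version with assumed SMF-like parameter values; sign conventions and amplified-span handling are deliberately simplified, so it is a sketch of the idea, not a production implementation.

```python
import numpy as np

def digital_backpropagation(rx, fs, span_length, steps,
                            beta2=-2.17e-26, gamma=1.3e-3, alpha_db_m=0.2e-3):
    """Single-channel split-step DBP sketch. Units: fs in Hz, lengths in m,
    beta2 in s^2/m, gamma in 1/(W*m), alpha in dB/m. Defaults are assumed
    SMF-like values, not taken from the paper."""
    n = len(rx)
    dz = span_length / steps
    alpha = alpha_db_m * np.log(10) / 10            # dB/m -> 1/m (power)
    w = 2 * np.pi * np.fft.fftfreq(n, d=1.0 / fs)   # angular frequency grid
    # One step of inverse dispersion (sign opposite to forward propagation).
    d_inv = np.exp(-0.5j * beta2 * (w ** 2) * dz)
    field = np.asarray(rx, dtype=complex)
    for _ in range(steps):
        field = np.fft.ifft(np.fft.fft(field) * d_inv)          # undo dispersion
        field *= np.exp(alpha * dz / 2.0)                       # undo fiber loss
        field *= np.exp(-1j * gamma * np.abs(field) ** 2 * dz)  # undo Kerr phase
    return field
```

    The per-step nonlinear phase rotation is what makes the method expensive: halving the step size doubles the FFT count, which is the complexity burden that motivates the machine learning alternatives reviewed here.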

    An Overview on Application of Machine Learning Techniques in Optical Networks

    Today's telecommunication networks have become sources of enormous amounts of widely heterogeneous data. This information can be retrieved from network traffic traces, network alarms, signal quality indicators, users' behavioral data, etc. Advanced mathematical tools are required to extract meaningful information from these data and to take decisions pertaining to the proper functioning of the networks. Among these tools, Machine Learning (ML) is regarded as one of the most promising methodological approaches to perform network-data analysis and enable automated network self-configuration and fault management. The adoption of ML techniques in the field of optical communication networks is motivated by the unprecedented growth of network complexity faced by optical networks in the last few years. Such complexity increase is due to the introduction of a huge number of adjustable and interdependent system parameters (e.g., routing configurations, modulation format, symbol rate, coding schemes) that are enabled by the usage of coherent transmission/reception technologies, advanced digital signal processing, and compensation of nonlinear effects in optical fiber propagation. In this paper, we provide an overview of the application of ML to optical communications and networking. We classify and survey the relevant literature on the topic, and we also provide an introductory tutorial on ML for researchers and practitioners interested in this field. Although a good number of research papers have recently appeared, the application of ML to optical networks is still in its infancy; to stimulate further work in this area, we conclude the paper by proposing possible new research directions.

    Optics for AI and AI for Optics

    Artificial intelligence is deeply involved in our daily lives, reinforcing the digital transformation of modern economies and infrastructure. It relies on powerful computing clusters, which face power-consumption bottlenecks in both data transmission and intensive computing. Meanwhile, optics (especially optical communications, which underpin today's telecommunications) is penetrating short-reach connections down to the chip level, thus meeting AI technology and creating numerous opportunities. This book is about the marriage of optics and AI and how each field can benefit from the other. Optics facilitates on-chip neural networks through fast optical computing and energy-efficient interconnects and communications. In turn, AI provides efficient tools to address the challenges of today's optical communication networks, which behave in an increasingly complex manner. The book collects contributions from pioneering researchers in both academia and industry to discuss the challenges and solutions in each of the respective fields.

    Coherent Optical OFDM Modem Employing Artificial Neural Networks for Dispersion and Nonlinearity Compensation in a Long-Haul Transmission System

    In order to satisfy the ever-increasing bandwidth demand of broadband services, the optical orthogonal frequency division multiplexing (OOFDM) scheme is being considered as a promising technique for future high-capacity optical networks. The aim of this thesis is to investigate, theoretically, the feasibility of implementing the coherent optical OFDM (CO-OOFDM) technique in long-haul transmission networks. For CO-OOFDM and Fast-OFDM systems, a set of modulation-format-dependent analogue-to-digital converter (ADC) clipping ratios and quantisation bits has been identified; moreover, CO-OOFDM is more resilient to chromatic dispersion (CD) than the bandwidth-efficient Fast-OFDM scheme. For CO-OOFDM systems, numerical simulations are undertaken to investigate the effect of the number of sub-carriers, the cyclic prefix (CP), and ADC-associated parameters such as the sampling speed, the clipping ratio, and the quantisation bit on system performance over single-mode fibre (SMF) links for data rates up to 80 Gb/s. The use of a large number of sub-carriers is more effective in combating fibre CD than employing a long CP. Moreover, in the presence of fibre nonlinearities, identifying the optimum number of sub-carriers is a crucial factor in determining the modem performance. For signal data rates up to 40 Gb/s, a set of data-rate- and transmission-distance-dependent optimum ADC parameters is identified in this work. These parameters give rise to negligible clipping and quantisation noise; moreover, increasing the ADC sampling speed can improve the dispersion tolerance when transmitting over SMF links. In addition, simulation results show that the use of adaptive modulation schemes improves spectral efficiency, resulting in higher tolerance to CD compared with the case where identical modulation formats are adopted across all sub-carriers. For a given transmission distance, utilizing an artificial neural network (ANN) equalizer improves the system bit error rate (BER) by 50% and 70% when considering, respectively, SMF CD alone and nonlinear effects together with CD. Moreover, for a fixed BER of 10^-3, utilizing the ANN increases the transmission distance by 1.87 times and 2 times, respectively, under the same two scenarios. The proposed ANN equalizer combats SMF nonlinearities more efficiently, by a factor of 7, than a previously published Kerr nonlinearity electrical compensation technique.
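    The sketch below illustrates the general idea of an ANN equalizer in the spirit of this thesis, not its actual architecture or system: a small multilayer perceptron is trained on pilot symbols to map a window of received symbols, distorted by a toy ISI channel with a mild cubic nonlinearity, back to the transmitted symbol. All data and parameter values are assumptions for illustration.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Toy channel: QPSK symbols through a short complex ISI filter with a
# mild cubic nonlinearity and additive noise (assumed, not the thesis setup).
rng = np.random.default_rng(0)
tx = (rng.choice([-1, 1], 6000) + 1j * rng.choice([-1, 1], 6000)) / np.sqrt(2)
lin = np.convolve(tx, [0.9, 0.35 + 0.2j, 0.1], mode="same")
rx = lin + 0.08 * lin * np.abs(lin) ** 2 + 0.05 * (
    rng.standard_normal(6000) + 1j * rng.standard_normal(6000))

# Feature vector per symbol: the received symbol and its two neighbours,
# split into real/imaginary parts; target: the transmitted symbol.
win = np.column_stack([np.roll(rx, k) for k in (-1, 0, 1)])
X = np.column_stack([win.real, win.imag])[1:-1]
y = np.column_stack([tx.real, tx.imag])[1:-1]

ann = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=1000, random_state=0)
ann.fit(X[:4000], y[:4000])            # train on a pilot sequence
eq = ann.predict(X[4000:])             # equalized payload estimates
eq_syms = eq[:, 0] + 1j * eq[:, 1]
```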

    Blind nonlinearity equalization by machine learning based clustering for single- and multi-channel coherent optical OFDM

    Fiber-induced intra- and inter-channel nonlinearities are experimentally tackled using blind nonlinear equalization (NLE) by unsupervised machine learning based clustering (MLC) in ~46-Gb/s single-channel and ~20-Gb/s (middle-channel) multi-channel coherent multi-carrier (OFDM-based) signals. To that end, we introduce, for the first time in optical communications, Hierarchical and Fuzzy-Logic C-means (FLC) based clustering. It is shown that, of the two proposed MLC algorithms, FLC achieves the highest performance at optimum launched optical powers (LOPs), while at very high LOPs Hierarchical clustering can compensate nonlinearities more effectively, but only for low-level modulation formats. FLC also outperforms K-means, Fast-Newton support vector machines, supervised artificial neural networks, and an NLE based on deterministic Volterra analysis when employing BPSK and QPSK. In particular, for the middle channel of a QPSK WDM coherent optical OFDM system at the optimum LOP of -5 dBm and 3200 km of transmission, FLC outperforms the Volterra-based NLE by 2.5 dB in Q-factor. However, for a 16-quadrature-amplitude-modulated single-channel system at 2000 km, the performance benefit of FLC over the inverse Volterra series transfer function (IVSTF) approach reduces to ~0.4 dB at the optimum LOP of 2 dBm. Even when using sophisticated novel clustering designs with 16 clusters, no more than an additional ~0.3 dB Q-factor enhancement is observed. Finally, in contrast to the deterministic Volterra-based NLE, the MLC algorithms can partially tackle stochastic parametric noise amplification.
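    Below is a minimal sketch of fuzzy C-means clustering on received constellation points, the core of the FLC approach as we read it (not the authors' code): cluster centers and soft memberships are estimated blindly, and each symbol is decided by its highest-membership cluster. The fuzzifier value, iteration count, and synthetic data are assumptions.

```python
import numpy as np

def fuzzy_c_means(points, n_clusters, m=2.0, n_iter=100, seed=0):
    """Fuzzy C-means on complex constellation points. Returns complex
    cluster centers and the membership matrix U (n_points x n_clusters)."""
    rng = np.random.default_rng(seed)
    X = np.column_stack([points.real, points.imag])
    centers = X[rng.choice(len(X), n_clusters, replace=False)]
    for _ in range(n_iter):
        # Memberships fall off with distance, softened by the fuzzifier m.
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        U = d ** (-2.0 / (m - 1.0))
        U /= U.sum(axis=1, keepdims=True)
        w = U ** m
        centers = (w.T @ X) / w.sum(axis=0)[:, None]  # weighted centroids
    return centers[:, 0] + 1j * centers[:, 1], U

# Blind QPSK decision example on synthetic noisy symbols (assumed data).
rng = np.random.default_rng(2)
tx = np.exp(1j * (np.pi / 4 + np.pi / 2 * rng.integers(0, 4, 3000)))
rx = tx + 0.15 * (rng.standard_normal(3000) + 1j * rng.standard_normal(3000))
centers, U = fuzzy_c_means(rx, n_clusters=4)
decisions = centers[np.argmax(U, axis=1)]  # highest-membership center
```

    Because the centers are estimated from the received data alone, the equalizer needs no training sequence, which is what makes this approach blind.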

    Methods for Model Complexity Reduction for the Nonlinear Calibration of Amplifiers Using Volterra Kernels

    Volterra models allow modeling of nonlinear dynamical systems, even though they require the estimation of a large number of parameters and consequently carry potentially large computational costs. The pruning of Volterra models is thus of fundamental importance to reduce the computational cost of nonlinear calibration and to improve stability and speed while preserving accuracy. Several techniques (LASSO, DOMP and OBS) and their variants (WLASSO and OBD) are compared in this paper for the experimental calibration of an IF amplifier. The results show that Volterra models can be simplified, yielding models that are 4–5 times sparser with a limited impact on accuracy. About 6 dB of improvement in Error Vector Magnitude (EVM) is obtained, extending the dynamic range of the amplifier. The Symbol Error Rate (SER) is greatly reduced by calibration at a large input power, and pruning reduces the model complexity without hindering the SER. Hence, pruning improves the dynamic range of the amplifier with almost an order-of-magnitude reduction in model complexity. We propose the OBS technique, borrowed from the neural network field, in conjunction with the better-known DOMP technique, to prune the model with the best accuracy. The simulations show, in fact, that the OBS and DOMP techniques outperform the others, while OBD, LASSO and WLASSO are, in turn, less efficient. A methodology for pruning in the complex domain is described, based on the Frisch–Waugh–Lovell (FWL) theorem, to separate the linear and nonlinear sections of the model. This is essential because linear models are used for equalization and cannot be pruned, to preserve model generality vis-à-vis channel variations, whereas nonlinear models must be pruned as much as possible to minimize the computational overhead. This methodology can be extended to models other than Volterra, as the only conditions we impose on the nonlinear model are that it is feedforward and linear in its parameters.
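    The sketch below shows the pruning effect on a toy, real-valued Volterra model: L1 regularization (the LASSO family compared in the paper) drives insignificant kernels exactly to zero. The toy system, model size, and regularization strength are illustrative assumptions, far smaller than the paper's amplifier calibration.

```python
import numpy as np
from sklearn.linear_model import Lasso

def volterra_features(x, memory=3):
    """Real-valued Volterra feature matrix with first- and second-order
    kernels over `memory` taps -- a deliberately tiny illustrative model."""
    taps = [np.roll(x, k) for k in range(memory)]          # delayed copies
    feats = list(taps)
    for i in range(memory):                                # 2nd-order kernels
        for j in range(i, memory):
            feats.append(taps[i] * taps[j])
    return np.column_stack(feats)

# Toy nonlinear amplifier: mild cubic compression plus short linear memory.
rng = np.random.default_rng(0)
x = rng.standard_normal(5000)
y = np.convolve(x - 0.1 * x ** 3, [1.0, 0.2, 0.05], mode="same")

H = volterra_features(x, memory=3)
# L1 regularization zeroes out insignificant kernels, i.e. prunes the model.
fit = Lasso(alpha=1e-3).fit(H[3:], y[3:])   # skip np.roll wrap-around rows
print("active kernels:", np.sum(fit.coef_ != 0), "of", H.shape[1])
```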

    Neural networks for optical channel equalization in high speed communication systems

    The future demand for data bandwidth will surpass the capabilities of current optical communication systems, which are approaching their limits due to the electrical bandwidth limitations of transmitter components. Inter-symbol interference (ISI) due to this band limitation is the major degradation factor in achieving high data rates. In this thesis, we investigate several neural network (NN) techniques to combat the physical limits of transmitter components driven at high data rates while exploiting advanced modulation formats with coherent detection. Our main focus with NNs as ISI channel equalizers is to overcome the limitations of conventional optimal receivers by providing lower, scalable complexity and a near-optimal solution.
    We propose a novel deep bidirectional long short-term memory (BiLSTM) architecture that is effective in mitigating severe ISI caused by band-limited components. For the first time, we demonstrate via simulation that our proposed deep BiLSTM achieves the same bit error rate (BER) performance as an optimal maximum likelihood sequence estimator (MLSE) for QPSK modulation. Since NNs are data-driven models, their performance depends acutely on input data quality. We demonstrate how the achievable deep BiLSTM performance degrades as the modulation order increases. We also examine the impact of ISI severity and channel memory length on deep BiLSTM performance. We investigate the performance of various synthetic band-limited channels along with a measured optical channel at 100 Gbaud using a 35 GHz silicon photonic (SiP) modulator. The ISI severity of these channels is quantified with a new graphical view of performance based on the baseline performance gaps between conventional linear and nonlinear optimal solutions. At QAM orders above QPSK, we quantify the deviation of deep BiLSTM performance from the optimal MLSE as ISI severity increases. While deep BiLSTM approaches the optimal MLSE performance at 8QAM and 16QAM with a penalty, it is able to greatly surpass the optimal linear solution at 32QAM. More importantly, the advantage of using self-learning models like NNs is their ability to learn the channel during training, whereas the optimal MLSE requires accurate channel state information.
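    A minimal PyTorch sketch of a bidirectional LSTM symbol equalizer, reflecting our reading of the idea rather than the thesis' exact architecture, is shown below: the recurrent network sees both past and future received samples and outputs per-symbol class logits. All shapes, hyperparameters, and data are assumed for illustration.

```python
import torch
import torch.nn as nn

class BiLSTMEqualizer(nn.Module):
    """Bidirectional LSTM equalizer sketch. Input: received samples as
    (real, imag) pairs; output: per-symbol logits over the constellation,
    so past and future samples both inform each decision."""
    def __init__(self, n_symbols=4, hidden=64, layers=2):
        super().__init__()
        self.lstm = nn.LSTM(input_size=2, hidden_size=hidden,
                            num_layers=layers, bidirectional=True,
                            batch_first=True)
        self.head = nn.Linear(2 * hidden, n_symbols)  # 2x: both directions

    def forward(self, x):              # x: (batch, seq_len, 2)
        h, _ = self.lstm(x)
        return self.head(h)            # (batch, seq_len, n_symbols)

# One hypothetical training step: random stand-ins for received samples
# and transmitted QPSK symbol indices (batch of 8 sequences, 128 symbols).
model = BiLSTMEqualizer()
rx = torch.randn(8, 128, 2)
labels = torch.randint(0, 4, (8, 128))
logits = model(rx)                                   # (8, 128, 4)
loss = nn.CrossEntropyLoss()(logits.transpose(1, 2), labels)
loss.backward()
```

    The bidirectional pass is what distinguishes this from a plain LSTM equalizer: each decision can draw on channel memory on both sides of the symbol, mirroring how MLSE exploits the whole sequence.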