5,043 research outputs found

    Nonlinear Interference Mitigation via Deep Neural Networks

    A neural-network-based approach is presented to efficiently implement digital backpropagation (DBP). For a 32×100 km fiber-optic link, the resulting "learned" DBP significantly reduces complexity compared to conventional DBP implementations.
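    Schematically, learned DBP unrolls the split-step method into alternating linear (FIR) and nonlinear (phase-rotation) layers and makes the linear filter taps trainable. The following is a minimal sketch of that structure, not the authors' implementation; the function name, tap values, step count, and nonlinear scaling are illustrative assumptions.

        import numpy as np

        def learned_dbp(rx, filters, gammas):
            """rx: received complex baseband samples.
            filters: one complex FIR tap vector per step (learned offline).
            gammas: per-step nonlinear phase scaling (also learnable)."""
            x = rx
            for taps, g in zip(filters, gammas):
                x = np.convolve(x, taps, mode="same")      # linear step: dispersion filter
                x = x * np.exp(-1j * g * np.abs(x) ** 2)   # nonlinear step: Kerr phase rotation
            return x

        # Toy usage with 3 steps and untrained taps; in practice the taps and
        # gammas are fit by gradient descent against known transmit symbols.
        rng = np.random.default_rng(0)
        rx = rng.normal(size=256) + 1j * rng.normal(size=256)
        filters = [np.array([0.05, 0.9, 0.05], dtype=complex)] * 3
        out = learned_dbp(rx, filters, gammas=[0.01] * 3)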

    Machine learning for fiber nonlinearity mitigation in long-haul coherent optical transmission systems

    Fiber nonlinearities arising from the Kerr effect are considered major constraints on increasing the transmission capacity of current optical transmission systems. Digital nonlinearity compensation techniques such as digital backpropagation can perform well but require substantial computing resources. Machine learning can provide low-complexity alternatives, especially for high-dimensional classification problems. Recently, several supervised and unsupervised machine learning techniques have been investigated for fiber nonlinearity mitigation. This paper offers a brief review of the principles, performance, and complexity of these machine learning approaches as applied to nonlinearity mitigation.
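    One unsupervised technique commonly examined in this literature is clustering of received constellation points, so that decision centroids follow the nonlinear distortion rather than the ideal grid. Below is a hedged k-means sketch of that idea; the toy channel model, constellation, and parameters are assumptions for illustration only.

        import numpy as np
        from sklearn.cluster import KMeans

        rng = np.random.default_rng(1)
        levels = np.array([-3, -1, 1, 3])
        const = np.array([a + 1j * b for a in levels for b in levels]) / np.sqrt(10)  # 16QAM
        tx = rng.choice(const, size=4000)
        rx = tx * np.exp(1j * 0.3 * np.abs(tx) ** 2)    # toy power-dependent phase rotation
        rx = rx + 0.05 * (rng.normal(size=tx.size) + 1j * rng.normal(size=tx.size))

        # Cluster the received points in the I/Q plane; each centroid becomes a
        # decision point that tracks the distorted constellation.
        points = np.column_stack([rx.real, rx.imag])
        km = KMeans(n_clusters=16, n_init=10, random_state=0).fit(points)
        decisions = km.predict(points)                  # cluster index per received symbol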

    An Overview on Application of Machine Learning Techniques in Optical Networks

    Today's telecommunication networks have become sources of enormous amounts of widely heterogeneous data. This information can be retrieved from network traffic traces, network alarms, signal quality indicators, users' behavioral data, etc. Advanced mathematical tools are required to extract meaningful information from these data and take decisions pertaining to the proper functioning of the networks from the network-generated data. Among these mathematical tools, Machine Learning (ML) is regarded as one of the most promising methodological approaches to perform network-data analysis and enable automated network self-configuration and fault management. The adoption of ML techniques in the field of optical communication networks is motivated by the unprecedented growth of network complexity faced by optical networks in the last few years. Such complexity increase is due to the introduction of a huge number of adjustable and interdependent system parameters (e.g., routing configurations, modulation format, symbol rate, coding schemes, etc.) that are enabled by the usage of coherent transmission/reception technologies, advanced digital signal processing and compensation of nonlinear effects in optical fiber propagation. In this paper we provide an overview of the application of ML to optical communications and networking. We classify and survey relevant literature dealing with the topic, and we also provide an introductory tutorial on ML for researchers and practitioners interested in this field. Although a good number of research papers have recently appeared, the application of ML to optical networks is still in its infancy: to stimulate further work in this area, we conclude the paper proposing new possible research directions

    Deep learning for interference cancellation in non-orthogonal signal based optical communication systems

    Non-orthogonal waveforms are families of signals that improve spectral efficiency at the cost of interference. One well-known waveform, termed spectrally efficient frequency division multiplexing (SEFDM) and initially proposed for wireless systems, has been studied extensively in 60 GHz millimeter-wave communications, optical access network design, and long-haul optical fiber transmission. Experimental demonstrations have shown the advantages of SEFDM over conventional orthogonal techniques in bandwidth saving, data rate, power efficiency, and transmission distance. However, this success comes at the cost of complex signal processing to mitigate the self-created inter-carrier interference (ICI), so a low-complexity interference cancellation approach is urgently needed. Recently, deep learning has been applied in optical communication systems to compensate for linear and nonlinear distortions in orthogonal frequency division multiplexing (OFDM) signals. The multiple processing layers of deep neural networks (DNNs) can simplify signal processing models and efficiently solve problems that lack deterministic models. However, there have been no reports on the use of deep learning to deal with interference in non-orthogonal signals. DNNs can learn complex interference features through the backpropagation mechanism. This work presents our investigation of interference cancellation for non-orthogonal signals using various deep neural networks. Simulation results show that the interference within SEFDM signals can be mitigated efficiently using properly designed neural networks, and they indicate a strong connection between neural network architecture and signal waveform: to achieve optimal performance, all the neurons at each layer have to be connected, while partially connected neural networks cannot learn the complete interference pattern and therefore cannot recover signals efficiently. This work paves the way for research on simplifying neural network design via signal waveform optimization.
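    To make the self-created ICI concrete: SEFDM packs N subcarriers at a fraction alpha < 1 of the orthogonal (OFDM) spacing, so the subcarrier correlation matrix is no longer the identity. The sketch below illustrates this under assumed parameters (N, alpha, and the QPSK mapping are illustrative choices, not values from the paper).

        import numpy as np

        def sefdm_matrix(N, alpha):
            """Columns are N carriers spaced at alpha times the orthogonal
            spacing; alpha = 1 gives the unitary inverse DFT, i.e. plain OFDM."""
            n = np.arange(N)[:, None]          # time sample index
            k = np.arange(N)[None, :]          # subcarrier index
            return np.exp(2j * np.pi * alpha * n * k / N) / np.sqrt(N)

        rng = np.random.default_rng(2)
        qpsk = (rng.choice([-1.0, 1.0], 16) + 1j * rng.choice([-1.0, 1.0], 16)) / np.sqrt(2)
        F = sefdm_matrix(N=16, alpha=0.8)      # 20% bandwidth compression
        tx = F @ qpsk                          # SEFDM time-domain signal

        # The off-diagonal entries of the subcarrier correlation matrix are the
        # self-created ICI a detector (here, a DNN) must undo; they vanish when
        # alpha = 1, which is why OFDM needs no such cancellation stage.
        ici = F.conj().T @ F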

    Neural networks for optical channel equalization in high speed communication systems

    The future demand for data bandwidth will surpass the capabilities of current optical communication systems, which are approaching their limits due to the electrical bandwidth limitations of the transmitter components. Inter-symbol interference (ISI) due to this band limitation is the major degradation factor in achieving high data rates. In this thesis, we investigate several neural network (NN) techniques to combat the physical limits of transmitter components driven at high data rates, exploiting advanced modulation formats with coherent detection. Our main focus with NNs as ISI channel equalizers is to overcome the limitations of conventional optimal receivers by providing lower, scalable complexity and a near-optimal solution.
We propose a novel deep bidirectional long short-term memory (BiLSTM) architecture that is effective in mitigating severe ISI caused by bandlimited components. For the first time, we demonstrate via simulation that our proposed deep BiLSTM achieves the same bit error rate (BER) performance as an optimal maximum likelihood sequence estimator (MLSE) for QPSK modulation. Since NNs are data-driven models, their performance depends acutely on input data quality. We demonstrate how the achievable deep BiLSTM performance degrades as the modulation order increases. We also examine the impact of ISI severity and channel memory length on deep BiLSTM performance. We investigate the performance of various synthetic band-limited channels along with a measured optical channel at 100 Gbaud using a 35 GHz silicon photonic (SiP) modulator. The ISI severity of these channels is quantified with a new graphical view of performance based on the baseline performance gaps between conventional linear and nonlinear optimal solutions. At QAM orders above QPSK, we quantify the deep BiLSTM performance deviation from the optimal MLSE as ISI severity increases. While deep BiLSTM approaches the optimal MLSE performance at 8QAM and 16QAM with a penalty, it greatly surpasses the optimal linear solution at 32QAM. More importantly, the advantage of using self-learning models like NNs is their ability to learn the channel during training, whereas the optimal MLSE requires accurate channel state information.
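    A hedged PyTorch sketch of the kind of deep BiLSTM equalizer described here: stacked bidirectional LSTM layers read a window of received I/Q samples and classify the center symbol. Layer sizes, depth, and window length are illustrative assumptions, not the thesis's exact configuration.

        import torch
        import torch.nn as nn

        class BiLSTMEqualizer(nn.Module):
            def __init__(self, hidden=64, layers=2, n_classes=4):  # 4 classes -> QPSK
                super().__init__()
                self.lstm = nn.LSTM(input_size=2, hidden_size=hidden,
                                    num_layers=layers, bidirectional=True,
                                    batch_first=True)
                self.head = nn.Linear(2 * hidden, n_classes)  # 2x: both directions

            def forward(self, x):            # x: (batch, seq_len, 2) I/Q samples
                h, _ = self.lstm(x)          # h: (batch, seq_len, 2*hidden)
                return self.head(h[:, x.size(1) // 2, :])  # decide the center symbol

        # Toy usage: classify the middle symbol of a 21-sample window; in practice
        # the model is trained with cross-entropy against known transmit symbols.
        model = BiLSTMEqualizer()
        window = torch.randn(8, 21, 2)       # batch of 8 received windows
        logits = model(window)               # (8, 4) class scores per window

    Reading the window in both directions is what lets the equalizer exploit channel memory on either side of the decided symbol, which is why the bidirectional variant suits severe ISI.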

    6G White Paper on Machine Learning in Wireless Communication Networks

    The focus of this white paper is on machine learning (ML) in wireless communications. 6G wireless communication networks will be the backbone of the digital transformation of societies by providing ubiquitous, reliable, and near-instant wireless connectivity for humans and machines. Recent advances in ML research have enabled a wide range of novel technologies such as self-driving vehicles and voice assistants. Such innovation is possible as a result of the availability of advanced ML models, large datasets, and high computational power. On the other hand, the ever-increasing demand for connectivity will require substantial innovation in 6G wireless networks, and ML tools will play a major role in solving problems in the wireless domain. In this paper, we provide an overview of the vision of how ML will impact wireless communication systems. We first give an overview of the ML methods with the highest potential to be used in wireless networks. Then, we discuss the problems that can be solved by using ML in various layers of the network, such as the physical layer, medium access layer, and application layer. Zero-touch optimization of wireless networks using ML is another interesting aspect discussed in this paper. Finally, at the end of each section, we present the important research questions that the section aims to answer.