482 research outputs found

    Harnessing machine learning for fiber-induced nonlinearity mitigation in long-haul coherent optical OFDM

    Coherent optical orthogonal frequency division multiplexing (CO-OFDM) has attracted considerable interest in optical fiber communications owing to its simplified digital signal processing (DSP) units, high spectral efficiency, flexibility, and tolerance to linear impairments. However, CO-OFDM's high peak-to-average power ratio makes it highly vulnerable to fiber-induced nonlinearities. DSP-based machine learning is regarded as a promising approach to fiber nonlinearity compensation that avoids excessive computational complexity. In this paper, we review the existing machine learning approaches for CO-OFDM in a common framework, surveying progress in this area with a focus on practical aspects and comparison with benchmark DSP solutions.
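
    The vulnerability stems from the OFDM waveform itself: summing many independently modulated subcarriers produces occasional large peaks that push the signal into the fiber's nonlinear regime. A minimal sketch of how the peak-to-average power ratio (PAPR) of one OFDM symbol can be computed is shown below; the subcarrier count and QPSK mapping are illustrative assumptions, not parameters from the paper.

```python
# Sketch: PAPR of a single OFDM symbol (illustrative parameters only)
import numpy as np

rng = np.random.default_rng(0)
n_subcarriers = 256  # assumed value, not from the paper

# Random QPSK symbols on each subcarrier
qpsk = (rng.choice([-1, 1], n_subcarriers)
        + 1j * rng.choice([-1, 1], n_subcarriers)) / np.sqrt(2)

# OFDM modulation: the IFFT maps frequency-domain symbols to a time-domain waveform
time_signal = np.fft.ifft(qpsk) * np.sqrt(n_subcarriers)

power = np.abs(time_signal) ** 2
papr_db = 10 * np.log10(power.max() / power.mean())
print(f"PAPR of one OFDM symbol: {papr_db:.1f} dB")
```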

    Optics for AI and AI for Optics

    Artificial intelligence is deeply involved in our daily lives, reinforcing the digital transformation of modern economies and infrastructure. It relies on powerful computing clusters, which face power-consumption bottlenecks in both data transmission and intensive computing. Meanwhile, optics (especially optical communications, which underpin today's telecommunications) is penetrating short-reach connections down to the chip level, where it meets AI technology and creates numerous opportunities. This book is about the marriage of optics and AI and how each can benefit from the other. Optics facilitates on-chip neural networks based on fast optical computing and energy-efficient interconnects and communications. Conversely, AI provides efficient tools to address the challenges of today's optical communication networks, which behave in an increasingly complex manner. The book collects contributions from pioneering researchers in both academia and industry to discuss the challenges and solutions in each of the respective fields.

    An Overview on Application of Machine Learning Techniques in Optical Networks

    Today's telecommunication networks have become sources of enormous amounts of widely heterogeneous data. This information can be retrieved from network traffic traces, network alarms, signal quality indicators, users' behavioral data, etc. Advanced mathematical tools are required to extract meaningful information from these data and to make decisions about the proper functioning of the networks. Among these tools, Machine Learning (ML) is regarded as one of the most promising methodological approaches to perform network-data analysis and enable automated network self-configuration and fault management. The adoption of ML techniques in optical communication networks is motivated by the unprecedented growth of network complexity that optical networks have faced in recent years. This increase in complexity is due to the introduction of a huge number of adjustable and interdependent system parameters (e.g., routing configurations, modulation format, symbol rate, coding schemes) enabled by coherent transmission/reception technologies, advanced digital signal processing, and compensation of nonlinear effects in optical fiber propagation. In this paper, we provide an overview of the application of ML to optical communications and networking. We classify and survey the relevant literature and provide an introductory tutorial on ML for researchers and practitioners interested in this field. Although a good number of research papers have recently appeared, the application of ML to optical networks is still in its infancy; to stimulate further work in this area, we conclude the paper by proposing new possible research directions.
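
    The use cases this survey covers revolve around learning mappings from observable network data (alarms, signal-quality indicators, configuration parameters) to operational decisions. A toy sketch of that pattern follows: a classifier predicts whether a candidate lightpath will deliver acceptable signal quality from coarse parameters. The features, synthetic data, and threshold rule are assumptions for illustration, not taken from the paper.

```python
# Sketch: ML-based quality prediction from lightpath parameters (synthetic data)
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n = 2000
length_km  = rng.uniform(100, 3000, n)   # link length
launch_dbm = rng.uniform(-2, 4, n)       # launch power
mod_order  = rng.choice([2, 4, 6], n)    # bits/symbol (QPSK .. 64-QAM)

# Synthetic ground truth: longer links and denser constellations fail sooner
quality_ok = (length_km / 3000 + mod_order / 6 - launch_dbm / 10
              + rng.normal(0, 0.15, n)) < 1.0

X = np.column_stack([length_km, launch_dbm, mod_order])
X_tr, X_te, y_tr, y_te = train_test_split(X, quality_ok, random_state=0)

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print(f"held-out accuracy: {clf.score(X_te, y_te):.2f}")
```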

    Coherent Optical OFDM Modem Employing Artificial Neural Networks for Dispersion and Nonlinearity Compensation in a Long-Haul Transmission System

    To satisfy the ever-increasing bandwidth demand of broadband services, the optical orthogonal frequency division multiplexing (OOFDM) scheme is being considered as a promising technique for future high-capacity optical networks. The aim of this thesis is to investigate, theoretically, the feasibility of implementing the coherent optical OFDM (CO-OOFDM) technique in long-haul transmission networks. For CO-OOFDM and Fast-OFDM systems, a set of modulation-format-dependent analogue-to-digital converter (ADC) clipping ratios and quantisation bits has been identified; moreover, CO-OOFDM is more resilient to chromatic dispersion (CD) than the bandwidth-efficient Fast-OFDM scheme. For CO-OOFDM systems, numerical simulations are undertaken to investigate the effect of the number of sub-carriers, the cyclic prefix (CP), and ADC-associated parameters such as the sampling speed, the clipping ratio, and the quantisation bit on the system performance over single-mode fibre (SMF) links for data rates up to 80 Gb/s. The use of a large number of sub-carriers is more effective in combating fibre CD than employing a long CP. Moreover, in the presence of fibre nonlinearities, identifying the optimum number of sub-carriers is a crucial factor in determining the modem performance. For signal data rates up to 40 Gb/s, a set of data-rate- and transmission-distance-dependent optimum ADC parameters is identified in this work. These parameters give rise to negligible clipping and quantisation noise; moreover, increasing the ADC sampling speed can improve the dispersion tolerance when transmitting over SMF links. In addition, simulation results show that the use of adaptive modulation schemes improves spectral efficiency, resulting in higher tolerance to CD compared with adopting identical modulation formats across all sub-carriers. For a given transmission distance, utilising an artificial neural network (ANN) equaliser improves the system bit error rate (BER) performance by 50% and 70% when considering, respectively, SMF CD alone and nonlinear effects together with CD. Moreover, for a fixed BER of 10⁻³, utilising the ANN increases the transmission distance by factors of 1.87 and 2, respectively, under the same two scenarios. The proposed ANN equaliser outperforms the previously published Kerr-nonlinearity electrical compensation technique in combating SMF nonlinearities by a factor of 7.
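
    As a rough illustration of the equalisation idea the thesis investigates, the sketch below trains a small neural network to undo a Kerr-like power-dependent phase rotation applied to 16-QAM symbols. The toy channel model, network size, and training setup are assumptions for illustration and do not reproduce the thesis's CO-OOFDM simulations.

```python
# Sketch: ANN equaliser learning to invert a toy nonlinear channel
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(2)
n = 6000
levels = np.array([-3, -1, 1, 3]) / np.sqrt(10)   # 16-QAM amplitude levels
tx = rng.choice(levels, n) + 1j * rng.choice(levels, n)

# Toy channel: Kerr-like power-dependent phase rotation plus additive noise
gamma = 0.4
rx = tx * np.exp(1j * gamma * np.abs(tx) ** 2)
rx += rng.normal(0, 0.03, n) + 1j * rng.normal(0, 0.03, n)

X = np.column_stack([rx.real, rx.imag])   # network input: received I/Q
Y = np.column_stack([tx.real, tx.imag])   # target: transmitted I/Q

ann = MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=2000,
                   random_state=0).fit(X[:5000], Y[:5000])

mse = np.mean((ann.predict(X[5000:]) - Y[5000:]) ** 2)
print(f"post-equalisation MSE on held-out symbols: {mse:.4f}")
```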

    Deep Neural Network-based Receiver for Next-generation LEO Satellite Communications


    Preprint: Using RF-DNA Fingerprints To Classify OFDM Transmitters Under Rayleigh Fading Conditions

    The Internet of Things (IoT) is a collection of Internet-connected devices capable of interacting with the physical world and with computer systems. It is estimated that the IoT will consist of approximately fifty billion devices by the year 2020. In addition to the sheer numbers, the need for IoT security is exacerbated by the fact that many edge devices employ weak or no encryption of the communication link; it has been estimated that almost 70% of IoT devices use no form of encryption. Previous research has suggested the use of Specific Emitter Identification (SEI), a physical-layer technique, as a means of augmenting bit-level security mechanisms such as encryption. The work presented here integrates a Nelder-Mead-based approach for estimating the Rayleigh fading channel coefficients prior to the SEI approach known as RF-DNA fingerprinting. The performance of this estimator is assessed for degrading signal-to-noise ratios and compared with least-squares and minimum mean squared error channel estimators. Additionally, this work presents classification results using RF-DNA fingerprints extracted from received signals that have undergone Rayleigh fading channel correction using Minimum Mean Squared Error (MMSE) equalization. This work also performs radio discrimination using RF-DNA fingerprints generated from the normalized magnitude-squared and phase responses of Gabor coefficients, together with two classifiers. Discrimination of four 802.11a Wi-Fi radios achieves an average percent correct classification of 90% or better at signal-to-noise ratios of 18 and 21 dB or greater, using Rayleigh fading channels comprised of two and five paths, respectively.
    Comment: 13 pages, 14 figures. Currently under review by the IEEE Transactions on Information Forensics and Security.
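
    RF-DNA fingerprints are, at their core, regional statistics of a burst's instantaneous responses. The sketch below extracts variance, skewness, and kurtosis over equal subregions of a signal's amplitude and phase; operating on the raw time series rather than the Gabor coefficients used in the paper, as well as the region count and toy emitter models, are simplifying assumptions.

```python
# Sketch: RF-DNA-style statistical fingerprint extraction (simplified)
import numpy as np
from scipy.stats import skew, kurtosis

def rf_dna_fingerprint(iq, n_regions=8):
    """Concatenate per-region variance/skewness/kurtosis of amplitude and phase."""
    features = []
    for response in (np.abs(iq), np.unwrap(np.angle(iq))):
        # Center and normalise each response, then split into equal subregions
        response = (response - response.mean()) / response.std()
        for region in np.array_split(response, n_regions):
            features += [region.var(), skew(region), kurtosis(region)]
    return np.array(features)

rng = np.random.default_rng(3)
t = np.arange(4096)
burst = (1 + 0.3 * np.sin(2 * np.pi * t / 512)) * np.exp(1j * 2 * np.pi * 0.01 * t)

def emitter(x, a3):
    # Toy device-specific amplifier nonlinearity plus receiver noise
    noise = rng.normal(0, 0.01, x.size) + 1j * rng.normal(0, 0.01, x.size)
    return x * (1 + a3 * np.abs(x) ** 2) + noise

# Two simulated emitters with slightly different cubic nonlinearities
fp_a = rf_dna_fingerprint(emitter(burst, 0.02))
fp_b = rf_dna_fingerprint(emitter(burst, 0.10))
print(f"fingerprint distance between devices: {np.linalg.norm(fp_a - fp_b):.3f}")
```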

    Enabling Technologies for Cognitive Optical Networks
