
    Harnessing machine learning for fiber-induced nonlinearity mitigation in long-haul coherent optical OFDM

    Coherent optical orthogonal frequency division multiplexing (CO-OFDM) has attracted considerable interest in optical fiber communications owing to its simplified digital signal processing (DSP) units, high spectral efficiency, flexibility, and tolerance to linear impairments. However, CO-OFDM's high peak-to-average power ratio makes it highly vulnerable to fiber-induced nonlinearities. DSP-based machine learning has been considered a promising approach to fiber nonlinearity compensation at manageable computational complexity. In this paper, we review the existing machine learning approaches for CO-OFDM in a common framework, survey progress in the area with a focus on practical aspects, and compare them against benchmark DSP solutions.
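
    As background to the peak-to-average power ratio (PAPR) issue this abstract highlights, here is a minimal Python sketch, not taken from the paper, that computes the PAPR of one randomly modulated OFDM symbol; the subcarrier count, QPSK mapping, and oversampling factor are illustrative assumptions.

```python
import numpy as np

def ofdm_papr_db(n_sc=256, oversample=4, seed=0):
    """PAPR (dB) of one OFDM symbol with random QPSK subcarriers.
    All parameters are illustrative, not from the reviewed paper."""
    rng = np.random.default_rng(seed)
    # Random QPSK symbols, one per subcarrier.
    sym = np.exp(1j * (np.pi / 4 + (np.pi / 2) * rng.integers(0, 4, n_sc)))
    # Zero-pad in frequency to oversample the time-domain waveform.
    spec = np.zeros(n_sc * oversample, dtype=complex)
    spec[: n_sc // 2] = sym[: n_sc // 2]     # positive frequencies
    spec[-(n_sc // 2):] = sym[n_sc // 2:]    # negative frequencies
    s = np.fft.ifft(spec)
    power = np.abs(s) ** 2
    return 10 * np.log10(power.max() / power.mean())

print(f"PAPR of one symbol: {ofdm_papr_db():.1f} dB")  # typically ~10 dB
```

    High PAPR means occasional samples far above the mean power, and it is these peaks that drive the signal into the fiber's nonlinear regime.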

    Machine learning for fiber nonlinearity mitigation in long-haul coherent optical transmission systems

    Fiber nonlinearities arising from the Kerr effect are a major constraint on increasing the transmission capacity of current optical transmission systems. Digital nonlinearity compensation techniques such as digital backpropagation perform well but demand substantial computing resources. Machine learning can provide low-complexity solutions, especially for high-dimensional classification problems. Recently, several supervised and unsupervised machine learning techniques have been investigated for fiber nonlinearity mitigation. This paper offers a brief review of the principles, performance, and complexity of these machine learning approaches as applied to nonlinearity mitigation.
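
    For context on the digital backpropagation baseline named above, the following is a minimal single-channel split-step sketch. It is not the paper's implementation: the fiber parameters are typical standard single-mode fiber values, loss and amplifier noise are ignored, and a common sign convention for dispersion is assumed.

```python
import numpy as np

def digital_backprop(rx, fs, n_spans=10, span_km=80.0, steps_per_span=4,
                     beta2=-21.7e-27, gamma=1.3e-3):
    """Minimal single-channel split-step digital backpropagation.

    rx : received complex baseband samples; fs : sample rate [Hz].
    beta2 [s^2/m] and gamma [1/(W m)] are typical SSMF values (assumed,
    not from the paper); fiber loss and amplifier noise are ignored.
    """
    dz = span_km * 1e3 / steps_per_span                  # step size [m]
    w = 2 * np.pi * np.fft.fftfreq(rx.size, d=1.0 / fs)  # angular freq grid
    inv_disp = np.exp(1j * (beta2 / 2) * w**2 * dz)      # inverse dispersion
    a = rx.astype(complex)
    for _ in range(n_spans * steps_per_span):
        a = np.fft.ifft(np.fft.fft(a) * inv_disp)        # undo linear step
        a *= np.exp(-1j * gamma * dz * np.abs(a)**2)     # undo Kerr phase
    return a
```

    The per-step FFT/IFFT pair is what makes digital backpropagation expensive: complexity grows with the number of steps per span, which is precisely the cost that the machine learning approaches reviewed here aim to avoid.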

    Preprint: Using RF-DNA Fingerprints To Classify OFDM Transmitters Under Rayleigh Fading Conditions

    The Internet of Things (IoT) is a collection of Internet-connected devices capable of interacting with the physical world and computer systems. It is estimated that the IoT will consist of approximately fifty billion devices by the year 2020. In addition to the sheer numbers, the need for IoT security is exacerbated by the fact that many edge devices employ weak or no encryption of the communication link; it has been estimated that almost 70% of IoT devices use no form of encryption. Previous research has suggested the use of Specific Emitter Identification (SEI), a physical-layer technique, as a means of augmenting bit-level security mechanisms such as encryption. The work presented here integrates a Nelder-Mead-based approach for estimating the Rayleigh fading channel coefficients prior to the SEI approach known as RF-DNA fingerprinting. The performance of this estimator is assessed for degrading signal-to-noise ratio and compared with least-squares and minimum mean squared error channel estimators. Additionally, this work presents classification results using RF-DNA fingerprints extracted from received signals that have undergone Rayleigh fading channel correction using Minimum Mean Squared Error (MMSE) equalization. This work also performs radio discrimination using RF-DNA fingerprints generated from the normalized magnitude-squared and phase responses of Gabor coefficients, together with two classifiers. Discrimination of four 802.11a Wi-Fi radios achieves an average percent-correct classification of 90% or better at signal-to-noise ratios of 18 dB and 21 dB or greater using Rayleigh fading channels comprised of two and five paths, respectively.
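
    To make the two ingredients this abstract names concrete, here is a toy sketch pairing Nelder-Mead channel estimation (via scipy.optimize.minimize) with single-tap MMSE equalization. It assumes a flat single-path Rayleigh channel for brevity, whereas the paper treats multipath channels; all function names and numeric values are illustrative.

```python
import numpy as np
from scipy.optimize import minimize

def estimate_channel_nelder_mead(tx_pilots, rx_pilots):
    """Estimate a flat Rayleigh fading coefficient h with Nelder-Mead by
    minimizing the least-squares cost ||rx - h * tx||^2 over [Re(h), Im(h)]."""
    def cost(p):
        h = p[0] + 1j * p[1]
        return np.sum(np.abs(rx_pilots - h * tx_pilots) ** 2)
    res = minimize(cost, x0=[1.0, 0.0], method="Nelder-Mead")
    return res.x[0] + 1j * res.x[1]

def mmse_equalize(rx, h, noise_var):
    """Single-tap MMSE equalizer: w = conj(h) / (|h|^2 + noise variance)."""
    return rx * np.conj(h) / (np.abs(h) ** 2 + noise_var)

# Toy usage: QPSK pilots through a random flat Rayleigh channel.
rng = np.random.default_rng(1)
tx = np.exp(1j * (np.pi / 4 + (np.pi / 2) * rng.integers(0, 4, 64)))
h_true = (rng.normal() + 1j * rng.normal()) / np.sqrt(2)  # Rayleigh fading tap
noise = 0.05 * (rng.normal(size=64) + 1j * rng.normal(size=64))
rx = h_true * tx + noise
h_hat = estimate_channel_nelder_mead(tx, rx)
eq = mmse_equalize(rx, h_hat, noise_var=2 * 0.05**2)
```

    In the paper's pipeline, channel correction of this kind is applied to the received signal before the RF-DNA fingerprints are extracted.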

    Performance comparison of blind and non-blind channel equalizers using artificial neural networks

    In digital communication systems, multipath propagation induces Inter-Symbol Interference (ISI). Different channel equalization algorithms are used to reduce the effect of ISI. Complex equalization algorithms achieve the best performance, but they do not meet the requirements of low-complexity real-time detection, which limits their application. In this paper, we present different blind and non-blind equalization structures based on Artificial Neural Networks (ANNs) and analyze their complexity versus performance. Since the choice of activation function at the output layer is tied to the cost function, we use the mean squared error as the loss function for the output layer. The simulated network is a multilayer feedforward perceptron ANN trained with the error back-propagation algorithm, with the weights updated during training to improve convergence speed. Simulation results demonstrate that ANN-based equalizers outperform conventional methods in terms of both performance and computational complexity.
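
    As a concrete illustration of the setup this abstract describes, here is a minimal NumPy sketch of a one-hidden-layer feedforward perceptron equalizer trained by error back-propagation with a mean-squared-error loss. The ISI channel taps, network size, and learning rate are illustrative assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy ISI channel: BPSK symbols through taps [1.0, 0.5, 0.2] plus noise.
n = 5000
sym = rng.choice([-1.0, 1.0], size=n)
rx = np.convolve(sym, [1.0, 0.5, 0.2], mode="same") + 0.1 * rng.normal(size=n)

# Sliding window of received samples -> estimate of the center symbol.
taps = 5
X = np.lib.stride_tricks.sliding_window_view(rx, taps)
y = sym[taps // 2 : taps // 2 + X.shape[0]][:, None]

# One-hidden-layer MLP trained by back-propagation with an MSE loss.
hid, lr = 16, 0.05
W1 = 0.1 * rng.normal(size=(taps, hid)); b1 = np.zeros(hid)
W2 = 0.1 * rng.normal(size=(hid, 1));    b2 = np.zeros(1)
for epoch in range(300):
    h = np.tanh(X @ W1 + b1)               # forward pass, hidden layer
    out = h @ W2 + b2                      # linear output layer
    err = (out - y) / len(X)               # dMSE/dout (up to a factor of 2)
    gW2 = h.T @ err;  gb2 = err.sum(0)     # gradients, output layer
    dh = (err @ W2.T) * (1 - h**2)         # back-propagate through tanh
    gW1 = X.T @ dh;   gb1 = dh.sum(0)      # gradients, hidden layer
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2

ser = np.mean(np.sign(out) != y)           # training-set symbol error rate
print(f"symbol error rate after training: {ser:.3f}")
```

    A blind variant would replace the supervised MSE target with a decision-directed or constant-modulus cost, since in that setting the transmitted symbols are unavailable to the receiver.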