3 research outputs found

    Photonic machine learning implementation for signal recovery in optical communications

    Machine learning techniques have proven very efficient in assorted classification tasks. Nevertheless, processing high-speed, time-dependent signals can be extremely challenging, especially when those signals have been nonlinearly distorted. Recently, analogue hardware concepts that exploit nonlinear transient responses have gained significant interest for fast information processing. Here, we introduce a simplified photonic reservoir computing scheme for classifying severely distorted optical communication signals after extended fibre transmission. To this end, we convert the direct bit-detection process into a pattern recognition problem. Using an experimental implementation of our photonic reservoir computer, we demonstrate an improvement in bit error rate of two orders of magnitude compared with directly classifying the transmitted signal, corresponding to an extension of the communication range by over 75%. While we do not yet reach full real-time post-processing at telecom rates, we discuss how future designs might close the gap.
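    The key simplification in reservoir computing is that only a linear readout is trained, while the nonlinear reservoir itself stays fixed. As a rough software illustration of that principle (not the authors' photonic hardware), the sketch below classifies bits from a synthetic, nonlinearly distorted stream using an echo-state-style reservoir with a ridge-regression readout; all signal and reservoir parameters here are invented for the demo:

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Synthetic stand-in for a nonlinearly distorted bit stream ---
# (invented: real fibre distortion is far more complex)
n_bits = 2000
bits = rng.integers(0, 2, n_bits)
x = bits.astype(float)
isi = 0.6 * x + 0.3 * np.roll(x, 1) + 0.2 * np.roll(x, 2)  # inter-symbol interference
distorted = np.tanh(2.0 * isi) + 0.05 * rng.standard_normal(n_bits)

# --- Fixed random reservoir: a recurrent network of nonlinear nodes ---
n_res = 50
W_in = rng.uniform(-0.5, 0.5, n_res)
W = rng.standard_normal((n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # spectral radius below 1

states = np.zeros((n_bits, n_res))
s = np.zeros(n_res)
for t in range(n_bits):
    s = np.tanh(W @ s + W_in * distorted[t])     # nonlinear transient response
    states[t] = s

# --- Only the linear readout is trained (ridge regression) ---
train = n_bits // 2
X, y = states[:train], bits[:train]
w = np.linalg.solve(X.T @ X + 1e-3 * np.eye(n_res), X.T @ y)
pred = (states[train:] @ w > 0.5).astype(int)
ber = np.mean(pred != bits[train:])
```

    Because the reservoir carries memory of past inputs, the readout can undo inter-symbol interference that a direct threshold on `distorted` cannot.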

    Reducing Errors in Optical Data Transmission Using Trainable Machine Learning Methods

    Reducing the bit error ratio (BER) and improving the performance of modern coherent optical communication systems is a significant issue: as the distance travelled by the information signal increases, the bit error ratio degrades. Machine learning (ML) techniques have been used in applications associated with optical communication systems; the most common are artificial neural networks, Bayesian analysis, and support vector machines (SVMs). This thesis investigates how to improve the bit error ratio in optical data transmission using a trainable ML method, namely a Support Vector Machine. The SVM is a successful machine learning method for pattern recognition, and it outperformed the conventional threshold method based on measuring the phase value of each symbol's central sample. So that the described system can be implemented in hardware, this thesis focuses on SVMs with a linear kernel, because a linear separator is easier to build in hardware at the high speed required of the decoder. Using an SVM to reduce the bit error ratio of signals travelling over various distances has been investigated thoroughly; in particular, attention has been paid to using the neighbouring information of each symbol being decoded. To further improve the bit error ratio, the wavelet transform (WT) was employed to reduce the noise of distorted optical signals; however, it did not bring the improvements its proponents suggest. The most significant improvement over the current threshold method comes from using a number of neighbours on either side of the symbol being decoded, which works much better than using more information from the symbol itself.
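    The central finding, that neighbouring samples help more than extra information from the symbol itself, can be illustrated with a minimal numpy sketch. A hinge-loss linear separator stands in for the thesis's linear-kernel SVM, and the inter-symbol-interference channel below is invented for the demo; the comparison is central-sample-only features versus features that add two neighbours on either side:

```python
import numpy as np

rng = np.random.default_rng(1)

# --- Invented ISI channel: each received sample mixes neighbouring symbols ---
n = 4000
bits = rng.integers(0, 2, n)
x = bits.astype(float)
rx = 0.6 * x + 0.25 * np.roll(x, 1) + 0.15 * np.roll(x, -1)
rx += 0.1 * rng.standard_normal(n)

def features(signal, k):
    """Central sample plus k neighbours on either side, plus a bias column."""
    cols = [np.roll(signal, s) for s in range(-k, k + 1)]
    return np.column_stack(cols + [np.ones(len(signal))])

def train_linear(X, y, epochs=500, lr=0.1, lam=1e-3):
    """Linear separator trained on the hinge loss (stands in for a linear-kernel SVM)."""
    t = 2.0 * y - 1.0                      # labels in {-1, +1}
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        active = t * (X @ w) < 1.0         # samples violating the margin
        grad = lam * w - (X[active] * t[active, None]).sum(axis=0) / len(X)
        w -= lr * grad
    return w

train = n // 2
def ber_for(k):
    X = features(rx, k)
    w = train_linear(X[:train], bits[:train])
    pred = (X[train:] @ w > 0).astype(int)
    return np.mean(pred != bits[train:])

ber_central = ber_for(0)   # decode from the central sample alone
ber_neigh = ber_for(2)     # add two neighbours on either side
```

    With neighbour features the linear separator acts as a short equalizer, cancelling much of the interference that makes central-sample thresholding ambiguous.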

    Correcting Errors in Optical Data Transmission Using Neural Networks

    The original publication is available at www.springerlink.com. Copyright Springer.
    Optical data communication systems are prone to a variety of processes that modify the transmitted signal and introduce errors in distinguishing 1s from 0s. This is a difficult, and commercially important, problem to solve. Errors must be detected and corrected at high speed, and the classifier must be very accurate; ideally it should also be tunable to the characteristics of individual communication links. We show that simple single-layer neural networks may be used to address these problems, and examine how different input representations affect the accuracy of bit error correction. Our results lead us to conclude that a system based on these principles can perform at least as well as an existing non-trainable error correction system, whilst being tunable to suit the individual characteristics of different communication links. Peer reviewed.
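    A single-layer network here means a single trainable unit mapping an input representation of the received waveform to a bit decision. The sketch below is a hedged illustration of that idea, not the paper's system: one sigmoid unit trained by gradient descent decodes bits from an invented oversampled, low-pass-distorted waveform, with the "input representation" simply being the four samples spanning each bit:

```python
import numpy as np

rng = np.random.default_rng(2)

# --- Invented waveform: 4 samples per bit, low-pass distortion plus noise ---
n_bits, spb = 1500, 4
bits = rng.integers(0, 2, n_bits)
wave = np.repeat(bits, spb).astype(float)
kernel = np.array([0.1, 0.2, 0.4, 0.2, 0.1])   # smears each bit into its neighbours
wave = np.convolve(wave, kernel, mode="same")
wave += 0.15 * rng.standard_normal(len(wave))

# Input representation: the four samples spanning each bit, plus a bias input
X = wave.reshape(n_bits, spb)
X = np.hstack([X, np.ones((n_bits, 1))])

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# --- Single-layer network: one sigmoid unit trained by gradient descent ---
train = n_bits // 2
w = np.zeros(X.shape[1])
for _ in range(500):
    p = sigmoid(X[:train] @ w)
    w -= 0.5 * X[:train].T @ (p - bits[:train]) / train

pred = (sigmoid(X[train:] @ w) > 0.5).astype(int)
ber = np.mean(pred != bits[train:])
```

    Changing how `X` is built (more samples per bit, neighbouring bits included, or derived statistics) is the kind of input-representation choice the paper examines; the learned weights make the decoder tunable to a given link.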