
    Optimal WDM Power Allocation via Deep Learning for Radio on Free Space Optics Systems

    Radio on Free Space Optics (RoFSO), as a universal platform for heterogeneous wireless services, is able to transmit multiple radio frequency signals at high rates in free space optical networks. This paper investigates the optimal design of power allocation for Wavelength Division Multiplexing (WDM) transmission in RoFSO systems. The proposed problem is a weighted total capacity maximization problem with two constraints: a total power limitation and an eye-safety concern. The model-based Stochastic Dual Gradient algorithm is presented first, which solves the problem exactly by exploiting the null duality gap. The model-free Primal-Dual Deep Learning algorithm is then developed to learn and optimize the power allocation policy with Deep Neural Network (DNN) parametrization, which can be utilized without any knowledge of system models. Numerical simulations demonstrate the significant performance gains of our algorithms over average equal power allocation.
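The model-based part of the approach described above can be illustrated with a small sketch: maximize a weighted sum capacity subject to a total power budget and a per-channel eye-safety cap, solved by dual (sub)gradient ascent on the budget constraint. The channel gains, weights, and power limits below are illustrative assumptions, not values from the paper, and the model-free DNN variant is not reproduced here.

```python
import numpy as np

def dual_gradient_allocation(w, g, p_total, p_eye, steps=2000, lr=0.05):
    """Maximize sum_i w_i*log2(1 + g_i*p_i) subject to
    sum_i p_i <= p_total and 0 <= p_i <= p_eye (eye-safety cap)."""
    lam = 1.0  # dual variable for the total-power constraint
    for _ in range(steps):
        # The primal maximizer of the Lagrangian has a water-filling
        # form, clipped to the per-channel eye-safety limit.
        p = np.clip(w / max(lam, 1e-9) - 1.0 / g, 0.0, p_eye)
        # Dual subgradient ascent on the power-budget violation.
        lam = max(lam + lr * (p.sum() - p_total), 1e-9)
    return p

def weighted_capacity(w, g, p):
    return float(np.sum(w * np.log2(1.0 + g * p)))

w = np.array([1.0, 2.0, 1.5, 0.5])   # per-channel weights (assumed)
g = np.array([4.0, 1.0, 2.0, 8.0])   # channel gains (assumed)
p = dual_gradient_allocation(w, g, p_total=4.0, p_eye=2.0)
equal = np.full(4, 1.0)              # equal split of the same budget
print("dual-gradient capacity:", round(weighted_capacity(w, g, p), 3))
print("equal-allocation capacity:", round(weighted_capacity(w, g, equal), 3))
```

Because the dual map is a contraction here, the multiplier settles at the value where the budget is met with equality, recovering the water-filling solution that the equal split cannot match.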

    An Overview on Application of Machine Learning Techniques in Optical Networks

    Today's telecommunication networks have become sources of enormous amounts of widely heterogeneous data. This information can be retrieved from network traffic traces, network alarms, signal quality indicators, users' behavioral data, etc. Advanced mathematical tools are required to extract meaningful information from these data and to take decisions pertaining to the proper functioning of the networks. Among these mathematical tools, Machine Learning (ML) is regarded as one of the most promising methodological approaches to perform network-data analysis and enable automated network self-configuration and fault management. The adoption of ML techniques in the field of optical communication networks is motivated by the unprecedented growth of network complexity faced by optical networks in the last few years. This complexity increase is due to the introduction of a huge number of adjustable and interdependent system parameters (e.g., routing configurations, modulation format, symbol rate, coding schemes, etc.) enabled by the usage of coherent transmission/reception technologies, advanced digital signal processing, and compensation of nonlinear effects in optical fiber propagation. In this paper we provide an overview of the application of ML to optical communications and networking. We classify and survey relevant literature dealing with the topic, and we also provide an introductory tutorial on ML for researchers and practitioners interested in this field. Although a good number of research papers have recently appeared, the application of ML to optical networks is still in its infancy: to stimulate further work in this area, we conclude the paper by proposing possible new research directions.

    Advanced Technique and Future Perspective for Next Generation Optical Fiber Communications

    The optical fiber communication industry has gained unprecedented opportunities and achieved rapid progress in recent years. However, as data transmission volumes and transmission demands continue to grow, the field still needs to be upgraded to meet future challenges. Artificial intelligence in optical communications and optical networking is still in its infancy, but existing achievements show great application potential. As artificial intelligence technology develops further, AI algorithms that combine channel characteristics with physical properties are expected to play a prominent role in optical communication. This reprint introduces recent advances in optical fiber communication and optical networking, and suggests directions for the development of next-generation optical fiber communication technology.

    Principles of Neuromorphic Photonics

    In an age overrun with information, the ability to process reams of data has become crucial. The demand for data will continue to grow as smart gadgets multiply and become increasingly integrated into our daily lives. Next-generation industries in artificial intelligence services and high-performance computing are so far supported by microelectronic platforms. These data-intensive enterprises rely on continual improvements in hardware. Their prospects are running up against a stark reality: conventional one-size-fits-all solutions offered by digital electronics can no longer satisfy this need, as Moore's law (exponential hardware scaling), interconnection density, and the von Neumann architecture reach their limits. With its superior speed and reconfigurability, analog photonics can provide some relief to these problems; however, complex applications of analog photonics have remained largely unexplored due to the absence of a robust photonic integration industry. Recently, the landscape for commercially-manufacturable photonic chips has been changing rapidly and now promises to achieve economies of scale previously enjoyed solely by microelectronics. The scientific community has set out to build bridges between the domains of photonic device physics and neural networks, giving rise to the field of neuromorphic photonics. This article reviews the recent progress in integrated neuromorphic photonics. We provide an overview of neuromorphic computing, discuss the associated technology (microelectronic and photonic) platforms and compare their metric performance. We discuss photonic neural network approaches and challenges for integrated neuromorphic photonic processors while providing an in-depth description of photonic neurons and a candidate interconnection architecture. We conclude with a future outlook of neuro-inspired photonic processing. Comment: 28 pages, 19 figures.

    Telecommunication Systems

    This book draws on both industrial and academic research efforts, presenting a number of recent advancements and rare insights into telecommunication systems. The volume is organized into four parts: "Telecommunication Protocol, Optimization, and Security Frameworks", "Next-Generation Optical Access Technologies", "Convergence of Wireless-Optical Networks" and "Advanced Relay and Antenna Systems for Smart Networks." Chapters within these parts are self-contained and cross-referenced to facilitate further study.

    Application of Machine Learning in Optical Communication Technology

    Due to increasing data traffic, optical networks are expected to operate at higher system capacities in the future. To this end, coherent transmission is employed, for example, which allows the modulation format to be increased but requires a larger SNR. To achieve this, the optical signal power is raised, which in turn exposes the data transmission to nonlinear impairments. The focus of this work is the development of machine learning models that respond to this nonlinear signal degradation. A Support Vector Machine (SVM) is implemented and used as a classifying decision machine. The results show that the SVM enables improved compensation of both nonlinear fiber effects and distortions introduced by optical system components. The principle of Elastic Optical Networks (EONs) offers a technology for efficiently using the resources provided by the optical fiber. A key element of this technology is the bandwidth-variable transponder, which allows, for example, the modulation format or coding scheme to be adapted to the current link conditions. To ensure optimal resource utilization, the use of Reinforcement Learning (RL) algorithms is investigated. The results show that the RL algorithm is able to adapt to unknown link conditions, whereas comparable heuristic approaches such as the genetic algorithm must be retrained for each scenario.
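The use of an SVM as a classifying decision machine for received symbols can be sketched minimally as follows. This toy example trains a linear soft-margin SVM with the Pegasos subgradient method on two noisy constellation points under AWGN; the constellation, noise level, and training parameters are illustrative assumptions, and the fiber nonlinearities studied in the thesis are not modeled here.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_symbols(n=400, noise=0.3):
    """Two transmitted symbols at (-1,-1) and (+1,+1) with AWGN."""
    y = rng.choice([-1, 1], size=n)
    centers = np.stack([y, y], axis=1).astype(float)
    x = centers + noise * rng.standard_normal((n, 2))
    return x, y

def pegasos_svm(x, y, lam=0.01, epochs=20):
    """Stochastic subgradient descent on the regularized hinge loss."""
    w = np.zeros(x.shape[1])
    t = 0
    for _ in range(epochs):
        for i in rng.permutation(len(y)):
            t += 1
            eta = 1.0 / (lam * t)
            if y[i] * (x[i] @ w) < 1.0:          # margin violated
                w = (1 - eta * lam) * w + eta * y[i] * x[i]
            else:                                 # margin satisfied
                w = (1 - eta * lam) * w
    return w

x, y = make_symbols()
w = pegasos_svm(x, y)
pred = np.sign(x @ w)
print("training accuracy:", np.mean(pred == y))
```

A kernelized SVM, as would be needed for the strongly nonlinear distortions discussed in the thesis, replaces the inner product with a kernel evaluation but follows the same decision-machine idea.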