An Overview on Application of Machine Learning Techniques in Optical Networks
Today's telecommunication networks have become sources of enormous amounts of
widely heterogeneous data. This information can be retrieved from network
traffic traces, network alarms, signal quality indicators, users' behavioral
data, etc. Advanced mathematical tools are required to extract meaningful
information from these data and to make decisions that ensure the proper
functioning of the networks. Among these
mathematical tools, Machine Learning (ML) is regarded as one of the most
promising methodological approaches to perform network-data analysis and enable
automated network self-configuration and fault management. The adoption of ML
techniques in the field of optical communication networks is motivated by the
unprecedented growth of network complexity faced by optical networks in the
last few years. This increase in complexity is due to the introduction of a huge
number of adjustable and interdependent system parameters (e.g., routing
configurations, modulation format, symbol rate, coding schemes, etc.) that are
enabled by the usage of coherent transmission/reception technologies, advanced
digital signal processing and compensation of nonlinear effects in optical
fiber propagation. In this paper we provide an overview of the application of
ML to optical communications and networking. We classify and survey relevant
literature dealing with the topic, and we also provide an introductory tutorial
on ML for researchers and practitioners interested in this field. Although a
good number of research papers have recently appeared, the application of ML to
optical networks is still in its infancy: to stimulate further work in this
area, we conclude the paper by proposing possible new research directions.
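As a concrete illustration of the kind of supervised network-data analysis surveyed in this area, the sketch below (not from the paper; the features, coefficients, and noise levels are invented for illustration) fits a linear model to synthetic monitoring data in order to predict a signal quality indicator from link parameters, using plain NumPy least squares:

```python
import numpy as np

rng = np.random.default_rng(0)

# Invented training set: each row is (link length [km], launch power [dBm],
# number of spans); the target is a monitored quality indicator in dB.
X = rng.uniform([50.0, -2.0, 1.0], [500.0, 3.0, 10.0], size=(200, 3))
true_w = np.array([-0.02, 1.5, -0.8])            # assumed linear ground truth
y = X @ true_w + 30.0 + rng.normal(0.0, 0.5, 200)

# Fit by ordinary least squares on a design matrix with an intercept column.
A = np.hstack([X, np.ones((200, 1))])
w, *_ = np.linalg.lstsq(A, y, rcond=None)

# Predict the quality indicator for an unseen link configuration.
x_new = np.array([300.0, 1.0, 6.0, 1.0])         # 300 km, 1 dBm, 6 spans, bias
print(w.round(2), round(float(x_new @ w), 1))
```

In practice one would substitute real monitored features and, for nonlinear dependencies such as fiber nonlinearities, a more expressive model than a linear fit.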
Machine learning approach for computing optical properties of a photonic crystal fiber
Photonic crystal fibers (PCFs) are specialized optical waveguides that have led to many interesting applications, ranging from nonlinear optical signal processing to high-power fiber amplifiers. In this paper, machine learning techniques are used to compute various optical properties, including effective index, effective mode area, dispersion and confinement loss, for a solid-core PCF. These machine learning algorithms, based on artificial neural networks, are able to make accurate predictions of the above-mentioned optical properties over the usual parameter space of a silica solid-core PCF: wavelength from 0.5 to 1.8 µm, pitch from 0.8 to 2.0 µm, diameter-to-pitch ratio from 0.6 to 0.9, and number of rings equal to 4 or 5. We demonstrate the use of simple and fast-training feed-forward artificial neural networks that predict the output for unknown device parameters faster than conventional numerical simulation techniques. Computation runtimes required with the neural networks (for training and testing) and with Lumerical MODE Solutions are also compared.
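The paper's trained networks and datasets are not reproduced here, but the flavor of such a feed-forward regressor can be sketched as follows: a toy single-hidden-layer network in NumPy, trained by backpropagation on an invented smooth function standing in for an effective-index surface over the parameter ranges quoted above.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy inputs spanning the parameter ranges from the abstract:
# (wavelength [um], pitch [um], diameter-to-pitch ratio).
X = rng.uniform([0.5, 0.8, 0.6], [1.8, 2.0, 0.9], size=(400, 3))
# Invented smooth target standing in for an effective index n_eff.
y = (1.45 - 0.05 * X[:, 0] + 0.02 * X[:, 1] * X[:, 2]).reshape(-1, 1)

# One hidden layer with tanh activation, trained by full-batch gradient descent.
W1 = rng.normal(0.0, 0.5, (3, 16)); b1 = np.zeros(16)
W2 = rng.normal(0.0, 0.5, (16, 1)); b2 = np.zeros(1)
lr = 0.05

def forward(X):
    h = np.tanh(X @ W1 + b1)
    return h, h @ W2 + b2

_, pred = forward(X)
loss0 = float(np.mean((pred - y) ** 2))  # loss before training

for _ in range(2000):
    h, pred = forward(X)
    err = 2.0 * (pred - y) / len(X)      # dMSE/dpred
    dW2 = h.T @ err
    db2 = err.sum(0)
    dh = (err @ W2.T) * (1.0 - h ** 2)   # backpropagate through tanh
    dW1 = X.T @ dh
    db1 = dh.sum(0)
    W2 -= lr * dW2; b2 -= lr * db2
    W1 -= lr * dW1; b1 -= lr * db1

_, pred = forward(X)
loss = float(np.mean((pred - y) ** 2))
print(loss0, loss)
```

Once trained, a forward pass like this costs only a few matrix products per query, which is the source of the speedup over full numerical mode solving that the paper reports.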
Deep Learning Framework for Wireless Systems: Applications to Optical Wireless Communications
Optical wireless communication (OWC) is a promising technology for future
wireless communications owing to its potential for cost-effective network
deployment and high data rates. There are several implementation issues in
OWC which have not been encountered in radio frequency wireless communications.
First, practical OWC transmitters need illumination control over color,
intensity, luminance, etc., which poses complicated modulation design
challenges. Furthermore, signal-dependent properties of optical channels raise
non-trivial challenges both in modulation and demodulation of the optical
signals. To tackle such difficulties, deep learning (DL) technologies can be
applied for optical wireless transceiver design. This article addresses recent
efforts on DL-based OWC system designs. A DL framework for emerging image
sensor communication is proposed and its feasibility is verified by simulation.
Finally, technical challenges and implementation issues for the DL-based
optical wireless technology are discussed.
Comment: To appear in IEEE Communications Magazine, Special Issue on Applications of Artificial Intelligence in Wireless Communication.
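The article's DL transceiver designs are not reproduced here; the sketch below only illustrates the signal-dependent channel property the abstract mentions, using an invented 4-level intensity modulation whose noise variance grows with the transmitted intensity, and a simple data-driven detector (class means estimated from pilot symbols) standing in for a learned demodulator:

```python
import numpy as np

rng = np.random.default_rng(2)

# Invented 4-level intensity modulation (e.g. PAM-like levels on an LED).
levels = np.array([0.0, 1.0, 2.0, 3.0])

def channel(sym):
    # Signal-dependent noise: the variance grows with the transmitted
    # intensity, a rough stand-in for shot-noise-limited optical detection.
    return sym + rng.normal(0.0, 0.1 + 0.1 * sym)

# "Training": estimate the received mean of each level from labelled pilots.
means = np.array([np.mean([channel(l) for _ in range(500)]) for l in levels])

# Demodulate unseen symbols by nearest learned mean.
tx = rng.integers(0, 4, 2000)
rx = np.array([channel(levels[s]) for s in tx])
detected = np.argmin(np.abs(rx[:, None] - means[None, :]), axis=1)
accuracy = float(np.mean(detected == tx))
print(accuracy)
```

Because the noise is stronger at high intensity, the brightest levels dominate the error rate; a learned demodulator can place decision boundaries asymmetrically to account for this, which fixed uniform thresholds cannot.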
A Very Brief Introduction to Machine Learning With Applications to Communication Systems
Given the unprecedented availability of data and computing resources, there
is widespread renewed interest in applying data-driven machine learning methods
to problems for which the development of conventional engineering solutions is
challenged by modelling or algorithmic deficiencies. This tutorial-style paper
starts by addressing the questions of why and when such techniques can be
useful. It then provides a high-level introduction to the basics of supervised
and unsupervised learning. For both supervised and unsupervised learning,
exemplifying applications to communication networks are discussed by
distinguishing tasks carried out at the edge and at the cloud segments of the
network at different layers of the protocol stack.
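To make the supervised/unsupervised distinction concrete (this example is ours, not the paper's): an unsupervised task such as grouping unlabeled network measurements can be handled with a few lines of k-means clustering, here on invented 2-D data:

```python
import numpy as np

rng = np.random.default_rng(3)

# Two invented groups of unlabeled 2-D measurements (e.g. traffic features).
X = np.vstack([rng.normal([0.0, 0.0], 0.5, (100, 2)),
               rng.normal([4.0, 4.0], 0.5, (100, 2))])

# Plain k-means with k=2: alternate nearest-centroid assignment and
# centroid update; initialized deterministically with one point from each half.
centroids = X[[0, 150]].copy()
for _ in range(20):
    d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
    labels = d.argmin(axis=1)
    centroids = np.array([X[labels == k].mean(axis=0) for k in range(2)])

print(np.sort(centroids[:, 0]).round(1))
```

A supervised counterpart would instead use labelled pairs (features, class) to fit a classifier; the tutorial treats both settings and where in the network (edge vs. cloud) each fits.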
The future of computing beyond Moore's Law.
Moore's Law is a techno-economic model that has enabled the information technology industry to double the performance and functionality of digital electronics roughly every 2 years within a fixed cost, power and area. Advances in silicon lithography have enabled this exponential miniaturization of electronics, but, as transistors reach atomic scale and fabrication costs continue to rise, the classical technological driver that has underpinned Moore's Law for 50 years is failing and is anticipated to flatten by 2025. This article provides an updated view of what a post-exascale system will look like and the challenges ahead, based on our most recent understanding of technology roadmaps. It also discusses the tapering of historical improvements, and how it affects options available to continue scaling of successors to the first exascale machine. Lastly, this article covers the many different opportunities and strategies available to continue computing performance improvements in the absence of historical technology drivers. This article is part of a discussion meeting issue 'Numerical algorithms for high-performance computational science'.