An Overview on Application of Machine Learning Techniques in Optical Networks
Today's telecommunication networks have become sources of enormous amounts of
widely heterogeneous data. This information can be retrieved from network
traffic traces, network alarms, signal quality indicators, users' behavioral
data, etc. Advanced mathematical tools are required to extract meaningful
information from these data and to make decisions about the proper
functioning of the networks. Among these
mathematical tools, Machine Learning (ML) is regarded as one of the most
promising methodological approaches to perform network-data analysis and enable
automated network self-configuration and fault management. The adoption of ML
techniques in the field of optical communication networks is motivated by the
unprecedented growth in complexity that optical networks have experienced in
recent years. This increase in complexity is due to the introduction of a huge
number of adjustable and interdependent system parameters (e.g., routing
configurations, modulation format, symbol rate, coding schemes, etc.) that are
enabled by the usage of coherent transmission/reception technologies, advanced
digital signal processing and compensation of nonlinear effects in optical
fiber propagation. In this paper we provide an overview of the application of
ML to optical communications and networking. We classify and survey relevant
literature dealing with the topic, and we also provide an introductory tutorial
on ML for researchers and practitioners interested in this field. Although a
good number of research papers have recently appeared, the application of ML to
optical networks is still in its infancy: to stimulate further work in this
area, we conclude the paper by proposing possible new research directions.
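
As a toy illustration of the kind of network-data analysis covered by the
survey, the sketch below (Python, scikit-learn) trains a classifier to predict
whether a candidate lightpath meets a bit-error-rate target from a few
configuration parameters (modulation format, symbol rate, path length). The
features, the synthetic data, and the BER rule are hypothetical placeholders,
not the models or data of the surveyed papers.

# Minimal sketch: supervised classification of lightpath quality of transmission.
# Features (modulation order, symbol rate, path length, span count) and labels
# are synthetic placeholders, not data from the surveyed papers.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
n = 2000

# Hypothetical lightpath descriptors.
modulation_order = rng.choice([2, 4, 16, 64], size=n)    # bits/symbol proxy
symbol_rate_gbd  = rng.uniform(28, 64, size=n)           # GBd
path_length_km   = rng.uniform(80, 2000, size=n)
num_spans        = (path_length_km // 80).astype(int)

X = np.column_stack([modulation_order, symbol_rate_gbd, path_length_km, num_spans])

# Toy ground truth: higher-order formats over long paths tend to violate the BER target.
margin = 30.0 - 0.01 * path_length_km - 2.0 * np.log2(modulation_order) + rng.normal(0, 1.5, n)
y = (margin > 0).astype(int)   # 1 = BER target met, 0 = violated

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)
print("toy QoT accuracy:", accuracy_score(y_test, clf.predict(X_test)))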
A Survey on Compiler Autotuning using Machine Learning
Since the mid-1990s, researchers have been trying to use machine-learning-based
approaches to solve a number of different compiler optimization problems.
These techniques primarily enhance the quality of the obtained results and,
more importantly, make it feasible to tackle two main compiler optimization
problems: optimization selection (choosing which optimizations to apply) and
phase-ordering (choosing the order of applying optimizations). The compiler
optimization space continues to grow due to the advancement of applications,
the increasing number of compiler optimizations, and new target architectures.
Generic optimization passes in compilers cannot fully leverage newly introduced
optimizations and, therefore, cannot keep up with the pace of increasing
options. This survey summarizes and classifies the recent advances in using
machine learning for the compiler optimization field, particularly on the two
major problems of (1) selecting the best optimizations and (2) the
phase-ordering of optimizations. The survey highlights the approaches taken so
far, the obtained results, the fine-grain classification among different
approaches and, finally, the influential papers of the field.
Comment: version 5.0 (updated September 2018). Preprint version of the survey
accepted at ACM CSUR 2018 (42 pages). This survey will be updated quarterly
here (send me your newly published papers to be added in the subsequent
version). History: Received November 2016; Revised August 2017; Revised
February 2018; Accepted March 2018.
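
To make the optimization-selection problem concrete, the sketch below shows one
simple form it can take: a classifier trained on static program features that
decides, per function, whether enabling a given optimization pass is likely to
pay off. The feature names, the pass, and the labels are synthetic
placeholders, not the feature sets used in the surveyed work.

# Minimal sketch: learning to select a compiler optimization from static code features.
# The features (instruction count, loop count, ...) and the labels (whether the
# pass helped) are synthetic stand-ins for an autotuning data set.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(1)
n = 500

instr_count   = rng.integers(50, 5000, size=n)
loop_count    = rng.integers(0, 20, size=n)
mem_ops_ratio = rng.uniform(0.0, 0.6, size=n)

X = np.column_stack([instr_count, loop_count, mem_ops_ratio])

# Toy label: a hypothetical vectorization-like pass helps mostly on
# loop-heavy, memory-light functions.
helps = (loop_count > 3) & (mem_ops_ratio < 0.4)
y = helps.astype(int)

model = DecisionTreeClassifier(max_depth=4, random_state=0).fit(X, y)

# At compile time the learned model would gate the pass per function:
new_function = np.array([[1200, 6, 0.2]])
print("enable pass:", bool(model.predict(new_function)[0]))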
Machine Learning at the Edge: A Data-Driven Architecture with Applications to 5G Cellular Networks
The fifth generation of cellular networks (5G) will rely on edge cloud
deployments to satisfy the ultra-low latency demand of future applications. In
this paper, we argue that such deployments can also be used to enable advanced
data-driven and Machine Learning (ML) applications in mobile networks. We
propose an edge-controller-based architecture for cellular networks and
evaluate its performance with real data from hundreds of base stations of a
major U.S. operator. In this regard, we will provide insights on how to
dynamically cluster and associate base stations and controllers, according to
the global mobility patterns of the users. Then, we will describe how the
controllers can be used to run ML algorithms to predict the number of users in
each base station, and a use case in which these predictions are exploited by a
higher-layer application to route vehicular traffic according to network Key
Performance Indicators (KPIs). We show that prediction accuracy improves when
the machine learning algorithms rely on the controllers' view, and consequently
on the spatial correlation introduced by user mobility, compared with
predictions based only on the local data of each single base station.
Comment: 15 pages, 10 figures, 5 tables. IEEE Transactions on Mobile Computing.
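
As a toy illustration of the controller-side prediction described above, the
sketch below compares forecasting a cell's next user count from its own history
alone with forecasting it from the controller's cluster-wide view. The
synthetic traffic traces, the three-cell cluster, and the linear models are
invented for illustration and do not reproduce the paper's data or algorithms.

# Minimal sketch: predicting the next-step user count of a base station
# (i) from its own history only, vs (ii) from the controller's view of the
# whole cluster. Data and model choice are illustrative placeholders.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(2)
T, n_cells = 500, 3

# Synthetic correlated user counts: users drift between neighboring cells.
base = 100 + 30 * np.sin(np.linspace(0, 20, T))[:, None]
counts = base + rng.normal(0, 5, size=(T, n_cells))
counts[:, 1] += 0.5 * np.roll(counts[:, 0], 1)   # cell 1 lags cell 0

target_cell = 1
X_local   = counts[:-1, [target_cell]]      # only the cell's own past value
X_cluster = counts[:-1, :]                  # all cells seen by the controller
y = counts[1:, target_cell]

split = int(0.8 * T)
for name, X in [("local", X_local), ("cluster", X_cluster)]:
    model = LinearRegression().fit(X[:split], y[:split])
    err = np.mean(np.abs(model.predict(X[split:]) - y[split:]))
    print(f"{name} view: mean abs error = {err:.2f} users")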