How to Integrate Machine-Learning Probabilistic Output in Integer Linear Programming: a case for RSA
We integrate machine-learning-based QoT estimation into the reach constraints of an integer linear program (ILP) for routing and spectrum assignment (RSA), and develop an iterative solution for QoT-aware RSA. Results show over 30% spectrum savings compared to solving RSA with an ILP that uses traditional margined reach computation.
Dual-Stage Planning for Elastic Optical Networks Integrating Machine-Learning-Assisted QoT Estimation
Following the emergence of Elastic Optical Networks (EONs), Machine Learning (ML) has been intensively investigated as a promising methodology to address complex network management tasks, including, e.g., Quality of Transmission (QoT) estimation, fault management, and automatic adjustment of transmission parameters. Though several ML-based solutions for specific tasks have been proposed, how to integrate the outcome of such ML approaches into Routing and Spectrum Assignment (RSA) models (which address the fundamental planning problem in EONs) is still an open research problem. In this study, we propose a dual-stage iterative RSA optimization framework that incorporates the QoT estimations provided by an ML regressor, used to define lightpaths' reach constraints, into a Mixed Integer Linear Programming (MILP) formulation. The first stage minimizes the overall spectrum occupation, whereas the second stage maximizes the minimum spacing between neighboring channels without increasing the overall spectrum occupation obtained in the first stage. During the second stage, additional interference constraints are generated; these constraints are then added to the MILP at the next iteration to exclude those lightpath combinations that would exhibit unacceptable QoT. Our illustrative numerical results on realistic EON instances show that the proposed ML-assisted framework achieves spectrum occupation savings of up to 52.4% (around 33% on average) in comparison to a traditional MILP-based RSA framework that uses conservative reach constraints based on margined analytical models.
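The reach-constraint idea above can be sketched as follows: a trained QoT regressor's prediction is turned into a maximum admissible path length, which the MILP can then use as a hard constraint. Everything below (the stand-in regressor, the threshold value, the candidate distances) is an illustrative assumption, not the paper's actual model.

```python
def ml_reach(predict_gsnr_db, gsnr_threshold_db, distances_km):
    """Return the longest candidate distance whose predicted GSNR still
    meets the QoT threshold; this becomes the lightpath's reach constraint.

    predict_gsnr_db: stand-in for a trained ML regressor (distance -> GSNR).
    """
    feasible = [d for d in distances_km if predict_gsnr_db(d) >= gsnr_threshold_db]
    return max(feasible) if feasible else None

# Toy stand-in regressor: GSNR decays linearly with distance (illustrative only).
toy_model = lambda d: 30.0 - 0.01 * d

reach_km = ml_reach(toy_model, 20.0, range(100, 3001, 100))  # -> 1000
```

In the paper's iterative scheme, reaches like this constrain the first-stage MILP, and QoT checks on the resulting solution generate the extra interference constraints for the next round.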
Federated-Learning-Assisted Failure-Cause Identification in Microwave Networks
Machine Learning (ML) adoption for automated failure management is becoming pervasive in today's communication networks. However, ML-based failure management typically requires that monitoring data is exchanged between network devices, where data is collected, and centralized locations, e.g., servers in data centers, where data is processed. ML algorithms in this centralized location are then trained to learn mappings between collected data and desired outputs, e.g., whether a failure exists, its cause, location, etc. This paradigm poses several challenges to network operators in terms of privacy as well as in terms of computational and communication resource usage, as a massive amount of sensitive failure data is transmitted over the network. To overcome such limitations, Federated Learning (FL) can be adopted, which consists of training multiple distributed ML models at multiple decentralized locations (called 'clients') using a limited amount of locally collected data, and of sharing these trained models with a centralized location (called the 'server'), where the models are aggregated and shared again with clients. FL reduces data exchange between clients and the server and improves algorithm performance by sharing knowledge among different domains (i.e., clients), leveraging different sources of local information in a collaborative environment. In this paper, we focus on applying FL to perform failure-cause identification in microwave networks. The problem is modeled as a multi-class ML classification problem with six pre-defined failure causes. Specifically, using real failure data from an operational microwave network composed of more than 10000 microwave links, we emulate a multi-operator scenario in which one operator has only partial knowledge of failure causes during the training phase. Numerical results show that, thanks to knowledge sharing, FL achieves up to 72% precision in identifying a failure class unknown to one operator, compared to traditional ML (non-FL) approaches where training is performed without knowledge sharing.
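The server-side aggregation step described above is, in its simplest form, federated averaging (FedAvg): client models are combined as a weighted mean of their parameters. A minimal sketch, assuming for illustration that each client model is a flat list of weights (the paper does not specify the model representation):

```python
def fed_avg(client_weights, client_sizes):
    """FedAvg aggregation: average client parameters, weighted by how
    many local training samples each client used.

    client_weights: one flat parameter list per client.
    client_sizes:   local training-set size per client.
    """
    total = sum(client_sizes)
    n_params = len(client_weights[0])
    return [
        sum(w[i] * n for w, n in zip(client_weights, client_sizes)) / total
        for i in range(n_params)
    ]

# Two clients; the second trained on three times more local data,
# so its parameters dominate the aggregate.
global_model = fed_avg([[1.0, 2.0], [3.0, 4.0]], [1, 3])  # -> [2.5, 3.5]
```

The aggregated model is then redistributed to the clients for the next training round, so raw failure data never leaves an operator's domain.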
To be neutral or not neutral? the in-network caching dilemma
Caching allows Internet Service Providers (ISPs) to reduce network traffic and Content Providers (CPs) to increase the offered QoS. However, when contents are encrypted, effective caching is possible only if ISPs and CPs cooperate. We suggest possible forms of non-discriminatory cooperation that make caching compliant with the principles of Net Neutrality (NN).
Transceivers and Spectrum Usage Minimization in Few-Mode Optical Networks
Metro-area networks are likely to create the right conditions for the deployment of few-mode transmission (FMT), due to limited metro distances and rapidly increasing metro traffic. To address the new network design problems arising with the adoption of FMT, integer linear programming (ILP) formulations have already been developed to optimally assign modulation formats, baud rates, and transmission modes to lightpaths, but these formulations lack scalability, especially when they incorporate accurate constraints to capture inter-modal coupling. In this paper, we propose a heuristic approach for routing, modulation format, baud rate, and spectrum allocation in FMT networks with arbitrary topology. The heuristic accounts for inter-modal coupling and for distance-adaptive reaches of few-mode (specifically, up to five modes) signals generated by either full multiple-input multiple-output (MIMO) or low-complexity MIMO transceivers, and it covers two different switching scenarios (i.e., spatial full-joint and fractional-joint switching). In our illustrative numerical analysis, we first confirm the quasi-optimality of our heuristic by comparing it to the optimal ILP solutions, and then we use our heuristic to identify which switching scenario and FMT transceiver technology minimize spectrum occupation and transceiver costs, depending on the relative costs of transceiver equipment and dark fiber leasing.
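A core building block of RSA heuristics like the one above is distance-adaptive modulation selection followed by first-fit spectrum assignment. A simplified single-mode sketch (the modulation table and reach values are illustrative assumptions; the paper's heuristic additionally models inter-modal coupling, baud rates, and the two switching scenarios):

```python
# Illustrative modulation table: (name, bits per symbol, assumed reach in km).
MODULATIONS = [("16QAM", 4, 500), ("QPSK", 2, 2000), ("BPSK", 1, 4000)]

def select_modulation(path_km):
    """Pick the most spectrally efficient format whose reach covers the path."""
    for name, bits, reach_km in MODULATIONS:
        if path_km <= reach_km:
            return name, bits
    return None  # path too long for any available format

def first_fit(occupied, demand_slots):
    """Return the first starting index of `demand_slots` contiguous free slots."""
    run = 0
    for i, busy in enumerate(occupied):
        run = 0 if busy else run + 1
        if run == demand_slots:
            return i - demand_slots + 1
    return None  # request blocked

mod = select_modulation(1500)                                    # -> ("QPSK", 2)
start = first_fit([True, False, False, True, False, False, False], 3)  # -> 4
```

A higher-efficiency format shortens the spectrum demand but only survives shorter paths, which is exactly the trade-off the heuristic explores per lightpath.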
A Tutorial on Machine Learning for Failure Management in Optical Networks
Failure management plays a critically important role in optical networks to avoid service disruptions and to satisfy customers' service level agreements. Machine learning (ML) promises to revolutionize the (mostly manual and human-driven) approaches with which failure management in optical networks has traditionally been performed, by introducing automated methods for failure prediction, detection, localization, and identification. This tutorial provides a gentle introduction to some ML techniques that have recently been applied in the field of optical-network failure management. It then introduces a taxonomy to classify failure-management tasks and discusses possible applications of ML for these tasks. Finally, for a reader interested in implementation details, we provide a step-by-step description of how to solve a representative example of a practical failure-management task.
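To give a flavor of the failure-detection task in the taxonomy above, one of the simplest data-driven detectors flags monitored values (e.g., received-power or BER readings) that deviate strongly from their nominal statistics. A toy sketch (the nominal statistics, threshold, and data are illustrative, not from the tutorial):

```python
def detect_anomalies(readings, nominal_mean, nominal_std, k=3.0):
    """Flag the indices of readings deviating more than k standard
    deviations from the nominal operating point (a z-score test)."""
    return [
        i for i, r in enumerate(readings)
        if abs(r - nominal_mean) > k * nominal_std
    ]

# Received-power-like deviations in dB; only sample 2 is anomalous.
alarms = detect_anomalies([0.1, -0.2, 5.0, 0.3], 0.0, 0.5)  # -> [2]
```

Detection like this is typically the first stage of a pipeline; localization and cause identification (the other tasks in the taxonomy) then operate on the flagged samples.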
Poster: Continual Network Learning
We make a case for in-network Continual Learning as a solution for seamless adaptation to evolving network conditions without forgetting past experiences. We propose implementing Active Learning-based selective data filtering in the data plane, allowing for data-efficient continual updates. We explore relevant challenges and propose future research directions.
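Active Learning-based selective filtering, as proposed above, typically keeps only the samples the current model is uncertain about. One common uncertainty measure is the entropy of the predicted class distribution; a minimal host-side sketch (the poster targets the data plane, where such a score would have to be approximated with switch-friendly operations; names and the threshold are illustrative):

```python
import math

def entropy(probs):
    """Shannon entropy (in nats) of a predicted class distribution."""
    return -sum(p * math.log(p) for p in probs if p > 0)

def select_for_update(samples, predict_proba, threshold):
    """Keep only samples whose prediction is uncertain enough to learn from."""
    return [x for x in samples if entropy(predict_proba(x)) >= threshold]
```

Confident predictions are dropped at the filter, so only informative traffic samples are forwarded for the continual model update, which is the data-efficiency argument the poster makes.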
Machine learning regression for QoT estimation of unestablished lightpaths
Estimating the quality of transmission (QoT) of a candidate lightpath prior to its establishment is of pivotal importance for effective decision making in resource allocation for optical networks. Several recent studies investigated machine learning (ML) methods to accurately predict whether the configuration of a prospective lightpath satisfies a given threshold on a QoT metric such as the generalized signal-to-noise ratio (GSNR) or the bit error rate. Given a set of features, the GSNR for a given lightpath configuration may still exhibit variations, as it depends on several other factors not captured by the features considered. It follows that the GSNR associated with a lightpath configuration can be modeled as a random variable and thus be characterized by a probability distribution. However, most of the existing approaches attempt to directly answer the question "Is a given lightpath configuration (e.g., with a given modulation format) feasible on a certain path?" but do not consider the additional benefit that estimating the entire statistical distribution of the metric under observation can provide. Hence, in this paper, we investigate how to employ ML regression approaches to estimate the distribution of the received GSNR of unestablished lightpaths. In particular, we discuss and assess the performance of three regression approaches by leveraging synthetic data obtained by means of two different data generation tools. We evaluate the performance of the three proposed approaches on a realistic network topology in terms of root mean squared error and R² score and compare them against a baseline approach that simply predicts the GSNR mean value. Moreover, we provide a cost analysis by attributing penalties to incorrect deployment decisions, and we emphasize the benefits of the proposed estimation approaches from the point of view of a network operator, who can make more informed decisions about lightpath deployment with respect to state-of-the-art QoT classification techniques.
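Once a distribution is estimated rather than a point value, deployment decisions can weigh the probability of meeting the QoT threshold rather than a yes/no classification. A sketch under a Gaussian assumption (the paper evaluates three regression approaches; the closed-form CDF below is just one illustrative way to use a predicted mean and standard deviation):

```python
import math

def feasibility_probability(gsnr_mean_db, gsnr_std_db, threshold_db):
    """P(GSNR >= threshold), assuming the predicted GSNR is Gaussian
    with the given mean and standard deviation (both in dB)."""
    z = (threshold_db - gsnr_mean_db) / (gsnr_std_db * math.sqrt(2.0))
    return 0.5 * (1.0 - math.erf(z))

# A lightpath predicted at 22 +/- 1 dB against a 20 dB threshold.
p_ok = feasibility_probability(22.0, 1.0, 20.0)  # ~0.977
```

An operator can then trade off the penalty of deploying an infeasible lightpath against the cost of over-provisioning, which is exactly the kind of cost analysis the paper performs.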