
    Reducing probes for quality of transmission estimation in optical networks with active learning

    Estimating the quality of transmission (QoT) of a lightpath before its establishment is a critical procedure for efficient design and management of optical networks. Recently, supervised machine learning (ML) techniques for QoT estimation have been proposed as an effective alternative to well-established, yet approximate, analytic models that often require the introduction of conservative margins to compensate for model inaccuracies and uncertainties. Unfortunately, to ensure high estimation accuracy, the training set (i.e., the set of historical field data, or "samples," required to train these supervised ML algorithms) must be very large, while in real network deployments the number of monitored/monitorable lightpaths is limited by several practical considerations. This is especially true for lightpaths with an above-threshold bit error rate (BER) (i.e., malfunctioning or wrongly dimensioned lightpaths), which are infrequently observed during network operation. Samples with above-threshold BERs can be acquired by deploying probe lightpaths, but at the cost of increased operational expenditures and wastage of spectral resources. In this paper, we propose to use active learning to reduce the number of probes needed for ML-based QoT estimation. We build an estimation model based on Gaussian processes, which allows iterative identification of the QoT instances that minimize estimation uncertainty. Numerical results using synthetically generated datasets show that, by using the proposed active learning approach, we can achieve the same performance as standard offline supervised ML methods, but with a remarkable reduction (at least 5% and up to 75%) in the number of training samples.
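
    As a rough illustration of the uncertainty-driven selection idea described in this abstract, the sketch below runs a Gaussian process regressor (scikit-learn) in a simple active-learning loop that repeatedly "probes" the pool instance with the highest predictive standard deviation. The features, the synthetic target, and the pool sizes are illustrative assumptions, not the paper's dataset or exact model.

```python
# Minimal sketch of uncertainty-based active learning with a Gaussian process.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)

# Hypothetical lightpath features (e.g., length, number of spans, launch power)
# and a noisy BER-like target.
X_pool = rng.uniform(0.0, 1.0, size=(500, 3))
y_pool = np.sin(3 * X_pool[:, 0]) + 0.5 * X_pool[:, 1] + 0.05 * rng.normal(size=500)

labeled = list(rng.choice(len(X_pool), size=10, replace=False))  # initial probes
unlabeled = [i for i in range(len(X_pool)) if i not in labeled]

gpr = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)

for _ in range(30):  # each iteration deploys one additional "probe"
    gpr.fit(X_pool[labeled], y_pool[labeled])
    # Predictive standard deviation as the uncertainty measure.
    _, std = gpr.predict(X_pool[unlabeled], return_std=True)
    pick = unlabeled[int(np.argmax(std))]  # query the most uncertain instance
    labeled.append(pick)
    unlabeled.remove(pick)

print(f"Training set size after active learning: {len(labeled)}")
```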

    Comparison of domain adaptation and active learning techniques for quality of transmission estimation with small-sized training datasets [Invited]

    Machine learning (ML) is currently being investigated as an emerging technique to automate quality of transmission (QoT) estimation during lightpath deployment procedures in optical networks. Even though the potential network-resource savings enabled by ML-based QoT estimation have been confirmed in several studies, some practical limitations hinder its adoption in operational network deployments. Among these, the lack of a comprehensive training dataset is recognized as a main limiting factor, especially in the early network deployment phase. In this study, we compare the performance of two ML methodologies explicitly designed to augment small-sized training datasets, namely, active learning (AL) and domain adaptation (DA), for the estimation of the signal-to-noise ratio (SNR) of an unestablished lightpath. This comparison also allows us to provide some guidelines for the adoption of these two techniques at different life stages of a newly deployed optical network infrastructure. Results show that both AL and DA allow us, starting from limited datasets, to reach a QoT estimation capability similar to that achieved by standard supervised learning approaches working on much larger datasets. More specifically, we observe that a few dozen additional samples acquired from selected probe lightpaths already provide a significant performance improvement for AL, whereas a few hundred samples gathered from an external network topology are needed in the case of DA.
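
    The guideline in this abstract, that tens versus hundreds of samples matter at different network life stages, is essentially a learning-curve argument. The hedged sketch below shows how such a curve can be produced for a generic SNR regressor; the data generator, model, and sample sizes are illustrative assumptions rather than the study's setup.

```python
# Learning curve: estimation error as a function of training-set size.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(1)

# Synthetic lightpath features -> SNR (dB); purely illustrative.
X = rng.uniform(0.0, 1.0, size=(3000, 3))
snr = 20 - 8 * X[:, 0] - 3 * X[:, 1] + 0.3 * rng.normal(size=3000)
X_train, y_train = X[:2000], snr[:2000]
X_test, y_test = X[2000:], snr[2000:]

for n in (20, 50, 100, 500, 2000):
    model = RandomForestRegressor(n_estimators=100, random_state=0)
    model.fit(X_train[:n], y_train[:n])
    rmse = np.sqrt(mean_squared_error(y_test, model.predict(X_test)))
    print(f"n = {n:4d} training samples -> RMSE = {rmse:.2f} dB")
```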

    Machine learning regression for QoT estimation of unestablished lightpaths

    Estimating the quality of transmission (QoT) of a candidate lightpath prior to its establishment is of pivotal importance for effective decision making in resource allocation for optical networks. Several recent studies investigated machine learning (ML) methods to accurately predict whether the configuration of a prospective lightpath satisfies a given threshold on a QoT metric such as the generalized signal-to-noise ratio (GSNR) or the bit error rate. Given a set of features, the GSNR for a given lightpath configuration may still exhibit variations, as it depends on several other factors not captured by the features considered. It follows that the GSNR associated with a lightpath configuration can be modeled as a random variable and thus be characterized by a probability distribution function. However, most of the existing approaches attempt to directly answer the question "is a given lightpath configuration (e.g., with a given modulation format) feasible on a certain path?" but do not consider the additional benefit that estimating the entire statistical distribution of the metric under observation can provide. Hence, in this paper, we investigate how to employ ML regression approaches to estimate the distribution of the received GSNR of unestablished lightpaths. In particular, we discuss and assess the performance of three regression approaches by leveraging synthetic data obtained by means of two different data generation tools. We evaluate the performance of the three proposed approaches on a realistic network topology in terms of root mean squared error and R² score and compare them against a baseline approach that simply predicts the GSNR mean value. Moreover, we provide a cost analysis by attributing penalties to incorrect deployment decisions and emphasize the benefits of leveraging the proposed estimation approaches from the point of view of a network operator, who can make more informed decisions about lightpath deployment than with state-of-the-art QoT classification techniques.
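
    One possible way to estimate a distribution rather than a single GSNR value, offered here only as a hedged illustration (the paper's three regression approaches are not reproduced), is quantile regression with gradient boosting: separate models predict the lower, median, and upper quantiles, yielding a per-lightpath prediction interval. All features and data below are synthetic assumptions.

```python
# Quantile-regression sketch for per-lightpath GSNR prediction intervals.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(2)

# Illustrative features (e.g., path length, number of links, spectral load)
# and a GSNR whose noise level depends on the third feature.
X = rng.uniform(0.0, 1.0, size=(2000, 3))
gsnr = 22 - 6 * X[:, 0] - 2 * X[:, 1] + rng.normal(scale=0.5 + X[:, 2], size=2000)

X_train, X_test = X[:1500], X[1500:]
y_train, y_test = gsnr[:1500], gsnr[1500:]

# One gradient-boosting model per quantile of interest.
quantiles = (0.05, 0.50, 0.95)
models = {q: GradientBoostingRegressor(loss="quantile", alpha=q).fit(X_train, y_train)
          for q in quantiles}

lo = models[0.05].predict(X_test)
hi = models[0.95].predict(X_test)
coverage = np.mean((y_test >= lo) & (y_test <= hi))
print(f"Empirical coverage of the 5%-95% prediction band: {coverage:.2%}")
```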

    On the benefits of domain adaptation techniques for quality of transmission estimation in optical networks

    Machine learning (ML) is increasingly applied in optical network management, especially in cross-layer frameworks where physical layer characteristics may trigger changes at the network layer due to transmission performance measurements (quality of transmission, QoT) monitored by optical equipment. Leveraging ML-based QoT estimation approaches has proven to be a promising alternative to exploiting classical mathematical methods or transmission simulation tools. However, supervised ML models rely on large representative training sets, which are often unavailable due to the lack of the necessary telemetry equipment or of historical data. In such cases, it can be useful to use training data collected from a different network. Unfortunately, the resulting models may be ineffective when applied to the current network if the training data (the source domain) are not well representative of the network under study (the target domain). Domain adaptation (DA) techniques aim to tackle this issue by making it possible to transfer knowledge between different networks. This paper compares several DA approaches applied to the problem of estimating the QoT of an optical lightpath using a supervised ML approach. Results show that, when the number of samples from the target domain is limited to a few dozen, DA approaches consistently outperform standard supervised ML techniques.
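
    As a concrete, hedged example of one simple DA technique (feature augmentation in the style of Daumé III, not necessarily among the approaches compared in the paper), the sketch below maps source-domain samples to [x, x, 0] and target-domain samples to [x, 0, x], so that a single regressor can learn shared and domain-specific components; it is then compared against a source-only baseline. The data and the simulated domain shift are illustrative assumptions.

```python
# Feature-augmentation domain adaptation versus a source-only baseline.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(3)

def make_domain(n, bias):
    """Synthetic lightpath features -> SNR (dB), with a per-domain bias."""
    X = rng.uniform(0.0, 1.0, size=(n, 3))
    y = 20 - 7 * X[:, 0] - 2 * X[:, 1] + bias + 0.3 * rng.normal(size=n)
    return X, y

X_src, y_src = make_domain(500, bias=0.0)    # data from an external (source) network
X_tgt, y_tgt = make_domain(30, bias=2.0)     # a few dozen samples from the target network
X_test, y_test = make_domain(500, bias=2.0)  # target-domain evaluation set

def augment(X, kind):
    """Daumé-style augmentation: a shared block plus a domain-specific block."""
    z = np.zeros_like(X)
    return np.hstack([X, X, z]) if kind == "source" else np.hstack([X, z, X])

X_aug = np.vstack([augment(X_src, "source"), augment(X_tgt, "target")])
y_aug = np.concatenate([y_src, y_tgt])

da_model = Ridge().fit(X_aug, y_aug)
baseline = Ridge().fit(X_src, y_src)  # source-only supervised baseline

for name, model, X_eval in [("feature-augmentation DA", da_model, augment(X_test, "target")),
                            ("source-only baseline", baseline, X_test)]:
    rmse = np.sqrt(mean_squared_error(y_test, model.predict(X_eval)))
    print(f"{name}: RMSE = {rmse:.2f} dB")
```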

    Centralized and Distributed Machine Learning-Based QoT Estimation for Sliceable Optical Networks

    Dynamic network slicing has emerged as a promising and fundamental framework for meeting 5G's diverse use cases. As machine learning (ML) is expected to play a pivotal role in the efficient control and management of these networks, in this work we examine the ML-based Quality-of-Transmission (QoT) estimation problem in the dynamic network slicing context, where each slice has to meet a different QoT requirement. We examine ML-based QoT frameworks with the aim of finding QoT models that are fine-tuned according to the diverse QoT requirements. Centralized and distributed frameworks are examined and compared according to their accuracy and training time. We show that the distributed QoT models outperform the centralized QoT model, especially as the number of diverse QoT requirements increases.
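
    A minimal sketch of the centralized-versus-distributed contrast described above: one classifier per slice, each trained against its own SNR threshold, versus a single classifier trained across all slices with the threshold appended as a feature. Slice names, thresholds, and data are illustrative assumptions; the paper's actual frameworks and slice definitions are not reproduced.

```python
# Per-slice (distributed) versus single (centralized) QoT classifiers.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(4)

# Hypothetical slices with different SNR (dB) requirements.
slice_thresholds = {"slice_eMBB": 12.0, "slice_URLLC": 18.0, "slice_mMTC": 9.0}

def make_slice_data(n, threshold):
    """Synthetic lightpath features and a binary label: 1 = QoT requirement met."""
    X = rng.uniform(0.0, 1.0, size=(n, 3))
    snr = 22 - 9 * X[:, 0] - 4 * X[:, 1] + 0.5 * rng.normal(size=n)
    return X, (snr >= threshold).astype(int)

data = {s: make_slice_data(1000, t) for s, t in slice_thresholds.items()}

# Distributed: one model per slice, trained only on that slice's labels.
distributed = {s: RandomForestClassifier(random_state=0).fit(X[:800], y[:800])
               for s, (X, y) in data.items()}

# Centralized: a single model, with the slice's threshold appended as a feature.
X_cen = np.vstack([np.hstack([X[:800], np.full((800, 1), slice_thresholds[s])])
                   for s, (X, _) in data.items()])
y_cen = np.concatenate([y[:800] for _, y in data.values()])
centralized = RandomForestClassifier(random_state=0).fit(X_cen, y_cen)

for s, (X, y) in data.items():
    acc_dist = accuracy_score(y[800:], distributed[s].predict(X[800:]))
    X_thr = np.hstack([X[800:], np.full((200, 1), slice_thresholds[s])])
    acc_cen = accuracy_score(y[800:], centralized.predict(X_thr))
    print(f"{s}: distributed acc = {acc_dist:.2f}, centralized acc = {acc_cen:.2f}")
```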

    Machine-learning method for quality of transmission prediction of unestablished lightpaths

    Predicting the quality of transmission (QoT) of a lightpath prior to its deployment is of paramount importance for an optimized design of optical networks. Due to the continuous advances in optical transmission, the number of design parameters available to system engineers (e.g., modulation formats, baud rate, code rate, etc.) is growing dramatically, thus significantly increasing the alternative scenarios for lightpath deployment. As of today, existing (pre-deployment) estimation techniques for lightpath QoT belong to two categories: "exact" analytical models estimating physical-layer impairments, which provide accurate results but incur heavy computational requirements, and margined formulas, which are computationally faster but typically introduce high link margins that lead to underutilization of network resources. In this paper, we explore a third option, i.e., machine learning (ML), as ML techniques have already been successfully applied for optimization and performance prediction of complex systems where analytical models are hard to derive and/or numerical procedures impose a high computational burden. We investigate an ML classifier that predicts whether the bit error rate of unestablished lightpaths meets the required system threshold based on traffic volume, desired route, and modulation format. The classifier is trained and tested on synthetic data, and its performance is assessed over different network topologies and for various combinations of classification features. Results in terms of classifier accuracy are promising and motivate further investigation over real field data.
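
    The sketch below gives a hedged impression of the kind of binary classifier described: from traffic volume, route length, and modulation format, predict whether the lightpath's BER meets the system threshold. The feature encoding, the toy ground-truth rule, and the train/test split are illustrative assumptions, not the paper's synthetic dataset.

```python
# Binary QoT classifier: does the lightpath meet the BER threshold?
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report

rng = np.random.default_rng(5)
n = 5000

# Hypothetical classification features.
traffic_gbps = rng.choice([100, 200, 400], size=n)
route_length_km = rng.uniform(80, 3000, size=n)
mod_format = rng.choice([0, 1, 2], size=n)  # assumed encoding: 0=QPSK, 1=8QAM, 2=16QAM

# Toy ground truth: longer routes, higher-order formats and higher traffic
# make the BER threshold harder to meet.
margin = 30 - 0.008 * route_length_km - 6 * mod_format - 0.01 * traffic_gbps
ber_ok = (margin + rng.normal(scale=2.0, size=n) > 0).astype(int)  # 1 = below-threshold BER

X = np.column_stack([traffic_gbps, route_length_km, mod_format])
clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X[:4000], ber_ok[:4000])
print(classification_report(ber_ok[4000:], clf.predict(X[4000:])))
```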

    Dual-Stage Planning for Elastic Optical Networks Integrating Machine-Learning-Assisted QoT Estimation

    Following the emergence of Elastic Optical Networks (EONs), Machine Learning (ML) has been intensively investigated as a promising methodology to address complex network management tasks, including, e.g., Quality of Transmission (QoT) estimation, fault management, and automatic adjustment of transmission parameters. Though several ML-based solutions for specific tasks have been proposed, how to integrate the outcome of such ML approaches into Routing and Spectrum Assignment (RSA) models (which address the fundamental planning problem in EONs) is still an open research problem. In this study, we propose a dual-stage iterative RSA optimization framework that incorporates the QoT estimations provided by an ML regressor, used to define lightpaths' reach constraints, into a Mixed Integer Linear Programming (MILP) formulation. The first stage minimizes the overall spectrum occupation, whereas the second stage maximizes the minimum inter-channel spacing between neighboring channels without increasing the overall spectrum occupation obtained in the previous stage. During the second stage, additional interference constraints are generated, and these constraints are then added to the MILP at the next iteration round to exclude those lightpath combinations that would exhibit unacceptable QoT. Our illustrative numerical results on realistic EON instances show that the proposed ML-assisted framework achieves spectrum occupation savings of up to 52.4% (around 33% on average) in comparison to a traditional MILP-based RSA framework that uses conservative reach constraints based on margined analytical models.
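
    The sketch below illustrates, under stated assumptions, only the first ingredient of the framework: using a trained GSNR regressor to derive per-modulation-format reach limits that could then be fed into an RSA formulation as constraints. The regressor, GSNR thresholds, and worst-case channel load are hypothetical; the dual-stage MILP itself is not reproduced.

```python
# Deriving ML-based reach constraints from a QoT (GSNR) regressor.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(6)

# Toy training data: GSNR (dB) decreases with path length and channel load.
length_km = rng.uniform(50, 4000, size=3000)
n_channels = rng.integers(1, 80, size=3000)
gsnr = 28 - 0.004 * length_km - 0.05 * n_channels + 0.4 * rng.normal(size=3000)
reg = GradientBoostingRegressor().fit(np.column_stack([length_km, n_channels]), gsnr)

# Hypothetical minimum GSNR (dB) required by each modulation format.
gsnr_threshold = {"QPSK": 12.0, "8QAM": 16.0, "16QAM": 19.0}

# Reach limit = longest path whose predicted GSNR still meets the format's
# threshold, evaluated at a worst-case load of 80 channels.
grid = np.linspace(50, 4000, 400)
pred = reg.predict(np.column_stack([grid, np.full_like(grid, 80)]))
for fmt, thr in gsnr_threshold.items():
    feasible = grid[pred >= thr]
    reach = feasible.max() if feasible.size else 0.0
    print(f"{fmt}: ML-derived reach constraint ~ {reach:.0f} km")
```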