4 research outputs found

    On the Robustness of Deep Learning-predicted Contention Models for Network Calculus

    The network calculus (NC) analysis takes a simple model consisting of a network of schedulers and data flows crossing them. A number of analysis "building blocks" can then be applied to capture the model without imposing pessimistic assumptions like self-contention on tandems of servers. Yet, adding pessimism cannot always be avoided. Computing the best bound on a single flow's end-to-end delay thus boils down to finding the least pessimistic contention models for all tandems of schedulers in the network, and an exhaustive search can easily become a very resource-intensive task. The literature proposes a promising solution to this dilemma: a heuristic making use of machine learning (ML) predictions inside the NC analysis. While the results of this work were promising in terms of delay bound quality and computational effort, there is little to no insight into when a prediction is made or whether the trained algorithm can achieve similarly striking results in networks vastly differing from its training data. In this paper, we address these pending questions. We evaluate the influence of the training data and its features on accuracy, impact, and scalability. Additionally, we contribute an extension of the method by predicting the best n contention model alternatives in order to achieve increased robustness when it is applied outside the training data. Our numerical evaluation shows that good accuracy can still be achieved on large networks although we restrict the training to networks that are two orders of magnitude smaller.
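
    The top-n extension described above is essentially a shortlist-then-verify scheme: the ML model ranks candidate contention models, the NC analysis evaluates only the n highest-ranked candidates exactly, and the tightest bound is kept. Below is a minimal sketch of the selection step; the candidate names and the nc_delay_bound backend are illustrative assumptions, not the paper's actual interface.

```python
import heapq

def top_n_contention_models(scores, n):
    # Return the n candidates with the highest predicted scores; each
    # is then evaluated exactly inside the NC analysis, and the least
    # pessimistic (smallest) delay bound is kept.
    return heapq.nlargest(n, scores, key=scores.get)

# Hypothetical ML-predicted scores for four tandem contention models.
predicted = {"cut-A": 0.31, "cut-B": 0.85, "cut-C": 0.78, "cut-D": 0.64}

candidates = top_n_contention_models(predicted, n=2)
print(candidates)  # ['cut-B', 'cut-C']

# Shortlist-then-verify (nc_delay_bound is a hypothetical NC backend):
# best_bound = min(nc_delay_bound(tandem, model) for model in candidates)
```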

    RouteNet: leveraging graph neural networks for network modeling and optimization in SDN

    Network modeling is a key enabler of efficient network operation in future self-driving Software-Defined Networks. However, we still lack functional network models able to produce accurate predictions of Key Performance Indicators (KPIs) such as delay, jitter, or loss at limited cost. In this paper we propose RouteNet, a novel network model based on Graph Neural Networks (GNNs) that is able to understand the complex relationship between topology, routing, and input traffic to produce accurate estimates of the per-source/destination per-packet delay distribution and loss. RouteNet leverages the ability of GNNs to learn and model graph-structured information and, as a result, our model is able to generalize over arbitrary topologies, routing schemes, and traffic intensities. In our evaluation, we show that RouteNet accurately predicts the delay distribution (mean delay and jitter) and loss even for topologies, routing schemes, and traffic unseen during training (worst-case MRE = 15.4%). We also present several use cases where we leverage the KPI predictions of our GNN model to achieve efficient routing optimization and network planning. This work was supported in part by the Polish Ministry of Science and Higher Education with the subvention funds of the Faculty of Computer Science, Electronics and Telecommunications, AGH University; in part by the Spanish MINECO under contract TEC2017-90034-C2-1-R (ALLIANCE); in part by the Catalan Institution for Research and Advanced Studies (ICREA) and the FI-AGAUR grant by the Catalan Government; and in part by the PL-Grid Infrastructure.
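
    The model described above couples two kinds of state, per-link and per-path, exchanged according to the routing. The following is a deliberately simplified forward-pass sketch of that alternating message passing; the class name, dimensions, and GRU-cell updates are assumptions, while the published model uses RNNs along each path plus per-KPI readout networks.

```python
import torch
import torch.nn as nn

class RouteNetSketch(nn.Module):
    # Simplified RouteNet-style message passing: paths absorb the
    # states of the links they traverse, links absorb the states of
    # the paths that cross them, repeated for a few iterations.
    def __init__(self, dim=16):
        super().__init__()
        self.path_update = nn.GRUCell(dim, dim)
        self.link_update = nn.GRUCell(dim, dim)
        self.readout = nn.Linear(dim, 1)  # e.g., mean delay per path

    def forward(self, link_h, path_h, routing, iters=4):
        # link_h, path_h: lists of (1, dim) hidden-state tensors.
        # routing: (path_idx, link_idx) pairs stating which links each
        # source-destination path traverses.
        for _ in range(iters):
            for p, l in routing:
                path_h[p] = self.path_update(link_h[l], path_h[p])
            for p, l in routing:
                link_h[l] = self.link_update(path_h[p], link_h[l])
        return [self.readout(h) for h in path_h]  # one KPI per path

# Toy example: 3 links, 2 paths; path 0 uses links 0-1, path 1 uses 1-2.
model = RouteNetSketch()
links = [torch.zeros(1, 16) for _ in range(3)]
paths = [torch.zeros(1, 16) for _ in range(2)]
print(model(links, paths, [(0, 0), (0, 1), (1, 1), (1, 2)]))
```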

    RiskNet: neural risk assessment in networks of unreliable resources

    We propose a graph neural network (GNN)-based method to predict the distribution of penalties induced by outages in communication networks, where connections are protected by resources shared between working and backup paths. The GNN-based algorithm is trained only on random graphs generated with the Barabási–Albert model. Nevertheless, the results obtained show that we can accurately model the penalties in a wide range of existing topologies. We show that GNNs eliminate the need to simulate complex outage scenarios for the network topologies under study; in practice, the entire path placement evaluation based on the prediction takes no longer than 4 ms on modern hardware. In this way, we gain a speedup of up to 12 000 times compared to calculations based on simulations. This work was supported by the Polish Ministry of Science and Higher Education with the subvention funds of the Faculty of Computer Science, Electronics and Telecommunications of AGH University of Science and Technology (P.B., P.C.) and by the PL-Grid Infrastructure (K.R.).
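
    A notable point above is that training uses only synthetic Barabási–Albert graphs, yet the model transfers to real topologies. A minimal sketch of such a training-set generator follows; the graph sizes, attachment parameter, and function name are assumptions, not the paper's settings.

```python
import random
import networkx as nx

def training_topologies(num_graphs=1000, n_nodes=30, m_attach=2, seed=0):
    # Yield Barabasi-Albert random graphs to train on; evaluation then
    # runs on real-world topologies the model has never seen.
    rng = random.Random(seed)
    for _ in range(num_graphs):
        yield nx.barabasi_albert_graph(n_nodes, m_attach,
                                       seed=rng.randrange(2**32))

for g in training_topologies(num_graphs=2):
    print(g.number_of_nodes(), g.number_of_edges())  # 30 56 (twice)
```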