17 research outputs found

    A Novel Admission Control Model in Cloud Computing

    With the rapid development of Cloud computing technologies and the wide adoption of Cloud services and applications, QoS provisioning in Clouds has become an important research topic. In this paper, we propose an admission control mechanism for Cloud computing. In particular, we consider the high volume of simultaneous requests for Cloud services and develop admission control for aggregated traffic flows to address this challenge. By employing network calculus, we determine the effective bandwidth of the aggregate flow, which is used for making the admission control decision. In order to improve network resource allocation while achieving Cloud service QoS, we investigate the relationship between effective bandwidth and equivalent capacity. We have also conducted extensive experiments to evaluate the performance of the proposed admission control mechanism.
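    As a concrete illustration of the idea, here is a minimal sketch, assuming token-bucket arrival curves (the flow parameters and link capacity below are illustrative, not from the paper): in network calculus, the effective bandwidth of an arrival curve alpha for a delay target D is sup over s >= 0 of alpha(s)/(s + D), which for a token bucket b + r*t reduces to max(b/D, r).

    # Minimal sketch of effective-bandwidth admission control for an aggregate
    # of token-bucket flows. Bursts and rates of an aggregate simply add up.
    def effective_bandwidth(burst, rate, delay_target):
        """Effective bandwidth of a token-bucket (burst, rate) flow for delay target D."""
        return max(burst / delay_target, rate)

    def admit(flows, capacity, delay_target):
        """Admit the aggregate iff its effective bandwidth fits the link capacity."""
        total_burst = sum(b for b, r in flows)
        total_rate = sum(r for b, r in flows)
        return effective_bandwidth(total_burst, total_rate, delay_target) <= capacity

    # Example: three flows, 10 kb burst and 1 Mb/s rate each, 10 Mb/s link, D = 10 ms.
    print(admit([(10e3, 1e6)] * 3, capacity=10e6, delay_target=0.01))  # True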

    On the Robustness of Deep Learning-predicted Contention Models for Network Calculus

    The network calculus (NC) analysis takes a simple model consisting of a network of schedulers and data flows crossing them. A number of analysis "building blocks" can then be applied to capture the model without imposing pessimistic assumptions like self-contention on tandems of servers. Yet, adding pessimism cannot always be avoided. Computing the best bound on a single flow's end-to-end delay thus boils down to finding the least pessimistic contention models for all tandems of schedulers in the network, and an exhaustive search can easily become a very resource-intensive task. The literature proposes a promising solution to this dilemma: a heuristic making use of machine learning (ML) predictions inside the NC analysis. While the results of this work were promising in terms of delay bound quality and computational effort, there is little to no insight on when a prediction is made or whether the trained algorithm can achieve similarly striking results in networks vastly differing from its training data. In this paper, we address these pending questions. We evaluate the influence of the training data and its features on accuracy, impact, and scalability. Additionally, we contribute an extension of the method by predicting the best n contention model alternatives in order to achieve increased robustness for its application outside the training data. Our numerical evaluation shows that good accuracy can still be achieved on large networks although we restrict the training to networks that are two orders of magnitude smaller.
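    The "best n alternatives" extension lends itself to a compact sketch. The code below is hypothetical: score_model and nc_delay_bound stand in for the trained predictor and the NC analysis, not APIs from the paper. The idea is to rank all candidate contention models, analyze only the n highest-ranked ones, and keep the smallest resulting delay bound.

    def best_of_n(candidate_models, score_model, nc_delay_bound, n=3):
        """Analyze the n best-scored contention models; return the best delay bound."""
        ranked = sorted(candidate_models, key=score_model, reverse=True)
        # Every candidate model yields a valid (if pessimistic) delay bound,
        # so taking the minimum over n of them is still a valid bound.
        return min(nc_delay_bound(m) for m in ranked[:n])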

    Exact Worst-case Delay in FIFO-multiplexing Feed-forward Networks

    In this paper, we compute the actual worst-case end-to-end delay for a flow in a feed-forward network of first-in, first-out (FIFO)-multiplexing service curve nodes, where flows are shaped by piecewise-affine concave arrival curves, and service curves are piecewise affine and convex. We show that the worst-case delay problem can be formulated as a mixed integer linear programming problem, whose size grows exponentially with the number of nodes involved. Furthermore, we present approximate solution schemes to find upper and lower bounds on the worst-case delay. Both require solving just one linear programming problem and yield bounds that are generally more accurate than those found in previous work, which were computed under more restrictive assumptions.
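    Schematically, and only as a hedged sketch rather than the paper's exact formulation, the optimization has the following flavor: writing $A_j$ and $D_j$ for the cumulative arrival and departure processes at node $j$, one maximizes the delay of a tagged bit subject to the curve and causality constraints,

    \begin{align*}
    \max\quad & t_{\mathrm{out}} - t_{\mathrm{in}} \\
    \text{s.t.}\quad & A_j(t) - A_j(s) \le \alpha_j(t-s) \quad \text{for all } s \le t && \text{(arrival curves)}\\
    & A_j(t) \ge D_j(t) \ge \min_{s \le t}\,\{A_j(s) + \beta_j(t-s)\} && \text{(causality and service curves)}\\
    & D_j(t_{\mathrm{out}}) \le A_j(t_{\mathrm{in}}) && \text{(the tagged bit has not left before } t_{\mathrm{out}})
    \end{align*}

    With piecewise-affine curves, the minimum in the service constraint selects among finitely many affine pieces, which is where the integer variables of the mixed integer linear program come from.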

    5G Infrastructure Network Slicing: E2E Mean Delay Model and Effectiveness Assessment to Reduce Downtimes in Industry 4.0

    This work has been partially funded by the H2020 project 5G-CLARITY (Grant No. 871428) and the Spanish national project TRUE-5G (PID2019-108713RB-C53). Fifth Generation (5G) networks are expected to meet the stringent performance requirements of Industry 4.0. Moreover, their built-in network slicing capabilities allow the heterogeneous traffic of Industry 4.0 to be supported over the same physical network infrastructure. However, 5G network slicing capabilities might not provide a sufficient degree of isolation for many private 5G network use cases, such as multi-tenancy in Industry 4.0. In this vein, infrastructure network slicing, which refers to the use of dedicated and well-isolated resources for each network slice at every network domain, fits the needs of those use cases. In this article, we evaluate the effectiveness of infrastructure slicing to provide isolation among production lines (PLs) in an industrial private 5G network. To that end, we develop a queuing theory-based model to estimate the end-to-end (E2E) mean packet delay of the infrastructure slices. Then, we use this model to compare the E2E mean delay for two configurations: dedicated infrastructure slices with segregated resources for each PL against a single shared infrastructure slice serving the performance-sensitive traffic from all PLs. We also evaluate the use of Time-Sensitive Networking (TSN) against bare Ethernet to provide layer 2 connectivity among the 5G system components. We use a complete and realistic setup based on experimental and simulation data of the scenario considered. Our results support the effectiveness of infrastructure slicing in providing performance isolation among the different slices. Thus, using dedicated slices with segregated resources for each PL might reduce the number of production downtimes and the associated costs, as the malfunctioning of a PL will not affect the network performance perceived by the performance-sensitive traffic from other PLs. Last, our results show that, besides the improvement in performance, TSN technology truly provides full isolation in the transport network compared to standard Ethernet, thanks to its traffic prioritization, traffic regulation, and bandwidth reservation capabilities.
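    To make the modeling approach concrete, here is a minimal sketch assuming each network domain behaves as an independent M/M/1 queue (the paper's model is more detailed, and the rates below are illustrative): the mean sojourn time per hop is 1/(mu - lambda), and the E2E mean delay is the sum over the hops a slice traverses.

    def e2e_mean_delay(hops):
        """hops: list of (arrival_rate, service_rate) per domain, in packets/s."""
        for lam, mu in hops:
            if lam >= mu:
                raise ValueError("unstable hop: arrival rate >= service rate")
        return sum(1.0 / (mu - lam) for lam, mu in hops)

    # Example: a slice crossing RAN, TSN transport, and core domains in series.
    print(e2e_mean_delay([(500.0, 2000.0), (500.0, 10000.0), (500.0, 4000.0)]))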

    Comparison of Strategies for Computing Bounds on a NoC

    The Kalray MPPA2-256 processor integrates 256 processing cores and 32 management cores on a chip. These cores are grouped into clusters, and clusters are connected by a high-performance network on chip (NoC). This NoC provides hardware mechanisms (egress traffic limiters) that can be configured to offer bounded latencies. This paper presents how network calculus can be used to bound these latencies while computing the routes of data flows, using linear programming. It then shows how other approaches can also be used and adapted to analyze this NoC. Their performances are compared on three case studies: two small ones coming from previous studies, and one realistic one with 128 or 256 flows. On these case studies, it shows that modeling the shaping introduced by links is of major importance to obtain accurate bounds. And when packets are of constant size, the Total Flow Analysis gives, on average, bounds 20%-25% smaller than all other methods.
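    For readers unfamiliar with the Total Flow Analysis (TFA) mentioned above, the following rough sketch (with illustrative rate-latency servers, and omitting the link shaping the paper shows to be crucial) sums per-node delay bounds of the aggregate along a route; the per-node bound for a token-bucket aggregate b + r*t over a rate-latency server R*max(0, t - T) is T + b/R, and the burst grows hop by hop.

    def tfa_path_delay(burst, rate, nodes):
        """nodes: list of (R, T) rate-latency servers along the flow's route."""
        total, b = 0.0, burst
        for R, T in nodes:
            assert rate <= R, "each node must be stable"
            total += T + b / R      # per-node delay bound of the aggregate
            b += rate * T           # output burstiness grows: b' = b + r*T
        return total

    # Example: 1 kb burst, 1 Mb/s aggregate over four identical 2 Mb/s routers.
    print(tfa_path_delay(burst=1e3, rate=1e6, nodes=[(2e6, 1e-4)] * 4))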

    Stability and performance guarantees in networks with cyclic dependencies

    With the development of real-time networks such as reactive embedded systems, there is a need to compute deterministic performance bounds. This paper focuses on performance guarantees and stability conditions in networks with cyclic dependencies in the network calculus framework. We first propose an algorithm that computes tight backlog bounds in tree networks for any set of flows crossing a server. Then, we show how this algorithm can be applied to improve bounds from the literature for any topology, including cyclic networks. In particular, we show that the ring is stable in the network calculus framework.
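    As a point of reference for the backlog bounds discussed above, the textbook single-server bound is the vertical deviation between the arrival and service curves; for a token bucket b + r*t offered to a rate-latency server R*max(0, t - T) with r <= R, it equals b + r*T. This is only the classical per-server formula, not the tight tree algorithm of the paper.

    def single_server_backlog_bound(b, r, R, T):
        """Vertical deviation of alpha(t) = b + r*t over beta(t) = R*max(0, t - T)."""
        if r > R:
            raise ValueError("unstable server")
        # alpha - beta is maximized at t = T, where it equals b + r*T.
        return b + r * T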

    Algorithmics of Network Calculus

    Network Calculus is a theory aiming at computing worst-case bounds on performance in communication networks. The network is modelled by a directed graph: the servers are located on the nodes, and the flows must follow paths in the graph. There are constraints on the traffic curves (how much data has passed through a given point since the activation of the network) and on the service curves (how much work each server may provide). To derive bounds on the worst-case performance, such as the backlog at various points or the end-to-end delay, these envelopes are combined using operators from tropical algebra: min, +, (min, +)-convolution... This thesis focuses on the algorithmics of Network Calculus, that is, on how to make this formalism effective. This work first led us to compare the model variations present in the literature, revealing expressiveness equivalences, such as between Real-Time Calculus and Network Calculus. Then, we proposed a new (min, +) operator to compute performance bounds in the presence of aggregated flows, and we studied networks without cyclic dependencies among flows under arbitrary (blind) multiplexing. We showed the algorithmic difficulty of computing the exact worst cases, but we also provided a new heuristic to compute them, which turns out to be of polynomial complexity in interesting cases.
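    The (min, +) convolution at the heart of this formalism is easy to illustrate on a uniform grid, as an O(n^2) discretization for illustration only (the thesis is concerned with exact, efficient algorithms on piecewise-affine curves): (f ⊗ g)(t) = inf over 0 <= s <= t of f(s) + g(t - s).

    def minplus_conv(f, g):
        """f, g: samples f[i] = f(i*h) on a common grid; returns samples of f ⊗ g."""
        n = min(len(f), len(g))
        return [min(f[s] + g[t - s] for s in range(t + 1)) for t in range(n)]

    # Example with grid step h = 1: token bucket (b=5, r=2) and rate-latency (R=4, T=2).
    alpha = [5 + 2 * t for t in range(10)]
    beta = [max(0, 4 * (t - 2)) for t in range(10)]
    print(minplus_conv(alpha, beta))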

    Tight performance bounds in the worst-case analysis of feed-forward networks

    Network Calculus theory aims at evaluating worst-case performances in communication networks. It provides methods to analyze models where the traffic and the services are constrained by some minimum and/or maximum envelopes (service/arrival curves). While new applications come forward, a challenging and inescapable issue remains open: achieving tight analyses of networks with aggregate multiplexing. The theory offers efficient methods to bound maximum end-to-end delays or local backlogs. However, as shown recently, those bounds can be arbitrarily far from the exact worst-case values, even in seemingly simple feed-forward networks (two flows and two servers) under blind multiplexing (i.e., no information about the scheduling policies, except FIFO per flow). So far, only a network with three flows and three servers, as well as a tandem network called a sink tree, have been analyzed tightly. We describe the first algorithm that computes the maximum end-to-end delay for a given flow, as well as the maximum backlog at a server, for any feed-forward network under blind multiplexing, with concave arrival curves and convex service curves. Its computational complexity may look expensive (possibly super-exponential), but we show that the problem is intrinsically difficult (NP-hard). Fortunately, we show that in some cases, like tandem networks with cross-traffic interfering along intervals of servers, the complexity becomes polynomial. We also compare our approach with previous ones and discuss the problems left open.
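    The classical bound that such exact analyses improve upon is the horizontal deviation between arrival and service curves, which the following sketch estimates numerically (the sampling scheme and the example curves are illustrative; the paper solves an exact optimization problem instead).

    def horizontal_deviation(alpha, beta, t_max, steps=2000):
        """sup over t of the smallest d >= 0 with alpha(t) <= beta(t + d)."""
        h = t_max / steps
        worst = 0.0
        for i in range(steps + 1):
            t, a, d = i * h, alpha(i * h), 0.0
            while beta(t + d) < a and d <= t_max:
                d += h
            worst = max(worst, d)
        return worst

    # Example: alpha(t) = 5 + 2t, beta(t) = 4*max(0, t - 2); bound is T + b/R = 3.25.
    print(horizontal_deviation(lambda t: 5 + 2 * t, lambda t: 4 * max(0.0, t - 2), t_max=20.0))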