
    Exact Worst-case Delay in FIFO-multiplexing Feed-forward Networks

    In this paper, we compute the actual worst-case end-to-end delay for a flow in a feed-forward network of first-in–first-out (FIFO)-multiplexing service curve nodes, where flows are shaped by piecewise-affine concave arrival curves and service curves are piecewise affine and convex. We show that the worst-case delay problem can be formulated as a mixed-integer linear programming problem whose size grows exponentially with the number of nodes involved. Furthermore, we present approximate solution schemes to compute upper and lower bounds on the worst-case delay. Both require solving only one linear programming problem and yield bounds that are generally more accurate than those found in previous work, which are computed under more restrictive assumptions.
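    The multi-node MILP formulation is beyond a short example, but the single-node special case already illustrates the quantity being bounded. The sketch below (with illustrative parameter values, not taken from the paper) computes the classic network-calculus delay bound, i.e. the horizontal deviation between a token-bucket arrival curve and a rate-latency service curve.

```python
# Minimal sketch (not the paper's MILP): the classic single-node network-calculus
# delay bound, i.e. the horizontal deviation between a token-bucket arrival curve
# alpha(t) = b + r*t and a rate-latency service curve beta(t) = R*max(t - T, 0).
# All parameter values below are illustrative assumptions.

def single_node_delay_bound(b: float, r: float, R: float, T: float) -> float:
    """Worst-case delay at one node, assuming the stability condition r <= R."""
    if r > R:
        raise ValueError("unstable node: arrival rate exceeds service rate")
    return T + b / R  # latency plus the time to drain the maximal burst

if __name__ == "__main__":
    # Example: burst 4 kb, sustained rate 1 Mb/s, server rate 10 Mb/s, latency 0.5 ms
    print(single_node_delay_bound(b=4e3, r=1e6, R=10e6, T=0.5e-3))  # about 0.9 ms
```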

    Network Calculus with Flow Prolongation -- A Feedforward FIFO Analysis enabled by ML

    The derivation of upper bounds on data flows' worst-case traversal times is an important task in many application areas. For accurate bounds, model simplifications should be avoided even in large networks. Network Calculus (NC) provides a modeling framework and different analyses for delay bounding. We investigate the analysis of feedforward networks where all queues implement First-In First-Out (FIFO) service. Correctly considering the effect of data flows on each other under FIFO is already a challenging task. Yet, the fastest available NC FIFO analysis suffers from limitations resulting in unnecessarily loose bounds. A feature called Flow Prolongation (FP) has been shown to improve delay bound accuracy significantly. Unfortunately, FP needs to be executed within the NC FIFO analysis very often, and each time it creates an exponentially growing set of alternative networks with prolongations. FP therefore does not scale and has been out of reach for the exhaustive analysis of large networks. We introduce DeepFP, an approach to make FP scale by predicting prolongations using machine learning. In our evaluation, we show that DeepFP can improve results in FIFO networks considerably. Compared to the standard NC FIFO analysis, DeepFP reduces delay bounds by 12.1% on average at negligible additional computational cost.
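    As a schematic illustration of why exhaustive Flow Prolongation explodes (the flow names and tiny tandem below are hypothetical), each cross-flow's exit point may be prolonged to any later server of the tandem, so the alternative networks form a Cartesian product of per-flow choices; DeepFP avoids enumerating this product by predicting one prolongation per flow.

```python
# Schematic sketch of the combinatorial blow-up behind exhaustive Flow Prolongation:
# each cross-flow's exit point may be prolonged to any later server on the tandem,
# so the alternative networks form a Cartesian product of per-flow choices.
# (DeepFP replaces this enumeration by predicting one prolongation per flow.)
from itertools import product

tandem = [1, 2, 3, 4]                              # servers crossed by the flow of interest
cross_flow_exits = {"f1": 1, "f2": 2, "f3": 3}     # hypothetical current exit servers

# Each cross-flow may keep its exit server or be prolonged to any later one.
choices = {f: [s for s in tandem if s >= exit_] for f, exit_ in cross_flow_exits.items()}

alternatives = list(product(*choices.values()))
print(len(alternatives))   # 4 * 3 * 2 = 24 alternative networks for this tiny tandem
```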

    Computation of a (min,+) multi-dimensional convolution for end-to-end performance analyses

    We investigate how to compute a (min,+) multi-dimensional convolution with application to worst-case performance analyses in "Pay Multiplexing Only Once" scenarios. In such scenarios, a flow encounters some cross-traffic along its path, and each cross-traffic flow interferes over a connected subpath. When there is no cross-traffic, the analysis boils down to classical (min,+) convolutions. We provide three proofs of a well-known lemma describing how to compute the convolution of piecewise-affine convex functions.
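    The lemma mentioned above can be stated operationally: for convex piecewise-affine functions, the (min,+) convolution starts at f(0) + g(0) and concatenates the linear segments of both functions in order of increasing slope. A minimal sketch, with a segment representation chosen here purely for illustration:

```python
# Minimal sketch of the well-known lemma referenced above: for convex piecewise-affine
# functions, the (min,+) convolution starts at f(0) + g(0) and is obtained by
# concatenating the linear segments of both functions in order of increasing slope.
# Segments are represented as (length, slope); the last segment may be unbounded (inf).
import math

def minplus_convolution(f0, f_segs, g0, g_segs):
    """(min,+) convolution of two convex piecewise-affine functions.

    f0, g0          -- values at t = 0
    f_segs, g_segs  -- linear segments as (length, slope) pairs
    Returns (value at t = 0, segments of the convolution).
    """
    merged = sorted(f_segs + g_segs, key=lambda seg: seg[1])  # sort by increasing slope
    return f0 + g0, merged

if __name__ == "__main__":
    # f: rate-latency curve 10 * max(t - 2, 0)  -> segments (2, 0), (inf, 10)
    # g: rate-latency curve 5  * max(t - 1, 0)  -> segments (1, 0), (inf, 5)
    v0, segs = minplus_convolution(0, [(2, 0), (math.inf, 10)],
                                   0, [(1, 0), (math.inf, 5)])
    print(v0, segs)  # 0, [(2, 0), (1, 0), (inf, 5), (inf, 10)]: rate 5, latency 3
```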

    On the schedulability of deadline-constrained traffic in TDMA Wireless Mesh Networks

    In this paper, we evaluate the schedulability of traffic with arbitrary end-to-end deadline constraints in Wireless Mesh Networks (WMNs). We formulate the problem as a mixed-integer linear optimization problem and show that, depending on the flow aggregation policy used in the network, the problem can be either convex or non-convex. We optimally solve the problem in both cases and prove that schedulability does depend on the aggregation policy. This allows us to derive rules of thumb to identify which policy improves schedulability for a given traffic. Furthermore, we propose a heuristic solution strategy that allows good suboptimal solutions to the scheduling problem to be computed in times short enough for online admission control, even in relatively large WMNs.
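    A full MILP model is beyond a short example, but the brute-force sketch below (a deliberately simplified stand-in for the paper's formulation, with a hypothetical node-sharing interference model and one slot per hop) shows what "schedulable under end-to-end deadlines" means for a tiny TDMA frame.

```python
# Brute-force sketch (not the paper's MILP): check whether a tiny TDMA frame admits a
# conflict-free slot assignment that meets every flow's end-to-end deadline.
# Simplifying assumptions: one packet per flow per frame, one slot per hop, hops of a
# flow scheduled in increasing slot order within the frame, and two links conflict
# iff they share a node (a hypothetical interference model).
from itertools import product

SLOTS = 4                                        # slots per TDMA frame
flows = {                                        # flow -> (path of links, deadline in slots)
    "f1": ([("A", "B"), ("B", "C")], 3),
    "f2": ([("D", "B")], 2),
}

def conflict(l1, l2):
    return bool(set(l1) & set(l2))               # links sharing a node cannot share a slot

hops = [(f, i) for f, (path, _) in flows.items() for i in range(len(path))]

def feasible(assign):                            # assign: dict (flow, hop index) -> slot
    for f, (path, deadline) in flows.items():
        slots = [assign[(f, i)] for i in range(len(path))]
        if any(b <= a for a, b in zip(slots, slots[1:])):  # hops in increasing slot order
            return False
        if slots[-1] + 1 > deadline:                       # end-to-end delay = last slot + 1
            return False
    for h1, s1 in assign.items():
        for h2, s2 in assign.items():
            if h1 < h2 and s1 == s2 and conflict(flows[h1[0]][0][h1[1]],
                                                 flows[h2[0]][0][h2[1]]):
                return False
    return True

schedulable = any(feasible(dict(zip(hops, slots)))
                  for slots in product(range(SLOTS), repeat=len(hops)))
print("schedulable:", schedulable)
```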

    Dimensioning and worst-case analysis of cluster-tree sensor networks

    Modeling the fundamental performance limits of Wireless Sensor Networks (WSNs) is of paramount importance to understand their behavior under worst-case conditions and to make the appropriate design choices. This is particularly relevant for time-sensitive WSN applications, where the timing behavior of the network protocols (message transmission must respect deadlines) impacts the correct operation of these applications. In that direction, this paper contributes a methodology based on Network Calculus that enables quick and efficient worst-case dimensioning of static or even dynamically changing cluster-tree WSNs where the data sink can be either static or mobile. We propose closed-form recurrent expressions for computing the worst-case end-to-end delays, buffering, and bandwidth requirements across any source-destination path in a cluster-tree WSN. We show how to apply our methodology to the case of IEEE 802.15.4/ZigBee cluster-tree WSNs. Finally, we demonstrate the validity and analyze the accuracy of our methodology through a comprehensive experimental study using commercially available technology, namely TelosB motes running TinyOS.
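    The paper's closed-form recurrences are specific to cluster-tree WSNs, but the standard per-hop bookkeeping they build on can be sketched as follows (illustrative parameters; note that summing per-hop delay bounds is generally looser than an end-to-end analysis): each hop offers a rate-latency service curve, and the flow's burstiness grows as it traverses the path.

```python
# Hedged sketch of standard per-hop network-calculus bookkeeping along a source-to-sink
# path (not the paper's closed-form recurrences). Each hop i offers a rate-latency
# service curve R_i * max(t - T_i, 0); the flow enters with a token-bucket arrival
# curve of burst b and sustained rate r.

def path_bounds(b, r, hops):
    """hops: list of (R_i, T_i). Returns (total delay bound, per-hop buffer bounds)."""
    total_delay, buffers = 0.0, []
    for R, T in hops:
        assert r <= R, "each hop must serve at least the sustained rate"
        total_delay += T + b / R        # per-hop delay bound
        buffers.append(b + r * T)       # per-hop backlog (buffer) bound
        b = b + r * T                   # output burstiness grows by r * T_i
    return total_delay, buffers

if __name__ == "__main__":
    # Illustrative numbers: three hops towards the sink with increasing service rates
    print(path_bounds(b=2.0, r=1.0, hops=[(5.0, 0.1), (8.0, 0.05), (10.0, 0.02)]))
```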

    End-to-end performance guarantees for multipath flows

    When routing data across a network from one source to one destination, instead of following a fixed path, one can choose to spread data over several routes in order to use all potential resources of the network. This issue has been studied for many models of networks, with various objectives to optimize. In this paper we investigate how to route a flow across a network of servers with end-to-end performance guarantees in the framework of Network Calculus. We discuss stability issues (i.e., whether we can ensure that end-to-end delays are bounded) for arbitrary networks, and how to compute bounds on worst-case end-to-end delays and backlogs. Tightness issues are discussed on a small but challenging toy example.

    Least Upper Delay Bound for VBR Flows in Networks-on-Chip with Virtual Channels

    Real-time applications such as multimedia and gaming require stringent performance guarantees, usually enforced by a tight upper bound on the maximum end-to-end delay. For FIFO-multiplexed on-chip packet-switched networks, we consider worst-case delay bounds for Variable Bit-Rate (VBR) flows with aggregate scheduling, which schedules multiple flows as an aggregate flow. VBR flows are characterized by a maximum transfer size, peak rate, burstiness, and average sustainable rate. Based on network calculus, we present and prove theorems to derive per-flow end-to-end Equivalent Service Curves (ESC), which are in turn used for computing Least Upper Delay Bounds (LUDBs) of individual flows. In a realistic case study we find that the end-to-end delay bound is up to 46.9% more accurate than the bound obtained without considering the traffic's peak behavior. Results show similar improvements for synthetic traffic patterns. The proposed methodology is implemented in C++ and has low run-time complexity, enabling quick evaluation of large and complex SoCs.
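    The gain from modelling the peak rate can already be seen in the textbook single-node bound (this is not the paper's end-to-end ESC/LUDB computation, and all parameter values below are illustrative): for a VBR arrival curve min(L + p t, sigma + rho t) served by a rate-latency curve R max(t - T, 0), the delay bound is never larger than the plain token-bucket bound T + sigma/R.

```python
# Sketch of the textbook single-node delay bound for a VBR/T-SPEC-like arrival curve
# alpha(t) = min(L + p*t, sigma + rho*t) served by a rate-latency curve R*max(t - T, 0).
# Not the paper's end-to-end ESC/LUDB computation; it only shows why modelling the
# peak rate p tightens the bound compared with a plain token bucket (sigma, rho).

def vbr_delay_bound(L, p, sigma, rho, R, T):
    """Delay bound for alpha(t) = min(L + p*t, sigma + rho*t) at a rate-latency node."""
    assert rho <= R and p > rho, "requires rho <= R and p > rho"
    theta = (sigma - L) / (p - rho)           # time at which the two arrival branches meet
    return T + (L + theta * max(p - R, 0.0)) / R

def token_bucket_delay_bound(sigma, rho, R, T):
    """Same node, but ignoring the peak-rate information (plain token bucket)."""
    assert rho <= R
    return T + sigma / R

if __name__ == "__main__":
    # Illustrative parameters: max transfer size L, peak rate p, burstiness sigma, rate rho
    print(vbr_delay_bound(L=1.0, p=4.0, sigma=6.0, rho=1.0, R=3.0, T=0.2))   # ~1.09
    print(token_bucket_delay_bound(sigma=6.0, rho=1.0, R=3.0, T=0.2))        #  2.20
```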

    On the Robustness of Deep Learning-predicted Contention Models for Network Calculus

    The network calculus (NC) analysis takes a simple model consisting of a network of schedulers and data flows crossing them. A number of analysis "building blocks" can then be applied to capture the model without imposing pessimistic assumptions like self-contention on tandems of servers. Yet, adding pessimism cannot always be avoided. Computing the best bound on a single flow's end-to-end delay thus boils down to finding the least pessimistic contention models for all tandems of schedulers in the network, and an exhaustive search can easily become a very resource-intensive task. The literature proposes a promising solution to this dilemma: a heuristic making use of machine learning (ML) predictions inside the NC analysis. While the results of this work were promising in terms of delay bound quality and computational effort, there is little to no insight on when a prediction is made or whether the trained algorithm can achieve similarly striking results in networks vastly differing from its training data. In this paper, we address these pending questions. We evaluate the influence of the training data and its features on accuracy, impact, and scalability. Additionally, we contribute an extension of the method by predicting the best n contention model alternatives in order to achieve increased robustness for its application outside the training data. Our numerical evaluation shows that good accuracy can still be achieved on large networks although we restrict the training to networks that are two orders of magnitude smaller.

    Comparison of strategies for computing bounds on a NoC

    The Kalray MPPA2-256 processor integrates 256 processing cores and 32 management cores on a chip. These cores are grouped into clusters, and clusters are connected by a high-performance network on chip (NoC). This NoC provides hardware mechanisms (egress traffic limiters) that can be configured to offer bounded latencies. This paper presents how network calculus can be used to bound these latencies while computing the routes of data flows, using linear programming. It then shows how other approaches can also be used and adapted to analyze this NoC. Their performance is then compared on three case studies: two small ones coming from previous studies, and one realistic case with 128 or 256 flows. On these case studies, it shows that modeling the shaping introduced by links is of major importance to obtain accurate bounds. When packets are of constant size, the Total Flow Analysis gives bounds that are, on average, 20%-25% smaller than those of all other methods.
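    The importance of link shaping can be sketched with a toy computation (assumed parameters, not the MPPA2 configuration): after traversing a link of peak rate C with maximum packet size l_max, a flow's arrival curve at the next router can be tightened to min(sigma + rho t, l_max + C t), which reduces the buffer a downstream queue must provision.

```python
# Sketch of why modelling link shaping matters (assumed parameters, not the MPPA2 set-up):
# a link of peak rate C with maximum packet size l_max constrains the arrival curve seen
# by the next router to min(sigma + rho*t, l_max + C*t), shrinking the downstream
# backlog bound compared with the unshaped curve sigma + rho*t.

def shaped_arrival(sigma, rho, l_max, C):
    """Return the shaped arrival curve as a callable alpha(t)."""
    return lambda t: min(sigma + rho * t, l_max + C * t)

def backlog_bound(alpha, R, horizon, step=1e-3):
    """Coarse numerical backlog bound against a constant-rate server of rate R."""
    return max(alpha(k * step) - R * k * step for k in range(int(horizon / step) + 1))

if __name__ == "__main__":
    unshaped = lambda t: 6.0 + 1.0 * t                  # sigma = 6 flits, rho = 1 flit/cycle
    shaped = shaped_arrival(6.0, 1.0, l_max=1.0, C=4.0)
    print(backlog_bound(unshaped, R=2.0, horizon=10))   # 6.0 flits
    print(backlog_bound(shaped,   R=2.0, horizon=10))   # about 4.3 flits: smaller buffer
```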