
    On Time Synchronization Issues in Time-Sensitive Networks with Regulators and Nonideal Clocks

    Flow reshaping is used in time-sensitive networks (as in the context of IEEE TSN and IETF DetNet) in order to reduce burstiness inside the network and to support the computation of guaranteed latency bounds. This is performed using per-flow regulators (such as the Token Bucket Filter) or interleaved regulators (as with IEEE TSN Asynchronous Traffic Shaping). Both types of regulators are beneficial as they cancel the increase of burstiness due to multiplexing inside the network. It was demonstrated, using network calculus, that they do not increase the worst-case latency. However, the properties of regulators were established assuming that time is perfect in all network nodes. In reality, nodes use local, imperfect clocks. Time-sensitive networks exist in two flavours: (1) in non-synchronized networks, local clocks run independently at every node and their deviations are not controlled, and (2) in synchronized networks, the deviations of local clocks are kept within very small bounds using, for example, a synchronization protocol (such as PTP) or a satellite-based geo-positioning system (such as GPS). We revisit the properties of regulators in both cases. In non-synchronized networks, we show that ignoring the timing inaccuracies can lead to network instability due to unbounded delay in per-flow or interleaved regulators. We propose and analyze two methods (rate and burst cascade, and asynchronous dual arrival-curve method) for avoiding this problem. In synchronized networks, we show that there is no instability with per-flow regulators but, surprisingly, interleaved regulators can lead to instability. To establish these results, we develop a new framework that captures industrial requirements on clocks in both non-synchronized and synchronized networks, and we develop a toolbox that extends network calculus to account for clock imperfections. (Comment: ACM SIGMETRICS 2020, Boston, Massachusetts, USA, June 8-12, 2020.)
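    A minimal sketch (Python, with illustrative parameter names) of a per-flow token-bucket regulator whose configured rate is inflated by a bounded clock-rate error; this only stands in for the general idea of compensating clock imperfection, not for the paper's rate-and-burst-cascade or asynchronous dual arrival-curve methods:

    # Sketch of a per-flow token-bucket (leaky-bucket) regulator that inflates its
    # configured rate to compensate for a bounded clock-rate error. Parameter names
    # (rate_bps, burst_bits, clock_rate_error) are illustrative, not the paper's notation.
    class TokenBucketRegulator:
        def __init__(self, rate_bps, burst_bits, clock_rate_error=0.0):
            # Inflate the rate by the worst-case clock-rate deviation so that a slow
            # local clock never releases traffic below the contracted long-term rate.
            self.rate = rate_bps * (1.0 + clock_rate_error)
            self.burst = burst_bits
            self.tokens = burst_bits      # start with a full bucket
            self.last_time = None         # local-clock timestamp of the last update

        def earliest_release(self, now, packet_bits):
            """Return the local-clock time at which the head-of-line packet may leave."""
            if self.last_time is not None:
                # Refill tokens for the elapsed local-clock time, capped at the burst size.
                self.tokens = min(self.burst,
                                  self.tokens + (now - self.last_time) * self.rate)
            self.last_time = now
            if packet_bits <= self.tokens:
                self.tokens -= packet_bits
                return now                # enough tokens: release immediately
            # Otherwise wait until the remaining deficit has been refilled.
            wait = (packet_bits - self.tokens) / self.rate
            self.tokens = 0.0
            self.last_time = now + wait
            return now + wait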

    Quality of Service over Specific Link Layers: state of the art report

    The Integrated Services concept is proposed as an enhancement to the current Internet architecture, to provide a better Quality of Service (QoS) than that provided by the traditional Best-Effort service. The features of Integrated Services are explained in this report. To support Integrated Services, certain requirements are posed on the underlying link layer. These requirements are studied by the Integrated Services over Specific Link Layers (ISSLL) IETF working group. The status of this ongoing research is reported in this document. More specifically, the solutions to provide Integrated Services over ATM, IEEE 802 LAN technologies and low-bitrate links are evaluated in detail. The ISSLL working group has not yet studied the requirements posed on the underlying link layer when that link layer is wireless. Therefore, this state-of-the-art report is extended with an identification of the requirements posed on the underlying wireless link to provide differentiated Quality of Service.

    Fast simulation of the leaky bucket algorithm

    We use fast simulation methods, based on importance sampling, to efficiently estimate cell loss probability in queueing models of the Leaky Bucket algorithm. One of these models was introduced by Berger (1991), in which the rare event of a cell loss is related to the rare event of an empty finite buffer in an "overloaded" queue. In particular, we propose a heuristic change of measure for importance sampling to efficiently estimate the probability of the rare empty-buffer event in an asymptotically unstable GI/GI/1/k queue. This change of measure is, in a way, "dual" to that proposed by Parekh and Walrand (1989) to estimate the probability of a rare buffer overflow event. We present empirical results to demonstrate the effectiveness of our fast simulation method. Since we have not yet obtained a mathematical proof, we can only conjecture that our heuristic is asymptotically optimal as k → ∞.
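    As an illustration of the general technique (not the paper's GI/GI/1/k heuristic), the Python sketch below estimates the rare empty-buffer probability in an overloaded M/M/1/k queue by importance sampling, using the simple change of measure that swaps arrival and service rates, in the spirit of the Parekh-Walrand exchange; the rates, buffer size and run count are made-up values.

    import random

    def estimate_empty_probability(lam=1.5, mu=1.0, k=30, runs=100_000, seed=1):
        """Estimate P(buffer empties before refilling to k, starting from k-1)."""
        random.seed(seed)
        p = lam / (lam + mu)      # original probability that the next event is an arrival
        q = 1.0 - p               # original probability that the next event is a departure
        total = 0.0
        for _ in range(runs):
            level, weight = k - 1, 1.0
            while 0 < level < k:
                # Sample under the swapped measure, where departures become likely.
                if random.random() < q:      # an arrival now occurs with probability q
                    level += 1
                    weight *= p / q          # likelihood ratio for an arrival step
                else:
                    level -= 1
                    weight *= q / p          # likelihood ratio for a departure step
            if level == 0:                   # rare event: the buffer emptied
                total += weight
        return total / runs

    print(estimate_empty_probability())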

    Time Based Traffic Policing and Shaping Algorithms on Campus Network Internet Traffic

    This paper presents the development of traffic policing and shaping algorithms for bandwidth management, serving as Quality of Service (QoS) control in a campus network. The campus network is connected to the Internet Wide Area Network through a 16 Mbps Virtual Private Network line. Both inbound and outbound real Internet traffic were captured and analyzed. A Goodness-of-Fit (GoF) test with the Anderson-Darling (AD) statistic was applied to the real traffic to identify the best-fitting distribution. The best-fitted Cumulative Distribution Function (CDF) model was used to analyze and characterize the data and its parameters. Based on the identified parameters, a new time-based policing and shaping algorithm was developed and simulated. The policing process drops burst traffic, while the shaping process delays traffic to the next transmission time. A mathematical model for the controlled algorithm on burst traffic within selected time intervals was derived. In the algorithms, the inbound traffic burst threshold was policed at 1200 MByte (MB), while the outbound threshold was policed at 680 MB. The algorithms were varied in relation to the identified Weibull parameters to reduce burstiness. The analysis shows that a higher shape parameter value corresponds to lower burstiness, so network throughput can be controlled. This research presents a new method for time-based bandwidth management and enhances network performance by identifying new traffic parameters for traffic modeling in a campus network.
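    The two stages described above can be sketched in Python as follows (scipy and numpy are assumed; the synthetic per-interval volumes, the 1200 MB threshold and the interval granularity are illustrative stand-ins for the campus measurements, and the Anderson-Darling selection step is replaced here by a plain Weibull fit):

    import numpy as np
    from scipy.stats import weibull_min

    # Hypothetical per-interval inbound volumes in MB (stand-in for captured traffic).
    volumes_mb = np.random.default_rng(0).weibull(1.4, size=500) * 800.0

    # Fit a two-parameter Weibull (location fixed at 0) as a stand-in for the
    # best-fit CDF model that the paper selects with the Anderson-Darling test.
    shape, loc, scale = weibull_min.fit(volumes_mb, floc=0)
    print(f"fitted Weibull shape={shape:.2f}, scale={scale:.1f} MB")

    def police(volumes, threshold_mb=1200.0):
        """Policing: traffic above the per-interval threshold is dropped."""
        return np.minimum(volumes, threshold_mb)

    def shape_traffic(volumes, threshold_mb=1200.0):
        """Shaping: traffic above the threshold is delayed to the next interval."""
        sent, carried = [], 0.0
        for v in volumes:
            offered = v + carried                    # current interval plus deferred traffic
            sent.append(min(offered, threshold_mb))
            carried = max(offered - threshold_mb, 0.0)
        return np.array(sent), carried

    policed = police(volumes_mb)
    shaped, leftover = shape_traffic(volumes_mb)
    print(f"dropped by policing: {volumes_mb.sum() - policed.sum():.0f} MB, "
          f"still deferred after shaping: {leftover:.0f} MB")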