
    On Time Synchronization Issues in Time-Sensitive Networks with Regulators and Nonideal Clocks

    Flow reshaping is used in time-sensitive networks (as in the context of IEEE TSN and IETF DetNet) to reduce burstiness inside the network and to support the computation of guaranteed latency bounds. This is performed using per-flow regulators (such as the Token Bucket Filter) or interleaved regulators (as with IEEE TSN Asynchronous Traffic Shaping). Both types of regulators are beneficial, as they cancel the increase of burstiness due to multiplexing inside the network. It was demonstrated, using network calculus, that they do not increase the worst-case latency. However, the properties of regulators were established assuming that time is perfect in all network nodes. In reality, nodes use local, imperfect clocks. Time-sensitive networks exist in two flavours: (1) in non-synchronized networks, local clocks run independently at every node and their deviations are not controlled, and (2) in synchronized networks, the deviations of local clocks are kept within very small bounds, using for example a synchronization protocol (such as PTP) or a satellite-based geo-positioning system (such as GPS). We revisit the properties of regulators in both cases. In non-synchronized networks, we show that ignoring the timing inaccuracies can lead to network instability due to unbounded delay in per-flow or interleaved regulators. We propose and analyze two methods (rate and burst cascade, and asynchronous dual arrival-curve method) for avoiding this problem. In synchronized networks, we show that there is no instability with per-flow regulators but, surprisingly, interleaved regulators can lead to instability. To establish these results, we develop a new framework that captures industrial requirements on clocks in both non-synchronized and synchronized networks, and a toolbox that extends network calculus to account for clock imperfections.

    Comment: ACM SIGMETRICS 2020, Boston, Massachusetts, USA, June 8-12, 2020.
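    A minimal sketch of the failure mode, in Python: a per-flow token-bucket regulator whose tokens accumulate at the rate indicated by a local clock with a constant rate error rho. The function and the drift model are illustrative assumptions for intuition, not the paper's framework or terminology.

        def release_times(arrivals, lengths, rate, burst, rho=0.0):
            """Release times of a FIFO token-bucket regulator (rate, burst).

            The regulator measures time with a local clock whose rate is off
            by a factor (1 + rho) relative to true time (rho = 0: ideal clock).
            `arrivals` must be sorted in nondecreasing order.
            """
            tokens, last, out = burst, 0.0, []
            local_rate = rate * (1.0 + rho)  # token refill rate seen in true time
            for a, ln in zip(arrivals, lengths):
                t = max(a, out[-1] if out else 0.0)  # FIFO: after prior release
                tokens = min(burst, tokens + (t - last) * local_rate)
                if tokens < ln:                      # wait until enough tokens
                    t += (ln - tokens) / local_rate
                    tokens = ln
                tokens -= ln
                last = t
                out.append(t)
            return out

    With rho = 0 the output conforms to the (rate, burst) envelope. With a slightly slow clock (rho < 0), the regulator serves below the flow's long-run arrival rate, so its backlog and delay can grow without bound; this is the intuition behind the instability result for non-synchronized networks.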

    The robustness of stability under link and node failures

    In the area of communication systems, stability refers to the property of keeping the amount of traffic in the system bounded over time. Different communication system models have been proposed in order to capture the unpredictable behavior of some users and applications. Among the proposed models, the adversarial queueing theory (AQT) model turned out to be the most adequate for analyzing an unpredictable network. Until now, most of the research done in this field did not consider the possibility of the adversary producing failures in the network structure. The adversarial models proposed in this work incorporate the possibility of dealing with node and link failures provoked by the adversary. Such failures produce temporary disruptions of the connectivity of the system and increase the collisions of packets in the intermediate hosts of the network, and thus the average traffic load. Under such a scenario, the network is required to be equipped with some mechanism for dealing with those collisions.

    In addition to proposing adversarial models for faulty systems, we study the relation between the robustness of the stability of the system and the management of the queues affected by the failures. When the adversary produces link or node failures, the queues associated with the corresponding links can be affected in many different ways, depending on whether they can still receive or serve packets, or not. In most cases, protocols and networks with very simple topologies, which were known to be universally stable in the AQT model, turn out to be unstable under some of the newly proposed adversarial models. This shows that universal stability of networks is not a robust property in the presence of failures.
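    As a toy illustration of the phenomenon (Python; purely for intuition, since the adversaries in this work are defined over entire networks rather than a single queue): an adversary injects packets into a unit-capacity queue and may additionally take the outgoing link down for some steps.

        def backlog_trace(injections, link_up):
            """injections[t]: packets injected at step t; link_up[t]: link alive?"""
            q, trace = 0, []
            for inj, up in zip(injections, link_up):
                q += inj                # adversarial injections this step
                if up and q > 0:
                    q -= 1              # a live unit-capacity link serves one packet
                trace.append(q)
            return trace

    If the long-run injection rate stays strictly below one packet per step and the link never fails, the backlog stays bounded; by disabling the link during injection bursts, the adversary can drive the backlog arbitrarily high, which is the kind of instability the proposed faulty-adversary models expose.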

    Fabric-on-a-Chip: Toward Consolidating Packet Switching Functions on Silicon

    The switching capacity of an Internet router is often dictated by the memory bandwidth required to buffer arriving packets. With the demand for greater capacity and improved service provisioning, inherent memory bandwidth limitations are encountered, rendering input-queued (IQ) switches and combined input- and output-queued (CIOQ) architectures more practical. Output-queued (OQ) switches, on the other hand, offer several highly desirable performance characteristics, including minimal average packet delay, controllable Quality of Service (QoS) provisioning, and work-conservation under any admissible traffic conditions. However, the memory bandwidth requirement of such systems is O(NR), where N denotes the number of ports and R the data rate of each port. Clearly, for high port densities and data rates, this constraint dramatically limits the scalability of the switch. In an effort to retain the desirable attributes of output-queued switches while significantly reducing the memory bandwidth requirements, distributed shared-memory architectures, such as the parallel shared memory (PSM) switch/router, have recently received much attention. The principal advantage of the PSM architecture derives from the use of slow-running memory units operating in parallel to distribute the memory bandwidth requirement. At the core of the PSM architecture is a memory management algorithm that determines, for each arriving packet, the memory unit in which it will be placed. However, to date, the computational complexity of this algorithm is O(N), thereby limiting the scalability of PSM switches. In an effort to overcome the scalability limitations, the goal of this dissertation is to extend existing shared-memory architecture results while introducing the notion of Fabric on a Chip (FoC). Taking advantage of recent advancements in integrated circuit technologies, FoC aims to facilitate the consolidation of as many packet switching functions as possible on a single chip. Accordingly, this dissertation introduces a novel pipelined memory management algorithm, which plays a key role in the context of on-chip output-queued switch emulation. We discuss in detail the fundamental properties of the proposed scheme, along with hardware-based implementation results that illustrate its scalability and performance attributes. To complement the main effort and further support the notion of FoC, we provide performance analysis of output-queued cell switches with heterogeneous traffic. The result is a flexible tool for obtaining bounds on the memory requirements of output-queued switches under a wide range of traffic scenarios. Additionally, we present a reconfigurable high-speed hardware architecture for real-time generation of packets for the various traffic scenarios. The work presented in this thesis aims at providing pragmatic foundations for designing next-generation, high-performance Internet switches and routers.
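    For intuition about the baseline O(N) algorithm that the dissertation pipelines, here is a sketch (Python) of the classical constraint-based placement used in PSM switch analyses: each arriving packet must go to a memory that is not taking another write this slot, is not being read this slot, and holds no packet with the same departure time. The names and the sequential first-fit search are assumptions for illustration.

        def place(departure, writing_now, reading_now, departures_per_mem, k):
            """Pick a memory for a packet that departs at time `departure`.

            writing_now / reading_now: sets of memories already busy this slot.
            departures_per_mem[m]: set of departure times stored in memory m.
            A pigeonhole argument shows k >= 3N - 1 memories always suffice.
            """
            for m in range(k):
                if m in writing_now or m in reading_now:
                    continue                      # single-ported memory is busy
                if departure in departures_per_mem[m]:
                    continue                      # future read would collide
                return m
            raise RuntimeError("no feasible memory; k too small")

    Since k grows with N, this sequential search costs O(N) per arriving packet; the pipelined scheme proposed in the dissertation targets exactly this bottleneck.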

    Minimax Optimal Estimation of Stability Under Distribution Shift

    The performance of decision policies and prediction models often deteriorates when applied to environments different from the ones seen during training. To ensure reliable operation, we propose and analyze a measure of the stability of a system under distribution shift, defined as the smallest change in the underlying environment that causes the system's performance to deteriorate beyond a permissible threshold. In contrast to standard tail risk measures and distributionally robust losses, which require the specification of a plausible magnitude of distribution shift, the stability measure is defined in terms of a more intuitive quantity: the level of acceptable performance degradation. We develop a minimax optimal estimator of stability and analyze its convergence rate, which exhibits a fundamental phase-shift behavior. Our characterization of the minimax convergence rate shows that evaluating stability against large performance degradation incurs a statistical cost. Empirically, we demonstrate the practical utility of our stability framework by using it to compare system designs on problems where robustness to distribution shift is critical.
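    In symbols, a minimal formalization consistent with this definition (the divergence D and the symbols are notational assumptions made here, not necessarily the paper's) is

        I(\tau) \;=\; \inf_{Q}\ \bigl\{\, D(Q \,\|\, P_0) \ :\ \mathbb{E}_{Q}[\ell(\theta; Z)] \ \ge\ \mathbb{E}_{P_0}[\ell(\theta; Z)] + \tau \,\bigr\}

    where P_0 is the training distribution, \ell the loss of the deployed system \theta, and \tau the permissible degradation: the larger I(\tau), the larger the distribution shift needed to break the system.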

    Scheduling in CDMA-based wireless packet networks.

    Thesis (M.Sc. Eng.), University of Natal, Durban, 2003.

    Modern networks carry a wide range of different data types, each with its own individual requirements. The scheduler plays an important role in enabling a network to meet all these requirements. In wired networks a large amount of research has been performed on various schedulers, most of which belong to the family of Generalized Processor Sharing (GPS) schedulers. In this dissertation we briefly discuss the work that has been done on a range of wired schedulers, all of which attempt to differentiate between heterogeneous traffic. In the world of wireless communications the scheduler plays a very important role, since it can take channel conditions into account to further improve the performance of the network. The main focus of this dissertation is to introduce schedulers that attempt to meet the Quality of Service requirements of various data types in a wireless environment. Examples of schedulers that take channel conditions into account are the Modified Largest Weighted Delay First (M-LWDF) algorithm, as well as a new scheduler introduced in this dissertation, known as the Wireless Fair Largest Weighted Delay First (WF-LWDF) algorithm. The two schemes are studied in detail, and a comparison of their throughput, delay, power, and packet-dropping performance is made through a range of simulations. The results are compared to the performance of four other schedulers. The fairness of M-LWDF and WF-LWDF is determined through simulations, and the throughput results are used to establish Chernoff bounds on the fairness of these two algorithms. Finally, a summary is given of the published delay bounds of various schedulers, and the tightness of the resulting bounds is discussed.
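    For reference, the M-LWDF rule as commonly stated in the literature serves, in each slot, the user maximizing gamma_j * W_j(t) * r_j(t), where W_j is the head-of-line packet delay, r_j the instantaneous channel rate, and gamma_j a QoS weight (often the delay-target coefficient a_j divided by the user's mean rate). A minimal sketch in Python; the weights and names follow the textbook form, not necessarily the thesis's exact variant.

        def mlwdf_pick(hol_delay, chan_rate, gamma):
            """Index of the user served by the M-LWDF rule in this slot."""
            scores = [g * w * r for g, w, r in zip(gamma, hol_delay, chan_rate)]
            return max(range(len(scores)), key=scores.__getitem__)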

    TSN-FlexTest: Flexible TSN Measurement Testbed (Extended Version)

    Robust, reliable, and deterministic networks are essential for a variety of applications. In order to provide guaranteed communication network services, Time-Sensitive Networking (TSN) unites a set of standards for time synchronization, flow control, enhanced reliability, and management. We design the TSN-FlexTest testbed with generic commodity hardware and open-source software components to enable flexible TSN measurements. We have conducted extensive measurements to validate the TSN-FlexTest testbed and to examine TSN characteristics. The measurements provide insights into the effects of TSN configurations, such as increasing the number of synchronization messages for the Precision Time Protocol, indicating that a measurement accuracy of 15 ns can be achieved. The TSN measurements included extensive evaluations of the Time-Aware Shaper (TAS) for sets of Tactile Internet (TI) packet traffic streams. The measurements elucidate the effects of different scheduling and shaping approaches, while revealing the need for pervasive network control that synchronizes the sending nodes with the network switches. We present the first measurements of distributed TAS with synchronized senders on a commodity hardware testbed, demonstrating the same Quality of Service as with dedicated wires for high-priority TI streams, despite a 200% over-saturation cross-traffic load. The testbed is provided as an open-source project to facilitate future TSN research.

    Comment: 30 pages, 18 figures, 6 tables, IEEE TNSM, in print, 2024. A shorter version is in print in IEEE Transactions on Network and Service Management (see related DOI below).
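    To make the TAS mechanism concrete, here is a toy model (Python) of an IEEE 802.1Qbv gate control list: a cyclic sequence of (duration, open-gates) entries, where a queue may transmit only while its gate is open. The entry values are invented for illustration and are not TSN-FlexTest's configuration.

        # Gate control list: (duration in ns, set of queues whose gates are open).
        GCL = [(300_000, {7}),         # protected window for the high-priority TI queue
               (700_000, {0, 1, 2})]   # best-effort window
        CYCLE_NS = sum(d for d, _ in GCL)

        def gate_open(queue, t_ns):
            """True if `queue` may transmit at absolute time t_ns."""
            t = t_ns % CYCLE_NS                  # position within the schedule cycle
            for duration, open_queues in GCL:
                if t < duration:
                    return queue in open_queues
                t -= duration
            return False

    The distributed-TAS measurements hinge on aligning the cycle start of such schedules at the senders and the switches (via PTP), so that the protected windows line up end to end along the path.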

    A Logically Centralized Approach for Control and Management of Large Computer Networks

    Management of large enterprise and Internet Service Provider networks is a complex, error-prone, and costly challenge. It is widely accepted that the key contributors to this complexity are the bundling of control and data forwarding in traditional routers and the use of fully distributed protocols for network control. To address these limitations, the networking research community has been pursuing the vision of simplifying the functional role of a router to its primary task of packet forwarding. This enables centralizing network control at a decision plane where network-wide state can be maintained and network control can be centrally and consistently enforced. However, scalability and fault-tolerance concerns with physical centralization motivate the need for a more flexible and customizable approach. This dissertation is an attempt at bridging the gap between the extremes of distribution and centralization of network control. We present a logically centralized approach to the design of a network decision plane that can be realized using a set of physically distributed controllers in a network. This approach is aimed at giving network designers the ability to customize the level of control and management centralization according to the scalability, fault-tolerance, and responsiveness requirements of their networks. Our thesis is that logical centralization provides a robust, reliable, and efficient paradigm for the management of large networks, and we present several contributions to prove this thesis. For network planning, we describe techniques for optimizing the placement of network controllers and provide guidance on the physical design of logically centralized networks. For network operation, algorithms for maintaining dynamic associations between the decision plane and network devices are presented, along with a protocol that allows a set of network controllers to coordinate their decisions and present a unified interface to the managed network devices. Furthermore, we study the trade-offs in decision plane application design and provide guidance on application state and logic distribution. Finally, we present the results of extensive numerical and simulative analysis of the feasibility and performance of our approach. The results show that logical centralization can provide better scalability and fault tolerance while maintaining performance similar to that of the traditional distributed approach.
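    As one concrete instance of the controller placement problem mentioned above, a common baseline is greedy k-median over inter-node latencies, minimizing the total distance from each device to its nearest controller. A sketch in Python; the objective and the greedy heuristic are illustrative assumptions, not necessarily the dissertation's formulation.

        def greedy_placement(dist, k):
            """dist[i][j]: latency between nodes i and j; returns k controller sites."""
            n, chosen = len(dist), []
            for _ in range(k):
                best = min(
                    (c for c in range(n) if c not in chosen),
                    key=lambda c: sum(min(dist[v][u] for u in chosen + [c])
                                      for v in range(n)),
                )
                chosen.append(best)
            return chosen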