
    Timed Consistent Network Updates

    Network updates such as policy and routing changes occur frequently in Software Defined Networks (SDN). Updates should be performed consistently, preventing temporary disruptions, and should require as little overhead as possible. Scalability is increasingly becoming an essential requirement in SDN. In this paper we propose to use time-triggered network updates to achieve consistent updates. Our proposed solution requires lower overhead than existing update approaches, without compromising consistency during the update. We demonstrate that accurate time enables far more scalable consistent updates in SDN than previously available. In addition, it provides the SDN programmer with fine-grained control over the tradeoff between consistency and scalability. Comment: This technical report is an extended version of the paper "Timed Consistent Network Updates", which was accepted to the ACM SIGCOMM Symposium on SDN Research (SOSR) '15, Santa Clara, CA, USA, June 2015.
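
    A minimal sketch of the time-triggered idea (hypothetical controller/switch objects, not the paper's protocol; it assumes the switches' clocks are synchronized): the controller attaches one common future execution time to the update, and every switch applies its new rule at that time, so old and new policies overlap only within the clock error.

        import threading, time

        class Switch:
            def __init__(self, name):
                self.name = name
                self.rules = {}                    # flow -> action (current policy)

            def schedule_update(self, flow, action, execute_at):
                # Install the new rule at the agreed wall-clock time.
                delay = max(0.0, execute_at - time.time())
                threading.Timer(delay, self._apply, args=(flow, action)).start()

            def _apply(self, flow, action):
                self.rules[flow] = action
                print(f"{time.time():.3f} {self.name}: {flow} -> {action}")

        # Controller side: pick a time T far enough ahead for every switch to
        # receive the message, then send the same T with the update to all of them.
        switches = [Switch("s1"), Switch("s2"), Switch("s3")]
        T = time.time() + 0.5                      # common execution time
        for sw in switches:
            sw.schedule_update("10.0.0.0/24", "fwd:port2", execute_at=T)

        time.sleep(1)                              # let the timers fire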

    A fine-grain time-sharing Time Warp system

    Although Parallel Discrete Event Simulation (PDES) platforms relying on the Time Warp (optimistic) synchronization protocol already allow for exploiting parallelism, several techniques have been proposed to further improve performance. Among them we can mention optimized approaches for state restore, as well as techniques for load balancing or (dynamically) controlling the speculation degree, the latter being specifically targeted at reducing the incidence of causality errors that lead to wasted computation. However, in state-of-the-art Time Warp systems, event processing is not preemptable, which may prevent promptly reacting to the injection of higher-priority (i.e., lower-timestamp) events. Delaying the processing of these events may, in turn, give rise to a higher incidence of incorrect speculation. In this article we present the design and realization of a fine-grain time-sharing Time Warp system, to be run on multi-core Linux machines, which makes systematic use of event preemption in order to dynamically reassign the CPU to higher-priority events/tasks. Our proposal is based on a truly dual-mode execution, application vs. platform, which includes timer-interrupt based support for bringing control back to platform mode for possible CPU reassignment at very fine-grain periods. The latter facility is offered by an ad-hoc timer-interrupt management module for Linux, which we release, together with the overall time-sharing support, within the open source ROOT-Sim platform. An experimental assessment based on the classical PHOLD benchmark and two real-world models is presented, showing how our proposal effectively reduces the incidence of causality errors, as compared to traditional Time Warp, especially when running with higher degrees of parallelism.
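
    The preemption idea can be sketched at a very high level (illustrative Python only; the actual system relies on a kernel timer-interrupt module and a dual application/platform execution mode rather than cooperative yields): event processing is sliced into fine-grained steps, and at each step the platform may suspend the running handler when a lower-timestamp event has arrived.

        import heapq

        def handle(event):
            # Handler written as a generator: every `yield` is a preemption
            # point where the platform regains control (the real system uses
            # timer interrupts rather than cooperative yields).
            for _ in range(event["steps"]):
                yield                              # one fine-grained chunk of work

        pending = []                               # min-heap of (timestamp, seq, generator)
        seq = 0

        def inject(ts, steps):
            global seq
            heapq.heappush(pending, (ts, seq, handle({"ts": ts, "steps": steps})))
            seq += 1

        inject(30.0, 4)                            # a long, lower-priority event
        running = None                             # (timestamp, generator) or None

        for tick in range(20):                     # the platform's fine-grain scheduling loop
            if tick == 2:
                inject(10.0, 2)                    # higher-priority (lower-timestamp) event arrives
            # Preempt the running handler if a lower-timestamp event is waiting.
            if running and pending and pending[0][0] < running[0]:
                ts, gen = running
                heapq.heappush(pending, (ts, seq, gen))
                seq += 1
                running = None
            if running is None and pending:
                ts, _, gen = heapq.heappop(pending)
                running = (ts, gen)
            if running:
                try:
                    next(running[1])               # execute one chunk of the current event
                except StopIteration:
                    print(f"finished event with timestamp {running[0]}")
                    running = None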

    Running real time distributed simulations under Linux and CERTI

    This paper presents experiments and results on enforcing real-time distributed simulations in accordance with the High Level Architecture (HLA). Simulations were run using CERTI, an open source middleware, as the Run Time Infrastructure (RTI). Models were distributed over computers running various available versions of the 2.6 Linux kernel. Studies and experiments relied on a real case study: the simulation of an "in formation" flight of observation satellites. This case study brings up real application needs in real-time distributed simulations and real configurations of simulators and models. Two simulations of the "in formation" flight of satellites were studied. The study consisted of modeling the behaviour of the simulators and running these models with various kernel or middleware operating mechanisms and services. Time measurements were performed for each test, giving results on the ability of the simulation to meet its real-time requirements.
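
    The kind of measurement described above can be sketched generically (hypothetical step function and period, not CERTI's API): drive a periodic simulation step and record how often, and by how much, it overruns its real-time deadline.

        import time

        PERIOD = 0.010                             # assumed 10 ms real-time step

        def step():
            # Placeholder for one federate iteration (model update + RTI time advance).
            time.sleep(0.004)

        misses, worst = 0, 0.0
        next_deadline = time.monotonic() + PERIOD
        for _ in range(500):
            step()
            lateness = time.monotonic() - next_deadline
            if lateness > 0:                       # the step finished after its deadline
                misses += 1
                worst = max(worst, lateness)
            else:
                time.sleep(-lateness)              # wait for the start of the next period
            next_deadline += PERIOD

        print(f"deadline misses: {misses}/500, worst lateness: {worst * 1000:.2f} ms")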

    The Octopus switch

    This chapter discusses the interconnection architecture of the Mobile Digital Companion. The approach to building a low-power handheld multimedia computer presented here is to have autonomous, reconfigurable modules such as network, video and audio devices, interconnected by a switch rather than by a bus, and to offload as much work as possible from the CPU to programmable modules placed in the data streams. Thus, communication between components is not broadcast over a bus but delivered exactly where it is needed, and work is carried out where the data passes through, bypassing the memory. The amount of buffering is minimised, and if it is required at all, it is placed right on the data path, where it is needed. A reconfigurable internal communication network switch called Octopus exploits locality of reference and eliminates wasteful data copies. The switch is implemented as a simplified ATM switch and provides Quality of Service guarantees and enough bandwidth for multimedia applications. We have built a testbed of the architecture, for which we present performance and energy consumption characteristics.

    Centralized vs distributed communication scheme on switched ethernet for embedded military applications

    The current military communication network is a generation old and is no longer effective in meeting the emerging requirements imposed by future embedded military applications. Therefore, a new interconnection system is needed to overcome these limitations. Two new communication networks based upon Full Duplex Switched Ethernet are presented herein to this end. The first uses a distributed communication scheme in which equipment can emit data simultaneously, which clearly improves the system's throughput and flexibility. However, migrating all existing applications into a compliant form could be an expensive step. To avoid this process, the second proposal consists of keeping the current centralized communication scheme. Our objective is to assess and compare the real-time guarantees that each proposal can offer. The paper includes a functional description of each proposed communication network and a military avionic application to highlight the proposals' ability to support the required time-constrained communications.

    Minimizing internal speedup for performance guaranteed optical packet switches

    Providing QoS guarantees for Internet services is very important. It raises the issue that packet switches should provide guaranteed performance (i.e. 100% throughput with bounded worst-case delay). Optical switching technology is widely considered an excellent solution for packet switches in future networks. However, to achieve guaranteed performance in optical packet switches, an internal speedup is required due to the existence of reconfiguration overhead. How to reduce the internal speedup is the main concern for making these switches practical. In this paper, we first derive the internal speedup S as a function of the number of switch configurations N_S and the reconfiguration overhead δ, i.e. S = f(N_S, δ). We show that the recently proposed ADJUST algorithm is flawed. Based on the internal speedup function we derived, a new algorithm (ADAPTIVE), with time complexity O((λ-1)N^2 log N), is proposed to minimize S. © 2004 IEEE.
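
    As a rough illustration of the trade-off (a frame-based model commonly assumed in this line of work, not necessarily the exact f derived in the paper): with N_S configurations, each costing δ to set up within a frame of length T, only T - N_S·δ remains for transmission, while covering an admissible traffic matrix with only N_S ≥ N configurations inflates the required service rate by roughly N_S/(N_S - N + 1); the two factors pull N_S in opposite directions.

        def internal_speedup(N, N_S, delta, T):
            """Illustrative lower bound on the internal speedup S under the assumed
            frame-based model: a covering factor for using only N_S configurations,
            times the time lost to N_S reconfigurations of overhead delta in a frame T."""
            assert N_S >= N and T > N_S * delta
            covering = N_S / (N_S - N + 1)         # more configurations -> closer to 1
            overhead = T / (T - N_S * delta)       # more configurations -> larger
            return covering * overhead

        # Sweep N_S to expose the two opposing effects (hypothetical numbers).
        N, delta, T = 32, 1.0, 500.0
        best = min(range(N, int(T / delta)), key=lambda k: internal_speedup(N, k, delta, T))
        print(best, round(internal_speedup(N, best, delta, T), 3))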

    Minimizing the impact of delay on live SVC-based HTTP adaptive streaming services

    HTTP Adaptive Streaming (HAS) is becoming the de facto standard for Over-The-Top video streaming services. Video content is temporally split into segments which are offered at multiple qualities to the clients. These clients autonomously select the quality layer matching the current state of the network through a quality selection heuristic. Recently, academia and industry have begun evaluating the feasibility of adopting layered video coding for HAS. Instead of downloading one file for a certain quality level, scalable video streaming requires downloading several interdependent layers to obtain the same quality. This implies that the base layer is always downloaded and is available for playout, even when throughput fluctuates and enhancement layers cannot be downloaded in time. This layered video approach can help in providing better service quality assurance for video streaming. However, adopting scalable video coding for HAS also leads to other issues, since requesting multiple files over HTTP increases the impact of the end-to-end delay on the service provided to the client. This is even worse in a live TV scenario, where the drift on the live signal should be minimized, requiring smaller segment and buffer sizes. In this paper, we characterize the impact of delay on several measurement-based heuristics. Furthermore, we propose several ways to overcome the end-to-end delay issues, such as parallel and pipelined downloading of segment layers, to provide a higher quality for the video service.
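
    One of the proposed mitigations, parallel downloading of a segment's layers, can be sketched as follows (hypothetical URLs and file naming, plain HTTP GETs rather than a full HAS client): fetching the base and enhancement layers concurrently pays the end-to-end round trip once per segment rather than once per layer, and playout falls back to whatever contiguous set of layers arrived in time.

        from concurrent.futures import ThreadPoolExecutor
        from urllib.request import urlopen

        # Hypothetical layer URLs for one video segment (base layer first).
        LAYER_URLS = [
            "http://server.example/seg42_L0.svc",  # base layer
            "http://server.example/seg42_L1.svc",  # enhancement layer 1
            "http://server.example/seg42_L2.svc",  # enhancement layer 2
        ]

        def fetch(url, timeout=2.0):
            try:
                with urlopen(url, timeout=timeout) as resp:
                    return resp.read()
            except OSError:
                return None                        # a late or failed layer counts as missing

        with ThreadPoolExecutor(max_workers=len(LAYER_URLS)) as pool:
            layers = list(pool.map(fetch, LAYER_URLS))

        # SVC decoding needs layer k only if layers 0..k-1 arrived, so play out
        # the longest contiguous prefix of layers that made it in time.
        usable = []
        for data in layers:
            if data is None:
                break
            usable.append(data)
        print(f"decodable layers this segment: {len(usable)}")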

    Traffic scheduling in non-blocking optical packet switches with minimum delay

    For performance-guaranteed OPS switches with reconfiguration overhead, it has been shown that packet delay can be minimized by using N switch configurations (where N is the switch size) to schedule the traffic. However, this usually involves an exorbitant speedup requirement, which makes it impractical under current technology. In this paper, a new minimum-delay scheduling algorithm, QLEF (Quasi Largest-Entry-First), is proposed. We prove that QLEF pushes the required speedup bound to the lowest known level. As an example, when N = 950, QLEF only requires a speedup of S_schedule = 21.33 instead of 42.25 for MIN [5] and 30.27 for αi-SCALE [8]. This gives a 50% improvement over MIN and 30% over αi-SCALE. © 2005 IEEE.
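
    The reported gains follow directly from the quoted figures; a quick check using the numbers in the abstract:

        s_qlef, s_min, s_ascale = 21.33, 42.25, 30.27
        print(f"vs MIN:      {(1 - s_qlef / s_min) * 100:.1f}% lower speedup")     # ~49.5%
        print(f"vs αi-SCALE: {(1 - s_qlef / s_ascale) * 100:.1f}% lower speedup")  # ~29.5%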

    Packet Transactions: High-level Programming for Line-Rate Switches

    Many algorithms for congestion control, scheduling, network measurement, active queue management, security, and load balancing require custom processing of packets as they traverse the data plane of a network switch. To run at line rate, these data-plane algorithms must be implemented in hardware. With today's switch hardware, algorithms cannot be changed, nor new algorithms installed, after a switch has been built. This paper shows how to program data-plane algorithms in a high-level language and compile those programs into low-level microcode that can run on emerging programmable line-rate switching chipsets. The key challenge is that these algorithms create and modify algorithmic state. The key idea to achieve line-rate programmability for stateful algorithms is the notion of a packet transaction: a sequential code block that is atomic and isolated from other such code blocks. We have developed this idea in Domino, a C-like imperative language to express data-plane algorithms. We show with many examples that Domino provides a convenient and natural way to express sophisticated data-plane algorithms, and show that these algorithms can be run at line rate with modest estimated die-area overhead. Comment: 16 pages.
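
    The transaction semantics can be illustrated with a small sketch (Python here, with a lock standing in for hardware atomicity; Domino itself is C-like and this is not its syntax): each packet's code block reads and modifies switch state atomically and in isolation, so blocks for different packets never observe each other's partial updates.

        import threading

        class PacketTransaction:
            """Run a per-packet code block atomically over shared switch state.
            (Semantics illustration only; a line-rate switch enforces this in a
            hardware pipeline, not with locks.)"""
            def __init__(self):
                self.state = {"count": {}, "last_seen": {}}
                self._lock = threading.Lock()

            def run(self, body, pkt):
                with self._lock:                   # atomic and isolated
                    return body(self.state, pkt)

        def per_packet_body(state, pkt):
            # A stateful per-packet block: count packets per flow and remember the
            # last arrival time (hypothetical packet fields, for illustration).
            flow = pkt["flow"]
            state["count"][flow] = state["count"].get(flow, 0) + 1
            state["last_seen"][flow] = pkt["arrival"]
            return state["count"][flow]

        txn = PacketTransaction()
        for pkt in ({"flow": "A", "arrival": 1.0}, {"flow": "A", "arrival": 1.2},
                    {"flow": "B", "arrival": 1.3}):
            print(pkt["flow"], txn.run(per_packet_body, pkt))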