
    Endpoint-transparent Multipath Transport with Software-defined Networks

    Multipath forwarding consists of using multiple paths simultaneously to transport data over the network. While most such techniques require endpoint modifications, we investigate how multipath forwarding can be done inside the network, transparently to the endpoint hosts. With such a network-centric approach, packet reordering becomes a critical issue, as it may cause severe performance degradation. We present a Software-Defined Network architecture which automatically sets up multipath forwarding, including solutions for reordering and performance improvement, both at the sending side, through multipath scheduling algorithms, and at the receiving side, by resequencing out-of-order packets in a dedicated in-network buffer. We implemented a prototype with commonly available technology and evaluated it in both emulated and real networks. Our results show consistent throughput improvements, thanks to the use of aggregated path capacity. We also compare our approach to Multipath TCP and show that it can achieve similar performance while offering the advantage of endpoint transparency.
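    The abstract contains no code; the following minimal sketch (not the authors' implementation) illustrates how an in-network resequencing buffer of the kind described might release out-of-order packets arriving over multiple paths, assuming per-flow sequence numbers and a simple head-of-line timeout. All names and the timeout value are illustrative.

    ```python
    import heapq
    import time

    class ResequencingBuffer:
        """Release packets in sequence-number order; a gap is given up on
        after a timeout so that a lost packet cannot stall the flow forever."""

        def __init__(self, gap_timeout=0.05):
            self.next_seq = 0       # next in-order sequence number expected
            self.heap = []          # min-heap of (seq, arrival_time, packet)
            self.gap_timeout = gap_timeout

        def push(self, seq, packet):
            heapq.heappush(self.heap, (seq, time.monotonic(), packet))

        def pop_ready(self):
            """Return the packets that can be delivered in order right now."""
            ready, now = [], time.monotonic()
            while self.heap:
                seq, arrived, packet = self.heap[0]
                if seq <= self.next_seq:
                    # in order (or a duplicate of an already skipped gap)
                    heapq.heappop(self.heap)
                    ready.append(packet)
                    self.next_seq = max(self.next_seq, seq + 1)
                elif now - arrived > self.gap_timeout:
                    # head-of-line packet has waited too long: skip the gap
                    heapq.heappop(self.heap)
                    ready.append(packet)
                    self.next_seq = seq + 1
                else:
                    break
            return ready
    ```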

    A resequencing model for high speed networks

    In this paper, we propose a framework to study the resequencing mechanism in high-speed networks. This framework allows us to estimate the distributions of the packet resequencing delay, the total packet delay, and the resequencing buffer occupancy when data traffic is dispersed over multiple disjoint paths. In contrast to most existing work, the estimation of the end-to-end path delay distribution is decoupled from the queueing model for resequencing. This leads to a simple yet general model, which can be combined with other measurement-based tools for estimating the end-to-end path delay distribution in order to find an optimal split of traffic. We consider a multiple-node M/M/1 tandem network as the path model. When end-to-end path delays are Gaussian distributed, our results show that the packet resequencing delay, the total packet delay, and the resequencing buffer occupancy drop when the traffic is spread over a larger number of homogeneous paths, although the performance improvement quickly saturates as the number of paths increases. We find that the number of paths used in multipath routing should be small, say up to three. Moreover, the optimal split of traffic places equal loads on the paths.
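    As a rough illustration of the kind of estimate such a framework produces (not the paper's actual model), the Monte Carlo sketch below splits a Poisson packet stream evenly over k homogeneous paths, approximates each path by a multiple-node M/M/1 tandem whose end-to-end delay is taken as Gaussian, and measures the mean resequencing delay at the receiver. The arrival rate, service rate, node count, and the Gaussian approximation are assumptions made for the example.

    ```python
    import numpy as np

    def mean_resequencing_delay(num_paths, lam=0.8, mu=1.0, n_nodes=5,
                                n_packets=200_000, seed=0):
        """Monte Carlo sketch: packets arriving at rate `lam` are split evenly
        over `num_paths` paths, each modelled as `n_nodes` M/M/1 queues in
        tandem; the end-to-end path delay is approximated as Gaussian with the
        tandem's mean and variance.  The receiver releases packets in
        transmission order, and the extra wait is the resequencing delay."""
        rng = np.random.default_rng(seed)
        rho = lam / num_paths                     # load offered to each path
        t_mean = n_nodes / (mu - rho)             # mean sojourn time of the tandem
        t_var = n_nodes / (mu - rho) ** 2         # variance (independent nodes)
        send = np.cumsum(rng.exponential(1.0 / lam, n_packets))   # Poisson sends
        arrive = send + rng.normal(t_mean, np.sqrt(t_var), n_packets)
        release = np.maximum.accumulate(arrive)   # instant of in-order delivery
        return float(np.mean(release - arrive))   # mean resequencing wait

    for k in (1, 2, 3, 4, 5):
        print(f"{k} path(s): mean resequencing delay ~ {mean_resequencing_delay(k):.3f}")
    ```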

    Flush communication channels: Effective implementation and verification

    Flush communication channels, or F-channels, generalize more conventional asynchronous communication paradigms. A distributed system which uses an F-channel allows a programmer to define the delivery order of each message in relation to other messages transmitted on the channel. Unreliable datagrams and FIFO (first-in-first-out) communication channels have strictly defined delivery semantics. With unreliable datagrams, no restrictions are placed on message delivery order: delivery is completely unordered. FIFO channels, on the other hand, insist that messages be delivered in the order of their transmission. Flush channels can provide either of these delivery order semantics; in addition, F-channels allow the user to define the delivery of a message to be after the delivery of all messages previously transmitted, or before the delivery of all messages subsequently transmitted, or both. A system which communicates over a flush channel has a message delivery order that is a partial order. Dynamically specifying a partial message delivery order complicates many aspects of how we implement and reason about the communication channel. From the system's perspective, we develop a feasible implementation protocol and prove its correctness. The protocol effectively handles the partially ordered message delivery. From the user's perspective, we derive an axiomatic verification methodology for flush applications. The added flexibility of defining the delivery order dynamically slightly increases the complexity for the application programmer. Our verification work helps the user deal effectively with the partially ordered message delivery of flush communication.
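    To make the delivery semantics concrete, here is a small illustrative sketch (not the implementation protocol developed in this work): the sender tags each message with the identifiers it must follow, according to whether it is ordinary, must be delivered after all previously transmitted messages, before all subsequently transmitted messages, or both; the receiver then delivers a message only once everything it must follow has been delivered. All class and field names are hypothetical.

    ```python
    from enum import Enum, auto

    class Flush(Enum):
        ORDINARY = auto()   # no ordering constraint of its own
        BACKWARD = auto()   # delivered after everything sent before it
        FORWARD = auto()    # delivered before everything sent after it
        TWO_WAY = auto()    # both constraints (acts like a barrier)

    class FlushSender:
        """Tags each message with the ids it must follow, so the receiver can
        enforce the partial order without further protocol state."""

        def __init__(self):
            self.sent = []       # ids of all messages sent so far
            self.barrier = []    # ids every later message must follow

        def send(self, msg_id, kind, payload):
            after = list(self.barrier)
            if kind in (Flush.BACKWARD, Flush.TWO_WAY):
                after = list(self.sent)         # follow all earlier sends
            if kind in (Flush.FORWARD, Flush.TWO_WAY):
                self.barrier.append(msg_id)     # later sends must follow this one
            self.sent.append(msg_id)
            return {"id": msg_id, "after": set(after), "payload": payload}

    class FlushReceiver:
        def __init__(self):
            self.delivered = set()
            self.pending = []

        def receive(self, msg):
            """Buffer the message and deliver everything whose constraints are met."""
            self.pending.append(msg)
            ready, progress = [], True
            while progress:
                progress = False
                for m in list(self.pending):
                    if m["after"] <= self.delivered:
                        self.pending.remove(m)
                        self.delivered.add(m["id"])
                        ready.append(m["payload"])
                        progress = True
            return ready
    ```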

    Delivering Consistent Network Performance in Multi-tenant Data Centers

    Data centers are growing rapidly in size and have recently begun acquiring a new role as cloud hosting platforms, allowing outside developers to deploy their own applications at large scale. As a result, today's data centers are multi-tenant environments that host an increasingly diverse set of applications, many of which have very demanding networking requirements. This has prompted research into new data center architectures that offer increased capacity by using topologies that introduce multiple paths between servers. To achieve consistent network performance in these networks, traffic must be effectively load balanced among the available paths. In addition, some form of system-wide traffic regulation is necessary to provide performance guarantees to tenants. To address these issues, this thesis introduces several software-based mechanisms that were inspired by techniques used to regulate traffic in the interconnects of scalable Internet routers. In particular, we borrow two key concepts that serve as the basis for our approach. First, we investigate packet-level routing techniques similar to those used to balance load effectively in routers. This work is novel in the data center context because most existing approaches route traffic at the level of flows to prevent their packets from arriving out of order. We show that routing at the packet level allows for far more efficient use of the network's resources, and we provide a novel resequencing scheme to deal with out-of-order arrivals. Second, we introduce distributed scheduling as a means to engineer traffic in data centers. In routers, distributed scheduling controls the rates between ports on different line cards, enabling traffic to move efficiently through the interconnect. We apply the same basic idea to schedule rates between servers in the data center. We show that scheduling can prevent congestion from occurring and can be used as a flexible mechanism to support network performance guarantees for tenants. In contrast to previous work, which relied on centralized controllers to schedule traffic, our approach is fully distributed, and we provide a novel distributed algorithm to control rates. In addition, we introduce an optimization problem called backlog scheduling to study scheduling strategies that facilitate more efficient application execution.
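    As a toy illustration of the second idea, scheduling rates between servers, the sketch below lets each source divide its egress capacity over its active destinations and each destination divide its ingress capacity over its active sources, with a flow receiving the smaller of the two shares. This is only a sketch of the general concept, not the distributed algorithm developed in the thesis; all names and capacities are made up.

    ```python
    def schedule_rates(active_flows, egress_cap, ingress_cap):
        """Toy server-to-server rate scheduler: a flow (src, dst) gets the
        smaller of an equal share of src's egress capacity and an equal
        share of dst's ingress capacity."""
        src_count, dst_count = {}, {}
        for s, d in active_flows:
            src_count[s] = src_count.get(s, 0) + 1
            dst_count[d] = dst_count.get(d, 0) + 1
        return {(s, d): min(egress_cap[s] / src_count[s],
                            ingress_cap[d] / dst_count[d])
                for s, d in active_flows}

    # Example: two senders share server C's 10 Gb/s ingress link
    print(schedule_rates([("A", "C"), ("B", "C")],
                         egress_cap={"A": 10, "B": 10},
                         ingress_cap={"C": 10}))
    # -> {('A', 'C'): 5.0, ('B', 'C'): 5.0}
    ```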

    Functional safety specification of communication profile PROFIsafe

    The paper surveys trends in safety-related communication within PROFIBUS and PROFINET industrial networks. It analyses the safety measures and fail-safe parameters of the PROFIsafe profile in version V2 and their placement in the Safety Communication Layer (SCL), which guarantees a Safety Integrity Level (SIL) according to the IEC 61508 standard. The last chapter analyses the reaction to a fault occurring during message transmission.
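    Purely for illustration, the sketch below shows the general kinds of receive-side checks a safety communication layer of this sort relies on: a consecutive message counter, an integrity code, and a watchdog timeout, with a transition to a fail-safe state when any check fails. It is not the PROFIsafe frame format or algorithm; the CRC-32 used here merely stands in for the profile's actual integrity code, and all names and parameters are invented.

    ```python
    import time
    import zlib

    class SafetyLayerMonitor:
        """Generic safety-layer receive check: verifies an integrity code,
        a consecutive message counter, and a watchdog timeout, switching to
        a fail-safe state when a check fails."""

        def __init__(self, watchdog_s=0.1):
            self.expected_counter = 0
            self.watchdog_s = watchdog_s
            self.last_ok = time.monotonic()
            self.fail_safe = False

        def check(self, payload: bytes, counter: int, crc: int) -> bool:
            now = time.monotonic()
            timely = (now - self.last_ok) <= self.watchdog_s
            intact = zlib.crc32(payload + counter.to_bytes(4, "big")) == crc
            in_sequence = counter == self.expected_counter
            if timely and intact and in_sequence:
                self.expected_counter = (counter + 1) % 2**32
                self.last_ok = now
                return True
            self.fail_safe = True   # reaction to a fault during transmission
            return False
    ```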

    Sojourn times in queueing networks
