
    Reducing Transport Latency for Short Flows with Multipath TCP

    Multipath TCP (MPTCP) is an emerging transport protocol that provides resilience to network failures and improves throughput by splitting a data stream into multiple subflows across all available paths. While MPTCP generally benefits throughput-sensitive large flows that use many subflows, it can harm latency-sensitive small flows. MPTCP assigns each subflow its own congestion window, making short flows susceptible to timeouts when a flow contains only a few packets. The problem worsens when the paths have heterogeneous characteristics: packets are reordered and slow paths may still be used, increasing end-to-end delay and lowering application goodput. It is therefore important to choose appropriate subflows for each MPTCP connection to achieve good performance. However, the subflows of an MPTCP connection are determined before the connection is established and usually remain unchanged for its lifetime. To address this issue, we propose DMPTCP, which dynamically adjusts the set of subflows according to application workloads. Specifically, DMPTCP uses TCP modeling to estimate the latency on the path under scheduling and the amount of data sent on the other paths simultaneously, and then periodically decides the set of subflows to use for an application, with the goal of reducing completion time for short flows and achieving higher throughput for long flows. We implement DMPTCP in a Linux server and conduct extensive experiments in both NS3 and a Linux testbed to validate its effectiveness. Our evaluation shows that DMPTCP decreases completion time by over 46.55% compared to conventional MPTCP for short flows, while increasing goodput by up to 21.3% for long-lived flows.
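
    The abstract describes estimating, per candidate path, how long the flow under scheduling would take and then periodically choosing the subflow set. Below is a minimal, illustrative sketch of that idea in Python, assuming a simple slow-start model for the latency estimate; the names (PathStats, estimate_fct, select_subflows), the 100 KB short-flow threshold, and the loss penalty are assumptions for illustration, not DMPTCP's actual kernel implementation.

        # Illustrative sketch only: a simplified slow-start model for estimating how long
        # a flow of flow_bytes would take on a candidate path, and a selection of
        # subflows for a connection based on that estimate.
        import math
        from dataclasses import dataclass

        MSS = 1448          # bytes per segment (typical for Ethernet)
        INIT_CWND = 10      # initial congestion window in segments (RFC 6928)

        @dataclass
        class PathStats:
            name: str
            rtt: float       # seconds (smoothed RTT)
            loss: float      # observed loss rate, used here only as a penalty

        def estimate_fct(flow_bytes: float, p: PathStats) -> float:
            """Rounds of slow start needed to deliver flow_bytes, times the path RTT."""
            segments = math.ceil(flow_bytes / MSS)
            # cwnd doubles each RTT, so after r rounds INIT_CWND*(2^r - 1) segments fit:
            # r = ceil(log2(segments / INIT_CWND + 1))
            rounds = math.ceil(math.log2(segments / INIT_CWND + 1)) if segments > INIT_CWND else 1
            return rounds * p.rtt * (1 + p.loss * 10)   # crude loss penalty (assumed)

        def select_subflows(flow_bytes: float, paths: list) -> list:
            """Short flows: use only the fastest path; long flows: use every path."""
            ranked = sorted(paths, key=lambda p: estimate_fct(flow_bytes, p))
            short_flow = flow_bytes <= 100 * 1024       # threshold is an assumption
            return ranked[:1] if short_flow else ranked

        paths = [PathStats("wifi", rtt=0.020, loss=0.001), PathStats("lte", rtt=0.060, loss=0.01)]
        print([p.name for p in select_subflows(50 * 1024, paths)])    # short flow -> ['wifi']
        print([p.name for p in select_subflows(20_000_000, paths)])   # long flow  -> both paths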

    Packet Reordering Response for MPTCP under Wireless Heterogeneous Environment

    MMPTCP: a multipath transport protocol for data centers

    Modern data centres provide large aggregate network capacity and multiple paths among servers. Traffic is very diverse: most of the data is produced by long, bandwidth-hungry flows, but the large majority of flows are short and commonly come with strict deadlines on their completion time. It has been shown that TCP is not efficient for either type of traffic in modern data centres. More recent protocols such as MultiPath TCP (MPTCP) are very efficient for long flows but ill-suited for short flows. In this paper, we present MMPTCP, a novel transport protocol which, compared to TCP and MPTCP, reduces short flows' completion times while providing excellent goodput to long flows. MMPTCP runs in two phases: initially, it randomly scatters packets in the network under a single congestion window, exploiting all available paths, which benefits latency-sensitive flows. After a specific amount of data has been sent, MMPTCP switches to regular MPTCP mode. MMPTCP is incrementally deployable in existing data centres, as it requires no modifications outside the transport layer and behaves well when competing with legacy TCP and MPTCP flows. Our extensive experimental evaluation shows that all design objectives for MMPTCP are met.
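
    The two-phase behaviour described above (per-packet scatter under a single congestion window, then a switch to regular MPTCP after a fixed amount of data) can be sketched as a small sender-side state machine. This is an illustrative sketch only: the threshold value and helper names are assumptions, and the MPTCP scheduler is reduced to a placeholder.

        # Illustrative sketch, not the authors' implementation: a sender-side state
        # machine showing MMPTCP's two phases.
        import random

        SCATTER_THRESHOLD = 100 * 1024   # bytes to send in phase 1 (assumed value)

        class MMPTCPSender:
            def __init__(self, paths):
                self.paths = paths           # e.g. identifiers the fabric can hash per packet
                self.bytes_sent = 0
                self.phase = "scatter"       # single congestion window, per-packet spraying

            def path_for_next_packet(self, packet_len):
                if self.phase == "scatter" and self.bytes_sent >= SCATTER_THRESHOLD:
                    self.phase = "mptcp"     # switch to regular MPTCP subflow scheduling
                self.bytes_sent += packet_len
                if self.phase == "scatter":
                    return random.choice(self.paths)       # per-packet random path
                return self.pick_mptcp_subflow()

            def pick_mptcp_subflow(self):
                # Placeholder: a real MPTCP scheduler would pick the subflow with the
                # lowest RTT that still has congestion-window space available.
                return self.paths[0]

        sender = MMPTCPSender(paths=["core-A", "core-B", "core-C"])
        for _ in range(200):                 # 200 full-size packets, roughly 290 KB
            sender.path_for_next_packet(1448)
        print(sender.phase)                  # -> 'mptcp' once past the scatter threshold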

    Re-architecting datacenter networks and stacks for low latency and high performance

    Modern datacenter networks provide very high capacity via redundant Clos topologies and low switch latency, but transport protocols rarely deliver matching performance. We present NDP, a novel datacenter transport architecture that achieves near-optimal completion times for short transfers and high flow throughput in a wide range of scenarios, including incast. NDP switch buffers are very shallow, and when they fill, the switches trim packets to headers and priority-forward the headers. This gives receivers a full view of the instantaneous demand from all senders and is the basis for our novel, high-performance, multipath-aware transport protocol that deals gracefully with massive incast events and prioritizes traffic from different senders on RTT timescales. We implemented NDP in Linux hosts with DPDK, in a software switch, in a NetFPGA-based hardware switch, and in P4. We evaluate NDP's performance in our implementations and in large-scale simulations, simultaneously demonstrating support for very low latency and high throughput. This work was partly funded by the SSICLOPS H2020 project (644866).
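
    The core switch mechanism in the abstract, trimming packets to headers when a shallow buffer fills and forwarding those headers with priority, can be sketched with a simple two-queue output-port model. The queue limit and the Packet/OutputPort classes are assumptions for illustration, not NDP's actual data plane.

        # Minimal sketch of NDP-style packet trimming at a switch output port,
        # assuming a two-queue model in which trimmed headers get strict priority.
        from collections import deque
        from dataclasses import dataclass

        DATA_QUEUE_LIMIT = 8          # NDP keeps switch buffers very shallow

        @dataclass
        class Packet:
            src: str
            dst: str
            seq: int
            payload_len: int
            trimmed: bool = False     # True once the payload has been cut off

        class OutputPort:
            def __init__(self):
                self.data_q = deque()        # low priority: full packets
                self.header_q = deque()      # high priority: trimmed headers

            def enqueue(self, pkt: Packet):
                if len(self.data_q) < DATA_QUEUE_LIMIT:
                    self.data_q.append(pkt)
                else:
                    # Buffer full: trim to header instead of dropping, so the receiver
                    # still learns the packet existed and can ask for a resend.
                    pkt.trimmed = True
                    pkt.payload_len = 0
                    self.header_q.append(pkt)

            def dequeue(self):
                # Headers are forwarded with strict priority over full data packets.
                if self.header_q:
                    return self.header_q.popleft()
                return self.data_q.popleft() if self.data_q else None

        port = OutputPort()
        for seq in range(12):                      # an incast burst overflows the buffer
            port.enqueue(Packet("sender", "receiver", seq, payload_len=1448))
        first = port.dequeue()
        print(first.seq, first.trimmed)            # -> 8 True (a trimmed header jumps the queue)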

    Improving Network Performance Through Endpoint Diagnosis And Multipath Communications

    Components of networks, and by extension the internet, can fail. It is therefore important to find the points of failure and resolve existing issues as quickly as possible. Resolution, however, takes time, and it is important to maintain a high quality of service (QoS) for existing clients while it is in progress. In this work, our goal is to provide clients with means of avoiding failures when possible, maintaining high QoS, while enabling them to assist in the diagnosis process to speed up the time to recovery. Fixing a failure relies on first detecting that there is one and then identifying where it occurred so that it can be remedied. We take a two-step approach in our solution. First, we identify the entity (client, server, network) responsible for the failure. Next, if a failure is identified as network-related, additional algorithms are triggered to detect the device responsible. To achieve the first step, we revisit the question: how much can you infer about a failure using TCP statistics collected at one of the endpoints in a connection? Using an agent that captures TCP statistics at one of the endpoints, we devise a classification algorithm that identifies the root cause of failures. Using insights derived from this classification algorithm, we identify the dominant TCP metrics that indicate where and why problems occur. If a failure is identified as a network-related problem, the second step is triggered, in which the algorithm uses additional information collected from "failed" connections to identify the device that caused the failure. Failures are also disruptive to user performance, and resolution may take time, so it is important to shield clients from their effects as much as possible. One option for avoiding problems caused by failures is to rely on multiple paths, since they are unlikely to fail at the same time. The use of multiple paths involves both selecting paths (routing) and using them effectively. The second part of this thesis explores the efficacy of multipath communication in such situations. Multipath communication is expected to have monetary implications for ISPs and content providers; our solution therefore aims to minimize such costs to the content providers while significantly improving user performance.
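
    As a rough illustration of the first diagnosis step, the sketch below classifies a failed connection as a client, server, or network problem from endpoint TCP statistics. The features are loosely modelled on what an endpoint agent could export (for example Linux TCP_INFO counters plus socket queue lengths); the thresholds and rules are illustrative assumptions, not the classifier devised in the thesis.

        # Hedged, rule-based sketch of classifying a failed connection's root cause
        # from statistics visible at one endpoint. Thresholds are assumptions.
        from dataclasses import dataclass

        @dataclass
        class TcpStats:
            retrans: int          # total retransmitted segments
            rtt_ms: float         # smoothed round-trip time
            snd_cwnd: int         # sender congestion window, in segments
            rcv_wnd_zero: bool    # did the peer advertise a zero receive window?
            unsent_backlog: int   # bytes the local app queued but TCP never sent

        def classify_failure(s: TcpStats) -> str:
            if s.unsent_backlog > 0 and s.retrans == 0:
                return "client"    # local application stopped producing/consuming data
            if s.rcv_wnd_zero and s.retrans == 0:
                return "server"    # remote endpoint accepted data but its app stalled
            if s.retrans > 10 or (s.snd_cwnd <= 2 and s.rtt_ms > 200):
                return "network"   # persistent loss or congestion-window collapse
            return "unknown"

        print(classify_failure(TcpStats(retrans=25, rtt_ms=350, snd_cwnd=2,
                                        rcv_wnd_zero=False, unsent_backlog=0)))  # -> network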