RCD: Rapid Close to Deadline Scheduling for Datacenter Networks
Datacenter-based Cloud Computing services provide a flexible, scalable, and economical infrastructure to host online services such as multimedia streaming, email, and bulk storage. Many such services perform geo-replication to provide the necessary quality of service and reliability to users, resulting in frequent large inter-datacenter transfers. To meet tenant service level agreements (SLAs), these transfers have to be completed before a deadline. In addition, WAN resources are scarce and costly, so they should be fully utilized.
Several recently proposed schemes, such as B4, TEMPUS, and SWAN, have focused on improving the utilization of inter-datacenter transfers through centralized scheduling; however, they provide no mechanism to guarantee that admitted requests meet their deadlines. A recent study proposes Amoeba, a system that allows tenants to define deadlines and guarantees that they are met; however, to admit new traffic, Amoeba has to modify the allocation of already admitted transfers. In this paper, we propose Rapid Close to Deadline
Scheduling (RCD), a close to deadline traffic allocation technique that is fast
and efficient. Through simulations, we show that RCD is up to 15 times faster
than Amoeba, provides high link utilization along with deadline guarantees, and
is able to make quick decisions on whether a new request can be fully satisfied
before its deadline.
Comment: World Automation Congress (WAC), IEEE, 201
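The close-to-deadline idea above can be sketched in a few lines. This is a minimal, hedged illustration assuming a single link divided into timeslots; the function name, the timeslot model, and the backward-filling order are assumptions for exposition, not RCD's exact formulation.

```python
def admit_close_to_deadline(capacity, allocation, volume, deadline):
    """Try to place `volume` units of demand on one link before `deadline`.

    capacity   -- link capacity available in each timeslot
    allocation -- list of already-reserved capacity per timeslot (mutated on admit)
    volume     -- total demand of the new request
    deadline   -- index one past the last usable timeslot

    Fills residual capacity starting from the slot nearest the deadline, so
    earlier slots stay free for future, tighter requests. Returns True and
    commits the reservation only if the whole request fits before the deadline.
    """
    plan = {}
    remaining = volume
    for t in range(deadline - 1, -1, -1):      # walk backwards from the deadline
        free = capacity - allocation[t]
        if free <= 0:
            continue
        take = min(free, remaining)
        plan[t] = take
        remaining -= take
        if remaining == 0:
            break
    if remaining > 0:                          # cannot meet the deadline: reject
        return False
    for t, take in plan.items():               # commit only on full admission
        allocation[t] += take
    return True
```

Because admission is a single backward pass with no re-allocation of existing transfers, the yes/no answer for a new request is immediate, which is the property the abstract contrasts with Amoeba.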
Short vs. long flows: a battle that both can win
In this paper, we introduce MMPTCP, a hybrid transport protocol that aims to unify the way data is transported in data centres. MMPTCP runs in two phases: initially, it randomly scatters packets in the network under a single congestion window, exploiting all available paths; this is beneficial to latency-sensitive flows. During the second phase, MMPTCP runs in Multi-Path TCP mode, which has been shown to be very efficient for long flows. Initial evaluation shows that our approach significantly improves short flow completion times while providing high throughput for long flows and high overall network utilisation.
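The two-phase behaviour can be sketched as follows. This is an illustrative model only: the byte threshold, the class name, and the least-loaded stand-in for MPTCP's coupled congestion control are assumptions, not details from the paper.

```python
import random

SCATTER_THRESHOLD = 100 * 1460  # switch after ~100 full-sized segments (assumed)

class MMPTCPFlowSketch:
    """Toy model of MMPTCP's phase switch, not a protocol implementation."""

    def __init__(self, paths):
        self.paths = paths
        self.bytes_sent = 0
        self.subflow_bytes = {p: 0 for p in paths}  # per-path load in phase 2

    def next_path(self, segment_size=1460):
        if self.bytes_sent < SCATTER_THRESHOLD:
            # Phase 1: random packet scatter across all paths under one
            # congestion window -- good for short, latency-sensitive flows.
            path = random.choice(self.paths)
        else:
            # Phase 2: stand-in for MPTCP-style per-subflow behaviour; here we
            # simply pick the least-loaded subflow to keep the sketch runnable.
            path = min(self.subflow_bytes, key=self.subflow_bytes.get)
            self.subflow_bytes[path] += segment_size
        self.bytes_sent += segment_size
        return path
```

The design point is that short flows finish entirely inside phase 1 and never pay MPTCP's per-subflow window ramp-up, while long flows transition to it once the scatter budget is spent.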
Fair packet enqueueing and marking in multi-queue datacenter networks
Recently, Explicit Congestion Notification (ECN) has been leveraged by most Datacenter Network (DCN) protocols for congestion control to achieve high throughput and low latency. However, the majority of these approaches assume that each switch port has one queue, while industry trends point towards multiple queues per switch port. To this end, we propose ML-ECN, a fairness-aware packet enqueueing and multi-level probabilistic ECN marking scheme for DCNs with multiple-service, multiple-queue switch ports. The design of ML-ECN relies on separating small, medium, and large flows by dedicating multiple queues to each flow class to ensure fair enqueueing. ML-ECN employs one ECN marking threshold for the small queue class and multiple thresholds with probabilistic marking for the medium and large queue classes, achieving low latency for mice (small) flows and high throughput for elephant (large) flows. In addition, ML-ECN performs fairness-aware ECN marking that ensures packets of short flows are not marked due to buffer buildups caused by longer flows. Large-scale ns-2 simulations show that ML-ECN outperforms existing approaches across different performance metrics.
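The single-threshold versus multi-level marking contrast can be sketched as below. The threshold values, level probabilities, and class names are illustrative assumptions, not ML-ECN's actual parameters.

```python
import random

SMALL_THRESHOLD = 20  # packets; one deterministic threshold for mice queues (assumed)

# (occupancy_threshold, marking_probability) levels for medium/large queues,
# checked from the highest occupancy down (values assumed for illustration)
LEVELS = [(80, 1.0), (50, 0.5), (30, 0.1)]

def should_mark(queue_len, queue_class, rand=random.random):
    """Decide whether to set the ECN Congestion Experienced bit on a packet."""
    if queue_class == "small":
        # Mice queues: a single hard threshold keeps latency low without
        # letting longer flows' buffer buildup cause spurious marks.
        return queue_len > SMALL_THRESHOLD
    # Medium/large queues: marking ramps up probabilistically with occupancy,
    # throttling elephants gradually instead of all at once.
    for threshold, prob in LEVELS:
        if queue_len > threshold:
            return rand() < prob
    return False
```

Injecting `rand` makes the probabilistic branch testable; in a switch data path the levels would be per-queue-class configuration rather than module constants.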
Re-architecting datacenter networks and stacks for low latency and high performance
© 2017 ACM. Modern datacenter networks provide very high capacity via redundant Clos topologies and low switch latency, but transport protocols rarely deliver matching performance. We present NDP, a novel datacenter transport architecture that achieves near-optimal completion times for short transfers and high flow throughput in a wide range of scenarios, including incast. NDP switch buffers are very shallow and, when they fill, the switches trim packets to headers and priority-forward the headers. This gives receivers a full view of instantaneous demand from all senders and is the basis for our novel, high-performance, multipath-aware transport protocol that can deal gracefully with massive incast events and prioritize traffic from different senders on RTT timescales. We implemented NDP in Linux hosts with DPDK, in a software switch, in a NetFPGA-based hardware switch, and in P4. We evaluate NDP's performance in our implementations and in large-scale simulations, simultaneously demonstrating support for very low latency and high throughput. This work was partly funded by the SSICLOPS H2020 project (644866).
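The trim-and-priority-forward mechanism can be sketched as below. The queue depth, the packet representation, and the class name are assumptions for illustration; real NDP operates on wire-format packets in switch hardware.

```python
from collections import deque

DATA_QUEUE_DEPTH = 8  # NDP buffers are very shallow; exact depth assumed here

class TrimmingSwitchSketch:
    """Toy model of an NDP-style output port with packet trimming."""

    def __init__(self):
        self.data_queue = deque()
        self.header_queue = deque()  # drained with strict priority

    def enqueue(self, packet):
        if len(self.data_queue) < DATA_QUEUE_DEPTH:
            self.data_queue.append(packet)
        else:
            # Trim: drop the payload but forward the header, so the receiver
            # still learns that this sender has data outstanding.
            header = {"src": packet["src"], "dst": packet["dst"],
                      "seq": packet["seq"], "trimmed": True}
            self.header_queue.append(header)

    def dequeue(self):
        # Headers first: trimmed notifications reach receivers quickly even
        # under a massive incast, giving a full view of instantaneous demand.
        if self.header_queue:
            return self.header_queue.popleft()
        if self.data_queue:
            return self.data_queue.popleft()
        return None
```

Because a trimmed header arrives instead of nothing, the receiver can immediately request retransmission and pace senders, which is what makes the incast behaviour graceful.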