Datacenter Traffic Control: Understanding Techniques and Trade-offs
Datacenters provide cost-effective and flexible access to scalable compute
and storage resources necessary for today's cloud computing needs. A typical
datacenter is made up of thousands of servers connected with a large network
and usually managed by one operator. To provide high-quality access to the
variety of applications and services hosted in datacenters and to maximize
performance, it is necessary to use datacenter networks effectively and efficiently.
Datacenter traffic is often a mix of several classes with different priorities
and requirements. This includes user-generated interactive traffic, traffic
with deadlines, and long-running traffic. To this end, custom transport
protocols and traffic management techniques have been developed to improve
datacenter network performance.
In this tutorial paper, we review the general architecture of datacenter
networks, various topologies proposed for them, their traffic properties,
general traffic control challenges in datacenters and general traffic control
objectives. The purpose of this paper is to bring out the important
characteristics of traffic control in datacenters, not to survey all
existing solutions (which would be virtually impossible given the massive
body of existing research). We hope to give readers a broad view of the
options and factors to consider when evaluating traffic control mechanisms. We discuss
various characteristics of datacenter traffic control including management
schemes, transmission control, traffic shaping, prioritization, load balancing,
multipathing, and traffic scheduling. Next, we point to several open challenges
as well as new and interesting networking paradigms. At the end of this paper,
we briefly review inter-datacenter networks that connect geographically
dispersed datacenters which have been receiving increasing attention recently
and pose interesting and novel research problems.Comment: Accepted for Publication in IEEE Communications Surveys and Tutorial
Multi-Cell, Multi-Channel Scheduling with Probabilistic Per-Packet Real-Time Guarantee
For mission-critical sensing and control applications such as those to be
enabled by 5G Ultra-Reliable, Low-Latency Communications (URLLC), it is
critical to ensure the communication quality of individual packets.
Prior studies have considered Probabilistic Per-packet Real-time
Communications (PPRC) guarantees for single-cell, single-channel networks with
implicit deadline constraints, but they have not addressed real-world
complexities such as inter-cell interference and multiple communication
channels.
Towards ensuring PPRC in multi-cell, multi-channel wireless networks, we
propose a real-time scheduling algorithm based on
\emph{local-deadline-partition (LDP)}. The LDP algorithm is suitable for
distributed implementation, and it ensures probabilistic per-packet real-time
guarantee for multi-cell, multi-channel networks with general deadline
constraints. We also address the associated challenge of the schedulability
test of PPRC traffic. In particular, we propose the concept of \emph{feasible
set} and identify a closed-form sufficient condition for the schedulability of
PPRC traffic.
We propose a distributed algorithm for the schedulability test, and the
algorithm includes a procedure for finding the minimum sum work density of
feasible sets, which is of independent interest. We also identify a necessary
condition for the schedulability of PPRC traffic, and use numerical studies to
understand a lower bound on the approximation ratio of the LDP algorithm.
We experimentally study the properties of the LDP algorithm and observe that
the PPRC traffic supportable by the LDP algorithm is significantly higher than
that of a state-of-the-art algorithm.
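The core idea of partitioning an end-to-end deadline into per-cell local deadlines can be sketched as follows. This is a simplified illustration only: the paper's LDP algorithm derives local deadlines from feasible-set analysis, whereas the function below (with hypothetical names) simply splits the deadline proportionally to each cell's load.

```python
def partition_deadline(deadline_slots: int, cell_loads: list) -> list:
    """Split an end-to-end relative deadline (in time slots) into local
    per-cell deadlines, proportional to each cell's work density, so a
    heavier-loaded cell gets more slots to schedule the packet.
    Simplified sketch; not the paper's exact LDP derivation."""
    total = sum(cell_loads)
    raw = [deadline_slots * w / total for w in cell_loads]
    # Guarantee every cell at least one slot, truncating the rest.
    local = [max(1, int(r)) for r in raw]
    # Hand any leftover slots to the most heavily loaded cells first.
    leftover = deadline_slots - sum(local)
    order = sorted(range(len(cell_loads)), key=lambda i: -cell_loads[i])
    for i in range(leftover):
        local[order[i % len(order)]] += 1
    return local
```

The local deadlines sum to the end-to-end deadline, so meeting every local deadline implies meeting the end-to-end one; this is the property that makes per-cell (distributed) scheduling decisions possible.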
Maintaining flow isolation in work-conserving flow aggregation
Abstract — In order to improve the scalability of scheduling protocols with bounded end-to-end delay, much effort has focused on reducing the amount of per-flow state at routers. One technique to reduce this state is flow aggregation, in which multiple individual flows are aggregated into a single aggregate flow. In addition to reducing per-flow state, flow aggregation has the advantage of a per-hop delay that is inversely proportional to the rate of the aggregate flow, while in the case of no aggregation, the per-hop delay is inversely proportional to the (smaller) rate of the individual flow. Flow aggregation in general is non-work-conserving. Recently, a work-conserving flow aggregation technique has been proposed. However, it has the disadvantage that the end-to-end delay of an individual flow is related to the burstiness of other flows sharing its aggregate flow. Here, we show how work-conserving flow aggregation may be performed without this drawback, that is, the end-to-end delay of an individual flow is independent of the burstiness of other flows.
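The inverse relationship between per-hop delay and reserved rate can be made concrete with a small calculation. The numbers below are illustrative, not taken from the abstract: in rate-based schedulers the per-hop delay term is roughly one maximum-size packet's transmission time at the reserved rate, so serving a flow inside a faster aggregate shrinks that term.

```python
def per_hop_delay_ms(max_pkt_bytes: int, rate_bps: float) -> float:
    """Per-hop delay term of a rate-based scheduler: roughly the time
    to transmit one maximum-size packet at the reserved rate, hence
    inversely proportional to that rate."""
    return max_pkt_bytes * 8 / rate_bps * 1000.0

# Illustrative numbers: a 1 Mb/s individual flow versus a 10 Mb/s
# aggregate carrying it, with 1500-byte packets.
individual = per_hop_delay_ms(1500, 1_000_000)    # 12.0 ms per hop
aggregate = per_hop_delay_ms(1500, 10_000_000)    # ~1.2 ms per hop
```

This tenfold reduction in the per-hop term is the delay advantage of aggregation that the abstract refers to; the paper's contribution is keeping it work-conserving without coupling a flow's delay to its neighbors' burstiness.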
JiTS: Just-in-Time Scheduling for Real-Time Sensor Data Dissemination
We consider the problem of real-time data dissemination in wireless sensor
networks, in which data are associated with deadlines and it is desired for
data to reach the sink(s) by their deadlines. To this end, existing work on
real-time data dissemination has developed packet scheduling schemes that
prioritize packets according to their deadlines. In this paper, we first
demonstrate that not only the scheduling discipline but also the routing
protocol has a significant impact on the success of real-time sensor data
dissemination. We show that the shortest path routing using the minimum number
of hops leads to considerably better performance than Geographical Forwarding,
which has often been used in existing real-time data dissemination work. We
also observe that packet prioritization by itself is not enough for real-time
data dissemination, since many high priority packets may simultaneously contend
for network resources, deteriorating the network performance. Instead,
real-time packets could be judiciously delayed to avoid severe contention as
long as their deadlines can be met. Based on this observation, we propose a
Just-in-Time Scheduling (JiTS) algorithm for scheduling data transmissions to
alleviate the shortcomings of the existing solutions. We explore several
policies for non-uniformly delaying data at different intermediate nodes to
account for the higher expected contention as the packet gets closer to the
sink(s). By an extensive simulation study, we demonstrate that JiTS can
significantly improve the deadline miss ratio and packet drop ratio compared to
existing approaches in various situations. Notably, JiTS achieves these
improvements while requiring neither lower-layer support nor synchronization
among the sensor nodes.
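One plausible way to sketch such non-uniform, just-in-time delaying is the linear weighting below. This is a hypothetical policy for illustration, not necessarily one of the specific policies the paper evaluates: it spends more of the deadline slack at nodes far from the sink and leaves slack unspent near the sink, where contention is expected to be highest.

```python
def jits_delays(slack_s: float, path_hops: int) -> list:
    """Split the remaining deadline slack (seconds) across the hops of a
    path, assigning larger holding delays far from the sink and smaller
    ones near it. Hypothetical linear weighting for illustration."""
    # Hop k = 0 is the source; hop k = path_hops - 1 is adjacent to the
    # sink. Weights decrease linearly toward the sink.
    weights = [path_hops - k for k in range(path_hops)]
    total = sum(weights)
    return [slack_s * w / total for w in weights]
```

Because the per-hop delays sum to the available slack, a packet held this way still arrives by its deadline in the absence of queueing surprises, while upstream holding spreads out contention before packets converge near the sink.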