111 research outputs found
Datacenter Traffic Control: Understanding Techniques and Trade-offs
Datacenters provide cost-effective and flexible access to scalable compute
and storage resources necessary for today's cloud computing needs. A typical
datacenter is made up of thousands of servers connected with a large network
and usually managed by one operator. To provide quality access to the variety
of applications and services hosted in datacenters and to maximize performance,
it is necessary to use datacenter networks effectively and efficiently.
Datacenter traffic is often a mix of several classes with different priorities
and requirements. This includes user-generated interactive traffic, traffic
with deadlines, and long-running traffic. To this end, custom transport
protocols and traffic management techniques have been developed to improve
datacenter network performance.
In this tutorial paper, we review the general architecture of datacenter
networks, various topologies proposed for them, their traffic properties,
general traffic control challenges in datacenters and general traffic control
objectives. The purpose of this paper is to bring out the important
characteristics of traffic control in datacenters and not to survey all
existing solutions (as that is virtually impossible given the massive body of
existing research). We hope to provide readers with a wide range of options and
factors to consider when evaluating traffic control mechanisms. We discuss
various characteristics of datacenter traffic control including management
schemes, transmission control, traffic shaping, prioritization, load balancing,
multipathing, and traffic scheduling. Next, we point to several open challenges
as well as new and interesting networking paradigms. At the end of this paper,
we briefly review inter-datacenter networks that connect geographically
dispersed datacenters, which have been receiving increasing attention recently
and pose interesting and novel research problems.
Comment: Accepted for publication in IEEE Communications Surveys and Tutorials.
ATP: a Datacenter Approximate Transmission Protocol
Many datacenter applications such as machine learning and streaming systems
do not need the complete set of data to perform their computation. Current
approximate applications in datacenters run on a reliable network layer like
TCP. To improve performance, they either let the sender select a subset of the
data and transmit it to the receiver, or transmit all the data and let the
receiver drop some of it. These approaches are network-oblivious and
unnecessarily transmit more data, affecting both application runtime and
network bandwidth usage. On the other hand, running approximate applications on
a lossy network with UDP cannot guarantee the accuracy of the computation. We
propose to run
approximate applications on a lossy network and to allow packet loss in a
controlled manner. Specifically, we designed a new network protocol called
Approximate Transmission Protocol, or ATP, for datacenter approximate
applications. ATP opportunistically exploits available network bandwidth as
much as possible, while performing a loss-based rate control algorithm to avoid
bandwidth waste and re-transmission. It also ensures bandwidth fair sharing
across flows and improves accurate applications' performance by leaving more
switch buffer space to accurate flows. We evaluated ATP with both simulation
and real implementation using two macro-benchmarks and two real applications,
Apache Kafka and Flink. Our evaluation results show that ATP reduces
application runtime by 13.9% to 74.6% compared to a TCP-based solution that
drops packets at the sender, and it improves accuracy by up to 94.0% compared
to UDP.
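ATP's actual rate control algorithm is specified in the paper; as a rough illustration of what "loss-based rate control" means in general, the AIMD-style update below (all constants invented for this sketch) backs off multiplicatively when the observed loss exceeds a target and otherwise probes additively for more bandwidth:

```python
def update_rate(rate, loss_fraction,
                loss_target=0.01, incr=1.0, decr=0.5,
                min_rate=1.0, max_rate=100.0):
    """One step of a generic loss-based rate controller (illustrative only).

    rate is in Gbps; loss_fraction is the fraction of packets lost in the
    last measurement interval. Heavy loss triggers a multiplicative
    decrease; otherwise the sender increases additively, the classic
    AIMD pattern. This is not ATP's published algorithm.
    """
    if loss_fraction > loss_target:
        rate *= decr          # too much loss: back off multiplicatively
    else:
        rate += incr          # little loss: probe additively
    return max(min_rate, min(rate, max_rate))

# A sender seeing no loss ramps up; one seeing loss backs off.
r = update_rate(10.0, 0.0)    # -> 11.0
r = update_rate(r, 0.05)      # -> 5.5
```

The controlled-loss idea in the abstract amounts to tolerating a nonzero `loss_target` instead of retransmitting every drop as TCP would.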
Dynamic Load Balancing for Multipath Transmission in Data Centers
Thesis (Master's) -- Seoul National University, College of Engineering, Department of Computer Science and Engineering, February 2019. Advisor: Taekyoung Kwon.
Various applications require data center networks to carry their traffic efficiently. Data center networks usually have a hierarchical topology and exhibit distinct traffic patterns, which differ from those of the traditional Internet. These features have driven data center networks to reduce flow completion time (FCT) and to achieve high throughput. One possible solution is balancing network load across multiple paths by leveraging transport mechanisms like Equal-Cost MultiPath (ECMP) routing. ECMP allows flows to exploit multiple paths by hashing the flows' metadata. However, due to the random nature of hash functions, ECMP often distributes traffic unevenly, which makes it hard to utilize the links' full capacity. Thus, we propose an adaptive load balancing mechanism for multiple paths in data centers, called MaxPass, to complement ECMP. The sender adaptively selects and dynamically changes multiple paths depending on the current network status, such as congestion. To monitor the network status, the corresponding receiver periodically transmits a probe packet to the sender; its loss indicates traffic congestion. We implemented MaxPass using commodity switches and carried out a quantitative analysis on the ns-2 simulator to show that MaxPass can improve the FCT and the throughput.
Chapter 1 Introduction 1
Chapter 2 Background 5
2.1 Data Center Network Topology 5
2.2 Multipath Routing 7
2.3 Multipath Transport Protocol 7
2.4 Credit-based Congestion Control 8
Chapter 3 MaxPass 10
3.1 Design Overview 10
3.2 Switch Design 12
3.3 Path Probing 13
3.4 Adaptive Path Selection 15
3.5 Feedback Control Algorithm 15
3.6 Credit Stop 17
Chapter 4 Evaluation 20
4.1 Ns-2 Simulations 20
4.1.1 Load Balancing 22
4.1.2 Throughput 23
4.1.3 Flow Completion Time (FCT) 25
4.2 Testbed Experiments 25
Chapter 5 Related Work 28
5.1 Centralized 28
5.2 Decentralized/Distributed 30
Chapter 6 Conclusion 32
Abstract (in Korean) 38
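The ECMP behavior the thesis builds on can be sketched in a few lines: hashing a flow's 5-tuple pins all of that flow's packets to one path, and hash collisions between flows cause the uneven spreading MaxPass is designed to counter. This is an illustrative software stand-in, not the hardware hash a real switch uses:

```python
import hashlib

def ecmp_path(src_ip, dst_ip, src_port, dst_port, proto, n_paths):
    """Pick one of n_paths equal-cost paths by hashing the flow 5-tuple.

    A stable hash means every packet of a flow takes the same path
    (no reordering), but two unrelated flows can land on the same
    path while other paths sit idle, which is ECMP's weakness.
    """
    key = f"{src_ip}|{dst_ip}|{src_port}|{dst_port}|{proto}".encode()
    digest = hashlib.sha256(key).digest()
    return int.from_bytes(digest[:4], "big") % n_paths

# All packets of one flow map to the same path index.
p1 = ecmp_path("10.0.0.1", "10.0.1.2", 40000, 80, "tcp", 4)
p2 = ecmp_path("10.0.0.1", "10.0.1.2", 40000, 80, "tcp", 4)
assert p1 == p2 and 0 <= p1 < 4
```

MaxPass's contribution, per the abstract, is to stop relying on this static mapping alone and instead switch paths based on probe-packet loss.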
Fastpass: A Centralized "Zero-Queue" Datacenter Network
An ideal datacenter network should provide several properties, including low median and tail latency, high utilization (throughput), fair allocation of network resources between users or applications, deadline-aware scheduling, and congestion (loss) avoidance. Current datacenter networks inherit the principles that went into the design of the Internet, where packet transmission and path selection decisions are distributed among the endpoints and routers. Instead, we propose that each sender should delegate to a centralized arbiter control of when each packet should be transmitted and what path it should follow. This paper describes Fastpass, a datacenter network architecture built using this principle. Fastpass incorporates two fast algorithms: the first determines the time at which each packet should be transmitted, while the second determines the path to use for that packet. In addition, Fastpass uses an efficient protocol between the endpoints and the arbiter, and an arbiter replication strategy for fault-tolerant failover. We deployed and evaluated Fastpass in a portion of Facebook's datacenter network. Our results show that Fastpass achieves throughput comparable to current networks with a 240x reduction in queue lengths (from 4.35 MB to 18 KB), achieves much fairer and more consistent flow throughputs than baseline TCP (a 5200x reduction in the standard deviation of per-flow throughput with five concurrent connections), scales from 1 to 8 cores in the arbiter implementation with the ability to schedule 2.21 Terabits/s of traffic in software on eight cores, and achieves a 2.5x reduction in the number of TCP retransmissions in a latency-sensitive service at Facebook.
National Science Foundation (U.S.) (grant IIS-1065219); Irwin Mark Jacobs and Joan Klein Jacobs Presidential Fellowship; Hertz Foundation (Fellowship)
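The arbiter's per-timeslot job can be pictured as a matching problem: in each timeslot every source may transmit at most one packet and every destination may receive at most one. The greedy sketch below conveys that idea only; it is not Fastpass's actual pipelined timeslot-allocation algorithm:

```python
def assign_timeslot(demands):
    """Greedy allocation of one timeslot by a centralized arbiter
    (an illustrative simplification, not the Fastpass algorithm).

    demands is a list of (src, dst) pairs with pending data. Demands
    whose source or destination is already used this timeslot are
    deferred, so no queue builds up inside the network.
    """
    busy_src, busy_dst, granted = set(), set(), []
    for src, dst in demands:
        if src not in busy_src and dst not in busy_dst:
            granted.append((src, dst))
            busy_src.add(src)
            busy_dst.add(dst)
    return granted

# Two demands share destination B, so only one is granted this slot;
# the other waits for a later timeslot.
print(assign_timeslot([("A", "B"), ("C", "B"), ("C", "D")]))
# -> [('A', 'B'), ('C', 'D')]
```

The second Fastpass algorithm, path selection, would then pick a specific path through the fabric for each granted (src, dst) pair.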
Homa: A Receiver-Driven Low-Latency Transport Protocol Using Network Priorities (Complete Version)
Homa is a new transport protocol for datacenter networks. It provides
exceptionally low latency, especially for workloads with a high volume of very
short messages, and it also supports large messages and high network
utilization. Homa uses in-network priority queues to ensure low latency for
short messages; priority allocation is managed dynamically by each receiver and
integrated with a receiver-driven flow control mechanism. Homa also uses
controlled overcommitment of receiver downlinks to ensure efficient bandwidth
utilization at high load. Our implementation of Homa delivers 99th percentile
round-trip times of less than 15 μs for short messages on a 10 Gbps network
running at 80% load. These latencies are almost 100x lower than the best
published measurements of an implementation. In simulations, Homa's latency is
roughly equal to pFabric and significantly better than pHost, PIAS, and NDP for
almost all message sizes and workloads. Homa can also sustain higher network
loads than pFabric, pHost, or PIAS.
Comment: This paper is an extended version of the paper on Homa that was
published in ACM SIGCOMM 2018. Material had to be removed from Sections 5.1
and 5.2 to meet the SIGCOMM page restrictions; this version restores the
missing material. This paper is 18 pages, plus two pages of references.
- โฆ
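Homa's priority allocation is dynamic and computed by each receiver from the observed workload; as a static caricature of the underlying idea, the mapping below gives the highest in-network priority levels to the shortest messages (the cutoff values are invented for illustration):

```python
def priority_for(remaining_bytes, cutoffs=(1500, 10_000, 100_000)):
    """Map a message's remaining size to a priority level (0 = highest).

    Illustrative of receiver-driven priority schemes like Homa's:
    short messages get top priority so they never wait behind large
    transfers. Real receivers recompute the cutoffs dynamically;
    these constants are made up.
    """
    for level, cutoff in enumerate(cutoffs):
        if remaining_bytes <= cutoff:
            return level
    return len(cutoffs)  # lowest priority for the largest messages

assert priority_for(500) == 0          # short message: top priority
assert priority_for(50_000) == 2       # mid-sized message
assert priority_for(1_000_000) == 3    # bulk transfer: lowest priority
```

Combined with receiver-driven flow control, such a mapping lets short messages cut through switch queues occupied by long-running flows.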