25 research outputs found
RepFlow: Minimizing Flow Completion Times with Replicated Flows in Data Centers
Short TCP flows that are critical for many interactive applications in data
centers are plagued by large flows and head-of-line blocking in switches.
Hash-based load balancing schemes such as ECMP aggravate the matter and result
in long-tailed flow completion times (FCT). Previous work on reducing FCT
usually requires custom switch hardware and/or protocol changes. We propose
RepFlow, a simple yet practically effective approach that replicates each short
flow to reduce the completion times, without any change to switches or host
kernels. With ECMP the original and replicated flows traverse distinct paths
with different congestion levels, thereby reducing the probability of having
long queueing delay. We develop a simple analytical model to demonstrate the
potential improvement of RepFlow. Extensive NS-3 simulations and Mininet
implementation show that RepFlow provides 50%--70% speedup in both mean and
99th-percentile FCT at all loads, and offers near-optimal FCT when used with
DCTCP. Comment: To appear in IEEE INFOCOM 201
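The intuition behind RepFlow's gains can be illustrated with a tiny Monte Carlo sketch: if the original and replicated copies see independent queueing delays on their ECMP paths, the flow finishes when the faster copy arrives, which shrinks both the mean and the tail. The exponential-delay model and the numbers below are illustrative assumptions, not the paper's analytical model:

```python
import random

def fct_sample(replicate: bool, rng: random.Random) -> float:
    """Sample a short-flow completion time dominated by queueing delay.

    Each ECMP path's queueing delay is modeled as Exp(1), a stand-in for
    a congested FIFO queue; with replication the flow completes when the
    faster of two independent paths delivers it.
    """
    d1 = rng.expovariate(1.0)          # delay on the original path
    if not replicate:
        return d1
    d2 = rng.expovariate(1.0)          # delay on the replica's path
    return min(d1, d2)                 # first copy to finish wins

def percentile(xs, p):
    xs = sorted(xs)
    return xs[int(p / 100 * (len(xs) - 1))]

rng = random.Random(42)
base = [fct_sample(False, rng) for _ in range(100_000)]
rep  = [fct_sample(True,  rng) for _ in range(100_000)]

# min of two Exp(1) delays is Exp(2): mean and tail are both roughly halved
print(f"mean speedup: {sum(base) / sum(rep):.2f}x")
print(f"p99 speedup:  {percentile(base, 99) / percentile(rep, 99):.2f}x")
```

Both ratios come out near 2x, matching the order of magnitude of the 50%--70% speedups reported in the abstract.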
An INT-based packet loss monitoring system for data center networks implementing Fine-Grained Multi-Path routing
In-band network telemetry (INT) is an emerging network measurement technology that uses ordinary data packets to collect network information hop by hop with low overhead. Since incomplete telemetry data seriously degrades the performance of upper-layer network telemetry applications, the loss of INT packets themselves must be accounted for. In response, LossSight, a powerful packet loss monitoring system for INT, has been designed, implemented, and made available as open source. This letter extends that work by proposing, implementing, and evaluating LB-LossSight, an improved version compatible with the packet-level load-balancing techniques currently used in modern data center networks. Experimental results in a Clos network, one of the most common topologies in today's data centers, confirm the high detection and localization accuracy of the implemented solution. Funded by the Spanish State Research Agency (AEI) under project grant AriSe2: FINe (Ref. PID2020-116329GB-C22, funded by MCIN/AEI/10.13039/501100011033) and the Natural Science Foundation of Shandong Province under Grant No. ZR2020LZH010.
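One way to see why packet-level load balancing complicates INT loss monitoring: consecutive packets of a flow take different paths, so a sequence gap at the sink only narrows the loss down to the paths around the gap. The sketch below is a toy illustration of that idea under an assumed report format (`find_loss` and the `(seq, path)` tuples are hypothetical, not LossSight's actual interface):

```python
def find_loss(reports):
    """Detect and localize lost packets from sink-side INT reports.

    Each report is (seq, path), where `path` is the switch sequence the
    INT metadata stack recorded hop by hop.  Reports are ordered by
    sequence number; a gap means packets were lost, and the paths of the
    neighbouring packets form the suspect set for localization.
    (Hypothetical report format -- a sketch of the idea, not LossSight.)
    """
    reports = sorted(reports)                # order by sequence number
    lost = []
    for (s1, p1), (s2, p2) in zip(reports, reports[1:]):
        for missing in range(s1 + 1, s2):    # gap => lost packet(s)
            lost.append((missing, {p1, p2}))  # paths bracketing the gap
    return lost

# packet-level load balancing: one flow's packets alternate between paths
reports = [(1, "A-B-D"), (2, "A-C-D"), (4, "A-C-D"), (5, "A-B-D")]
print(find_loss(reports))   # seq 3 is missing; suspect path A-C-D
```

With per-flow (ECMP-style) load balancing the suspect set is always a single path, which is why extending loss monitoring to packet-level schemes is the harder case the letter addresses.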
ATP: a Datacenter Approximate Transmission Protocol
Many datacenter applications such as machine learning and streaming systems
do not need the complete set of data to perform their computation. Current
approximate applications in datacenters run on a reliable network layer like
TCP. To improve performance, they either let the sender select a subset of the
data and transmit it to the receiver, or transmit all the data and let the
receiver drop some of it. These approaches are network-oblivious and
unnecessarily transmit more data, hurting both application runtime and network
bandwidth usage. On the other hand, running approximate applications on a lossy
network over UDP cannot guarantee the accuracy of the application's
computation. We propose to run
approximate applications on a lossy network and to allow packet loss in a
controlled manner. Specifically, we designed a new network protocol called
Approximate Transmission Protocol, or ATP, for datacenter approximate
applications. ATP opportunistically exploits available network bandwidth as
much as possible, while performing a loss-based rate control algorithm to avoid
bandwidth waste and re-transmission. It also ensures bandwidth fair sharing
across flows and improves accurate applications' performance by leaving more
switch buffer space to accurate flows. We evaluated ATP with both simulation
and real implementation using two macro-benchmarks and two real applications,
Apache Kafka and Flink. Our evaluation results show that ATP reduces
application runtime by 13.9% to 74.6% compared to a TCP-based solution that
drops packets at the sender, and improves accuracy by up to 94.0% compared to
UDP.
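The loss-based rate control described above can be sketched as a simple control loop: probe for bandwidth while observed loss stays below a small target, and back off multiplicatively once loss indicates the bottleneck is overrun. All constants and names here (`atp_rate_update`, the 1% target, the 0.8 backoff) are illustrative assumptions, not parameters from the ATP paper:

```python
def atp_rate_update(rate, loss_rate, *, target_loss=0.01,
                    step=1.0, backoff=0.8, min_rate=1.0):
    """One control interval of a loss-based rate controller.

    While measured loss is at or below the target, increase additively
    to opportunistically claim available bandwidth; once loss exceeds
    the target, decrease multiplicatively to avoid wasting bandwidth on
    packets that would just be dropped.
    """
    if loss_rate <= target_loss:
        return rate + step                  # additive increase: probe up
    return max(min_rate, rate * backoff)    # multiplicative decrease

rate = 10.0
for loss in [0.0, 0.0, 0.005, 0.05, 0.0]:   # measured loss per interval
    rate = atp_rate_update(rate, loss)
print(f"{rate:.2f} Gbps")
```

Because approximate flows deliberately tolerate some loss, a controller like this can run them hotter than TCP would, while the backoff keeps them from crowding out the accurate flows sharing the switch buffers.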
A Dynamic Load Balancing Scheme for Multipath Transmission in Data Centers
Thesis (Master's) -- Seoul National University Graduate School: College of Engineering, Dept. of Computer Science and Engineering, 2019. 2. Taekyoung Kwon. Various applications require the data center networks to carry their traffic efficiently. The data center networks usually have a hierarchical topology and exhibit distinct traffic patterns, which is different from the traditional Internet. These features have driven the data center networks to reduce the flow completion time (FCT) and to achieve high throughput. One of the possible solutions is balancing network loads across multiple paths by leveraging transport mechanisms like Equal-Cost Multi-Path (ECMP) routing. ECMP allows flows to exploit multiple paths by hashing the metadata of the flows. However, due to the random nature of hash functions, ECMP often distributes the traffic unevenly, which makes it hard to utilize the links' full capacity. Thus, we propose an adaptive load balancing mechanism for multiple paths in data centers, called MaxPass, to complement ECMP. A sender adaptively selects and dynamically changes multiple paths depending on the current network status like congestion. To monitor the network status, the corresponding receiver transmits a probe packet periodically to the sender; its loss indicates traffic congestion. We implemented MaxPass using commodity switches and carried out a quantitative analysis on the ns-2 simulator to show that MaxPass can improve the FCT and the throughput.
Chapter 1 Introduction
Chapter 2 Background
2.1 Data Center Network Topology
2.2 Multipath Routing
2.3 Multipath Transport Protocol
2.4 Credit-based Congestion Control
Chapter 3 MaxPass
3.1 Design Overview
3.2 Switch Design
3.3 Path Probing
3.4 Adaptive Path Selection
3.5 Feedback Control Algorithm
3.6 Credit Stop
Chapter 4 Evaluation
4.1 Ns-2 Simulations
4.1.1 Load Balancing
4.1.2 Throughput
4.1.3 Flow Completion Time (FCT)
4.2 Testbed Experiments
Chapter 5 Related Work
5.1 Centralized
5.2 Decentralized/Distributed
Chapter 6 Conclusion
Abstract (in Korean)
Master
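The probe-driven path selection that the MaxPass abstract describes (periodic probes, with a lost probe signaling congestion) can be sketched as a per-interval scoring step. This is a minimal illustration under assumed names and numbers (`select_paths`, the probe traces, keeping k paths), not the thesis's actual algorithm:

```python
def select_paths(probe_results, k=2):
    """Pick the k least-congested paths from one round of probing.

    `probe_results` maps each candidate path to a list of probe
    outcomes (1 = probe arrived, 0 = probe lost).  A lost probe signals
    congestion on that path, so each path is scored by its probe
    delivery ratio and the best k are kept for the next interval.
    """
    ratio = {p: sum(r) / len(r) for p, r in probe_results.items()}
    return sorted(ratio, key=ratio.get, reverse=True)[:k]

# probe outcomes over one monitoring interval (illustrative numbers)
probes = {
    "path0": [1, 1, 1, 1],   # healthy: every probe arrived
    "path1": [1, 0, 0, 1],   # congested: half the probes lost
    "path2": [1, 1, 0, 1],   # mildly congested
}
print(select_paths(probes))  # → ['path0', 'path2']
```

Re-running this selection every probing interval is what makes the scheme adaptive: as congestion moves, the ranking changes and traffic shifts away from paths that start dropping probes, complementing ECMP's static hashing.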