484 research outputs found
ATP: a Datacenter Approximate Transmission Protocol
Many datacenter applications, such as machine learning and streaming systems, do not need the complete set of data to perform their computation. Current approximate applications in datacenters run on a reliable network layer such as TCP. To improve performance, they either let the sender select a subset of the data and transmit it to the receiver, or transmit all the data and let the receiver drop some of it. These approaches are network-oblivious and transmit more data than necessary, affecting both application runtime and network bandwidth usage. On the other hand, running approximate applications on a lossy network with UDP cannot guarantee the accuracy of application computation. We propose to run approximate applications on a lossy network and to allow packet loss in a controlled manner. Specifically, we designed a new network protocol, the Approximate Transmission Protocol (ATP), for datacenter approximate applications. ATP opportunistically exploits as much available network bandwidth as possible, while using a loss-based rate control algorithm to avoid bandwidth waste and retransmission. It also ensures fair bandwidth sharing across flows and improves the performance of accurate applications by leaving more switch buffer space to accurate flows. We evaluated ATP with both simulation and a real implementation, using two macro-benchmarks and two real applications, Apache Kafka and Flink. Our evaluation results show that ATP reduces application runtime by 13.9% to 74.6% compared to a TCP-based solution that drops packets at the sender, and improves accuracy by up to 94.0% compared to UDP.
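The loss-based rate control the abstract describes can be illustrated with a minimal AIMD-style controller. This is a hypothetical sketch, not ATP's actual algorithm: the function name, the loss target, and the step/backoff parameters are all illustrative assumptions.

```python
# Hypothetical sketch of a loss-based rate controller in the spirit of ATP:
# the sender measures the loss rate per interval, backs off multiplicatively
# when loss exceeds a target (bandwidth is being wasted), and otherwise
# probes for more bandwidth additively. All parameters are illustrative.

def adjust_rate(rate_mbps, loss_rate, target_loss=0.05,
                additive_step=10.0, backoff=0.8, min_rate=1.0):
    """Return the next sending rate given the observed loss rate."""
    if loss_rate > target_loss:
        # Too much loss: shrink the rate multiplicatively, but keep a floor.
        return max(min_rate, rate_mbps * backoff)
    # Loss within budget: increase the rate additively to probe for headroom.
    return rate_mbps + additive_step
```

A controller like this deliberately tolerates loss up to `target_loss`, which is what distinguishes the approach from TCP's reaction to any loss.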
cISP: A Speed-of-Light Internet Service Provider
Low latency is a requirement for a variety of interactive network
applications. The Internet, however, is not optimized for latency. We thus
explore the design of cost-effective wide-area networks that move data over
paths very close to great-circle paths, at speeds very close to the speed of
light in vacuum. Our cISP design augments the Internet's fiber with free-space
wireless connectivity. cISP addresses the fundamental challenge of
simultaneously providing low latency and scalable bandwidth, while accounting
for numerous practical factors ranging from transmission tower availability to
packet queuing. We show that instantiations of cISP across the contiguous
United States and Europe would achieve mean latencies within 5% of those
achievable using great-circle paths at the speed of light, over medium and long
distances. Further, we estimate that the economic value from such networks
would substantially exceed their expense.
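The speed-of-light baseline the abstract compares against is easy to compute: great-circle distance divided by the propagation speed. The sketch below assumes a spherical Earth (haversine formula), approximate city coordinates, and a typical fiber refractive index of about 1.5; none of these numbers come from the paper itself.

```python
import math

C_VACUUM_KM_S = 299_792.458          # speed of light in vacuum, km/s
C_FIBER_KM_S = C_VACUUM_KM_S / 1.5   # ~2/3 c, typical for optical fiber

def great_circle_km(lat1, lon1, lat2, lon2, r_earth_km=6371.0):
    """Haversine great-circle distance between two points, in km."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r_earth_km * math.asin(math.sqrt(a))

# One-way propagation delay, New York -> Los Angeles (coordinates approximate)
d = great_circle_km(40.71, -74.01, 34.05, -118.24)
print(f"{d:.0f} km: {1000 * d / C_VACUUM_KM_S:.1f} ms in vacuum, "
      f"{1000 * d / C_FIBER_KM_S:.1f} ms in fiber")
```

The roughly 50% penalty of fiber over vacuum, before any path stretch or queuing, is the gap a free-space design like cISP tries to close.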
PABO: Mitigating Congestion via Packet Bounce in Data Center Networks
In today's data centers, a diverse mix of throughput-sensitive long flows and
delay-sensitive short flows is commonly present in shallow-buffered switches.
Long flows can block the transmission of delay-sensitive short flows, leading
to degraded performance. Congestion can also be caused by the synchronization
of multiple TCP connections for short flows, as typically seen in the
partition/aggregate traffic pattern. While multiple end-to-end transport-layer
solutions have been proposed, none of them have tackled the real challenge:
reliable transmission in the network. In this paper, we fill this gap by
presenting PABO -- a novel link-layer design that can mitigate congestion by
temporarily bouncing packets to upstream switches. PABO's design fulfills the
following goals: i) providing per-flow flow control at the link layer, ii)
handling transient congestion without the intervention of end devices, and
iii) gradually back-propagating the congestion signal to the source when the
network is not capable of handling the congestion. Experimental results show
that PABO effectively mitigates transient congestion and achieves significant
gains in end-to-end delay.
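The packet-bounce idea can be shown with a toy two-switch model. This is an illustrative sketch of the mechanism only, not PABO's implementation: the class, queue limits, and return strings are invented for the example.

```python
# Toy model of PABO-style packet bouncing (illustrative, not the paper's
# design): when a switch's queue is full, the packet is sent back toward
# the upstream switch instead of being dropped, so transient congestion is
# absorbed inside the network without involving the end hosts.

from collections import deque

class Switch:
    def __init__(self, name, queue_limit):
        self.name = name
        self.queue = deque()
        self.queue_limit = queue_limit
        self.upstream = None            # previous hop, set by the topology

    def receive(self, packet):
        if len(self.queue) < self.queue_limit:
            self.queue.append(packet)
            return "queued"
        if self.upstream is not None:
            # Congested: bounce the packet to the upstream switch.
            return "bounced:" + self.upstream.receive(packet)
        return "dropped"                # network edge: no one to bounce to

# Two-hop line s1 -> s2, with s2's buffer nearly exhausted.
s1, s2 = Switch("s1", queue_limit=4), Switch("s2", queue_limit=1)
s2.upstream = s1
s2.receive("p1")                        # fills s2's single-slot queue
print(s2.receive("p2"))                 # packet is held at s1 instead of dropped
```

If s1's buffer also fills, bounces propagate further upstream, which is the gradual back-propagation of the congestion signal the abstract mentions.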
De-ossifying the Internet Transport Layer: A Survey and Future Perspectives
Delay Tolerant Networking over the Metropolitan Public Transportation
We discuss MDTN: a delay-tolerant application platform built on top of the Public Transportation System (PTS) and able to provide service access while exploiting opportunistic connectivity. Our solution adopts a carrier-based approach where buses act as data collectors for user requests requiring Internet access. Simulations based on real maps and PTS routes with state-of-the-art routing protocols demonstrate that MDTN represents a viable solution for elastic, non-real-time service delivery. Nevertheless, performance indexes of the considered routing policies show that there is no golden rule for optimal performance, and a tailored routing strategy is required for each specific case.
The growing complexity of content delivery networks: Challenges and implications for the Internet ecosystem
Since the commercialization of the Internet, content and related applications, including video streaming, news, advertisements, and social interaction, have moved online. It is broadly recognized that the rise of all of these different types of content (static and dynamic, and increasingly multimedia) has been one of the main forces behind the phenomenal growth of the Internet, and its emergence as essential infrastructure for how individuals across the globe gain access to the content sources they want. To accelerate the delivery of diverse content in the Internet and to provide commercial-grade performance for video delivery and the Web, Content Delivery Networks (CDNs) were introduced. This paper describes the current CDN ecosystem and the forces that have driven its evolution. We outline the different CDN architectures and consider their relative strengths and weaknesses. Our analysis highlights the role of location, the growing complexity of the CDN ecosystem, and their relationship to and implications for interconnection markets.
FatPaths: Routing in Supercomputers and Data Centers when Shortest Paths Fall Short
We introduce FatPaths: a simple, generic, and robust routing architecture
that enables state-of-the-art low-diameter topologies such as Slim Fly to
achieve unprecedented performance. FatPaths targets Ethernet stacks in both
HPC supercomputers and cloud data centers and clusters. FatPaths exposes and
exploits the rich ("fat") diversity of both minimal and non-minimal paths for
high-performance multi-pathing. Moreover, FatPaths uses a redesigned
"purified" transport layer that removes virtually all TCP performance issues
(e.g., slow start), and incorporates flowlet switching, a technique used to
prevent packet reordering in TCP networks, to enable very simple and
effective load balancing. Our design enables recent low-diameter topologies
to outperform powerful Clos designs, achieving 15% higher net throughput at
2x lower latency for comparable cost. FatPaths will significantly accelerate
Ethernet clusters, which form more than 50% of the Top500 list, and it may
become a standard routing scheme for modern topologies.
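Flowlet switching, which the abstract leans on for load balancing, can be sketched in a few lines. This is a generic illustration of the technique under assumed parameters (a 500 microsecond gap timeout, random path choice), not FatPaths' actual policy.

```python
# Toy sketch of flowlet switching: packets of a flow separated by an idle
# gap longer than a timeout start a new "flowlet", which may safely take a
# different path; packets within a flowlet stay on one path, so TCP sees
# no reordering. Timeout and path-selection policy here are assumptions.

import random

class FlowletBalancer:
    def __init__(self, paths, gap_timeout_us=500, seed=0):
        self.paths = paths
        self.gap_timeout_us = gap_timeout_us
        self.rng = random.Random(seed)
        self.state = {}     # flow_id -> (last_seen_us, current_path)

    def route(self, flow_id, now_us):
        last = self.state.get(flow_id)
        if last is None or now_us - last[0] > self.gap_timeout_us:
            # Idle gap exceeded: new flowlet, pick a (possibly) new path.
            path = self.rng.choice(self.paths)
        else:
            path = last[1]   # same flowlet: keep the path to avoid reordering
        self.state[flow_id] = (now_us, path)
        return path

lb = FlowletBalancer(paths=["p0", "p1", "p2"])
a = lb.route("flowA", 0)
assert lb.route("flowA", 100) == a   # 100 us gap < timeout: same path
```

The gap timeout is chosen larger than the path latency difference, so by the time a new flowlet starts, earlier packets have drained and reordering cannot occur.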