
    ATP: a Datacenter Approximate Transmission Protocol

    Many datacenter applications, such as machine learning and streaming systems, do not need the complete set of data to perform their computation. Current approximate applications in datacenters run on a reliable network layer such as TCP. To improve performance, they either let the sender select a subset of the data and transmit it to the receiver, or transmit all the data and let the receiver drop some of it. These approaches are network-oblivious and transmit more data than necessary, affecting both application runtime and network bandwidth usage. On the other hand, running approximate applications on a lossy network with UDP cannot guarantee the accuracy of the application's computation. We propose to run approximate applications on a lossy network and to allow packet loss in a controlled manner. Specifically, we designed a new network protocol for datacenter approximate applications called the Approximate Transmission Protocol, or ATP. ATP opportunistically exploits as much available network bandwidth as possible, while performing loss-based rate control to avoid bandwidth waste and retransmission. It also ensures fair bandwidth sharing across flows and improves the performance of accurate applications by leaving more switch buffer space to accurate flows. We evaluated ATP with both simulation and a real implementation, using two macro-benchmarks and two real applications, Apache Kafka and Apache Flink. Our evaluation results show that ATP reduces application runtime by 13.9% to 74.6% compared to a TCP-based solution that drops packets at the sender, and improves accuracy by up to 94.0% compared to UDP.
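    The abstract does not spell out ATP's control algorithm; below is a minimal sketch of the kind of loss-based rate controller it describes, with additive increase while measured loss stays below a tolerated level and multiplicative decrease otherwise. All names, constants, and the specific update rule are illustrative assumptions, not details taken from the ATP paper.

```python
# Minimal sketch of a generic loss-based rate controller in the spirit of
# ATP's description. All names and constants are illustrative assumptions,
# not the paper's actual design.

class LossBasedRateController:
    def __init__(self, initial_rate_mbps=100.0, loss_target=0.01,
                 increase_step=5.0, decrease_factor=0.7,
                 min_rate=1.0, max_rate=10_000.0):
        self.rate = initial_rate_mbps
        self.loss_target = loss_target
        self.increase_step = increase_step
        self.decrease_factor = decrease_factor
        self.min_rate = min_rate
        self.max_rate = max_rate

    def update(self, packets_sent, packets_lost):
        """Adjust the sending rate once per measurement interval."""
        if packets_sent == 0:
            return self.rate
        loss_rate = packets_lost / packets_sent
        if loss_rate <= self.loss_target:
            # Loss is within the tolerated level: probe for more bandwidth.
            self.rate = min(self.rate + self.increase_step, self.max_rate)
        else:
            # Too much loss: back off to avoid wasting bandwidth.
            self.rate = max(self.rate * self.decrease_factor, self.min_rate)
        return self.rate


# Example: react to two consecutive measurement intervals.
ctrl = LossBasedRateController()
print(ctrl.update(packets_sent=1000, packets_lost=5))   # low loss -> increase
print(ctrl.update(packets_sent=1000, packets_lost=80))  # high loss -> decrease
```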

    Datacenter Traffic Control: Understanding Techniques and Trade-offs

    Datacenters provide cost-effective and flexible access to the scalable compute and storage resources necessary for today's cloud computing needs. A typical datacenter is made up of thousands of servers connected with a large network and usually managed by one operator. To provide quality access to the variety of applications and services hosted on datacenters and to maximize performance, it is necessary to use datacenter networks effectively and efficiently. Datacenter traffic is often a mix of several classes with different priorities and requirements, including user-generated interactive traffic, traffic with deadlines, and long-running traffic. To this end, custom transport protocols and traffic management techniques have been developed to improve datacenter network performance. In this tutorial paper, we review the general architecture of datacenter networks, various topologies proposed for them, their traffic properties, general traffic control challenges in datacenters, and general traffic control objectives. The purpose of this paper is to bring out the important characteristics of traffic control in datacenters, not to survey all existing solutions (which is virtually impossible given the massive body of existing research). We hope to provide readers with a wide range of options and factors to consider when weighing a variety of traffic control mechanisms. We discuss various aspects of datacenter traffic control, including management schemes, transmission control, traffic shaping, prioritization, load balancing, multipathing, and traffic scheduling. Next, we point to several open challenges as well as new and interesting networking paradigms. At the end of the paper, we briefly review inter-datacenter networks, which connect geographically dispersed datacenters, have been receiving increasing attention recently, and pose interesting and novel research problems.
    Comment: Accepted for publication in IEEE Communications Surveys and Tutorials
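    As a toy illustration of one traffic control technique the survey covers, the sketch below implements strict-priority queueing over the three traffic classes the abstract names (interactive, deadline-bound, long-running). The class names and the policy are simplifying assumptions for illustration, not a mechanism described in the paper.

```python
# Strict-priority scheduling across datacenter traffic classes.
# Illustrative only; class names and policy are assumptions.

from collections import deque

class PriorityScheduler:
    def __init__(self, classes):
        # Earlier in the list = higher priority.
        self.order = list(classes)
        self.queues = {c: deque() for c in classes}

    def enqueue(self, traffic_class, packet):
        self.queues[traffic_class].append(packet)

    def dequeue(self):
        """Serve the highest-priority non-empty queue first."""
        for c in self.order:
            if self.queues[c]:
                return c, self.queues[c].popleft()
        return None


sched = PriorityScheduler(["interactive", "deadline", "long_running"])
sched.enqueue("long_running", "pkt-A")
sched.enqueue("interactive", "pkt-B")
print(sched.dequeue())  # ('interactive', 'pkt-B') is served before 'pkt-A'
```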

    Reducing Congestion Effects by Multipath Routing in Wireless Networks

    Get PDF
    We propose a solution to improve fairness and increase throughput in wireless networks with location information. Our approach consists of a multipath routing protocol, Biased Geographical Routing (BGR), and two congestion control algorithms, In-Network Packet Scatter (IPS) and End-to-End Packet Scatter (EPS), which leverage BGR to avoid the congested areas of the network. BGR achieves good performance while incurring a communication overhead of just 1 byte per data packet, and has a computational complexity similar to greedy geographic routing. IPS alleviates transient congestion by splitting traffic immediately before the congested areas. In contrast, EPS alleviates long-term congestion by splitting the flow at the source and performing rate control. EPS selects the paths dynamically and uses a less aggressive congestion control mechanism on non-greedy paths to improve energy efficiency. Simulation and experimental results show that our solution achieves its objectives. Extensive ns-2 simulations show that our solution improves both fairness and throughput compared to single-path greedy routing. It reduces the variance of throughput across all flows by 35%, a reduction mainly achieved by increasing the throughput of long-range flows by around 70%. Furthermore, overall network throughput increases by approximately 10%. Experimental results on a 50-node testbed are consistent with our simulation results, suggesting that BGR is effective in practice.
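    To make the routing idea more concrete, here is a rough sketch of biased geographic forwarding: the next hop is chosen greedily toward a destination point that has been rotated ("biased") to one side, so traffic can be steered around congested regions; a zero bias reduces to plain greedy geographic routing. The bias model and parameters below are illustrative assumptions, not the paper's exact scheme.

```python
# Sketch of biased geographic next-hop selection.
# The bias is modelled as a rotation of the destination direction; this is an
# illustrative assumption, not BGR's actual trajectory computation.

import math

def distance(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def biased_next_hop(current, destination, neighbors, bias_angle=0.0):
    """Select the neighbor closest to a biased destination point.

    bias_angle rotates the direction toward the destination, steering the
    path to one side (e.g. around a congested region); 0.0 gives plain
    greedy geographic routing.
    """
    dx, dy = destination[0] - current[0], destination[1] - current[1]
    cos_a, sin_a = math.cos(bias_angle), math.sin(bias_angle)
    biased_dest = (current[0] + dx * cos_a - dy * sin_a,
                   current[1] + dx * sin_a + dy * cos_a)
    best = min(neighbors, key=lambda n: distance(n, biased_dest), default=None)
    # Only forward if the chosen neighbor makes progress toward the target.
    if best is not None and distance(best, biased_dest) < distance(current, biased_dest):
        return best
    return None


# Example: a positive bias steers the route toward the "upper" neighbor.
print(biased_next_hop((0, 0), (10, 0), [(2, 1), (2, -1)], bias_angle=0.3))
```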