    An Efficient Framework of Congestion Control for Next-Generation Networks

    The success of the Internet can partly be attributed to the congestion control algorithm in the Transmission Control Protocol (TCP). However, with the tremendous increase in the diversity of networked systems and applications, TCP's performance limitations are becoming increasingly problematic and the need for new transport protocol designs has become increasingly important. Prior research has focused on the design of either end-to-end protocols (e.g., CUBIC) that rely on implicit congestion signals such as loss and/or delay, or network-based protocols (e.g., XCP) that use precise per-flow feedback from the network. While the former category of schemes has performance limitations, the latter are hard to deploy, can introduce high per-packet overhead, and open up new security challenges. This dissertation explores the middle ground between these designs and makes four contributions. First, we study the interplay between performance and feedback in congestion control protocols. We argue that congestion feedback in the form of aggregate load can provide the richness needed to meet the challenges of next-generation networks and applications. Second, we present the design, analysis, and evaluation of an efficient framework for congestion control called Binary Marking Congestion Control (BMCC). BMCC uses aggregate load feedback to achieve efficient and fair bandwidth allocations on high bandwidth-delay networks while minimizing packet loss rates and average queue length. BMCC reduces flow completion times by up to 4x over TCP and uses only the existing Explicit Congestion Notification bits. Next, we consider the incremental deployment of BMCC. We study the bandwidth sharing properties of BMCC and TCP under different partial deployment scenarios, and then present algorithms for ensuring safe co-existence of BMCC and TCP on the Internet. Finally, we consider the performance of BMCC over wireless LANs. We show that the time-varying capacity of a WLAN can lead to significant performance issues for protocols that require capacity estimates for feedback computation. Using a simple model, we characterize the capacity of a WLAN and propose using the average service rate experienced by network-layer packets as a capacity estimate. Through extensive evaluation, we show that the resulting estimates provide good performance.
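    As a rough illustration of the capacity-estimation idea in the final contribution, the sketch below smooths the per-packet service rate into a running capacity estimate. The class name, the EWMA smoothing, and all parameter values are assumptions made for illustration; the dissertation's actual estimator may differ.

```python
# Hypothetical sketch: estimate WLAN capacity as a smoothed average of the
# service rate that network-layer packets actually experience. The EWMA
# smoothing and parameter choices are illustrative assumptions.

class ServiceRateEstimator:
    def __init__(self, alpha=0.1):
        self.alpha = alpha          # EWMA smoothing factor (assumed)
        self.estimate_bps = None    # current capacity estimate, bits/sec

    def on_packet_served(self, size_bytes, service_time_s):
        """Update the estimate when a packet finishes transmission.

        service_time_s is the time the packet occupied the link,
        including MAC-layer contention and any retransmissions.
        """
        if service_time_s <= 0:
            return self.estimate_bps
        rate = (size_bytes * 8) / service_time_s
        if self.estimate_bps is None:
            self.estimate_bps = rate
        else:
            self.estimate_bps = (
                self.alpha * rate + (1 - self.alpha) * self.estimate_bps
            )
        return self.estimate_bps


if __name__ == "__main__":
    est = ServiceRateEstimator()
    # Two 1500-byte packets served at different instantaneous rates.
    print(est.on_packet_served(1500, 0.0005))  # 24 Mbit/s
    print(est.on_packet_served(1500, 0.001))   # drifts toward 12 Mbit/s
```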

    Transport Architectures for an Evolving Internet

    In the Internet architecture, transport protocols are the glue between an application's needs and the network's abilities. But as the Internet has evolved over the last 30 years, the implicit assumptions of these protocols have held less and less well. This can cause poor performance on newer networks, such as cellular networks and datacenters, and makes it challenging to roll out networking technologies that break markedly with the past. Working with collaborators at MIT, I have built two systems that explore an objective-driven, computer-generated approach to protocol design. My thesis is that making protocols a function of stated assumptions and objectives can improve application performance and free network technologies to evolve. Sprout, a transport protocol designed for videoconferencing over cellular networks, uses probabilistic inference to forecast network congestion in advance. On commercial cellular networks, Sprout gives 2 to 4 times the throughput and 7 to 9 times less delay than Skype, Apple Facetime, and Google Hangouts. This work led to Remy, a tool that programmatically generates protocols for an uncertain multi-agent network. Remy's computer-generated algorithms can achieve higher performance and greater fairness than some sophisticated human-designed schemes, including ones that put intelligence inside the network. The Remy tool can then be used to probe the difficulty of the congestion control problem itself: how easy is it to "learn" a network protocol that achieves desired goals, given a necessarily imperfect model of the networks where it ultimately will be deployed? We found weak evidence of a tradeoff between the breadth of the operating range of a computer-generated protocol and its performance, but also that a single computer-generated protocol was able to outperform existing schemes over a thousand-fold range of link rates.
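    The sketch below illustrates, in toy form, the kind of cautious forecasting Sprout's design is built around: infer a conservative bound on near-future link throughput and keep only as many packets in flight as can drain within a delay budget. The Poisson variance assumption and the quantile rule here are simplifying illustrations, not Sprout's actual probabilistic model.

```python
# Toy illustration of forecast-driven sending: bound future throughput
# from below and size the in-flight window to a delay budget. The Poisson
# model and z-score quantile are assumptions for illustration only.

import math
from statistics import mean


def conservative_rate(recent_pkts_per_tick, quantile_z=1.645):
    """Lower confidence bound on delivery rate (packets/tick).

    For a Poisson process with mean m, the std dev is sqrt(m), so
    m - z*sqrt(m) approximates a lower quantile of the rate.
    """
    m = mean(recent_pkts_per_tick)
    return max(0.0, m - quantile_z * math.sqrt(m))


def cautious_window(recent_pkts_per_tick, delay_budget_ticks=5):
    """Packets we may keep in flight while likely staying under budget."""
    return int(conservative_rate(recent_pkts_per_tick) * delay_budget_ticks)


if __name__ == "__main__":
    # Delivery counts observed over the last eight 20 ms ticks.
    history = [10, 12, 8, 11, 9, 13, 10, 7]
    print(cautious_window(history))  # -> 23 packets with this history
```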

    JetMax: Scalable Max-Min Congestion Control for High-Speed Heterogeneous Networks

    Network and Server Resource Management Strategies for Data Centre Infrastructures: A Survey

    The advent of virtualisation and the increasing demand for outsourced, elastic compute charged on a pay-as-you-use basis has stimulated the development of large-scale Cloud Data Centres (DCs) housing tens of thousands of computer clusters. Of the significant capital outlay required for building and operating such infrastructures, server and network equipment account for 45% and 15% of the total cost, respectively, making resource utilisation efficiency paramount in order to increase the operators' Return-on-Investment (RoI). In this paper, we present an extensive survey on the management of server and network resources over virtualised Cloud DC infrastructures, highlighting key concepts and results, and critically discussing their limitations and implications for future research opportunities. We highlight the need for and benefits of adaptive resource provisioning that alleviates reliance on static utilisation prediction models and exploits direct measurement of resource utilisation on servers and network nodes. Coupling such distributed measurement with logically-centralised Software Defined Networking (SDN) principles, we subsequently discuss the challenges and opportunities for converged resource management over converged ICT environments, through unifying control loops to globally orchestrate adaptive and load-sensitive resource provisioning.
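    To make the surveyed control-loop idea concrete, here is a minimal sketch of a measurement-driven provisioning loop in the spirit of the paper's argument: poll live utilisation and derive rebalancing hints rather than trusting static prediction models. All node names, thresholds, and the random telemetry stub are hypothetical.

```python
# Minimal sketch of an adaptive, measurement-driven provisioning loop:
# a logically centralised controller polls server/network utilisation
# and pairs overloaded nodes with underloaded ones. Names, thresholds,
# and the telemetry stub are assumptions for illustration.

import random
import time


def measure_utilisation(nodes):
    """Stand-in for real telemetry (e.g., SDN counters, host agents)."""
    return {n: random.uniform(0.0, 1.0) for n in nodes}


def rebalance(util, high=0.8, low=0.2):
    """Pair overloaded nodes with underloaded ones as migration hints."""
    hot = [n for n, u in util.items() if u > high]
    cold = [n for n, u in util.items() if u < low]
    return list(zip(hot, cold))  # (source, destination) pairs


def control_loop(nodes, period_s=1.0, rounds=3):
    for _ in range(rounds):
        util = measure_utilisation(nodes)
        for src, dst in rebalance(util):
            print(f"migrate load: {src} ({util[src]:.2f}) "
                  f"-> {dst} ({util[dst]:.2f})")
        time.sleep(period_s)


if __name__ == "__main__":
    control_loop(["server-a", "server-b", "server-c", "server-d"])
```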