4 research outputs found

    TCP ex Machina: Computer-Generated Congestion Control


    Evaluation infrastructure for mobile distributed applications

    Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2012. Cataloged from PDF version of thesis. Includes bibliographical references (p. 50-54).
    Sophisticated applications that run on mobile devices have become commonplace, and within the wide realm of mobile software a significant number of applications use networking in some form. Unfortunately, such distributed mobile applications are inherently difficult to evaluate. Conventional evaluations are limited to small real-world deployments consisting of perhaps a handful of phones; such tests often lack the requisite number of users to produce the desired performance, do not scale, and are not repeatable. To address these issues, we sought to evaluate distributed applications in a virtual environment. Besides being cheaper, such evaluations are reproducible and scale significantly better. This thesis documents our efforts toward this goal. We discuss the designs we iterated through, along with the problems we faced in each of them, and we hope these problems will inform future designs that can solve the challenges we were not able to solve efficiently.
    by Anirudh Sivaraman Kaushalram (S.M.)

    Transport Architectures for an Evolving Internet

    In the Internet architecture, transport protocols are the glue between an application’s needs and the network’s abilities. But as the Internet has evolved over the last 30 years, the implicit assumptions of these protocols have held less and less well. This can cause poor performance on newer networks, such as cellular networks and datacenters, and makes it challenging to roll out networking technologies that break markedly with the past. Working with collaborators at MIT, I have built two systems that explore an objective-driven, computer-generated approach to protocol design. My thesis is that making protocols a function of stated assumptions and objectives can improve application performance and free network technologies to evolve. Sprout, a transport protocol designed for videoconferencing over cellular networks, uses probabilistic inference to forecast network congestion in advance. On commercial cellular networks, Sprout gives 2-to-4 times the throughput and 7-to-9 times lower delay than Skype, Apple FaceTime, and Google Hangouts. This work led to Remy, a tool that programmatically generates protocols for an uncertain multi-agent network. Remy’s computer-generated algorithms can achieve higher performance and greater fairness than some sophisticated human-designed schemes, including ones that put intelligence inside the network. The Remy tool can then be used to probe the difficulty of the congestion control problem itself: how easy is it to “learn” a network protocol that achieves desired goals, given a necessarily imperfect model of the networks where it will ultimately be deployed? We found weak evidence of a tradeoff between the breadth of a computer-generated protocol’s operating range and its performance, but also that a single computer-generated protocol was able to outperform existing schemes over a thousand-fold range of link rates.
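
    As a rough illustration of Sprout's idea (a minimal sketch under stated assumptions, not Sprout's actual algorithm), the code below maintains a discretized Bayesian belief over the link's packet delivery rate, updates it from observed arrivals under an assumed Poisson model, and reports a cautious 5th-percentile forecast; the rate grid and all constants are hypothetical.

    import math

    RATES = [0.5 * i for i in range(1, 41)]  # candidate link rates in packets/ms (hypothetical grid)

    def poisson_pmf(k, lam):
        # Probability of k arrivals when the expected count is lam.
        return math.exp(-lam) * lam ** k / math.factorial(k)

    class RateBelief:
        def __init__(self):
            n = len(RATES)
            self.probs = [1.0 / n] * n  # start from a uniform prior over candidate rates

        def observe(self, packets, interval_ms):
            # Bayesian update: P(rate | k arrivals) is proportional to
            # Poisson(k; rate * interval) times the prior.
            posterior = [p * poisson_pmf(packets, r * interval_ms)
                         for p, r in zip(self.probs, RATES)]
            total = sum(posterior) or 1.0
            self.probs = [p / total for p in posterior]

        def cautious_forecast(self, quantile=0.05):
            # Return a rate the belief says the link sustains with 95% confidence.
            acc = 0.0
            for p, r in zip(self.probs, RATES):
                acc += p
                if acc >= quantile:
                    return r
            return RATES[-1]

    belief = RateBelief()
    belief.observe(packets=12, interval_ms=20)  # e.g. 12 packets arrived in 20 ms
    print(belief.cautious_forecast())           # conservative estimate in packets/ms

    A full forecaster would also model how the rate itself drifts between observations and predict several intervals ahead, which is what lets the sender stay ahead of congestion rather than react to it.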

    End-to-end transmission control by modeling uncertainty about the network state

    This paper argues that the bar for incorporating a new subnetwork or link technology in the current Internet is much higher than the ability to send minimum-sized IP packets: success requires that TCP perform well over any subnetwork. This requirement imposes a number of additional constraints, some hard to meet because TCP’s network model is limited and its overall objective challenging to specify precisely. As a result, network evolution has been hampered and the potential of new subnetwork technologies has not been realized in practice. The poor end-to-end performance of many important subnetworks, such as wide-area cellular networks that zealously hide non-congestive losses and introduce enormous delays as a result, or home broadband networks that suffer from the notorious “bufferbloat” problem, is a symptom of this more general issue. We propose an alternate architecture for end-to-end resource management and transmission control, in which the endpoints work directly to achieve a specified goal. Each endpoint treats the network as a nondeterministic automaton whose parameters and topology are uncertain. The endpoint maintains a probability distribution over what it thinks the network’s configuration may be. At each moment, the endpoint acts to maximize the expected value of a utility function that is given explicitly. We present preliminary simulation results arguing that the approach is tractable and holds promise.
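
    As a small sketch of this architecture (illustrative only; the utility form, the candidate window sizes, and the toy fluid network model below are assumptions, not the paper's), an endpoint can keep a belief over hypothetical network configurations and pick the congestion window with the highest expected utility:

    import math

    # Hypothetical network configurations: (bottleneck rate in packets/ms, base RTT in ms).
    HYPOTHESES = [(1.0, 50.0), (2.0, 50.0), (5.0, 100.0)]
    belief = [1.0 / 3.0] * 3  # endpoint's current probability for each hypothesis

    def utility(throughput, delay):
        # An explicitly stated objective: reward throughput, penalize delay.
        return math.log(throughput) - math.log(delay)

    def predict(cwnd, rate, rtt):
        # Toy fluid model: queueing delay grows once cwnd exceeds the
        # bandwidth-delay product; throughput is capped at the bottleneck rate.
        bdp = rate * rtt
        delay = rtt + max(0.0, (cwnd - bdp) / rate)
        throughput = min(cwnd / delay, rate)
        return throughput, delay

    def best_action(candidate_cwnds):
        # Choose the window that maximizes expected utility under the belief.
        def expected_utility(cwnd):
            return sum(p * utility(*predict(cwnd, rate, rtt))
                       for p, (rate, rtt) in zip(belief, HYPOTHESES))
        return max(candidate_cwnds, key=expected_utility)

    print(best_action([10, 50, 100, 250, 500]))  # congestion window in packets

    In the proposed architecture the belief would also be updated from observations such as acknowledgments, concentrating probability on the configurations consistent with the network's actual behavior.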