8 research outputs found

    An Efficient Framework of Congestion Control for Next-Generation Networks

    The success of the Internet can partly be attributed to the congestion control algorithm in the Transmission Control Protocol (TCP). However, with the tremendous increase in the diversity of networked systems and applications, TCP's performance limitations are becoming increasingly problematic and the need for new transport protocol designs has become pressing. Prior research has focused on the design of either end-to-end protocols (e.g., CUBIC) that rely on implicit congestion signals such as loss and/or delay, or network-based protocols (e.g., XCP) that use precise per-flow feedback from the network. While the former category of schemes has performance limitations, the latter are hard to deploy, can introduce high per-packet overhead, and open up new security challenges. This dissertation explores the middle ground between these designs and makes four contributions. First, we study the interplay between performance and feedback in congestion control protocols. We argue that congestion feedback in the form of aggregate load can provide the richness needed to meet the challenges of next-generation networks and applications. Second, we present the design, analysis, and evaluation of an efficient framework for congestion control called Binary Marking Congestion Control (BMCC). BMCC uses aggregate load feedback to achieve efficient and fair bandwidth allocations on high bandwidth-delay networks while minimizing packet loss rates and average queue length. BMCC reduces flow completion times by up to 4x over TCP and uses only the existing Explicit Congestion Notification bits. Next, we consider the incremental deployment of BMCC. We study the bandwidth sharing properties of BMCC and TCP over different partial deployment scenarios. We then present algorithms for ensuring safe co-existence of BMCC and TCP on the Internet. Finally, we consider the performance of BMCC over wireless LANs. We show that the time-varying nature of the capacity of a WLAN can lead to significant performance issues for protocols that require capacity estimates for feedback computation. Using a simple model, we characterize the capacity of a WLAN and propose using the average service rate experienced by network-layer packets as a capacity estimate. Through extensive evaluation, we show that the resulting estimates provide good performance.
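    To make the feedback model concrete, the sketch below shows how a sender might turn binary (ECN-style) marks into a coarse aggregate-load estimate and adapt its window once per interval. It is a minimal illustration of load-factor-driven control, not BMCC itself; the thresholds, gains, and the mark-fraction load proxy are assumptions chosen for exposition.

    # Illustrative sketch: deriving a coarse load estimate from binary ECN-style
    # marks and adapting the congestion window. Thresholds and gains are
    # assumptions for exposition, not BMCC's actual parameters.
    class LoadBasedSender:
        def __init__(self, cwnd=10.0):
            self.cwnd = cwnd      # congestion window, in packets
            self.marked = 0       # ECN-marked ACKs seen in the current interval
            self.acked = 0        # total ACKs seen in the current interval

        def on_ack(self, ecn_marked):
            self.acked += 1
            if ecn_marked:
                self.marked += 1

        def end_of_interval(self):
            """Once per RTT: convert the mark fraction into a load proxy and adapt."""
            if self.acked == 0:
                return
            load = self.marked / self.acked   # crude proxy for aggregate load
            if load < 0.25:                   # underutilised: grow multiplicatively
                self.cwnd *= 1.25
            elif load < 0.8:                  # moderately loaded: probe gently
                self.cwnd += 1.0
            else:                             # overloaded: back off proportionally
                self.cwnd *= max(0.5, 1.0 - load / 2)
            self.cwnd = max(self.cwnd, 1.0)
            self.marked = self.acked = 0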

    Rationale, Scenarios, and Profiles for the Application of the Internet Protocol Suite (IPS) in Space Operations

    This greenbook captures some of the current, planned, and possible future uses of the Internet Protocol (IP) in space operations. It describes how the Internet Protocol is used in specific scenarios, focusing primarily on low-Earth-orbit space operations, referred to here as the design reference mission (DRM), because most of the program experience drawn upon derives from this type of mission. Application profiles are provided, including the parameter settings programs have proposed for sending IP datagrams over CCSDS links, the minimal subsets and features of the IP protocol suite and applications expected for interoperability between projects, and the configuration, operations, and maintenance of these IP functions. Of special interest are the lessons learned from the Constellation Program in this area, since that program included a fairly ambitious use of the Internet Protocol.

    Traffic Re-engineering: Extending Resource Pooling Through the Application of Re-feedback

    Parallelism pervades the Internet, yet efficiently pooling this increasing path diversity has remained elusive. With no holistic solution for resource pooling, each layer of the Internet architecture attempts to balance traffic according to its own needs, potentially at the expense of others. From the edges, traffic is implicitly pooled over multiple paths by retrieving content from different sources. Within the network, traffic is explicitly balanced across multiple links through the use of traffic engineering. This work explores how the current architecture can be realigned to facilitate resource pooling at both network and transport layers, where tension between stakeholders is strongest. The central theme of this thesis is that traffic engineering can be performed more efficiently, flexibly and robustly through the use of re-feedback. A cross-layer architecture is proposed for sharing the responsibility for resource pooling across both hosts and network. Building on this framework, two novel forms of traffic management are evaluated. Efficient pooling of traffic across paths is achieved through the development of an in-network congestion balancer, which can function in the absence of multipath transport. Network and transport mechanisms are then designed and implemented to facilitate path fail-over, greatly improving resilience without requiring receiver-side cooperation. These contributions are framed by a longitudinal measurement study which provides evidence for many of the design choices taken. A methodology for scalably recovering flow metrics from passive traces is developed, which in turn is systematically applied to over five years of interdomain traffic data. The resulting findings challenge traditional assumptions about the preponderance of congestion control in resource sharing, with over half of all traffic being constrained by limits other than network capacity. All of the above represent concerted attempts to rethink and reassert traffic engineering in an Internet where competing solutions for resource pooling proliferate. By delegating responsibilities currently overloading the routing architecture towards hosts and re-engineering traffic management around the core strengths of the network, the proposed architectural changes allow the tussle surrounding resource pooling to be drawn out without compromising the scalability and evolvability of the Internet.
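    As an illustration of the kind of passive-trace processing involved, the sketch below aggregates packets into flows keyed by their 5-tuple and derives simple per-flow metrics. It is a minimal example under an assumed input format, not the thesis' scalable methodology.

    # Minimal sketch: aggregating a passive packet trace into per-flow metrics.
    # The 5-tuple keying, input format, and chosen metrics are assumptions; the
    # thesis' own methodology is considerably more involved.
    from collections import defaultdict

    def flow_metrics(packets):
        """packets: iterable of (ts, src, dst, sport, dport, proto, size) tuples."""
        flows = defaultdict(lambda: {"bytes": 0, "pkts": 0, "first": None, "last": None})
        for ts, src, dst, sport, dport, proto, size in packets:
            f = flows[(src, dst, sport, dport, proto)]
            f["bytes"] += size
            f["pkts"] += 1
            if f["first"] is None:
                f["first"] = ts
            f["last"] = ts
        for f in flows.values():
            duration = max(f["last"] - f["first"], 1e-6)   # avoid divide-by-zero
            f["rate_bps"] = 8 * f["bytes"] / duration      # average rate over lifetime
        return flows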

    TCP Performance in Heterogeneous Wireless Networks

    The TCP protocol is used by most Internet applications today, including recent mobile wireless terminals that use TCP for their World-Wide Web, e-mail, and other traffic. Recent wireless network technologies, such as GPRS, are known to cause delay spikes in packet transfer, which trigger unnecessary TCP retransmission timeouts. This dissertation proposes a mechanism, Forward RTO-Recovery (F-RTO), for detecting unnecessary TCP retransmission timeouts and thus allowing TCP to take appropriate follow-up actions. We analyze a Linux F-RTO implementation in various network scenarios and investigate different alternatives to the basic algorithm. The second part of this dissertation focuses on quickly adapting TCP's transmission rate when the underlying link characteristics change suddenly, for example due to vertical hand-offs between GPRS and WLAN wireless technologies. We investigate the Quick-Start algorithm, which, in collaboration with the network routers, aims to quickly probe the available bandwidth on a network path and allow TCP's congestion control algorithms to use that information. Through extensive simulations we study different router algorithms and parameters for Quick-Start and discuss the challenges Quick-Start faces in the current Internet. We also study the performance of Quick-Start when applied to vertical hand-offs between different wireless link technologies.
    Most Internet applications use the TCP protocol to ensure reliable data exchange; examples include the WWW, e-mail, and many instant messaging programs. The main features of TCP were designed in the 1970s and 1980s, when there were far fewer terminals and applications than today and connections relied on fixed communication links. As wireless terminals have become widespread, it has become apparent that TCP performance is not always at an acceptable level, because many of its features were originally designed for a different operating environment. This dissertation examines how hard-to-predict delays introduced by wireless links affect TCP performance; such behaviour is characteristic of, for example, the GPRS technology now widely used in mobile phones. Unexpected delays in data transfer cause TCP's retransmission timer to expire unnecessarily, which leads to the needless retransmission of several packets and disturbs the operation of TCP's congestion control algorithms. The dissertation proposes an enhancement to TCP's retransmission algorithms called F-RTO, which aims to detect spurious retransmissions and avoid the problems described above in such situations. The dissertation analyzes the performance of F-RTO in various communication scenarios and studies different variations of the basic algorithm. In addition, the dissertation studies the rapid adaptation of TCP's transmission rate to the prevailing transfer conditions. Normally, TCP needs a considerable amount of time at the start of a connection to find the correct transmission rate if the link is particularly fast and transfer delays are comparatively long, which is the case with the newest wireless communication technologies. A similar problem arises if a TCP connection switches its link technology mid-connection, for example as a result of mobility; this can happen with the newest terminals, which support several different radio technologies, such as WLAN and GPRS. The dissertation investigates a mechanism called Quick-Start, which considerably speeds up TCP's rate adaptation in situations like those described above. The work examines different algorithms for using Quick-Start and, through simulations, analyzes how they behave in different environments. The results presented in the dissertation can considerably improve the performance and usability of Internet communication on wireless devices.
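    The F-RTO idea can be illustrated with a small state machine: after a retransmission timeout the sender retransmits only the first unacknowledged segment and then watches the next two ACKs, sending new data rather than further retransmissions if the window advances. The sketch below (in the spirit of RFC 4138/5682) is a simplified illustration, not the dissertation's Linux implementation; the flag arguments and the omission of duplicate-ACK and SACK handling are assumptions made for brevity.

    # Simplified sketch of F-RTO-style spurious timeout detection (in the spirit
    # of RFC 4138/5682). Segment bookkeeping, duplicate-ACK handling, and SACK
    # variants are omitted; the flag arguments are assumptions for illustration.
    class FRTODetector:
        def __init__(self):
            self.state = "idle"   # idle -> rto -> sent_new -> idle

        def on_rto(self):
            # Timeout fired: the sender retransmits only the first unacked segment.
            self.state = "rto"

        def on_ack(self, advances_window, acks_only_retransmitted_data):
            if self.state == "rto":
                if advances_window:
                    # First ACK after the timeout advances the window: transmit
                    # NEW data instead of retransmitting more, and keep watching.
                    self.state = "sent_new"
                    return "send_new_data"
                self.state = "idle"
                return "continue_rto_recovery"
            if self.state == "sent_new":
                self.state = "idle"
                if advances_window and not acks_only_retransmitted_data:
                    # Second ACK covers data that was never retransmitted, so the
                    # original transmissions arrived: the timeout was spurious.
                    return "timeout_was_spurious"
                return "continue_rto_recovery"
            return "no_action"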

    High Performance Network Evaluation and Testing


    Enhancing programmability for adaptive resource management in next generation data centre networks

    Recently, Data Centre (DC) infrastructures have been growing rapidly to support a wide range of emerging services, and provide the underlying connectivity and compute resources that facilitate the "*-as-a-Service" model. This has led to the deployment of a multitude of services multiplexed over a few very large-scale centralised infrastructures. In order to cope with the ebb and flow of users, services and traffic, infrastructures have been provisioned for peak demand, resulting in low average resource utilisation. This overprovisioning has been further motivated by the difficulty of predicting traffic demands over diverse timescales and the severe economic impact of outages. At the same time, the emergence of Software Defined Networking (SDN) is offering new means to monitor and manage the network infrastructure to address this underutilisation. This dissertation aims to show how measurement-based resource management can improve performance and resource utilisation by adaptively tuning the infrastructure to the changing operating conditions. To achieve this dynamicity, the infrastructure must be able to centrally monitor, notify and react based on the current operating state, from per-packet dynamics to long-standing traffic trends and topological changes. However, the management and orchestration abilities of current SDN realisations are too limited and must evolve for next-generation networks. The current focus has been on logically centralising the routing and forwarding decisions. However, in order to achieve the necessary fine-grained insight, the data plane of the individual device must be programmable to collect and disseminate the metrics of interest. The results of this work demonstrate that a logically centralised controller can dynamically collect and measure network operating metrics and subsequently compute and disseminate fine-tuned, environment-specific settings. They show how this approach can prevent TCP incast throughput collapse and improve TCP performance by an order of magnitude for partition-aggregate traffic patterns. Furthermore, the paradigm is generalised to show its benefits for other services widely used in DCs, such as routing, telemetry, and security.
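    The monitor, compute, and disseminate cycle described above can be sketched as a simple controller loop. The switch interface (poll_stats/push_config) and the ECN-threshold heuristic below are hypothetical, included only to illustrate how a centralised controller might derive environment-specific settings from measured state, for example to mitigate incast during partition-aggregate bursts.

    # Hypothetical controller loop illustrating measurement-based tuning. The
    # switch objects' poll_stats()/push_config() methods and the ECN-threshold
    # heuristic are assumptions, not the dissertation's actual interfaces.
    import time

    def control_loop(switches, interval_s=1.0):
        while True:
            for sw in switches:
                stats = sw.poll_stats()   # e.g. {"fan_in": 32, "avg_queue_pkts": 12.5}
                # Shrink the ECN marking threshold as synchronised fan-in grows, so
                # senders back off before partition-aggregate bursts fill the buffer.
                fan_in = max(int(stats["fan_in"]), 1)
                ecn_threshold_pkts = max(5, 100 // fan_in)
                sw.push_config({"ecn_mark_threshold_pkts": ecn_threshold_pkts})
            time.sleep(interval_s)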