
    On the flow-level stability of data networks without congestion control: the case of linear networks and upstream trees

    In this paper, flow models of networks without congestion control are considered. Users generate data transfers according to Poisson processes and transmit the corresponding packets at a fixed rate equal to their access rate until the entire document is received at the destination; erasure codes are used to make the transmission robust to packet losses. We study the stability of the stochastic process representing the number of active flows in two particular cases: linear networks and upstream trees. For linear networks, we use fluid limits, in which an interesting "time scale separation" phenomenon occurs, and we give bounds on the stability region. For upstream trees, underlying monotonicity properties are used. Finally, the asymptotic stability of these processes is analyzed as the access rate of the users decreases to 0. An appropriate scaling is introduced and used to prove that the stability region of these networks is asymptotically maximized.
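    To make the model concrete, here is a minimal Python sketch of such a flow-level system in the simplest configuration: a single class of flows, each crossing every link of a linear network. All parameter values, the Bernoulli approximation of Poisson arrivals, and the discrete-time drain rule are illustrative assumptions, not the paper's exact formulation.

```python
import random

random.seed(1)

L = 3             # links in the linear network (all flows cross all of them)
C = 1.0           # capacity of each link
A = 0.4           # fixed access rate of every user (hypothetical value)
LAM = 0.5         # Poisson arrival rate of new flows
MEAN_SIZE = 1.0   # mean document size (exponentially distributed)
DT = 0.01         # time step of the discrete-time approximation

flows = []        # remaining document sizes of the active flows

def step(flows):
    """Advance the system by DT: arrivals, per-link loss, service."""
    # Bernoulli approximation of Poisson arrivals in a slot of length DT.
    if random.random() < LAM * DT:
        flows.append(random.expovariate(1.0 / MEAN_SIZE))
    if not flows:
        return flows
    # Every flow pushes at rate A; a link whose input exceeds capacity
    # delivers only a fraction of each packet stream, and erasure coding
    # lets a flow use whatever fraction survives all L links.
    total_in = A * len(flows)
    survive_one_link = min(1.0, C / total_in)
    goodput = A * survive_one_link ** L
    return [s - goodput * DT for s in flows if s > goodput * DT]

for _ in range(100_000):
    flows = step(flows)
print("active flows after the run:", len(flows))
```

    Because each additional active flow congests every link, the per-flow goodput in this toy falls roughly like A(C/nA)^L in the number n of flows, which is the kind of effect that shapes the stability regions studied here.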

    Re-feedback: freedom with accountability for causing congestion in a connectionless internetwork

    This dissertation concerns adding resource accountability to a simplex internetwork such as the Internet, with constraints on freedom that are necessary but sufficient. That is, both freedom for applications to evolve new, innovative behaviours while still responding responsibly to congestion, and freedom for network providers to structure their pricing in any way, including flat pricing. The big idea on which the research is built is a novel feedback arrangement termed 're-feedback'. A general form is defined, as well as a specific proposal (re-ECN) to alter the Internet protocol so that self-contained datagrams carry a metric of expected downstream congestion. Congestion is chosen because of its central economic role as the marginal cost of network usage. The aim is to ensure Internet resource allocation can be controlled either by local policies or by market selection (or indeed by a local lack of any control). The current Internet architecture is designed to reveal path congestion only to end-points, not to networks. The collective actions of self-interested consumers and providers should drive Internet resource allocation towards maximisation of total social welfare. But without visibility of a cost metric, network operators are violating the architecture to improve their customers' experience. The resulting fight against the architecture is destroying the Internet's simplicity and its ability to evolve. Although accountability with freedom is the goal, the focus is the congestion metric, and whether an incentive system is possible that assures its integrity as it is passed between parties around the system, despite proposed attacks motivated by self-interest and malice. This dissertation defines the protocol and canonical examples of accountability mechanisms. Designs are all derived from carefully motivated principles. The resulting system is evaluated by analysis and simulation against the constraints and principles originally set. The mechanisms are proven to be agnostic to specific transport behaviours, but they could not be made flow-ID-oblivious.
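    As a rough illustration of the re-feedback idea (not the actual re-ECN wire encoding; the field names and arithmetic below are assumptions), a sender can re-insert into each datagram the whole-path congestion it learned from receiver feedback, and any node on the path can then estimate the congestion remaining downstream as the declared value minus the marks accumulated so far:

```python
from dataclasses import dataclass

@dataclass
class Packet:
    declared_path_congestion: float      # sender's re-inserted feedback
    accumulated_congestion: float = 0.0  # marks added hop by hop

def forward(pkt: Packet, local_congestion: float) -> float:
    """Estimate downstream congestion at this node, then mark the packet."""
    downstream = pkt.declared_path_congestion - pkt.accumulated_congestion
    pkt.accumulated_congestion += local_congestion
    return downstream

# Per-hop congestion on one path; the sender has learned (via receiver
# feedback on earlier packets) that whole-path congestion is their sum.
path = [0.01, 0.05, 0.02]
pkt = Packet(declared_path_congestion=sum(path))
for hop, c in enumerate(path):
    print(f"hop {hop}: estimated downstream congestion = {forward(pkt, c):.3f}")
```

    In such a scheme an ingress policer can hold senders accountable for the congestion they declare, and an egress can compare the declaration against the marks actually accumulated; whether that declaration's integrity can survive self-interested and malicious parties is the incentive question the dissertation pursues.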

    BloomCasting for publish/subscribe networks

    Publish/subscribe has been proposed as a way of addressing information as the primary named entity in the network. In this thesis, we develop and explore a network architecture based on publish/subscribe primitives, building on our work in the PSIRP project. Our work is divided into two areas: rendezvous and BloomCasting, a fast Bloom filter-based forwarding architecture for source-specific multicast. Taken together, these combine into a publish/subscribe architecture in which publisher and subscriber matching is done by the rendezvous system and the Bloom filter-based forwarding fabric is used for multicasting the published content. Our work on inter-domain rendezvous shows that a combination of policy routing at the edges and an overlay based on hierarchical distributed hash tables can overcome problems related to incremental deployment while keeping the stretch of queries small, and that it can solve some policy-related problems that arise from using distributed hash tables in an inter-domain setting. Bloom filters can cause false positives. We show that false positives can cause network anomalies when Bloom filters are used for packet forwarding. We found three such anomalies: packet storms, packet loops, and flow duplication. They can severely disrupt the network infrastructure and be used for denial-of-service attacks against the network or target services. These security and reliability problems can be solved by using a combination of three techniques. Cryptographically computed edge-pair labels ensure that an attacker cannot construct Bloom filter-based path identifiers for a chosen path. Varying the Bloom filter parameters locally at each router prevents packet storms, and applying bit permutations to the Bloom filter locally at each router prevents accidental and malicious loops and flow duplication.

    One shortcoming of the Internet is that there is no way, common to all applications, of naming information. The publish/subscribe model is one proposal for changing the Internet architecture to remedy this shortcoming. In this dissertation I develop a network architecture based on the publish/subscribe model, building on my work in the PSIRP project. The architecture consists of a rendezvous system, which matches publishers with subscribers, and a Bloom filter-based multicast channel through which published content is delivered to subscribers. An Internet-wide rendezvous system faces demanding requirements. I study two different approaches: one based on local routing policies and another based on distributed hash tables. The challenge for the former is scalability, particularly when not all networks in the Internet participate in maintaining the system. The latter is problematic because systems built on it cannot guarantee which route publish and subscribe messages take through the system; a message may thus also travel through the network of a publisher's or subscriber's competitor. I propose a method that combines policy-based publish/subscribe routing at the edges and, in the core of the network, joins these separate islands using a hierarchical distributed hash table. For delivering publications to subscribers I use a Bloom filter-based system. I show that using Bloom filters for packet forwarding can cause significant failures in the network, for example packet storms, loops, and the duplication of packets belonging to the same flow. These problems cause security and reliability issues for the network, which can be solved by combining three techniques. First, the path-segment names placed in the Bloom filters are computed cryptographically, so that an attacker cannot compute a Bloom filter for a path of their choosing without the network's help. Second, routers set the Bloom filter parameters locally in such a way that packet storms cannot occur. Third, each router permutes the bits of the Bloom filter, ensuring that the filter is no longer the same if a packet traverses, for example, a loop and returns to the same router.
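    The loop-prevention mechanism can be caricatured in a few lines of Python. This sketch is illustrative only: the hash choices, filter width, and permutation are assumptions, and the real construction builds the in-packet filter so that legitimate next-hop links still match after each router's permutation; the toy below only shows why a packet that loops back to a router stops matching the links it already traversed.

```python
import hashlib, random

M = 64  # Bloom filter width in bits (illustrative)
K = 3   # hash functions per link label (illustrative)

def link_mask(label: str) -> int:
    """Bloom mask for one link label: K bit positions from a hash."""
    mask = 0
    for i in range(K):
        digest = hashlib.sha256(f"{label}:{i}".encode()).digest()
        mask |= 1 << (int.from_bytes(digest[:4], "big") % M)
    return mask

def build_filter(path_links) -> int:
    """OR together the masks of every link on the intended path."""
    filt = 0
    for label in path_links:
        filt |= link_mask(label)
    return filt

def matches(filt: int, label: str) -> bool:
    """Forward on a link if all its bits are set (false positives possible)."""
    mask = link_mask(label)
    return filt & mask == mask

def permute_bits(filt: int, router_secret: int) -> int:
    """Router-local bit permutation applied as the packet leaves the router."""
    rng = random.Random(router_secret)
    perm = list(range(M))
    rng.shuffle(perm)
    out = 0
    for src, dst in enumerate(perm):
        if filt >> src & 1:
            out |= 1 << dst
    return out

# A filter built for the path A->B->C forwards on link "A->B", but once the
# router has permuted the bits, a looping copy of the packet no longer does.
f = build_filter(["A->B", "B->C"])
print(matches(f, "A->B"))                    # True
print(matches(permute_bits(f, 42), "A->B"))  # almost surely False
```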

    Consistent high performance and flexible congestion control architecture

    The part of the TCP software stack that controls how fast a data sender transfers packets is usually referred to as congestion control, because it was originally introduced to avoid network congestion among multiple competing flows. Over the past 30 years of Internet evolution, the traditional TCP congestion control architecture, despite an army of specially engineered implementations and improvements over the original software, suffers increasingly from surprisingly poor performance under today's complicated network conditions. We argue that the traditional TCP congestion control family has little hope of achieving consistently high performance due to a fundamental architectural deficiency: hardwiring packet-level events to control responses. In this thesis, we propose Performance-oriented Congestion Control (PCC), a new congestion control architecture in which each sender continuously observes the connection between its rate control actions and empirically experienced performance, enabling it to use intelligent control algorithms to consistently adopt actions that result in high performance. We first build the analytical foundation of the PCC architecture and prove the viability of this new congestion control architecture. Specifically, we show that, contrary to intuition, with a certain form of utility function and a theoretically simplified rate control algorithm, selfishly competing senders converge to a fair and stable Nash equilibrium. With this architectural and theoretical guideline, we then design and implement the first congestion control protocol in the PCC family: PCC Allegro. PCC Allegro immediately demonstrates its architectural benefits with significant, often more than 10x, performance gains over a wide spectrum of challenging network conditions. With this very encouraging performance validation, we further advance PCC's architecture in both its utility function framework and its learning rate control algorithm. Taking a principled approach using online learning theory, we design PCC Vivace with a new, strictly socially concave utility function framework and a gradient-ascent-based learning rate control algorithm. PCC Vivace significantly improves performance on fast-changing networks, yields a better tradeoff between convergence speed and stability, and shows better TCP friendliness compared to PCC Allegro and other state-of-the-art new congestion control protocols. Moreover, PCC Vivace's expressive utility function framework can be tuned differently for different competing flows to produce predictable converged throughput ratios for each flow. This opens significant future potential for PCC Vivace in centrally controlled networking paradigms such as Software-Defined Networking (SDN). Finally, with all these research advances, we aim to push the PCC architecture to production use with a user-space tunneling proxy and a successful integration with Google's QUIC transport framework.
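    The architectural idea, observe empirical performance and move the rate toward whatever worked, can be caricatured in a few lines. The utility function, toy network model, and step sizes below are illustrative assumptions, not PCC Allegro's or Vivace's actual definitions:

```python
def toy_network(rate: float, capacity: float = 10.0):
    """Toy bottleneck: loss appears once the sending rate exceeds capacity."""
    loss = max(0.0, (rate - capacity) / rate) if rate > 0 else 0.0
    return rate * (1.0 - loss), loss  # (throughput, loss rate)

def utility(rate: float) -> float:
    """Reward throughput, penalize loss heavily (illustrative form only)."""
    throughput, loss = toy_network(rate)
    return throughput - 10.0 * rate * loss

rate, eps, step = 1.0, 0.05, 0.5
for _ in range(30):
    u_up = utility(rate * (1 + eps))    # micro-experiment: slightly faster
    u_down = utility(rate * (1 - eps))  # micro-experiment: slightly slower
    rate = max(0.1, rate + (step if u_up > u_down else -step))
print(f"settled rate ~ {rate:.2f} (toy capacity 10.0)")
```

    In this toy, the sender climbs while loss-free utility keeps improving and settles near the bottleneck capacity, without ever hardwiring a response to any packet-level event.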

    Toward cheat-proof networking

    Over the last three decades the Internet has evolved from a network of a dozen academics to one that spans the globe and is the primary medium of electronic communication. When the two most fundamental Internet protocols---IP and TCP---were designed, they were intended to address the problems of the day: addressing, routing, reliable delivery, and, eventually, congestion. This dissertation studies how these protocols may be augmented to adjust to today's reality and cope with the possibility of cheating---the overuse of network resources---by network users. We study cheating in the Internet in two contexts. First, we examine strictly limiting the resources a user or entity can consume in the network. Second, we study self-interested, greedy users who want to consume as many resources as possible. The key challenge in both of these contexts is not primarily in designing cheat-proof mechanisms, but in doing so while avoiding unwanted network or architectural overhead. For the former, we develop the notion of Distributed Rate Limiting, which enables a network service provider to cap the aggregate bandwidth consumed by a user at different locations in the network. Distributed Rate Limiters operate at the network layer and aim to emulate the behavior of today's centralized limiters with low inter-limiter communication overhead. For the latter---developing a transport layer that copes not only with the vicissitudes of network application traffic, but also with the desires and motivations of network users---we develop the notion of Decongestion Control, a congestion control paradigm in which users attempt to maximize their individual throughput in the course of normal operation. In networking canon, dropped packets represent wasted resources, and thus traditional network congestion control protocols aim to avoid sending at a rate that induces packet loss. We study whether the benefits of a transport layer that embraces---rather than avoids---widespread packet loss and user self-interest outweigh the potential loss in efficiency. For both of these systems, we identify numerous potential benefits and applications for network provider and network user alike, and develop a framework in which such systems can be evaluated. For Distributed Rate Limiting we identify two important metrics---inter-flow fairness and the rate that a limiter can deliver under shifting traffic patterns---and evaluate how communication overhead impacts the algorithms we present with respect to these metrics. For Decongestion Control, we similarly identify and examine the principal challenges: that the protocol must provide performance no worse than TCP, and that its widespread use must not cause congestion collapse.
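    As a sketch of the Distributed Rate Limiting idea (the demand-proportional allocation rule and all names below are illustrative assumptions, not the dissertation's exact algorithms), each limiter can measure local demand, exchange those measurements with its peers, and take a proportional share of the single global cap:

```python
GLOBAL_LIMIT = 100.0  # aggregate cap sold by the provider (hypothetical)

def allocate(local_demands):
    """Demand-proportional split of the global limit across limiters."""
    total = sum(local_demands)
    if total <= GLOBAL_LIMIT:
        return list(local_demands)  # under the cap: nobody is limited
    return [GLOBAL_LIMIT * d / total for d in local_demands]

# Three limiters at different sites observe shifting traffic; after
# exchanging demand estimates they emulate a single centralized limiter:
# allocations track demand but never exceed the global cap in total.
for demands in [(30, 30, 30), (90, 40, 20), (150, 5, 5)]:
    limits = allocate(demands)
    print(demands, "->", [round(x, 1) for x in limits],
          "| total =", round(sum(limits), 1))
```

    The interesting engineering question, per the abstract's metrics, is how closely such limiters can track a centralized limiter's fairness and delivered rate as the frequency of demand exchange, and hence the communication overhead, is reduced.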