25 research outputs found

    Network-provider-independent overlays for resilience and quality of service.

    Get PDF
    PhD. Overlay networks are viewed as one of the solutions to the inefficiency and slow evolution of the Internet and have been the subject of significant research. Most existing overlays providing resilience and/or Quality of Service (QoS) require cooperation among different network providers, but this raises a trust issue between providers that cannot be easily solved. In this thesis, we focus on network-provider-independent overlays and investigate their performance in providing two different types of service. Specifically, this thesis addresses the following problems.
    Provider-independent overlay architecture: A provider-independent overlay framework named Resilient Overlay for Mission-Critical Applications (ROMCA) is proposed. We elaborate its structure, including component composition and functions, and provide several operational examples.
    Overlay topology construction for providing a resilience service: We investigate the topology design problem of provider-independent overlays aiming to provide a resilience service. More specifically, based on the ROMCA framework, we formulate this problem mathematically and prove its NP-hardness. Three heuristics are proposed, and extensive simulations are carried out to verify their effectiveness.
    Application mapping with resilience and QoS guarantees: Assuming application mapping is the targeted service for ROMCA, we formulate this problem as an Integer Linear Program (ILP). Moreover, a simple but effective heuristic is proposed to address this issue in a time-efficient manner. Simulations with both synthetic and real networks demonstrate the superiority of both solutions over existing ones.
    Substrate topology information availability and the impact of its accuracy on overlay performance: Based on our survey of the methodologies available for inferring, through active probing, the selective substrate topology formed among a group of nodes, we find that such information is usually inaccurate and that additional mechanisms are needed to obtain a more accurate inferred topology. Therefore, we examine the impact of the accuracy of the inferred substrate topology on overlay performance when only that inferred information is available.
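    As an illustration of the kind of topology-construction heuristic described above, the sketch below greedily adds degree-bounded overlay links whose underlying substrate paths overlap least with links already chosen. The graph representation, the BFS shortest-path routine, and the degree bound are assumptions made for illustration; ROMCA's actual heuristics are not reproduced here.

```python
# Illustrative sketch only: a greedy, degree-bounded overlay construction that
# prefers overlay links whose substrate paths share the fewest edges with links
# already selected. Not ROMCA's actual heuristic.
from collections import deque
from itertools import combinations

def substrate_path(adj, src, dst):
    """Shortest substrate path (hop count) via BFS; returns a list of nodes."""
    prev, seen, q = {}, {src}, deque([src])
    while q:
        u = q.popleft()
        if u == dst:
            path = [u]
            while path[-1] != src:
                path.append(prev[path[-1]])
            return path[::-1]
        for v in adj[u]:
            if v not in seen:
                seen.add(v)
                prev[v] = u
                q.append(v)
    return []

def greedy_overlay(adj, overlay_nodes, max_degree=2):
    """Pick overlay links whose substrate paths overlap least with used edges."""
    used_edges, links = set(), []
    degree = {n: 0 for n in overlay_nodes}
    candidates = []
    for a, b in combinations(overlay_nodes, 2):
        path = substrate_path(adj, a, b)
        edges = {frozenset(e) for e in zip(path, path[1:])}
        candidates.append((a, b, edges))
    # Repeatedly take the candidate with the least overlap, respecting degrees.
    while candidates:
        a, b, edges = min(candidates, key=lambda c: len(c[2] & used_edges))
        candidates.remove((a, b, edges))
        if degree[a] < max_degree and degree[b] < max_degree:
            links.append((a, b))
            used_edges |= edges
            degree[a] += 1
            degree[b] += 1
    return links

# Toy substrate: a ring of six routers, three of which host overlay nodes.
adj = {i: [(i - 1) % 6, (i + 1) % 6] for i in range(6)}
print(greedy_overlay(adj, overlay_nodes=[0, 2, 4]))
```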

    Measuring The Evolving Internet Ecosystem With Exchange Points

    Get PDF
    The Internet ecosystem, comprising thousands of Autonomous Systems (ASes), now includes Internet eXchange Points (IXPs) as another critical component of the infrastructure. Peering plays a significant part in driving the economic growth of ASes and is contributing to a variety of structural changes in the Internet. IXPs are a primary component of this peering ecosystem and play an increasing role not only in the topology evolution of the Internet but also in inter-domain path routing. In this dissertation we study and analyze the overall effects of peering and IXP infrastructure on the Internet. We observe that IXP peering is enabling a quicker flattening of the Internet topology and leading to over-utilization of popular inter-AS links. Indiscriminate peering at these locations leads to higher end-to-end path latencies for ASes peering at an exchange point, an effect magnified at the most popular worldwide IXPs.
    We first study the effects of recently discovered IXP links on inter-AS routes using graph-based approaches and find that they point towards the changing and flattening landscape in the evolution of the Internet's topology. We then study further IXP effects by using measurements to investigate the network benefits of peering. We propose and implement a measurement framework which identifies default paths through IXPs and compares them with alternate paths isolating the IXP hop. Our system runs continuously, recording default and alternate path latencies, and is publicly available. We model the probability of an alternate path performing better than a default path through an IXP by identifying the underlying factors influencing end-to-end path latency. Our first-of-its-kind modeling study, which uses a combination of statistical and machine learning approaches, shows that path latencies depend on the popularity of the particular IXP, the size of the provider ASes of the networks peering at common locations, and the relative position of the IXP hop along the path. An in-depth comparison of end-to-end path latencies reveals a significant percentage of alternate paths outperforming the default route through an IXP. This characteristic of higher path latencies is magnified in the popular continental exchanges, as measured in our case study of the largest regional IXPs.
    We continue by studying another effect of peering with numerous applications in overlay routing: Triangle Inequality Violations (TIVs). These TIVs in the Internet delay space arise from peering, and we compare their essential characteristics with those of overlay paths such as detour routes. They are identified and analyzed from existing measurement datasets, but at a scale not attempted before. Our implementation demonstrates the effectiveness of GPUs in analyzing big data sets, and the TIVs studied show that a common set of inter-AS links creates them. This result provides new insight into how TIVs develop, obtained by analyzing a very large dataset using GPGPUs. Overall our work presents numerous insights into the inner workings of the Internet's peering ecosystem. Our measurements show the effects of exchange points on the evolving Internet and exhibit their importance to Internet routing.
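    The sketch below illustrates the basic operation behind the TIV analysis described above: scanning a pairwise delay matrix for ordered triples where a two-hop path is faster than the direct path. The dissertation does this at far larger scale on GPUs; the tiny synthetic delay matrix here is an assumption for illustration only.

```python
# Minimal CPU sketch of triangle-inequality-violation (TIV) detection in a
# pairwise delay matrix. The toy values below are synthetic.
from itertools import permutations

def find_tivs(delay):
    """Return ordered triples (a, b, c) where the direct delay a->c exceeds
    the relayed delay a->b->c, i.e. the triangle inequality is violated."""
    nodes = list(delay)
    return [(a, b, c) for a, b, c in permutations(nodes, 3)
            if delay[a][c] > delay[a][b] + delay[b][c]]

# Symmetric toy delay matrix in milliseconds.
delay = {
    "A": {"A": 0, "B": 10, "C": 50},
    "B": {"A": 10, "B": 0, "C": 15},
    "C": {"A": 50, "B": 15, "C": 0},
}
print(find_tivs(delay))   # A->C (50 ms) is beaten by A->B->C (25 ms)
```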

    Improving end-to-end availability using overlay networks

    Get PDF
    Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, February 2005. Includes bibliographical references (p. 139-150). The end-to-end availability of Internet services is between two and three orders of magnitude worse than that of other important engineered systems, including the US airline system, the 911 emergency response system, and the US public telephone system. This dissertation explores three systems designed to mask Internet failures and, through a study of three years of data collected on a 31-site testbed, examines why these failures happen and how effectively they can be masked. A core aspect of many of the failures that interrupt end-to-end communication is that they fall outside the expected domain of well-behaved network failures. Many traditional techniques cope with link and router failures; as a result, the remaining failures are those caused by software and hardware bugs, misconfiguration, malice, or the inability of current routing systems to cope with persistent congestion. The effects of these failures are exacerbated because Internet services depend upon the proper functioning of many components (wide-area routing, access links, the domain name system, and the servers themselves) and a failure in any of them can prove disastrous to the proper functioning of the service.
    This dissertation describes three complementary systems to increase Internet availability in the face of such failures. Each system builds upon the idea of an overlay network, a network created dynamically between a group of cooperating Internet hosts. The first two systems, Resilient Overlay Networks (RON) and Multi-homed Overlay Networks (MONET), determine whether the Internet path between two hosts is working on an end-to-end basis. Both systems exploit the considerable redundancy available in the underlying Internet to find failure-disjoint paths between nodes and forward traffic along a working path. RON is able to avoid 50% of the Internet outages that interrupt communication between a small group of communicating nodes. MONET is more aggressive, combining an overlay network of Web proxies with explicitly engineered redundant links to the Internet to also mask client access link failures. Eighteen months of measurements from a six-site deployment of MONET show that it increases a client's ability to access working Web sites by nearly an order of magnitude. Where RON and MONET combat accidental failures, the Mayday system guards against denial-of-service attacks by surrounding a vulnerable Internet server with a ring of filtering routers. Mayday then uses a set of overlay nodes to act as mediators between the service and its clients, permitting only properly authenticated traffic to reach the server. By David Godbe Andersen. Ph.D.
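    The sketch below illustrates the overlay failover idea behind RON-style systems: probe the direct Internet path and, if it appears unreachable, forward through the overlay peer that is currently reachable at the lowest latency. The peer names and the TCP-connect probe are stand-ins introduced for illustration, not RON's actual mechanisms.

```python
# Illustrative sketch of overlay failover: prefer the direct path, fall back to
# the best reachable overlay peer when the direct probe fails. Peer hostnames
# are hypothetical placeholders.
import socket
import time

PEERS = ["peer1.example.org", "peer2.example.org"]   # hypothetical overlay peers

def probe(host, port=80, timeout=1.0):
    """Return measured connect latency in seconds, or None if unreachable."""
    try:
        start = time.monotonic()
        with socket.create_connection((host, port), timeout=timeout):
            return time.monotonic() - start
    except OSError:
        return None

def choose_path(dest):
    """Use the direct path if it works; otherwise pick the reachable overlay
    peer with the lowest first-hop latency (the peer would forward onward)."""
    direct = probe(dest)
    if direct is not None:
        return ("direct", direct)
    reachable = [(p, rtt) for p in PEERS if (rtt := probe(p)) is not None]
    if not reachable:
        return ("unreachable", None)
    peer, rtt = min(reachable, key=lambda x: x[1])
    return (f"via {peer}", rtt)

print(choose_path("www.example.com"))
```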

    The Future of Networking is the Future of Big Data

    Get PDF
    Summer 2019. Includes bibliographical references. Scientific domains such as Climate Science, High Energy Particle Physics (HEP), Genomics, Biology, and many others are increasingly moving towards data-oriented workflows where each of these communities generates, stores, and uses massive datasets that reach into terabytes and petabytes, and are projected soon to reach exabytes. These communities are also increasingly moving towards a global collaborative model where scientists routinely exchange a significant amount of data. The sheer volume of data, and the complexities associated with maintaining, transferring, and using it, continue to push the limits of current technologies in multiple dimensions: storage, analysis, networking, and security. This thesis tackles the networking aspect of big-data science. Networking is the glue that binds all the components of modern scientific workflows, and these communities are becoming increasingly dependent on high-speed, highly reliable networks. The network, as the common layer across big-science communities, provides an ideal place for implementing common services. Big-science applications also need to work closely with the network to ensure optimal usage of resources and intelligent routing of requests and data. Finally, as more communities move towards data-intensive, connected workflows, adopting a service model where the network provides some of the common services reduces not only application complexity but also the need for duplicate implementations. Named Data Networking (NDN) is a new network architecture whose service model aligns better with the needs of these data-oriented applications. NDN's name-based paradigm makes it easier to provide intelligent features at the network layer rather than at the application layer. This thesis shows that NDN can push several standard features to the network. This work is the first attempt to apply NDN in the context of large scientific data; in the process, this thesis touches upon scientific data naming, name discovery, real-world deployment of NDN for scientific data, feasibility studies, and the design of in-network protocols for big-data science.
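    A hypothetical sketch of NDN-style hierarchical naming and name discovery for scientific data follows. The name prefix, fields, and in-memory catalog are assumptions introduced for illustration, not the naming scheme actually designed in the thesis.

```python
# Hypothetical sketch of hierarchical, NDN-style naming for scientific data.
def climate_name(experiment, variable, year, month):
    """Build a hierarchical content name for one month of model output."""
    return f"/ndn/climate/{experiment}/{variable}/{year:04d}/{month:02d}"

# A tiny in-memory "catalog" mapping names to repositories, standing in for
# name discovery against a real catalog service.
catalog = {
    climate_name("rcp85", "tas", 2020, 1): "repo-a",
    climate_name("rcp85", "tas", 2020, 2): "repo-b",
}

def discover(prefix):
    """Return all catalogued names under a given name prefix."""
    return [name for name in catalog if name.startswith(prefix)]

print(discover("/ndn/climate/rcp85/tas/2020"))
```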

    Improving End-to-End Internet Performance by Detouring

    No full text
    The Internet provides a best-effort service, which yields a robust, fault-tolerant network. However, the performance of the paths found by regular Internet routing is suboptimal. As a result, applications rarely achieve all the benefits that the Internet can provide. The problem is made more difficult because the Internet is formed of competing ISPs that have little incentive to reveal information about the performance of Internet paths. As a result, the Internet is sometimes referred to as a ‘black box’. Detouring uses routing overlay networks to find alternative paths (or detour paths) that can improve reliability, latency, and bandwidth. Previous work has shown that detouring can improve Internet performance. However, one important issue remains: how can these detour paths be found without conducting large-scale measurements? In this thesis, we describe practical methods, scalable to the Internet, for discovering detour paths that improve specific performance metrics. In particular, we concentrate our efforts on two metrics, latency and bandwidth, arguably the two most important performance metrics for end users' applications. Taking advantage of the Internet topology, we show how nodes can learn about segments of Internet paths that can be exploited by detouring, leading to reduced path latencies. Next, we investigate bandwidth detouring, revealing constructive detour properties and effective mechanisms for detouring paths in overlay networks. This leads to Ukairo, our bandwidth-detouring platform that is scalable to the Internet, and tcpChiryo, which predicts bandwidth in an overlay network by measuring a small portion of the network.
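    The core of latency detouring can be illustrated with a one-hop relay check: a detour through an intermediate node pays off whenever the sum of the two relay legs is smaller than the direct path. The RTT table below is synthetic, and the sketch does not reproduce the thesis's measurement-light discovery techniques.

```python
# Minimal sketch of one-hop latency detouring over a synthetic RTT table.
def best_one_hop_detour(rtt, src, dst):
    """Return (relay, detour_latency) if a one-hop detour beats the direct
    path, otherwise (None, direct_latency)."""
    direct = rtt[src][dst]
    relay, latency = min(((k, rtt[src][k] + rtt[k][dst]) for k in rtt
                          if k not in (src, dst)), key=lambda x: x[1])
    return (relay, latency) if latency < direct else (None, direct)

rtt = {   # synthetic round-trip times in milliseconds
    "london": {"london": 0,   "nyc": 90,  "tokyo": 280},
    "nyc":    {"london": 90,  "nyc": 0,   "tokyo": 170},
    "tokyo":  {"london": 280, "nyc": 170, "tokyo": 0},
}
print(best_one_hop_detour(rtt, "london", "tokyo"))
# -> ('nyc', 260): the relayed path beats the 280 ms direct path
```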

    Resilient overlay networks

    Get PDF

    Optimizing Mobile Application Performance through Network Infrastructure Aware Adaptation.

    Full text link
    Encouraged by the fast adoption of mobile devices and the widespread deployment of mobile networks, mobile applications are becoming the preferred “gateways” connecting users to networking services. Although the CPU capability of mobile devices is approaching that of off-the-shelf PCs, the performance of mobile networking applications is still far behind. One of the fundamental reasons is that most mobile applications are unaware of mobile-network-specific characteristics, leading to inefficient use of network and device resources. Thus, in order to improve the user experience for most mobile applications, it is essential to dive into the critical network components along network connections, including mobile networks, smartphone platforms, mobile applications, and content partners. We aim to optimize the performance of mobile network applications through network-aware resource adaptation approaches. Our techniques consist of the following four aspects: (i) revealing the fundamental infrastructure characteristics of cellular networks that are distinct from wireline networks; (ii) isolating the impact of important factors on user-perceived performance in mobile network applications; (iii) determining the particular usage patterns of mobile applications; and (iv) improving the performance of mobile applications through network-aware adaptations. PhD. Computer Science & Engineering. University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/99829/1/qiangxu_1.pd
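    As a toy illustration of network-aware adaptation in the spirit of point (iv), the sketch below picks a content quality level from a recent throughput estimate. The quality ladder and thresholds are assumptions for illustration, not adaptation policies from the thesis.

```python
# Toy network-aware adaptation: choose the highest quality level whose bitrate
# the measured link throughput can sustain. Ladder values are assumptions.
QUALITY_LADDER = [          # (label, minimum sustained throughput in kbit/s)
    ("1080p", 5000),
    ("720p", 2500),
    ("480p", 1000),
    ("240p", 0),
]

def pick_quality(throughput_kbps):
    """Return the best quality level the measured throughput can sustain."""
    for label, min_kbps in QUALITY_LADDER:
        if throughput_kbps >= min_kbps:
            return label

print(pick_quality(3200))   # -> '720p'
```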

    Software-Driven and Virtualized Architectures for Scalable 5G Networks

    Full text link
    In this dissertation, we argue that it is essential to rearchitect 4G cellular core networks, which sit between the Internet and the radio access network, to meet the scalability, performance, and flexibility requirements of 5G networks. Today, there is a growing consensus among operators and the research community that software-defined networking (SDN), network function virtualization (NFV), and mobile edge computing (MEC) paradigms will be the key ingredients of next-generation cellular networks. Motivated by these trends, we design and optimize three core network architectures, SoftMoW, SoftBox, and SkyCore, for different network scales, objectives, and conditions. SoftMoW provides global control over nationwide core networks with the ultimate goal of enabling new routing and mobility optimizations. SoftBox attempts to enhance policy enforcement in statewide core networks to enable low-latency, signaling-efficient, and customized services for mobile devices. SkyCore is aimed at realizing a compact core network for citywide UAV-based radio networks that will serve first responders in the future. Network slicing techniques make it possible to deploy these solutions on the same infrastructure in parallel. To better support mobility and provide verifiable security, these architectures can use an addressing scheme that separates network locations and identities with self-certifying, flat, and non-aggregatable address components. To benefit the proposed architectures, we designed a high-speed and memory-efficient router, called Caesar, for this type of addressing scheme. PhD. Computer Science & Engineering. University of Michigan, Horace H. Rackham School of Graduate Studies. https://deepblue.lib.umich.edu/bitstream/2027.42/146130/1/moradi_1.pd
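    The sketch below illustrates the general idea of self-certifying, flat address components: the identity part of an address is derived by hashing the owner's public key, so any node can verify the binding without a trusted mapping service. The key handling and identifier length are simplifications for illustration and do not reflect Caesar's actual address format.

```python
# Sketch of self-certifying, flat identifiers: identity = hash(public key).
import hashlib

def self_certifying_id(public_key_bytes: bytes) -> str:
    """Derive a flat, non-aggregatable identifier from a public key."""
    return hashlib.sha256(public_key_bytes).hexdigest()[:40]

def verify_binding(identifier: str, public_key_bytes: bytes) -> bool:
    """Check that an advertised identifier really belongs to this key."""
    return identifier == self_certifying_id(public_key_bytes)

key = b"-----BEGIN PUBLIC KEY----- example material -----END PUBLIC KEY-----"
ident = self_certifying_id(key)
print(ident, verify_binding(ident, key))
```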

    Edge Computing Platforms and Protocols

    Get PDF
    Cloud computing has created a radical shift in expanding the reach of application usage and has emerged as the de facto method for providing low-cost and highly scalable computing services to users. Existing cloud infrastructure is a composition of large-scale networks of datacenters spread across the globe. These datacenters are carefully installed in isolated locations and are heavily managed by cloud providers to ensure reliable performance for their users. In recent years, novel applications such as the Internet of Things, augmented reality, and autonomous vehicles have proliferated across the Internet. The majority of such applications are time-critical and impose strict computational delay requirements for acceptable performance. Traditional cloud offloading techniques are inefficient for handling such applications because of the additional network delay incurred while uploading prerequisite data to distant datacenters. Furthermore, as computations involving such applications often rely on sensor data from multiple sources, simultaneous data upload to the cloud also results in significant congestion in the network.
    Edge computing is a new cloud paradigm which aims to bring existing cloud services and utilities near end users. Also termed edge clouds, this emerging platform's central objective is to reduce the network load on the cloud by utilizing compute resources in the vicinity of users and IoT sensors. Dense geographical deployment of edge clouds in an area not only allows for optimal operation of delay-sensitive applications but also provides support for mobility, context awareness, and data aggregation in computations. However, the added functionality of edge clouds comes at the cost of incompatibility with existing cloud infrastructure. For example, while datacenter servers are closely monitored by cloud providers to ensure reliability and security, edge servers aim to operate in unmanaged, publicly shared environments. Moreover, several edge cloud approaches aim to incorporate crowdsourced compute resources, such as smartphones, desktops, and tablets, near the location of end users to support stringent latency demands. The resulting infrastructure is an amalgamation of heterogeneous, resource-constrained, and unreliable compute-capable devices that aims to replicate cloud-like performance.
    This thesis provides a comprehensive collection of novel protocols and platforms for integrating edge computing into the existing cloud infrastructure. At its foundation lies an all-inclusive edge cloud architecture which allows several co-existing edge cloud approaches to be unified in a single logically classified platform. The thesis further addresses several open problems for three core categories of edge computing: hardware, infrastructure, and platform. For hardware, it contributes a deployment framework which enables interested cloud providers to effectively identify optimal locations for deploying edge servers in any geographical region. For infrastructure, it proposes several protocols and techniques for efficient task allocation, data management, and network utilization in edge clouds, with the end goal of maximizing the operability of the platform as a whole. Finally, the thesis presents a virtualization-dependent platform that allows application owners to transparently utilize the underlying distributed infrastructure of edge clouds, in conjunction with other co-existing cloud environments, without much management overhead.
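    As an illustration of the edge-server placement problem addressed by the deployment framework, the sketch below greedily chooses k sites from a candidate set so that the average user-to-nearest-site distance (a stand-in for latency) is minimized. The coordinates, Euclidean distance, and greedy strategy are assumptions; the framework itself is not reproduced here.

```python
# Illustrative greedy facility-location sketch for edge-server placement.
import math

def avg_nearest_distance(users, sites):
    """Average distance from each user to its closest chosen site."""
    return sum(min(math.dist(u, s) for s in sites) for u in users) / len(users)

def greedy_placement(users, candidates, k):
    """Repeatedly add the candidate site that most reduces average distance."""
    chosen, remaining = [], list(candidates)
    for _ in range(k):
        best = min(remaining,
                   key=lambda c: avg_nearest_distance(users, chosen + [c]))
        chosen.append(best)
        remaining.remove(best)
    return chosen

users = [(0, 0), (1, 0), (10, 10), (11, 9)]        # user locations
candidates = [(0, 1), (5, 5), (10, 10)]            # possible edge-server sites
print(greedy_placement(users, candidates, k=2))    # -> [(5, 5), (10, 10)]
```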

    Latency-driven replication for globally distributed systems

    Get PDF
    Steen, M.R. van [Promotor]; Pierre, G.E.O. [Copromotor]