94 research outputs found

    Optimization of BGP Convergence and Prefix Security in IP/MPLS Networks

    Multi-Protocol Label Switching (MPLS)-based networks form the backbone of the Internet, which communicates through the Border Gateway Protocol (BGP) to connect distinct networks, referred to as Autonomous Systems. As the technology matures, so do the challenges caused by the extreme growth rate of the Internet. The number of BGP prefixes required to support this growth in connectivity introduces critical new issues, notably the scalability and security of BGP itself. An implementation of an IP/MPLS core transmission network is illustrated through the four main pillars of an Autonomous System: Multi-Protocol Label Switching, the Border Gateway Protocol, Open Shortest Path First and the Resource Reservation Protocol. The interplay of these technologies is used to introduce the practicalities of operating an IP/MPLS-based ISP network with traffic engineering and fault resilience at heart. The first research objective of this thesis is to determine whether deploying a new BGP feature, BGP Prefix Independent Convergence (PIC), within AS16086 would be a worthwhile endeavour. This BGP extension aims to reduce the convergence delay of BGP prefixes inside an IP/MPLS core transmission network, thus improving the network's resilience against faults. The second research objective is to survey the available mechanisms for protecting BGP prefixes, such as the Resource Public Key Infrastructure (RPKI) and the ARTEMIS BGP monitor, for proactive and reactive security of BGP prefixes within AS16086. The prospective deployment of BGPsec is also discussed to form an outlook on the future of IP/MPLS network design, as the trust-based nature of BGP has become a distinct vulnerability, necessitating various technologies to secure communications between the Autonomous Systems that form the network to end all networks, the Internet.
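    The prefix-protection mechanisms mentioned above include RPKI-based route origin validation. As an illustration only (not drawn from the thesis), the minimal Python sketch below classifies a BGP announcement as valid, invalid, or not-found against a set of Route Origin Authorizations, following RFC 6811 semantics; the ROA entries, prefixes, and AS numbers are hypothetical placeholders.

```python
from ipaddress import ip_network

# Hypothetical ROA entries: (prefix, max_length, origin_asn).
# Real deployments fetch these from an RPKI validator (e.g. via RTR),
# not from a hard-coded list.
ROAS = [
    (ip_network("192.0.2.0/24"), 24, 64500),
    (ip_network("198.51.100.0/22"), 24, 64501),
]

def rov_state(prefix: str, origin_asn: int) -> str:
    """Classify a BGP announcement per RFC 6811 origin validation."""
    announced = ip_network(prefix)
    covered = False
    for roa_prefix, max_len, roa_asn in ROAS:
        # A ROA "covers" the announcement if the announced prefix lies
        # inside the ROA prefix.
        if announced.subnet_of(roa_prefix):
            covered = True
            # It "matches" only if origin AS and prefix length also agree.
            if origin_asn == roa_asn and announced.prefixlen <= max_len:
                return "valid"
    return "invalid" if covered else "not-found"

# A /25 announced from the right AS is still invalid because it exceeds
# the ROA's maxLength of /24.
print(rov_state("192.0.2.0/24", 64500))    # valid
print(rov_state("192.0.2.0/25", 64500))    # invalid (too specific)
print(rov_state("203.0.113.0/24", 64500))  # not-found
```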

    Bandwidth is political: Reachability in the public internet

    The global public Internet faces a growing but little-studied threat from the use of intrusive traffic management practices by both wholesale and retail Internet service providers. Unlike research concerned with bandwidth and traffic growth, this study shifts the risk analysis away from capacity issues to focus on performance standards for interconnection and data reachability. The long-term health of the Internet is framed in terms of “data reachability” – the principle that any end-user can reach any part of the Internet without encountering arbitrary actions on the part of a network operator that might block or degrade transmission. Risks to reachability are framed in terms of both systematic traffic management practices and “de-peering,” a more aggressive tactic practised by Tier-1 network operators to resolve disputes or punish rivals. De-peering is examined as an extension of retail network management practices that include the growing use of deep packet inspection (DPI) technology for traffic-shaping. De-peering can also be viewed as a close relative of Net Neutrality, to the extent that both concepts reflect arbitrary practices that interfere with the reliable flow of data packets across the Internet. In jurisdictional terms, however, de-peering poses a qualitatively different set of risks to stakeholders and end-users, as well as qualitatively different challenges to policymakers. It is argued here that the risk of data unreachability represents the next stage in debates about the health and sustainability of the global Internet. The study includes a detailed examination of the development of the Internet’s enabling technologies; the evolution of telecommunications regulation in Canada and the United States, and its impact on Internet governance; and an analysis of the role played by commercialization and privatization in the growth of risks to data reachability.

    Rethinking Routing and Peering in the era of Vertical Integration of Network Functions

    Content providers typically control digital content consumption services and capture most of the revenue by implementing an all-you-can-eat model via subscriptions or hyper-targeted advertisements. Revamping the existing Internet architecture and design, the recent trend is towards vertical integration, in which a content provider and an access ISP act as a single body, in a "sugarcane" form. As this vertical integration trend emerges in the ISP market, it is questionable whether the existing routing architecture will suffice in terms of sustainable economics, peering, and scalability. The current routing is expected to need careful modifications and smart innovations to ensure effective and reliable end-to-end packet delivery. This involves developing new features for handling traffic with reduced latency, tackling routing scalability issues more securely, and offering new services at lower cost. Given that the prices of DRAM and TCAM in legacy routers are not necessarily decreasing at the desired pace, cloud computing can be a great solution for managing the increasing computation and memory complexity of routing functions in a centralized manner with optimized expenses. Focusing on the attributes of existing routing cost models, and exploring a hybrid approach to SDN, we also compare recent trends in cloud pricing (for both storage and service) to evaluate whether integrating cloud services with legacy routing would be economically beneficial for improved cost-efficiency. In terms of peering, using the US as a case study, we show the overlaps between access ISPs and content providers to explore the viability of peering between the newly emerging content-dominated sugarcane ISPs and the health of Internet economics. To this end, we introduce meta-peering, a term that encompasses automation efforts related to peering – from identifying a list of ISPs likely to peer, to injecting control-plane rules, to continuously monitoring and notifying of any violation – one of the many outgrowths of the vertical integration process, which could be offered to ISPs as a standalone service.
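    Since the abstract weighs cloud pricing against the cost of carrying routing state in legacy router memory, a toy cost-model sketch may help illustrate the shape of such a comparison. All prices, capacities, and amortization periods below are hypothetical placeholders, not figures from the study.

```python
# Hypothetical cost-model sketch comparing on-box routing state
# (DRAM/TCAM line cards) with a cloud-hosted route-processing service.
# Every number below is a placeholder assumption for illustration only.

def on_box_annual_cost(prefixes: int,
                       tcam_cost_per_million: float = 40_000.0,
                       amortization_years: int = 5) -> float:
    """Amortized yearly cost of holding `prefixes` in router TCAM."""
    capex = (prefixes / 1_000_000) * tcam_cost_per_million
    return capex / amortization_years

def cloud_annual_cost(prefixes: int,
                      bytes_per_prefix: int = 64,
                      storage_per_gb_month: float = 0.02,
                      compute_per_month: float = 300.0) -> float:
    """Yearly cost of keeping the RIB in cloud storage plus a control VM."""
    gb = prefixes * bytes_per_prefix / 1e9
    return 12 * (gb * storage_per_gb_month + compute_per_month)

full_table = 950_000  # rough order of magnitude of a full IPv4 BGP table
print(f"on-box : {on_box_annual_cost(full_table):10.2f} USD/yr")
print(f"cloud  : {cloud_annual_cost(full_table):10.2f} USD/yr")
```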

    Network Infrastructures for Highly Distributed Cloud-Computing

    Software-Defined Networking (SDN) is emerging as a solid opportunity for Network Service Providers (NSPs) to reduce costs while at the same time providing better and/or new services. The possibility of flexibly managing and configuring highly available and scalable network services through data-model abstractions and easy-to-consume APIs is attractive, and the adoption of such technologies is gaining momentum. At the same time, NSPs are planning to innovate their infrastructures through a process of network softwarisation and programmability. The SDN paradigm aims to improve the design, configuration, maintenance and service-provisioning agility of the network through centralised software control. This is easily achievable in local area networks, typical of data centers, where the benefits of programmable access to the entire network are not restricted by latency between the network devices and the SDN controller, which is reasonably located in the same LAN as the data-path nodes. In Wide Area Networks (WANs), instead, a centralised control plane limits responsiveness to time-constrained network events due to the unavoidable latencies caused by physical distances. Moreover, end-to-end control requires the participation of multiple, domain-specific controllers: access devices, data-center fabrics and backbone networks have very different characteristics, and their control planes could hardly coexist in a single centralised entity, barring very complex solutions that inevitably lead to software bugs, inconsistent states and performance issues. In recent years, the idea of exploiting SDN in WAN infrastructures to connect multiple sites together has spread in both the scientific community and industry. The former has produced interesting results in terms of framework proposals, complexity and performance analysis of network resource allocation schemes, and open-source proof-of-concept prototypes targeting SDN architectures spanning multiple technological and administrative domains. On the other hand, much of this work remains confined to academia, mainly because it is based on pure OpenFlow prototype implementations, networks emulated on a single general-purpose machine, or simulations proving algorithm effectiveness. Industry has made SDN a reality via closed-source systems running on single-administrative-domain networks with little if any diversification of access and backbone devices. In this dissertation we present our contributions to the design and implementation of SDN architectures for the control plane of WAN infrastructures. In particular, we studied and prototyped two SDN platforms to build a programmable, intent-based control plane suited to today's highly distributed cloud infrastructures. Our main contributions are: (i) a holistic, architectural description of a distributed SDN control plane for end-to-end QoS provisioning; we compare the legacy IntServ/RSVP approach with a novel approach to prioritising application-sensitive flows via centralised vantage points, based on a peer-to-peer architecture and thus suitable for the inter-authoritative-domain scenario; (ii) an open-source platform based on a two-layer hierarchy of network controllers, designed to provision end-to-end connectivity in real networks composed of heterogeneous devices and links within a single authoritative domain.
    This platform has been integrated into CORD, an open-source project whose goal is to bring data-center economics and cloud agility to NSP central-office infrastructures, combining NFV (Network Function Virtualization), SDN and the elasticity of commodity clouds. Our platform enables the provisioning of connectivity services between multiple CORD sites, up to the customer premises. Our system and software contributions in SDN have thus been combined with an NFV infrastructure for network service automation and orchestration.
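    To illustrate the idea of a two-layer controller hierarchy driven by intents, here is a minimal, hypothetical sketch (it does not reflect the actual CORD or platform APIs) in which a parent controller decomposes an end-to-end connectivity intent into per-domain provisioning calls; the class names, domains, and hand-off points are invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class ConnectivityIntent:
    """An end-to-end request as the parent controller might receive it."""
    src: str            # e.g. "siteA/leaf1/port3" (hypothetical naming)
    dst: str            # e.g. "siteB/leaf2/port7"
    bandwidth_mbps: int

class DomainController:
    """Stand-in for a per-domain (access, backbone, fabric) controller."""
    def __init__(self, name: str):
        self.name = name

    def provision(self, src: str, dst: str, bandwidth_mbps: int) -> None:
        # A real controller would push flow rules or segment policies here.
        print(f"[{self.name}] {src} -> {dst} @ {bandwidth_mbps} Mb/s")

class ParentController:
    """Top layer: splits an intent into per-domain segments."""
    def __init__(self, domains: list[DomainController]):
        self.domains = domains

    def submit(self, intent: ConnectivityIntent) -> None:
        # Naive decomposition: each domain provisions its own segment,
        # stitched at hypothetical hand-off points between domains.
        hops = ([intent.src]
                + [f"handoff-{d.name}" for d in self.domains[:-1]]
                + [intent.dst])
        for domain, (a, b) in zip(self.domains, zip(hops, hops[1:])):
            domain.provision(a, b, intent.bandwidth_mbps)

parent = ParentController([DomainController("access"),
                           DomainController("backbone"),
                           DomainController("fabric")])
parent.submit(ConnectivityIntent("siteA/leaf1/port3", "siteB/leaf2/port7", 200))
```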

    A Survey on the Path Computation Element (PCE) Architecture

    Quality of Service-enabled applications and services rely on Traffic Engineering-based (TE) Label Switched Paths (LSPs) established in core networks and controlled by the GMPLS control plane. The path computation process is crucial to achieving the desired TE objective, and its actual effectiveness depends on a number of factors. The mechanisms used to update topology and TE information, as well as the latency between path computation and resource reservation (which is typically distributed), may affect path computation efficiency. Moreover, TE visibility is limited in many network scenarios, such as multi-layer, multi-domain and multi-carrier networks, and this may negatively impact resource utilization. The Internet Engineering Task Force (IETF) has promoted the Path Computation Element (PCE) architecture, proposing a dedicated network entity devoted to the path computation process. The PCE is a flexible instrument for overcoming visibility and distributed-provisioning inefficiencies. Communications between Path Computation Clients (PCCs) and PCEs, realized through the PCE Protocol (PCEP), also enable inter-PCE communication, offering an attractive way to perform TE-based path computation among cooperating PCEs in multi-layer/domain scenarios while preserving scalability and confidentiality. This survey presents the state of the art on the PCE architecture for GMPLS-controlled networks, carried out by the research and standardization communities. Packet (i.e., MPLS-TE and MPLS-TP) and wavelength/spectrum (i.e., WSON and SSON) switching capabilities are the considered technological platforms, in which the PCE is shown to achieve a number of evident benefits.
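    At its core, a PCE computes constrained paths over a Traffic Engineering Database (TED). The sketch below shows one common approach, bandwidth-constrained shortest path (prune infeasible links, then run Dijkstra); it is illustrative only, does not implement PCEP itself, and the topology and link attributes are hypothetical.

```python
import heapq

# Hypothetical TE database: link -> (igp_cost, unreserved_bandwidth_mbps)
TED = {
    ("A", "B"): (10, 400), ("B", "A"): (10, 400),
    ("B", "C"): (10, 100), ("C", "B"): (10, 100),
    ("A", "D"): (20, 900), ("D", "A"): (20, 900),
    ("D", "C"): (20, 900), ("C", "D"): (20, 900),
}

def cspf(src, dst, min_bw):
    """Prune links lacking bandwidth, then run Dijkstra on what remains."""
    adj = {}
    for (u, v), (cost, bw) in TED.items():
        if bw >= min_bw:
            adj.setdefault(u, []).append((v, cost))
    dist, prev = {src: 0}, {}
    heap = [(0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == dst:
            break
        if d > dist.get(u, float("inf")):
            continue
        for v, cost in adj.get(u, []):
            nd = d + cost
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(heap, (nd, v))
    if dst not in dist:
        return None  # a PCEP reply would carry a NO-PATH object here
    path, node = [dst], dst
    while node != src:
        node = prev[node]
        path.append(node)
    return list(reversed(path))

# A 200 Mb/s LSP avoids the B-C link (only 100 Mb/s unreserved).
print(cspf("A", "C", 200))  # ['A', 'D', 'C']
print(cspf("A", "C", 50))   # ['A', 'B', 'C']
```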

    From the edge to the core: towards informed vantage point selection for Internet measurement studies

    Since the early days of the Internet, measurement scientists have been trying to keep up with its fast-paced development. As the Internet grew organically over time and without built-in measurability, this process requires many workarounds and due diligence. As a result, every measurement study is only as good as the data it relies on. Moreover, data quality is relative to the research question: a data set suitable for analyzing one problem may be insufficient for another. This is entirely expected, as the Internet is decentralized, i.e., there is no single observation point from which we can assess the complete state of the Internet. Because of that, every measurement study needs specifically selected vantage points that fit the research question. In this thesis, we present three different vantage points across the Internet topology, from the edge to the Internet core. We discuss their specific features, their suitability for different kinds of research questions, and how to work with the corresponding data. The data sets obtained at the presented vantage points allow us to conduct three different measurement studies and shed light on the following aspects: (a) the prevalence of IP source address spoofing at a large European Internet Exchange Point (IXP), (b) the propagation distance of BGP communities, an optional transitive BGP attribute used for traffic engineering, and (c) the impact of the global COVID-19 pandemic on Internet usage behavior at a large Internet Service Provider (ISP) and three IXPs.
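    For study (a), a hedged sketch of the kind of per-member plausibility check that underlies spoofing detection at an IXP: a packet's source address is compared against the prefixes attributable to the sending member. The member-to-prefix mapping below is a hypothetical placeholder; the thesis's actual methodology may differ.

```python
from ipaddress import ip_address, ip_network

# Hypothetical mapping from IXP member ASN to the prefixes in its
# customer cone (in practice derived from route-server and routing data).
MEMBER_CONE = {
    64496: [ip_network("192.0.2.0/24"), ip_network("198.51.100.0/22")],
    64497: [ip_network("203.0.113.0/24")],
}

def is_plausible_source(member_asn: int, src_ip: str) -> bool:
    """True if the source address is covered by the member's cone."""
    addr = ip_address(src_ip)
    return any(addr in pfx for pfx in MEMBER_CONE.get(member_asn, []))

# Traffic from member 64497 with a source inside 192.0.2.0/24 would be
# flagged as potentially spoofed.
print(is_plausible_source(64497, "192.0.2.55"))   # False -> candidate spoof
print(is_plausible_source(64496, "192.0.2.55"))   # True
```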

    Towards a Theory of Scale-Free Graphs: Definition, Properties, and Implications (Extended Version)

    Although the "scale-free" literature is large and growing, it gives neither a precise definition of scale-free graphs nor rigorous proofs of many of their claimed properties. In fact, it is easily shown that the existing theory has many inherent contradictions and verifiably false claims. In this paper, we propose a new, mathematically precise, and structural definition of the extent to which a graph is scale-free, and prove a series of results that recover many of the claimed properties while suggesting the potential for a rich and interesting theory. With this definition, scale-free (or its opposite, scale-rich) is closely related to other structural graph properties such as various notions of self-similarity (or, respectively, self-dissimilarity). Scale-free graphs are also shown to be the likely outcome of random construction processes, consistent with the heuristic definitions implicit in existing random graph approaches. Our approach clarifies much of the confusion surrounding the sensational qualitative claims in the scale-free literature, and offers rigorous and quantitative alternatives. (Comment: 44 pages, 16 figures; the primary version is to appear in Internet Mathematics, 2005.)
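    A structural metric used in this line of work can be written as $s(g) = \sum_{(i,j) \in E} d_i d_j$, which is large when high-degree nodes attach to one another; the measure proposed there normalizes it against the maximum attainable over graphs with the same degree sequence. Below is a minimal sketch computing the unnormalized s(g) for two toy graphs, purely as an illustration of the formula (the normalization step is omitted).

```python
def s_metric(edges):
    """s(g) = sum over edges (i, j) of d_i * d_j, where d is node degree."""
    degree = {}
    for u, v in edges:
        degree[u] = degree.get(u, 0) + 1
        degree[v] = degree.get(v, 0) + 1
    return sum(degree[u] * degree[v] for u, v in edges)

# Two toy graphs on 5 nodes with 4 edges each: a star concentrates degree
# on one hub and yields a larger s than a path, which spreads degree out.
star = [(0, i) for i in range(1, 5)]   # hub-and-spoke
path = [(i, i + 1) for i in range(4)]  # simple chain
print(s_metric(star))  # 16 = 4 edges, each contributing 4 * 1
print(s_metric(path))  # 12 = 1*2 + 2*2 + 2*2 + 2*1
```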

    Schemes for building an efficient all-optical virtual private network.

    by Tam Scott Kin Lun. Thesis submitted in October 2005. Thesis (M.Phil.)--Chinese University of Hong Kong, 2006. Includes bibliographical references (leaves 58-64). Abstracts in English and Chinese.
    Contents:
    1. Introduction
       1.1 Optical Networks
           1.1.1 IP over Optical Networks
           1.1.2 Challenges in Optical Networks
       1.2 Virtual Private Networks (VPN)
           1.2.1 CE Based VPN
           1.2.2 Network Based VPN
                 1.2.2.1 MPLS Layer 2 VPN
                 1.2.2.2 MPLS Layer 3 VPN
           1.2.3 Optical VPN
           1.2.4 Challenges in VPN Technologies
       1.3 Objective of this Thesis
       1.4 Outline of this Thesis
    2. Architecture of an All-Optical VPN
       2.1 Introduction
       2.2 Networking Vendor Activities
       2.3 Service Provider Activities
       2.4 Standard Bodies Activities
       2.5 Requirements for All-Optical VPN
       2.6 Reconfigurability of an All-Optical VPN
       2.7 Switching Methods in All-Optical VPN
       2.8 Survivability of an All-Optical VPN
    3. Maximizing the Utilization of a Survivable Multi-Ring WDM Network
       3.1 Introduction
       3.2 Background
       3.3 Method
           3.3.1 Effect on packet based services
           3.3.2 Effect on optical circuit based services
       3.4 Simulation results
       3.5 Chapter Summary
    4. Design of an All-Optical VPN Processing Engine
       4.1 Introduction
       4.2 Concepts of Optical Processors
       4.3 Design Principles of the All-Optical VPN Processing Engine
           4.3.1 Systolic System
           4.3.2 Design Considerations of an Optical Processing Cell
                 4.3.2.1 Mach-Zehnder Structures
                 4.3.2.2 Vertical Cavity Semiconductor Optical Amplifier
                 4.3.2.3 The Optical Processing Cell
           4.3.3 All-Optical VPN Processing Engine
       4.4 Design Evaluation
       4.5 Application Example
       4.6 Chapter Summary
    5. Conclusion
       5.1 Summary of the Thesis
       5.2 Future Works
    6. References