

    Antitrust Analysis for the Internet Upstream Market: A BGP Approach

    In this paper we study concentration in the European Internet upstream access market. Measuring market concentration depends on a correct definition of the market itself, but this is not always possible: as in much of the Internet industry, antitrust authorities often lack reliable pricing and traffic data. This difficulty motivates our paper. We present an alternative approach based on inferring Internet operators' interconnection policies from micro-data sourced from their Border Gateway Protocol tables. We assess market concentration in a two-step process: first we propose a price-independent algorithm for defining both the vertical and geographical boundaries of the relevant market, then we calculate market concentration indexes using two novel metrics. These assess, for each undertaking, both its role as an essential network facility and its wholesale market dominance. The results, applied to four leading Internet Exchange Points in London, Amsterdam, Frankfurt and Milan, show that some vertical segments of these markets are highly concentrated, while others are extremely competitive. According to the Merger Guidelines, some of the estimated market concentration values would immediately fall within the special-attention category.
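The abstract's two novel metrics are not specified, but the standard concentration measure the Merger Guidelines thresholds refer to is the Herfindahl-Hirschman Index. A minimal sketch, with hypothetical operator shares (the names and values below are assumptions, not data from the paper):

```python
# Hypothetical per-operator shares of one relevant market segment,
# e.g. as inferred from BGP-derived interconnection data.
shares = {"op_a": 0.40, "op_b": 0.30, "op_c": 0.20, "op_d": 0.10}

def hhi(market_shares):
    """Herfindahl-Hirschman Index on the conventional 0-10000 scale
    (shares expressed in percentage points, then squared and summed)."""
    return sum((100 * s) ** 2 for s in market_shares.values())

index = hhi(shares)
# Merger guidelines commonly treat an HHI above 2500 as highly
# concentrated; the exact thresholds vary by jurisdiction.
print(index)  # 3000.0
```

The same computation would be repeated per vertical segment and per geographic market once the boundary-definition step has assigned operators to markets.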

    Technologies, routing policies and relationships between autonomous systems in inter-domain routing

    This thesis explores in depth the issues surrounding routing decisions in inter-domain routing: the interconnection structure and network hierarchy; the inter-domain routing protocol used to exchange network reachability information with other systems; the routing decision process between entities according to their attributes and policies; and topology generators for AS relationships. It reviews the most interesting proposals in this area, describes why these issues are difficult to solve, and proposes solutions that improve understanding of the routing process and optimally resolve the trade-off between implementing a peering engagement between two Autonomous Systems and the extra cost that this solution represents. More specifically, this thesis introduces a new scheme for the routing decision in a BGP speaker through a formalization of the routing decision process, and formulates a realistic and exhaustive mathematical model of a peering engagement between Autonomous Systems, to be solved as a maximization problem with a purpose-built Decision Support System (XESS) able to find an optimal reduced set of solutions to the proposed problem.
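The routing decision process the thesis formalizes is, at its core, the BGP best-path selection. A minimal sketch of a simplified subset of the standard tie-breaking order (local-preference, AS-path length, origin, MED); real implementations apply several further tie-breakers, and this is not the thesis's formalization itself:

```python
from dataclasses import dataclass

@dataclass
class Route:
    prefix: str
    local_pref: int
    as_path: list
    origin: int   # 0=IGP, 1=EGP, 2=INCOMPLETE (lower preferred)
    med: int      # lower preferred

def better(a: Route, b: Route) -> Route:
    """Return the preferred of two routes to the same prefix under a
    simplified subset of the BGP decision process."""
    if a.local_pref != b.local_pref:
        return a if a.local_pref > b.local_pref else b
    if len(a.as_path) != len(b.as_path):
        return a if len(a.as_path) < len(b.as_path) else b
    if a.origin != b.origin:
        return a if a.origin < b.origin else b
    return a if a.med <= b.med else b

r1 = Route("203.0.113.0/24", 100, [65001, 65002], 0, 0)
r2 = Route("203.0.113.0/24", 100, [65003], 0, 0)
print(better(r1, r2).as_path)  # [65003] -- shorter AS path wins
```

Formalizing this comparator as a strict preference order over route attributes is what makes the peering-engagement trade-off expressible as a maximization problem.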

    Bandwidth is political: Reachability in the public internet

    The global public Internet faces a growing but little-studied threat from the use of intrusive traffic management practices by both wholesale and retail Internet service providers. Unlike research concerned with bandwidth and traffic growth, this study shifts the risk analysis away from capacity issues to focus on performance standards for interconnection and data reachability. The long-term health of the Internet is framed in terms of “data reachability” – the principle that any end-user can reach any part of the Internet without encountering arbitrary actions on the part of a network operator that might block or degrade transmission. Risks to reachability are framed in terms of both systematic traffic management practices and “de-peering,” a more aggressive tactic practised by Tier-1 network operators to resolve disputes or punish rivals. De-peering is examined as an extension of retail network management practices that include the growing use of deep packet inspection (DPI) technology for traffic-shaping. De-peering can also be viewed as a close relative of Net Neutrality, to the extent that both concepts reflect arbitrary practices that interfere with the reliable flow of data packets across the Internet. In jurisdictional terms, however, de-peering poses a qualitatively different set of risks to stakeholders and end-users, as well as qualitatively different challenges to policymakers. It is argued here that risks of data unreachability represent the next stage in debates about the health and sustainability of the global Internet. The study includes a detailed examination of the development of the Internet’s enabling technologies; the evolution of telecommunications regulation in Canada and the United States, and its impact on Internet governance; and an analysis of the role played by commercialization and privatization in the growth of risks to data reachability.

    Rethinking Routing and Peering in the era of Vertical Integration of Network Functions

    Content providers typically control digital content consumption services and earn most of their revenue through an all-you-can-eat model via subscriptions or hyper-targeted advertisements. Revamping the existing Internet architecture and design, the recent trend is vertical integration, in which a content provider and an access ISP act as a single body in a "sugarcane" form. As this vertical integration trend emerges in the ISP market, it is questionable whether the existing routing architecture will suffice in terms of sustainable economics, peering, and scalability. Current routing is expected to need careful modification and smart innovation to ensure effective and reliable end-to-end packet delivery. This involves developing new features for handling traffic with reduced latency, tackling routing scalability issues more securely, and offering new services at lower cost. Considering that the prices of DRAM and TCAM in legacy routers are not necessarily decreasing at the desired pace, cloud computing can be a great solution for managing the increasing computation and memory complexity of routing functions in a centralized manner with optimized expenses. Focusing on the attributes associated with existing routing cost models, and exploring a hybrid approach to SDN, we also compare recent trends in cloud pricing (for both storage and service) to evaluate whether it would be economically beneficial to integrate cloud services with legacy routing for improved cost-efficiency. In terms of peering, using the US as a case study, we show the overlaps between access ISPs and content providers to explore the viability of peering between the new, emerging content-dominated sugarcane ISPs and the health of Internet economics.
    To this end, we introduce meta-peering, a term that encompasses automation efforts related to peering – from identifying a list of ISPs likely to peer, to injecting control-plane rules, to continuous monitoring and notification of any violation – one of the many outgrowths of vertical integration, which could be offered to ISPs as a standalone service.
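The cloud-versus-legacy-router cost comparison above can be reduced to a simple break-even calculation. A minimal sketch under assumed figures (all prices below are hypothetical illustrations, not values from the thesis):

```python
def breakeven_months(router_capex, router_opex_pm, cloud_cost_pm):
    """Months until cumulative cloud spend exceeds the legacy router's
    up-front cost plus its running cost. Returns None when the cloud
    option is never more expensive per month."""
    if cloud_cost_pm <= router_opex_pm:
        return None
    return router_capex / (cloud_cost_pm - router_opex_pm)

# Assumed figures: a $120k TCAM/DRAM-heavy router with $1k/month in
# power and support, versus $4k/month for an equivalent cloud-hosted
# routing function.
m = breakeven_months(120_000, 1_000, 4_000)
print(m)  # 40.0 -- cloud is cheaper for deployments shorter than ~40 months
```

A real evaluation would also price storage versus service tiers separately, as the thesis's comparison of cloud pricing trends does.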

    Provider and peer selection in the evolving internet ecosystem

    The Internet consists of thousands of autonomous networks connected together to provide end-to-end reachability. Networks of different sizes, with different functions and business objectives, interact and co-exist in the evolving "Internet ecosystem". The Internet ecosystem is highly dynamic, experiencing growth (the birth of new networks), rewiring (changes in the connectivity of existing networks), and death (of existing networks). The dynamics of the Internet ecosystem are determined both by external "environmental" factors (such as the state of the global economy or the popularity of new Internet applications) and by the complex incentives and objectives of each network. These dynamics have major implications for what the future Internet will look like. How does the Internet evolve? What is the Internet heading towards, in terms of topological, performance, and economic organization? How do given optimization strategies affect the profitability of different networks? How do these strategies affect the Internet in terms of topology, economics, and performance? In this thesis, we take steps towards answering these questions using a combination of measurement and modeling approaches. We first study the evolution of the Autonomous System (AS) topology over the last decade. In particular, we classify ASes and inter-AS links according to their business function, and study their evolution separately over the last 10 years. Next, we focus on enterprise customers and content providers at the edge of the Internet, and propose algorithms for a stub network to choose its upstream providers to maximize its utility (whether monetary cost, reliability, or performance). Third, we develop a model of interdomain network formation, incorporating the effects of economics, geography, and the provider/peer selection strategies of different types of networks.
    We use this model to examine the "outcome" of these strategies, in terms of the topology, economics, and performance of the resulting internetwork. We also investigate the effect of external factors, such as the nature of the interdomain traffic matrix, customer preferences in provider selection, and pricing/cost structures. Finally, we focus on a recent trend: the increasing amount of traffic flowing from content providers (who generate content) to access providers (who serve end users). This has led to a tussle between content providers and access providers, who have threatened to prioritize certain types of traffic or charge content providers directly – strategies that are viewed as violations of "network neutrality". In our work, we evaluate various pricing and connection strategies that access providers can use to remain profitable without violating network neutrality.
    Ph.D. Committee Chair: Dovrolis, Constantine; Committee Member: Ammar, Mostafa; Committee Member: Feamster, Nick; Committee Member: Willinger, Walter; Committee Member: Zegura, Elle
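The stub-network provider-selection problem described above can be illustrated with a brute-force sketch: pick the cheapest provider set whose combined availability meets a target, assuming independent failures. The provider names, prices, and availabilities below are hypothetical, and this is not the thesis's algorithm:

```python
from itertools import combinations

# Hypothetical upstream candidates: (name, monthly_price, availability).
providers = [
    ("transit_a", 900, 0.999),
    ("transit_b", 600, 0.995),
    ("transit_c", 400, 0.990),
]

def cheapest_multihoming(candidates, target_availability):
    """Cheapest provider subset whose combined availability (assuming
    independent failures) meets the target; returns (cost, names)."""
    best = None
    for k in range(1, len(candidates) + 1):
        for combo in combinations(candidates, k):
            unavail = 1.0
            for _, _, avail in combo:
                unavail *= (1.0 - avail)   # all chosen providers down
            if 1.0 - unavail >= target_availability:
                cost = sum(price for _, price, _ in combo)
                if best is None or cost < best[0]:
                    best = (cost, [name for name, _, _ in combo])
    return best

# Dual-homing with the two cheaper providers beats any single provider.
print(cheapest_multihoming(providers, 0.9999))
```

Real utility functions would also weigh performance and contract terms, as the abstract notes; brute force is only viable for the handful of candidates a stub network typically considers.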

    Improving the Accuracy of the Internet Cartography

    As the global Internet expands to satisfy the demands of an ever-increasing connected population, profound changes are occurring in its interconnection structure. The pervasive growth of IXPs and CDNs, two initially independent but synergistic infrastructure sectors, has contributed to the gradual flattening of the Internet’s inter-domain hierarchy, with primary routing paths shifting from backbone networks to peripheral peering links. At the same time, IPv6 deployment has taken off due to the depletion of unallocated IPv4 addresses. These fundamental changes in Internet dynamics have clear implications for network engineering and operations, which benefit from accurate topology maps for understanding the properties of this critical infrastructure. This thesis presents a set of new measurement techniques and inference algorithms to construct a new type of semantically rich Internet map and improve the state of the art in Internet cartography. The author first develops a methodology to extract large-scale validation data from the Communities BGP attribute, which encodes rich routing metadata on BGP messages. Based on this better-informed dataset, the author proceeds to analyse popular assumptions about inter-domain routing policies and devise a more accurate model to describe inter-AS business relationships. Accordingly, the thesis proposes a new relationship inference algorithm that accurately captures both simple and complex AS relationships across two dimensions: prefix type and geographic location. Validation against three sources of ground-truth data reveals that the proposed algorithm achieves near-perfect accuracy. However, any inference approach is constrained by the inability of existing topology data sources to provide a complete view of the inter-domain topology.
    To limit the topology incompleteness problem, the author augments traditional BGP data with routing policy data obtained directly from IXPs to discover massive peering meshes which have thus far been largely invisible.
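To make the relationship-inference task concrete, here is a minimal degree-based sketch in the spirit of Gao's classic heuristic: the highest-degree AS on each observed path is assumed to sit at the top of the hierarchy, with links toward it labeled customer-to-provider. The AS numbers are hypothetical, and real inference (including the thesis's algorithm) is far more involved:

```python
from collections import defaultdict

# Hypothetical AS paths observed in BGP (sequences of AS numbers).
paths = [
    [65010, 65001, 65002, 65020],
    [65011, 65001, 65003],
    [65012, 65002, 65001, 65013],
]

def infer_relationships(as_paths):
    """Label each directed AS link c2p or p2c using the degree-based
    'top provider' heuristic: links before the highest-degree AS on a
    path climb the hierarchy (c2p); links after it descend (p2c)."""
    degree = defaultdict(int)
    for path in as_paths:
        for a, b in zip(path, path[1:]):
            degree[a] += 1
            degree[b] += 1
    rels = {}
    for path in as_paths:
        top = max(range(len(path)), key=lambda i: degree[path[i]])
        for i, (a, b) in enumerate(zip(path, path[1:])):
            rels[(a, b)] = "c2p" if i < top else "p2c"
    return rels

rels = infer_relationships(paths)
print(rels[(65010, 65001)])  # c2p -- stub climbing toward the core
```

The valley-free assumption behind this heuristic is precisely one of the "popular assumptions about inter-domain routing policies" the thesis re-examines against Communities-derived ground truth.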

    The role of topology and contracts in internet content delivery

    The Internet depends on economic relationships between ASes (Autonomous Systems), which come in different shapes and sizes – transit, content, and access networks. CDNs (content delivery networks) are also a pivotal part of the Internet ecosystem and construct their own overlays for faster content delivery. With the evolving Internet topology and traffic growth, there is a need to study the cache deployments of CDNs to optimize cost while meeting performance requirements. Bilateral contracts govern the routing of traffic between neighbouring ASes and are applied recursively: traffic that an AS sends to its neighbour is then controlled by the contracts of that neighbour. The lack of routing flexibility and limited control over the quality of the end-to-end path are limitations of the existing bilateral model that need to be overcome to achieve end-to-end performance guarantees. Furthermore, because ASes are generally reluctant to disclose their interconnection agreements, inference of inter-AS economic relationships depends on routing and forwarding data from measurements. Since these inferences are imperfect, robust algorithmic strategies are needed to characterize ASes with significantly higher accuracy. In this thesis, we first study the problem of optimizing multi-AS deployments of CDN caches in the Internet core. Our work is of significant practical relevance since it formalizes the planning process that all CDN operators must follow to reduce the operational cost of their overlay networks while meeting the performance requirements of their end users. Next, we develop a temporal cone (TC) algorithm that detects provider-free ASes (PFS). Because they deliver a significant portion of Internet traffic, provider-free ASes are highly relevant to the overall resilience of the Internet. We detect them from public datasets of inter-AS economic relationships, utilizing topological statistics (the customer cones of ASes) and temporal diversity.
    Finally, we focus on a multilateral contractual arrangement and develop algorithms for optimizing the cost of transit and access ASes. In particular, we implement the Bertsekas auction algorithm for the optimal-cost assignment of access ASes to transit ASes, and an epsilon-greedy bandit algorithm for optimizing the price of transit ASes, showing its learning potential. This work has been supported by IMDEA Networks Institute.
    Official Doctoral Programme in Telematic Engineering (Programa Oficial de Doctorado en Ingeniería Telemática). President: Jordi Domingo Pascual. Secretary: Francisco Valera Pintor. Member: Pedro Andrés Aranda Gutiérre
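The provider-free property can be sketched from a relationship dataset directly: an AS is provider-free if it appears in the topology but never as a customer. The links below are hypothetical, and the thesis's temporal cone algorithm additionally uses customer-cone sizes and temporal diversity across snapshots rather than a single static check:

```python
# Hypothetical inter-AS economic relationships: (provider, customer)
# pairs for p2c links, unordered frozensets for p2p links.
p2c = [(64500, 64510), (64500, 64511), (64501, 64510)]
p2p = [frozenset({64500, 64501}), frozenset({64501, 64502})]

def provider_free_ases(p2c_links, p2p_links):
    """Return ASes that appear in the topology but never on the
    customer side of a p2c link -- a static sketch of the
    provider-free property."""
    ases = set()
    customers = set()
    for provider, customer in p2c_links:
        ases.update((provider, customer))
        customers.add(customer)
    for link in p2p_links:
        ases.update(link)
    return ases - customers

print(sorted(provider_free_ases(p2c, p2p)))  # [64500, 64501, 64502]
```

Because public relationship datasets are noisy, a single snapshot can misclassify an AS whose provider link was simply unobserved; checking persistence across many snapshots is what motivates the temporal dimension.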

    Improving the accuracy of spoofed traffic inference in inter-domain traffic

    Ascertaining that a network will forward spoofed traffic usually requires an active probing vantage point in that network, effectively preventing a comprehensive view of this global Internet vulnerability. We argue that broader visibility into the spoofing problem may lie in the capability to infer lack of Source Address Validation (SAV) compliance from large, heavily aggregated Internet traffic data, such as traffic observable at Internet Exchange Points (IXPs). The key idea is to use IXPs as observatories to detect spoofed packets, by leveraging Autonomous System (AS) topology knowledge extracted from Border Gateway Protocol (BGP) data to infer which source addresses should legitimately appear across parts of the IXP switch fabric. In this thesis, we demonstrate that the existing literature does not capture several fundamental challenges to this approach, including noise in BGP data sources, heuristic AS relationship inference, and idiosyncrasies in IXP interconnectivity fabrics. We propose Spoofer-IX, a novel methodology to navigate these challenges, leveraging Customer Cone semantics of AS relationships to guide precise classification of inter-domain traffic as In-cone, Out-of-cone (spoofed), Unverifiable, Bogon, and Unassigned. We apply our methodology in an extensive analysis of real traffic data from two distinct IXPs in Brazil, one mid-size and one large infrastructure. In the mid-size IXP, with more than 200 members, we find an upper-bound volume of Out-of-cone traffic more than an order of magnitude less than the previous method inferred on the same data, revealing the practical importance of Customer Cone semantics in such analysis. We also found no significant improvement in the deployment of SAV in networks using the mid-size IXP between 2017 and 2019.
    In hopes that our methods and tools generalize to use by other IXPs who want to avoid use of their infrastructure for launching spoofed-source DoS attacks, we explore the feasibility of scaling the system to larger and more diverse IXP infrastructures. To promote this goal, and the broad replicability of our results, we make the source code of Spoofer-IX publicly available. This thesis illustrates the subtleties of scientific assessments of operational Internet infrastructure, and the need for a community focus on reproducing and repeating previous methods.
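The classification idea can be sketched with Python's standard `ipaddress` module: given the set of prefixes in each member's customer cone, a packet's source address is checked against bogon space and then against the cone. The member names and prefixes are hypothetical, and the full Spoofer-IX methodology adds the Unverifiable and Unassigned classes plus far more careful cone construction:

```python
import ipaddress

# Hypothetical customer cones: prefixes from which each IXP member may
# legitimately source traffic, derived from BGP/relationship data.
cones = {
    "member_a": ["198.51.100.0/24", "203.0.113.0/25"],
    "member_b": ["192.0.2.0/24"],
}
# A few well-known non-routable ranges (the real bogon list is longer).
BOGONS = ["10.0.0.0/8", "192.168.0.0/16", "127.0.0.0/8"]

def classify(member, src_ip):
    """Simplified Spoofer-IX-style verdict for one packet: Bogon,
    In-cone, or Out-of-cone (spoofed)."""
    ip = ipaddress.ip_address(src_ip)
    if any(ip in ipaddress.ip_network(b) for b in BOGONS):
        return "Bogon"
    for prefix in cones.get(member, []):
        if ip in ipaddress.ip_network(prefix):
            return "In-cone"
    return "Out-of-cone"

print(classify("member_a", "198.51.100.7"))  # In-cone
print(classify("member_a", "192.0.2.5"))     # Out-of-cone
print(classify("member_b", "10.1.2.3"))      # Bogon
```

The thesis's central finding, that loose cone semantics inflate Out-of-cone estimates by an order of magnitude, corresponds to how broadly the `cones` sets are constructed in this sketch.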