
    CAIR: Using Formal Languages to Study Routing, Leaking, and Interception in BGP

    The Internet routing protocol BGP expresses topological reachability and policy-based decisions simultaneously in path vectors. A complete view of Internet backbone routing would be given by the collection of all valid routes, but this is infeasible to obtain due to BGP's information hiding, the lack of omnipresent collection points, and the sheer complexity of the data. Commonly, graph-based data models are used to represent the Internet topology from a given set of BGP routing tables, but they fall short of capturing policy context. As a consequence, routing anomalies such as route leaks and interception attacks cannot be explained with graphs. In this paper, we use formal languages to represent the global routing system in a rigorous model. Our CAIR framework translates BGP announcements into a finite route language that allows for the incremental construction of minimal route automata. CAIR preserves route diversity, is highly efficient, and is well-suited to monitoring BGP path changes in real time. We formally derive implementable search patterns for route leaks and interception attacks. In contrast to state-of-the-art graph-based approaches, we can detect these incidents. In practical experiments, we analyze public BGP data over the last seven years.
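A route leak is commonly characterized as a "valley-free" violation in an AS path. As a rough illustration only (not CAIR's automaton construction, which operates on a route language rather than per-path checks), a minimal sketch of such a search pattern, with invented AS numbers and relationships:

```python
# Hypothetical sketch of a route-leak search pattern as a valley-free check.
# AS numbers and relationship data are invented for illustration.

# rel[(a, b)] describes the edge from a to b along the path: "c2p" (a is a
# customer of b), "p2c" (a is a provider of b), or "p2p" (a and b peer).
rel = {
    (64500, 64501): "c2p",
    (64501, 64502): "p2c",
    (64502, 64503): "c2p",  # climbing again after descending: a leak
}

def is_valley_free(as_path):
    """A valid path climbs customer->provider links, crosses at most one
    peer link, then only descends provider->customer links. Any c2p or
    p2p edge after a p2c or p2p edge forms a "valley" (route leak)."""
    seen_down_or_peer = False
    for a, b in zip(as_path, as_path[1:]):
        r = rel[(a, b)]
        if seen_down_or_peer and r != "p2c":
            return False
        if r in ("p2c", "p2p"):
            seen_down_or_peer = True
    return True

print(is_valley_free([64500, 64501, 64502, 64503]))  # False: leaked route
print(is_valley_free([64500, 64501]))                # True: plain customer route
```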

    On the Analysis of the Internet from a Geographic and Economic Perspective via BGP Raw Data

    The Internet is nowadays an integral part of everyone's life, and will become even more important for future generations. Proof of this is the exponential growth in the number of people who are introduced to the network through mobile phones and smartphones and are connected 24/7. Most of them rely on the Internet even for everyday services, such as accessing personal bank accounts online or holding a videoconference with a colleague living across the ocean. However, only a few people are aware of what happens to their data once it is sent from their devices towards the Internet, and an even smaller number -- an elite of researchers -- have an overview of the infrastructure of the real Internet. Researchers have attempted over the last years to discover details about the characteristics of the Internet in order to create a model on which possible weaknesses of the real network could be identified and addressed. Despite several efforts in this direction, no known model currently represents the Internet effectively, mainly due to the lack of data and the excessively coarse granularity of the studies done to date. This thesis addresses both issues by considering the Internet as a graph whose nodes are Autonomous Systems (ASes) and whose edges are logical connections between ASes. First, this thesis provides new algorithms and heuristics for studying the Internet at a level of granularity considerably closer to reality, by introducing the economic and geographical constraints that actually limit the number of possible paths between ASes that data can take.
    Based on these heuristics, this thesis also provides an innovative methodology to quantify the completeness of the available data and to identify which ASes should be involved in the BGP data collection process as feeders in order to obtain a complete and realistic view of the core of the Internet. Although the results of this methodology highlight that current BGP route collectors are unable to obtain data on the vast majority of the ASes in the core of the Internet, the situation can still be improved by creating new services and incentives to attract the identified ASes and introduce them as feeders of a BGP route collector.
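Selecting feeder ASes can be viewed as a coverage problem: choose candidates whose routing tables would reveal the most otherwise-unseen core links. A hypothetical greedy sketch of that idea, with invented AS numbers and link sets (not the thesis's actual methodology):

```python
# Hypothetical sketch of feeder selection as greedy set cover: each
# candidate AS, if it fed a route collector, would reveal some set of
# core AS-level links; pick feeders until coverage stops improving.
# AS numbers and link sets are invented toy data.

visible_links = {
    64496: {("A", "B"), ("B", "C")},
    64497: {("B", "C"), ("C", "D")},
    64498: {("A", "B"), ("C", "D"), ("D", "E")},
}

def pick_feeders(candidates, budget):
    covered, feeders = set(), []
    for _ in range(budget):
        # Greedily take the candidate revealing the most uncovered links.
        best = max(candidates, key=lambda a: len(visible_links[a] - covered))
        gain = visible_links[best] - covered
        if not gain:
            break
        feeders.append(best)
        covered |= gain
    return feeders, covered

feeders, covered = pick_feeders(list(visible_links), budget=2)
print(feeders)  # [64498, 64496] on this toy data
```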

    Revisiting Internet Addressing: Back to the Future!

    IP prefixes undermine three goals of Internet routing: accurate reflection of network-layer reachability, secure routing messages, and effective traffic control. This paper presents Atomic IP (AIP), a simple change to Internet addressing (which in fact reverts to how addressing once worked) that allows Internet routing to achieve these goals.

    λBGP: Rethinking BGP programmability

    BGP has long been the de facto control plane protocol for inter-network connectivity. Although initially designed to provide best-effort routing between ASes, the evolution of Internet services has created a demand for more complex control functionalities using the protocol. At the heart of this challenge lies the static nature of configuration mechanisms and the limited programmability of existing BGP speakers. Meanwhile, the SDN paradigm has demonstrated that open and generic network control APIs can greatly improve network functionality and seamlessly enable greater flexibility in network management. In this paper, we argue that BGP-speaking systems can and should provide an open and rich control and configuration mechanism in order to address modern-era network control requirements. Towards this goal, we present λbgp, a modular and extensible BGP framework written in Haskell. The framework offers an extensible integration model for reactive BGP control that remains backward compatible with existing BGP standards, and allows network managers to define route processing policies in a high-level language and to dynamically inject information sources into the path selection logic. Using a high-performance BGP traffic generator, we demonstrate that λbgp offers performance comparable to production BGP speakers, while dynamic AS route processing policies can be written in just a few lines of code.
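λbgp policies are written in Haskell against the framework's own API; the following is only a language-neutral sketch, in Python with invented field names, of the general idea of a route-processing policy injected into path selection:

```python
# Hypothetical sketch (NOT λbgp's API) of a route-processing policy as a
# small function plugged into path selection. Field names are invented.
from dataclasses import dataclass

@dataclass
class Route:
    prefix: str
    as_path: list
    local_pref: int = 100

def policy(route, blocked_asns=frozenset({64666})):
    """Drop routes traversing a blocked AS; prefer routes with short paths."""
    if blocked_asns & set(route.as_path):
        return None                # reject the route entirely
    if len(route.as_path) <= 2:
        route.local_pref = 200     # prefer nearby origins
    return route

r = policy(Route("203.0.113.0/24", [64500, 64501]))
print(r.local_pref)                                    # 200
print(policy(Route("198.51.100.0/24", [64666])))       # None
```

The point mirrored here is that the policy is ordinary code, so operators can change behavior by editing a few lines rather than regenerating static router configuration.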

    Improving the accuracy of spoofed traffic inference in inter-domain traffic

    Ascertaining that a network will forward spoofed traffic usually requires an active probing vantage point in that network, effectively preventing a comprehensive view of this global Internet vulnerability. We argue that broader visibility into the spoofing problem may lie in the capability to infer lack of Source Address Validation (SAV) compliance from large, heavily aggregated Internet traffic data, such as traffic observable at Internet Exchange Points (IXPs). The key idea is to use IXPs as observatories to detect spoofed packets, by leveraging Autonomous System (AS) topology knowledge extracted from Border Gateway Protocol (BGP) data to infer which source addresses should legitimately appear across parts of the IXP switch fabric. In this thesis, we demonstrate that the existing literature does not capture several fundamental challenges to this approach, including noise in BGP data sources, heuristic AS relationship inference, and idiosyncrasies in IXP interconnectivity fabrics. We propose Spoofer-IX, a novel methodology to navigate these challenges, leveraging Customer Cone semantics of AS relationships to guide precise classification of inter-domain traffic as In-cone, Out-of-cone (spoofed), Unverifiable, Bogon, and Unassigned. We apply our methodology in an extensive analysis of real traffic data from two distinct IXPs in Brazil, a mid-size and a large-size infrastructure. In the mid-size IXP, with more than 200 members, we find an upper bound on the volume of Out-of-cone traffic that is more than an order of magnitude lower than what the previous method inferred on the same data, revealing the practical importance of Customer Cone semantics in such analysis. We also found no significant improvement in the deployment of SAV in networks using the mid-size IXP between 2017 and 2019.
    In hopes that our methods and tools generalize to use by other IXPs who want to prevent their infrastructure from being used to launch spoofed-source DoS attacks, we explore the feasibility of scaling the system to larger and more diverse IXP infrastructures. To promote this goal, and broad replicability of our results, we make the source code of Spoofer-IX publicly available. This thesis illustrates the subtleties of scientific assessments of operational Internet infrastructure, and the need for a community focus on reproducing and repeating previous methods.
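The Customer Cone check at the heart of this classification can be sketched as follows. AS numbers, customer relationships, and prefixes are invented, and the real system also handles the Unverifiable and Bogon classes, which are omitted here:

```python
# Hypothetical sketch of the Customer Cone idea: a packet entering the IXP
# from member AS m is "In-cone" only if its source prefix is originated by
# an AS inside m's customer cone. All data below is invented toy data.

customers = {64500: {64501, 64502}, 64501: {64503}, 64502: set(), 64503: set()}
origin_of_prefix = {"198.51.100.0/24": 64503, "203.0.113.0/24": 64510}

def customer_cone(asn):
    """All ASes reachable from `asn` by repeatedly following
    provider-to-customer links (the AS itself included)."""
    cone, stack = {asn}, [asn]
    while stack:
        for c in customers.get(stack.pop(), ()):
            if c not in cone:
                cone.add(c)
                stack.append(c)
    return cone

def classify(member_asn, src_prefix):
    origin = origin_of_prefix.get(src_prefix)
    if origin is None:
        return "Unassigned"
    return "In-cone" if origin in customer_cone(member_asn) else "Out-of-cone"

print(classify(64500, "198.51.100.0/24"))  # In-cone: 64503 is a customer's customer
print(classify(64500, "203.0.113.0/24"))   # Out-of-cone: likely spoofed
```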

    Mending Wall: On the Implementation of Censorship in India

    This paper presents a study of the Internet infrastructure in India from the point of view of censorship. First, we show that the current state of affairs — where each ISP implements its own content filters (nominally as per a governmental blacklist) — results in dramatic differences in the censorship experienced by customers. In practice, a well-informed Indian citizen can escape censorship through a judicious choice of service provider. We then consider the question of whether India might potentially follow the Chinese model and institute a single, government-controlled filter. This would not be difficult, as the Indian Internet is quite centralized already. A few “key” ASes (≈ 1% of Indian ASes) collectively intercept ≈ 95% of paths to the censored sites we sample in our study, and also to all publicly-visible DNS servers. 5,000 routers spanning these key ASes would suffice to carry out IP or DNS filtering for the entire country; ≈ 70% of these routers belong to only two private ISPs. If the government is willing to employ more powerful measures, such as an IP prefix hijacking attack, any one of several key ASes can censor traffic for nearly all Indian users. Finally, we demonstrate that such federated censorship by India would cause substantial collateral damage to non-Indian ASes whose traffic passes through Indian cyberspace (and which do not legally come under Indian jurisdiction at all).
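The centralization measurement behind the "key ASes" claim reduces to asking what fraction of client-to-site AS paths traverse a given set of ASes. A toy sketch of that computation, with invented paths rather than the paper's dataset:

```python
# Hypothetical sketch: given AS-level paths from client ASes to censored
# sites, compute the fraction intercepted by a candidate set of "key" ASes.
# Paths and AS numbers are invented toy data.

paths = [
    [64500, 64601, 64700],   # client AS -> transit AS -> site's AS
    [64501, 64601, 64700],
    [64502, 64602, 64701],
    [64503, 64601, 64701],
]

def coverage(key_ases, paths):
    """Fraction of paths containing at least one AS from key_ases."""
    hit = sum(1 for p in paths if key_ases & set(p))
    return hit / len(paths)

print(coverage({64601}, paths))  # 0.75: one transit AS sees 3 of 4 paths
```

Ranking candidate sets by this coverage figure is how one would identify the small group of ASes (and, by extension, routers) whose cooperation suffices for country-wide filtering.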

    A Look Back at "Security Problems in the TCP/IP Protocol Suite"

    About fifteen years ago, I wrote a paper on security problems in the TCP/IP protocol suite. In particular, I focused on protocol-level issues, rather than implementation flaws. It is instructive to look back at that paper, to see where my focus and my predictions were accurate, where I was wrong, and where dangers have yet to happen. This is a reprint of the original paper, with added commentary.