
    Segment Routing: a Comprehensive Survey of Research Activities, Standardization Efforts and Implementation Results

    Fixed and mobile telecom operators, enterprise network operators and cloud providers strive to meet the challenging demands arising from the evolution of IP networks (e.g. huge bandwidth requirements and the integration of billions of devices and millions of services in the cloud). Proposed in the early 2010s, the Segment Routing (SR) architecture helps address these demands and is currently being adopted and deployed. SR is based on the concept of source routing and has interesting scalability properties, as it dramatically reduces the amount of state information that must be configured in core nodes to support complex services. SR was first implemented with the MPLS data plane and then, more recently, with the IPv6 data plane (SRv6). The IPv6 SR architecture (SRv6) has been extended from simple steering of packets across nodes to a general network programming approach, making it well suited to use cases such as Service Function Chaining and Network Function Virtualization. In this paper we present a tutorial and a comprehensive survey on SR technology, analyzing standardization efforts, patents, research activities and implementation results. We start with an introduction to the motivations for Segment Routing and an overview of its evolution and standardization. Then, we provide a tutorial on Segment Routing technology, with a focus on the novel SRv6 solution. We discuss the standardization efforts and the patents, providing details on the most important documents and mentioning other ongoing activities. We then thoroughly analyze research activities according to a taxonomy. We have identified 8 main categories during our analysis of the current state of play: Monitoring, Traffic Engineering, Failure Recovery, Centrally Controlled Architectures, Path Encoding, Network Programming, Performance Evaluation and Miscellaneous... Comment: Submitted to IEEE Communications Surveys & Tutorials.
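    To make the source-routing idea above concrete, the following is a minimal, self-contained sketch in plain Python, assuming no SR implementation: the ingress node encodes the whole path as a segment list carried in the packet, and each segment endpoint only consults the active segment, so core nodes hold no per-flow state. The `Packet`, `source_node`, and `segment_endpoint` names are illustrative, not part of any SR stack.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Packet:
    """Toy model of an SRv6-style packet: the segment list travels in the
    packet itself (source routing), with a Segments Left pointer."""
    payload: str
    segments: List[str] = field(default_factory=list)  # SIDs, last segment at index 0
    segments_left: int = 0
    dst: str = ""

def source_node(payload: str, path: List[str]) -> Packet:
    # The ingress node encodes the whole path; core nodes need no per-flow state.
    pkt = Packet(payload=payload, segments=list(reversed(path)),
                 segments_left=len(path) - 1)
    pkt.dst = pkt.segments[pkt.segments_left]   # active segment
    return pkt

def segment_endpoint(pkt: Packet) -> Packet:
    # Each segment endpoint decrements Segments Left and rewrites the
    # destination to the next active segment (a simplified SRv6 "End" behavior).
    if pkt.segments_left > 0:
        pkt.segments_left -= 1
        pkt.dst = pkt.segments[pkt.segments_left]
    return pkt

# Steer a packet through two waypoints before the final destination.
pkt = source_node("data", ["2001:db8::a", "2001:db8::b", "2001:db8::z"])
while pkt.segments_left > 0:
    pkt = segment_endpoint(pkt)
print(pkt.dst)  # 2001:db8::z
```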

    The Closed Resolver Project: Measuring the Deployment of Source Address Validation of Inbound Traffic

    Source Address Validation (SAV) is a standard aimed at discarding packets with spoofed source IP addresses. The absence of SAV for outgoing traffic has been known as a root cause of Distributed Denial-of-Service (DDoS) attacks and has received widespread attention. While less obvious, the absence of inbound filtering enables an attacker to appear as an internal host of a network and may reveal valuable information about the network infrastructure. Inbound IP spoofing may amplify other attack vectors such as DNS cache poisoning or the recently discovered NXNSAttack. In this paper, we present the preliminary results of the Closed Resolver Project, which aims at mitigating the problem of inbound IP spoofing. We perform the first Internet-wide active measurement study to enumerate networks that filter or do not filter incoming packets by their source address, for both the IPv4 and IPv6 address spaces. To achieve this, we identify closed and open DNS resolvers that accept spoofed requests coming from outside their network. The proposed method provides the most complete picture of inbound SAV deployment by network providers. Our measurements cover over 55% of IPv4 and 27% of IPv6 Autonomous Systems (ASes) and reveal that the great majority of them are fully or partially vulnerable to inbound spoofing. By identifying dual-stacked DNS resolvers, we additionally show that inbound filtering is less often deployed for IPv6 than for IPv4. Overall, we discover 13.9 K IPv6 open resolvers that can be exploited for amplification DDoS attacks - 13 times more than previous work. Furthermore, we uncover 4.25 M IPv4 and 103 K IPv6 vulnerable closed resolvers that could only be detected thanks to our spoofing technique, and that pose a significant threat when combined with the NXNSAttack. Comment: arXiv admin note: substantial text overlap with arXiv:2002.0044
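    A minimal sketch of the spoofed-probe idea follows, assuming scapy; all addresses and the probe name are placeholders, and sending packets with a forged source requires raw-socket privileges and authorization to test the network in question. It is a sketch of the measurement concept, not the project's actual tooling.

```python
# Craft a DNS query whose source address lies *inside* the tested network; if
# the query is acted upon, the provider does not filter inbound spoofed traffic.
from scapy.all import IP, UDP, DNS, DNSQR, send

SPOOFED_SRC     = "192.0.2.10"   # address inside the tested network (placeholder)
TARGET_RESOLVER = "192.0.2.53"   # candidate resolver in that network (placeholder)
PROBE_NAME      = "probe-192-0-2-53.example.net."  # name served by our own zone

query = (IP(src=SPOOFED_SRC, dst=TARGET_RESOLVER)
         / UDP(sport=53000, dport=53)
         / DNS(rd=1, qd=DNSQR(qname=PROBE_NAME, qtype="A")))

send(query, verbose=False)
# If our authoritative name server later sees a lookup for PROBE_NAME, the
# spoofed packet reached the resolver, i.e. the provider does not enforce
# inbound source address validation.
```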

    Taming Anycast in a Wild Internet

    Anycast is a popular tool for deploying global, widely available systems, including DNS infrastructure and content delivery networks (CDNs). The optimization of these networks often focuses on the deployment and management of anycast sites. However, such approaches fail to consider one of the primary configurations of a large anycast network: the set of networks that receive anycast announcements at each site (i.e., an announcement configuration). Altering these configurations, even without the deployment of additional sites, can have profound impacts on both anycast site selection and round-trip times. In this study, we explore the operation and optimization of anycast networks through the lens of deployments that have a large number of upstream service providers. We demonstrate that these many-provider anycast networks exhibit fundamentally different properties when interacting with the Internet, having a greater number of single AS hop paths and reduced dependency on each provider, compared with few-provider networks. We further examine the impact of announcement configuration changes, demonstrating that in nearly 30% of vantage point groups, round-trip time performance can be improved by more than 25%, solely by manipulating which providers receive anycast announcements. Finally, we propose DailyCatch, an empirical measurement methodology for testing and validating announcement configuration changes, and demonstrate its ability to influence user-experienced performance on a global anycast CDN.
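    To illustrate the kind of comparison behind the 25% figure above, here is a hypothetical post-processing sketch in plain Python: given per-vantage-point-group RTT samples for a baseline and a candidate announcement configuration, it reports the groups whose median RTT improves by more than a threshold. Group names and RTT values are invented for illustration.

```python
from statistics import median

def improved_groups(baseline: dict, candidate: dict, threshold: float = 0.25):
    """baseline/candidate map a vantage-point group id -> list of RTT samples (ms)."""
    winners = {}
    for group, rtts in baseline.items():
        if group not in candidate or not rtts:
            continue
        base, cand = median(rtts), median(candidate[group])
        if base > 0 and (base - cand) / base > threshold:
            winners[group] = (base, cand)
    return winners

baseline  = {"as1299-eu": [48, 52, 50], "as3356-us": [95, 90, 99]}
candidate = {"as1299-eu": [47, 51, 49], "as3356-us": [60, 66, 62]}
print(improved_groups(baseline, candidate))
# {'as3356-us': (95, 62)} -> roughly 35% lower median RTT after the change
```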

    Chasing the Unknown: A Predictive Model to Demystify BGP Community Semantics

    The Border Gateway Protocol (BGP) specifies an optional communities attribute for traffic engineering, route manipulation, remotely-triggered blackholing, and other services. However, communities have neither unifying semantics nor cryptographic protections and often propagate much farther than intended. Consequently, Autonomous System (AS) operators are free to define their own community values. This research is a proof-of-concept for a machine learning approach to prediction of community semantics; it attempts a quantitative measurement of semantic predictability between different AS semantic schemata. Ground-truth community semantics data were collated and manually labeled according to a unified taxonomy of community services. Various classification algorithms, including a feed-forward Multi-Layer Perceptron and a Random Forest, were used as the estimator for a One-vs-All multi-class model and trained on a feature set engineered from this data. The best model's performance on the test set indicates that as much as 89.15% of these semantics can be accurately predicted according to a proposed standard taxonomy of community services. This model was additionally applied to historical BGP data from various route collectors to estimate the taxonomic distribution of communities transiting the control plane.
    http://archive.org/details/chasingtheunknow1094566047
    Outstanding Thesis. Civilian, CyberCorps - Scholarship For Service. Approved for public release; distribution is unlimited.
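    A minimal sketch of the modelling setup follows, assuming scikit-learn; the feature encoding and taxonomy labels are illustrative stand-ins for the engineered features and ground-truth semantics described above, not the thesis's actual feature set.

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.multiclass import OneVsRestClassifier

# Toy rows: numeric features derived from a community (e.g. announcing AS,
# low 16 bits of the value, an "is well-known value" flag).
X = [[3356, 9999, 1], [174, 3000, 0], [6939, 666, 1], [1299, 50, 0]]
y = ["traffic-engineering", "location", "blackhole", "route-manipulation"]

# One-vs-All multi-class model with a Random Forest estimator, as above.
model = OneVsRestClassifier(RandomForestClassifier(n_estimators=100, random_state=0))
model.fit(X, y)

# Predict the taxonomy class of a community not seen during training; in the
# study this is evaluated on a held-out test set of labeled communities.
print(model.predict([[3356, 9998, 1]]))
```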

    Towards a software defined network based multi-domain architecture for the internet of things

    Current communication networks are heterogeneous, with a diversity of devices and services that challenge traditional networks and make it difficult to meet quality of service (QoS) requirements. With the advent of software-defined networks (SDN), new tools have emerged for designing more flexible networks. SDN offers centralized management of data streams in distributed sensor networks. The main goal of this dissertation is therefore to investigate a solution that meets the QoS requirements of traffic originating on Internet of Things (IoT) devices, where this traffic is transmitted to the Internet in a distributed system with multiple SDN controllers. To achieve this goal, we designed a network topology with multiple domains, each managed by its own controller. Communication between the domains is done via an SDN transit domain running the SDN-IP application of the Open Network Operating System (ONOS) controller. We also emulated a network to test QoS using Open vSwitch queues, the goal being to create traffic priorities in a network with traditional and simulated IoT devices. According to our tests, we were able to ensure SDN inter-domain communication and showed that our proposal reacts to a topology failure. In the QoS scenario we showed that, through the insertion of OpenFlow rules, we are able to prioritize traffic and provide quality-of-service guarantees. This shows that our proposal is promising for use in scenarios with multiple administrative domains.
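    The following is an illustrative sketch of the prioritization mechanism described above, assuming an Open vSwitch bridge named s1 with port s1-eth1 (as in a typical Mininet emulation); the bridge and port names, rates, and the choice of CoAP (UDP/5683) as the IoT traffic to prioritize are all assumptions, not the dissertation's exact configuration.

```python
import subprocess

def sh(cmd: str) -> None:
    subprocess.run(cmd, shell=True, check=True)

# Attach an HTB QoS with a guaranteed-rate queue (queue 1) to the egress port.
sh("ovs-vsctl set port s1-eth1 qos=@newqos -- "
   "--id=@newqos create qos type=linux-htb other-config:max-rate=10000000 queues:1=@q1 -- "
   "--id=@q1 create queue other-config:min-rate=5000000 other-config:max-rate=10000000")

# OpenFlow rules: steer IoT (CoAP) traffic into queue 1, forward the rest normally.
sh('ovs-ofctl add-flow s1 "udp,tp_dst=5683,actions=set_queue:1,normal"')
sh('ovs-ofctl add-flow s1 "priority=0,actions=normal"')
```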

    Design of a Scalable Path Service for the Internet

    Despite the world-changing success of the Internet, shortcomings in its routing and forwarding system have become increasingly apparent. One symptom is an escalating tension between users and providers over the control of routing and forwarding of packets: providers understandably want to control use of their infrastructure, and users understandably want paths with sufficient quality-of-service (QoS) to improve the performance of their applications. As a result, users resort to various “hacks” such as sending traffic through intermediate end-systems, and the providers fight back with mechanisms to inspect and block such traffic. To enable users and providers to jointly control routing and forwarding policies, recent research has considered various architectural approaches in which provider-level route determination occurs separately from forwarding. With this separation, provider-level path computation and selection can be provided as a centralized service: users (or their applications) send path queries to a path service to obtain provider-level paths that meet their application-specific QoS requirements. At the same time, providers can control the use of their infrastructure by dictating how packets are forwarded across their network. The separation of routing and forwarding offers many advantages, but also brings a number of challenges such as scalability. In particular, the path service must respond to path queries in a timely manner and periodically collect topology information containing load-dependent (i.e., performance) routing information. We present a new design for a path service that makes use of expensive pre-computations, parallel on-demand computations on performance information, and caching of recently computed paths to achieve scalability. We demonstrate that, using commodity hardware with a modest amount of resources, the path service can respond to path queries with acceptable latency under a realistic workload. The service can scale to arbitrarily large topologies through parallelism. Finally, we describe how to utilize the path service in the current Internet with existing Internet applications.
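    A toy sketch of the query-answering pattern described above follows: repeated path queries are served from a cache, and a miss triggers an on-demand shortest-path computation over the latest topology snapshot. The graph, the latency metric, and the AS names are illustrative, and the real service parallelizes these computations; this is plain Python with no external dependencies.

```python
import heapq

TOPOLOGY = {  # provider-level graph: neighbor -> latency metric (ms), illustrative
    "AS-A": {"AS-B": 10, "AS-C": 30},
    "AS-B": {"AS-A": 10, "AS-C": 12},
    "AS-C": {"AS-A": 30, "AS-B": 12},
}
_cache: dict = {}   # (src, dst) -> path, refreshed when the topology snapshot changes

def shortest_path(src: str, dst: str):
    # Dijkstra over the provider-level graph.
    dist, prev, pq = {src: 0}, {}, [(0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == dst:
            break
        if d > dist.get(u, float("inf")):
            continue
        for v, w in TOPOLOGY.get(u, {}).items():
            if d + w < dist.get(v, float("inf")):
                dist[v], prev[v] = d + w, u
                heapq.heappush(pq, (d + w, v))
    path, node = [], dst
    while node != src:
        path.append(node)
        node = prev[node]
    return [src] + path[::-1]

def path_query(src: str, dst: str):
    # Serve repeated queries from the cache; compute only on a miss
    # (in parallel across queries, in the real design).
    if (src, dst) not in _cache:
        _cache[(src, dst)] = shortest_path(src, dst)
    return _cache[(src, dst)]

print(path_query("AS-A", "AS-C"))  # ['AS-A', 'AS-B', 'AS-C'] (10 + 12 < 30)
```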

    Distributed Internet security and measurement

    The Internet has developed into an important economic, military, academic, and social resource. It is a complex network, comprised of tens of thousands of independently operated networks, called Autonomous Systems (ASes). A significant strength of the Internet's design, one which enabled its rapid growth in terms of users and bandwidth, is that its underlying protocols (such as IP, TCP, and BGP) are distributed. Users and networks alike can attach and detach from the Internet at will, without causing major disruptions to global Internet connectivity. This dissertation shows that the Internet's distributed, and often redundant, structure can be exploited to increase the security of its protocols, particularly BGP (the Internet's interdomain routing protocol). It introduces Pretty Good BGP, an anomaly detection protocol coupled with an automated response that can protect individual networks from BGP attacks. It also presents statistical measurements of the Internet's structure and uses them to create a model of Internet growth. This work could be used, for instance, to test upcoming routing protocols on an ensemble of large, Internet-like graphs. Finally, this dissertation shows that while the Internet is designed to be agnostic to political influence, it is actually quite centralized at the country level. With the recent rise in country-level Internet policies, such as nation-wide censorship and warrantless wiretaps, this centralized control could have significant impact on international reachability.
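    A simplified sketch of the anomaly-detection-plus-response idea summarized above: origin ASes seen for a prefix during a history window are treated as familiar, and an announcement from a previously unseen origin is flagged and depreferred rather than dropped. The prefixes, AS numbers, and window handling are placeholders, not the dissertation's exact algorithm.

```python
from collections import defaultdict

known_origins = defaultdict(set)   # prefix -> origin ASes seen in the history window

def observe_history(prefix: str, origin_as: int) -> None:
    known_origins[prefix].add(origin_as)

def check_announcement(prefix: str, origin_as: int) -> str:
    if origin_as in known_origins[prefix]:
        return "accept"
    # Unfamiliar origin: do not drop, but deprefer and alert the operator, so a
    # legitimate new origin is not blackholed while a hijack loses to known routes.
    return "suspicious: deprefer and alert"

observe_history("192.0.2.0/24", 64500)            # legitimate origin seen historically
print(check_announcement("192.0.2.0/24", 64500))  # accept
print(check_announcement("192.0.2.0/24", 64511))  # suspicious: deprefer and alert
```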

    ROVER: a DNS-based method to detect and prevent IP hijacks

    2013 Fall. Includes bibliographical references. The Border Gateway Protocol (BGP) is critical to the global Internet infrastructure. Unfortunately, BGP routing was designed with limited regard for security. As a result, IP route hijacking has been observed for more than 16 years. Well-known incidents include a 2008 hijack of YouTube, loss of connectivity for Australia in February 2012, and an event that partially crippled Google in November 2012. Concern has been escalating as critical national infrastructure relies on a secure foundation for the Internet: disruptions to military, banking, utilities, industry, and commerce can be catastrophic. In this dissertation we propose ROVER (Route Origin VERification System), a novel and practical solution for detecting and preventing origin and sub-prefix hijacks. ROVER exploits the reverse DNS for storing route origin data and provides a fail-safe, best-effort approach to authentication. This approach can be used with a variety of operational models, including fully dynamic in-line BGP filtering, periodically updated authenticated route filters, and real-time notifications for network operators. Our thesis is that ROVER systems can be deployed by a small number of institutions in an incremental fashion and still effectively thwart origin and sub-prefix IP hijacking despite non-participation by the majority of Autonomous System owners. We then present research results supporting this statement. We evaluate the effectiveness of ROVER using simulations on an Internet-scale topology as well as tests on real operational systems. Analyses include a study of IP hijack propagation patterns, the effectiveness of various deployment models, critical-mass requirements, and an examination of ROVER resilience and scalability.
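    The following is a conceptual sketch only of the check described above: look up route-origin data published in the reverse DNS for a prefix and compare it with the origin AS observed in BGP. The zone naming handles only octet-aligned IPv4 prefixes, and the "origin-as=" TXT format is a hypothetical placeholder rather than the exact record type ROVER defines; it assumes dnspython is available.

```python
import dns.resolver

def reverse_zone_for_prefix(prefix: str) -> str:
    """Map e.g. 192.0.2.0/24 onto 2.0.192.in-addr.arpa. (octet-aligned prefixes only)."""
    network, length = prefix.split("/")
    octets = network.split(".")[: int(length) // 8]
    return ".".join(reversed(octets)) + ".in-addr.arpa."

def authorized_origins(prefix: str) -> set:
    origins = set()
    try:
        for rr in dns.resolver.resolve(reverse_zone_for_prefix(prefix), "TXT"):
            text = b"".join(rr.strings).decode()
            if text.startswith("origin-as="):          # hypothetical record format
                origins.add(int(text.split("=", 1)[1]))
    except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
        pass   # no statement published: best effort, do not reject the route
    return origins

# Compare the origin AS observed in BGP with what the prefix holder published.
announced_origin = 64500
allowed = authorized_origins("192.0.2.0/24")
if allowed and announced_origin not in allowed:
    print("possible origin or sub-prefix hijack: notify the operator")
```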