31 research outputs found

    BGP-XM: BGP eXtended Multipath for Transit Autonomous Systems

    Multipath interdomain routing has been proposed to enable flexible traffic engineering for transit Autonomous Systems (ASes). Yet, there is a lack of solutions providing maximal path diversity and backwards compatibility at the same time. The BGP-XM (Border Gateway Protocol-eXtended Multipath) extension presented in this paper is a complete and flexible approach that solves many of the limitations of previous BGP multipath solutions. ASes can benefit from multipath capabilities starting with a single upgraded router, and without any coordination with other ASes. BGP-XM defines an algorithm to merge into regular BGP updates information from paths which may even traverse different ASes. This algorithm can be combined with different multipath selection algorithms, such as the K-BESTRO (K-Best Route Optimizer) tunable selection algorithm proposed in this paper. A stability analysis and stable policy guidelines are provided. The performance evaluation of BGP-XM, running over an Internet-like topology, shows that high path diversity can be achieved even for limited deployments of the multipath mechanism. Further results reveal that the extension is also suitable for large-scale deployments, since it shows a low impact on AS path length and routing table size.
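
    The path-merging idea lends itself to a compact illustration. Below is a minimal, hypothetical sketch (in Python, with invented function and variable names; the paper's actual merging algorithm and the BGP wire encoding are richer than this): several AS paths selected for the same prefix are collapsed into one backwards-compatible AS_PATH whose ordered head is the common AS_SEQUENCE and whose tail is an AS_SET holding the union of the remaining ASes, so that standard BGP loop detection keeps working for non-upgraded neighbors.

    # Hypothetical sketch of merging several AS paths into one
    # backwards-compatible (AS_SEQUENCE, AS_SET) pair; not BGP-XM's
    # actual algorithm or wire format.
    def merge_paths(as_paths):
        """Return (common AS_SEQUENCE, AS_SET of remaining ASes)."""
        common = []
        for hops in zip(*as_paths):          # walk the paths in lockstep
            if all(h == hops[0] for h in hops):
                common.append(hops[0])       # still a shared prefix
            else:
                break
        leftover = set()                     # everything else is order-free
        for path in as_paths:
            leftover.update(path[len(common):])
        return common, leftover

    paths = [[65001, 65010, 65020],
             [65001, 65030, 65020],
             [65001, 65040]]
    seq, as_set = merge_paths(paths)
    print("AS_SEQUENCE:", seq)               # [65001]
    print("AS_SET:", sorted(as_set))         # [65010, 65020, 65030, 65040]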

    A Neural Network Approach to Border Gateway Protocol Peer Failure Detection and Prediction

    The size and speed of computer networks continue to expand at a rapid pace, as do the corresponding errors, failures, and faults inherent within such extensive networks. This thesis introduces a novel approach to interface Border Gateway Protocol (BGP) computer networks with neural networks to learn the precursor connectivity patterns that emerge prior to a node failure. Details of the design and construction of a framework that utilizes neural networks to learn and monitor BGP connection states as a means of detecting and predicting BGP peer node failure are presented. Moreover, this framework is used to monitor a BGP network, and a suite of tests is conducted to establish this neural network approach as a viable strategy for predicting BGP peer node failure. In all performed experiments, both of the proposed neural network architectures succeed in memorizing and utilizing the network connectivity patterns. Lastly, a discussion of this framework's generic design is presented to acknowledge how other types of networks and alternate machine learning techniques can be accommodated with relative ease.
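
    As a rough illustration of the approach, the sketch below (an assumption-laden toy, not the thesis's datasets, feature encoding, or network architectures) trains a small feed-forward classifier on synthetic windows of BGP session-state observations, where failing peers are assumed to show more transitional states before the failure:

    # Toy precursor-pattern learner; the encoding (0 = Established,
    # 1 = flapping/transitional) and the failure behaviour are assumptions.
    import numpy as np
    from sklearn.neural_network import MLPClassifier

    rng = np.random.default_rng(0)
    healthy = rng.binomial(1, 0.1, size=(200, 10))   # stable peers
    failing = rng.binomial(1, 0.6, size=(200, 10))   # pre-failure peers

    X = np.vstack([healthy, failing]).astype(float)
    y = np.array([0] * 200 + [1] * 200)              # 1 = peer failed soon after

    clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=1000, random_state=0)
    clf.fit(X, y)

    window = rng.binomial(1, 0.6, size=(1, 10)).astype(float)
    print("predicted failure risk:", clf.predict_proba(window)[0, 1])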

    Virtualization and Distribution of the BGP Control Plane

    The Internet is organized as a collection of networks called Autonomous Systems (ASes). The Border Gateway Protocol (BGP) is the glue that connects these administrative domains. Communication is thus possible between users worldwide, and each network is responsible for sharing reachability information with its peers through BGP. Protocol extensions are periodically added because the intended use and design of BGP no longer fit current demands. Scalability concerns make the required internal BGP (iBGP) full mesh difficult to achieve in today's large networks, and network operators therefore resort to confederations or Route Reflectors (RRs) to achieve full connectivity. These two options come with flaws of their own, such as route diversity loss, persistent routing oscillations, deflections, and forwarding loops. In this dissertation we present oBGP, a new architecture for the redistribution of external routes inside an AS. Instead of relying on the usual statically configured set of iBGP sessions, we propose to use an overlay of routing instances that are collectively responsible for (I) the exchange of routes with other ASes, (II) the storage of internal and external routes, (III) the storage of the entire routing policy configuration of the AS, and (IV) the computation and redistribution of the best routes towards Internet destinations to each client router in the AS.
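
    The division of responsibilities among overlay instances can be pictured with a small sketch. The partitioning scheme below (a deterministic hash of the prefix; the instance names are invented, and the real oBGP design may partition the prefix space quite differently) shows the basic idea of each overlay instance owning the storage and best-route computation for a slice of the external routing table:

    # Hypothetical prefix-space partitioning across overlay instances.
    import hashlib
    import ipaddress

    INSTANCES = ["obgp-1", "obgp-2", "obgp-3", "obgp-4"]

    def owner(prefix: str) -> str:
        """Map a prefix to the instance that stores and selects its routes."""
        net = ipaddress.ip_network(prefix)
        digest = hashlib.sha1(str(net).encode()).digest()
        return INSTANCES[digest[0] % len(INSTANCES)]

    for p in ["203.0.113.0/24", "198.51.100.0/24", "2001:db8::/32"]:
        print(p, "->", owner(p))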

    Proactive techniques for correct and predictable Internet routing

    Thesis (Ph. D.), Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, February 2006. This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections. Includes bibliographical references (p. 185-193).

    The Internet is composed of thousands of autonomous, competing networks that exchange reachability information using an interdomain routing protocol. Network operators must continually reconfigure the routing protocols to realize various economic and performance goals. Unfortunately, there is no systematic way to predict how the configuration will affect the behavior of the routing protocol or to determine whether the routing protocol will operate correctly at all. This dissertation develops techniques to reason about the dynamic behavior of Internet routing, based on static analysis of the router configurations, before the protocol ever runs on a live network. Interdomain routing offers each independent network tremendous flexibility in configuring the routing protocols to accomplish various economic and performance tasks. Routing configurations are complex, and writing them is similar to writing a distributed program; the (unavoidable) consequence of configuration complexity is the potential for incorrect and unpredictable behavior. These mistakes and unintended interactions lead to routing faults, which disrupt end-to-end connectivity. Network operators writing configurations make mistakes; they may also specify policies that interact in unexpected ways with policies in other networks. To avoid disrupting network connectivity and degrading performance, operators would benefit from being able to determine the effects of configuration changes before deploying them on a live network; unfortunately, the status quo provides them no opportunity to do so. This dissertation develops the techniques to achieve this goal of proactively ensuring correct and predictable Internet routing.

    The first challenge in guaranteeing correct and predictable behavior from a routing protocol is defining a specification for correct behavior. We identify three important aspects of correctness (path visibility, route validity, and safety) and develop proactive techniques for guaranteeing that these properties hold. Path visibility states that the protocol disseminates information about paths in the topology; route validity says that this information actually corresponds to those paths; safety says that the protocol ultimately converges to a stable outcome, implying that routing updates actually correspond to topological changes. Armed with this correctness specification, we tackle the second challenge: analyzing routing protocol configurations that may be distributed across hundreds of routers. We develop techniques to check whether a routing protocol satisfies the correctness specification within a single independently operated network. We find that much of the specification can be checked with static configuration analysis alone. We present examples of real-world routing faults and propose a systematic framework to classify, detect, correct, and prevent them. We describe the design and implementation of rcc ("router configuration checker"), a tool that uses static configuration analysis to enable network operators to debug configurations before deploying them in an operational network. We have used rcc to detect faults in 17 different networks, including several nationwide Internet service providers (ISPs). To date, rcc has been downloaded by over seventy network operators.

    A critical aspect of guaranteeing correct and predictable Internet routing is ensuring that the interactions of the configurations across multiple networks do not violate the correctness specification. Guaranteeing safety is challenging because each network sets its policies independently, and these policies may conflict. Using a formal model of today's Internet routing protocol, we derive conditions to guarantee that unintended policy interactions will never cause the routing protocol to oscillate. This dissertation also takes steps to make Internet routing more predictable. We present algorithms that help network operators predict how a set of distributed router configurations within a single network will affect the flow of traffic through that network. We describe a tool based on these algorithms that exploits the unique characteristics of routing data to reduce computational overhead. Using data from a large ISP, we show that this tool correctly computes BGP routing decisions and has a running time that is acceptable for many tasks, such as traffic engineering and capacity planning.

    by Nicholas Greer Feamster. Ph.D.
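
    The flavour of rcc-style static configuration analysis can be shown with a deliberately tiny example. The check below (a simplified illustration with an invented data model, not one of rcc's actual rules) flags iBGP sessions configured on only one endpoint, a class of configuration fault that can silently reduce path visibility:

    # Toy static check: find iBGP sessions that are only configured
    # on one side. "configs" maps each router to the neighbours it
    # is configured to peer with (an assumed, simplified model).
    configs = {
        "r1": {"r2", "r3"},
        "r2": {"r1"},
        "r3": {"r1", "r2"},   # r3 lists r2, but r2 does not list r3
    }

    def dangling_sessions(configs):
        faults = []
        for router, neighbours in configs.items():
            for nbr in neighbours:
                if router not in configs.get(nbr, set()):
                    faults.append((router, nbr))
        return faults

    for a, b in dangling_sessions(configs):
        print(f"fault: {a} peers with {b}, but {b} has no session back to {a}")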

    Inter-domain traffic management in an evolving Internet peering eco-system

    Operators of the Autonomous Systems (ASes) composing the Internet must deal with constant traffic growth, while striving to reduce the overall cost-per-bit and keep an acceptable quality of service. These challenges have motivated ASes to evolve their infrastructure from basic interconnectivity strategies, using a couple of transit providers and a few settlement-free peers, to employing geographically scoped transit services (e.g. partial transit) and multiplying their peering efforts. Internet Exchange Points (IXPs), facilities allowing the establishment of sessions to multiple networks using the same infrastructure, have hence become central entities of the Internet. Although the benefits of a diverse interconnection strategy are manifold, it also encumbers the inter-domain Traffic Engineering process and potentially increases the effects of incompatible interests with neighboring ASes. To efficiently manage the inter-domain traffic under such challenges, operators should rely on monitoring systems and computer-supported decisions. This thesis explores the IXP-centric inter-domain environment and the managing obstacles arising from it, and proposes mechanisms for operators to tackle them. The thesis is divided in two parts. The first part examines and measures the global characteristics of the inter-domain ecosystem. We characterize several IXPs around the world, comparing them in terms of their number of members and the properties of the traffic they exchange. After highlighting the problems arising from the member overlap among IXPs, we introduce remote peering, an interconnection service that facilitates the connection to multiple IXPs. We describe this service and measure its adoption in the Internet. In the second part of the thesis, we take the position of the network operators. We detail the challenges surrounding the control of inter-domain traffic, and introduce an operational framework aimed at facilitating its management. Subsequently, we examine methods that peering coordinators and network engineers can use to plan their infrastructure investments, by quantifying the benefits of new interconnections. Finally, we delve into the effects of conflicting business objectives among ASes. These conflicts can result in traffic distributions that violate the (business) interests of one or more ASes. We describe these interest violations, differentiating their impact on the ingress and egress traffic of a single AS. Furthermore, we develop a warning system that operators can use to detect and rank them. We test our warning system using data from two real networks, where we discover a large number of interest violations. We thus stress the need for operators to identify the ones having the largest impact on their network. This work has been supported by IMDEA Networks Institute. Programa Oficial de Doctorado en Ingeniería Telemática. Committee: Chair: Jordi Domingo-Pascual; Secretary: Francisco Valera Pintor; Member: Víctor Lópe
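
    A minimal sketch of the egress-side idea (the policy model, names, and numbers are all illustrative assumptions, not the thesis's warning system): given a ranked list of preferred egress links per destination AS, observed traffic leaving through a lower-ranked egress is flagged as a potential interest violation and ranked by volume, so that operators can attend to the largest ones first.

    # Toy egress interest-violation detector with assumed inputs.
    # destination AS -> egress links, most preferred first
    policy = {"AS64500": ["peer-ixp", "transit-a", "transit-b"]}

    # observed egress traffic: (destination AS, egress link, Mbps)
    observed = [("AS64500", "peer-ixp", 400.0),
                ("AS64500", "transit-b", 250.0)]

    def violations(policy, observed):
        found = []
        for dst, egress, mbps in observed:
            ranking = policy.get(dst)
            if ranking and egress in ranking and ranking.index(egress) > 0:
                found.append((mbps, dst, egress, ranking[0]))
        return sorted(found, reverse=True)    # largest volume first

    for mbps, dst, egress, best in violations(policy, observed):
        print(f"{dst}: {mbps} Mbps via {egress}, preferred egress is {best}")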

    A Logically Centralized Approach for Control and Management of Large Computer Networks

    Management of large enterprise and Internet Service Provider networks is a complex, error-prone, and costly challenge. It is widely accepted that the key contributors to this complexity are the bundling of control and data forwarding in traditional routers and the use of fully distributed protocols for network control. To address these limitations, the networking research community has been pursuing the vision of simplifying the functional role of a router to its primary task of packet forwarding. This enables centralizing network control at a decision plane where network-wide state can be maintained, and network control can be centrally and consistently enforced. However, scalability and fault-tolerance concerns with physical centralization motivate the need for a more flexible and customizable approach. This dissertation is an attempt at bridging the gap between the extremes of distribution and centralization of network control. We present a logically centralized approach for the design of the network decision plane that can be realized by using a set of physically distributed controllers in a network. This approach is aimed at giving network designers the ability to customize the level of control and management centralization according to the scalability, fault-tolerance, and responsiveness requirements of their networks. Our thesis is that logical centralization provides a robust, reliable, and efficient paradigm for management of large networks, and we present several contributions to prove this thesis. For network planning, we describe techniques for optimizing the placement of network controllers and provide guidance on the physical design of logically centralized networks. For network operation, algorithms for maintaining dynamic associations between the decision plane and network devices are presented, along with a protocol that allows a set of network controllers to coordinate their decisions and present a unified interface to the managed network devices. Furthermore, we study the trade-offs in decision plane application design and provide guidance on application state and logic distribution. Finally, we present results of extensive numerical and simulative analysis of the feasibility and performance of our approach. The results show that logical centralization can provide better scalability and fault-tolerance while maintaining performance comparable to the traditional distributed approach.
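
    One way to make the controller-placement part concrete is the greedy heuristic sketched below (a generic k-center-style approximation over an assumed latency matrix; the dissertation's own optimization techniques may differ): controllers are placed so that the worst node-to-nearest-controller latency stays small.

    # Greedy placement sketch over an assumed pairwise latency matrix (ms).
    LAT = {("a", "b"): 10, ("a", "c"): 25, ("a", "d"): 40,
           ("b", "c"): 15, ("b", "d"): 30, ("c", "d"): 12}
    NODES = ["a", "b", "c", "d"]

    def lat(x, y):
        return 0 if x == y else LAT.get((x, y), LAT.get((y, x)))

    def greedy_placement(k):
        placed = [NODES[0]]                   # arbitrary first site
        while len(placed) < k:
            # add the node farthest from its nearest placed controller
            farthest = max(NODES, key=lambda n: min(lat(n, c) for c in placed))
            placed.append(farthest)
        return placed

    controllers = greedy_placement(2)
    worst = max(min(lat(n, c) for c in controllers) for n in NODES)
    print("controllers:", controllers, "worst-case latency:", worst, "ms")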