169 research outputs found

    Study of BGP Convergence Time

    Border Gateway Protocol (BGP), a path-vector routing protocol, is the most widely deployed exterior gateway protocol (EGP) on the Internet. As new technologies are deployed across the Internet, protocols need continuous improvement in their behavior and operation, and new routing technologies must preserve a high level of service availability. Hence, after topological changes, BGP needs to achieve fast network convergence. Network sizes are now growing rapidly, and to remain scalable BGP must avoid instability: instability and failures can push the network into an unstable state, which significantly increases convergence time. This paper surveys various approaches, such as BGP policies, instability handling, and fault detection, to improve the convergence time of BGP.

    Deliverable DJRA1.2. Solutions and protocols proposal for the network control, management and monitoring in a virtualized network context

    This deliverable presents several research proposals for the FEDERICA network on different subjects, such as monitoring, routing, signalling, resource discovery, and isolation. For each topic, one or more possible solutions are elaborated, explaining the background, functioning, and implications of the proposed solutions. This deliverable goes further into the research aspects of FEDERICA. First, the architecture of the control plane for the FEDERICA infrastructure is defined; several possibilities could be implemented, using the basic FEDERICA infrastructure as a starting point. The focus of this document is the intra-domain aspects of the control plane and their properties, although some inter-domain aspects are also addressed. The main objective of this deliverable is to create and implement the prototype/tool for the FEDERICA slice-oriented control system using an appropriate framework. It goes deeply into the definition of the containers between entities and their syntax, preparing this tool for the future implementation of any control-plane algorithm, whether applying UPB policies or configuring it by hand. We opt for an open solution despite the real-time limitations we may face (for instance, opening web-service connections or applying fast recovery mechanisms). The application being developed is the central element of the control plane, and additional features must be added to it. From a functionality point of view, this control plane is composed of several procedures that provide a reliable application and include mechanisms and algorithms for discovering and assigning resources to the user. To achieve this, several topics must be researched in order to propose new protocols for the virtual infrastructure.
The topics and necessary features covered in this document include resource discovery, resource allocation, signalling, routing, isolation, and monitoring. All of these topics must be researched in order to find a good solution for the FEDERICA network. Some of these algorithms have already begun to be analyzed and will be expanded in the next deliverable. Current standardization efforts and existing solutions have been investigated with the same aim. Resource discovery is an important issue within the FEDERICA network, as manual resource discovery is not an option due to scalability requirements; furthermore, no standardization exists, so knowledge must be obtained from related work. Ideally, the proposed solutions for these topics should not only be adequate for this specific infrastructure but also applicable to other virtualized networks.

    Future Internet Routing Design for Massive Failures and Attacks

    Given the high complexity and increasing traffic load of the Internet, geographically correlated challenges caused by large-scale disasters or malicious attacks pose a significant threat to dependable network communication. To understand their characteristics, we propose a critical-region identification mechanism and incorporate its result into a new graph resilience metric, compensated Total Geographical Graph Diversity. Our metric is capable of characterizing and differentiating resilience levels of different physical topologies. We further analyze the mechanisms attackers could exploit to maximize damage and demonstrate the effectiveness of a network restoration plan. Based on the geodiversity of topologies, we present the path geodiverse problem and two heuristics that solve it more efficiently than the optimal algorithm. We propose the flow geodiverse problem and two optimization formulations to study the trade-off among cost, end-to-end delay, and path skew with multipath forwarding. We further integrate the solutions to the above models into our cross-layer resilient protocol stack, ResTP–GeoDivRP, which is prototyped and implemented in the ns-3 network simulator and emulated in our KanREN testbed. By providing multiple GeoPaths, our protocol stack provides better path restoration performance than Multipath TCP.
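    The path geodiverse problem above rests on quantifying how geographically separated two paths are. As an illustrative sketch (not the paper's actual metric or heuristics), one can score a candidate path pair by the minimum great-circle distance between their intermediate nodes; the coordinates and the scoring rule here are assumptions:

```python
import math

def haversine_km(a, b):
    """Great-circle distance between two (lat, lon) points, in km."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*a, *b))
    dlat, dlon = lat2 - lat1, lon2 - lon1
    h = (math.sin(dlat / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin(dlon / 2) ** 2)
    return 2 * 6371.0 * math.asin(math.sqrt(h))

def path_geodiversity_km(path_a, path_b):
    """Minimum pairwise distance between the intermediate nodes of two
    paths sharing endpoints. A larger value means a single geographic
    event (disaster, attack) is less likely to take down both paths."""
    inner_a, inner_b = path_a[1:-1], path_b[1:-1]
    if not inner_a or not inner_b:
        return 0.0
    return min(haversine_km(p, q) for p in inner_a for q in inner_b)
```

    A geodiverse route selector would prefer the pair maximizing this score among all candidate pairs, subject to cost and delay constraints.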

    A Framework to Quantify Network Resilience and Survivability

    The significance of resilient communication networks in modern society is well established. Resilience and survivability mechanisms in current networks are limited and domain-specific, and the corresponding evaluation methods are either qualitative assessments or context-specific metrics; a rigorous quantitative evaluation of network resilience is needed. We propose a service-oriented framework to characterize the resilience of networks to a number of faults and challenges at any abstraction level. This dissertation presents methods to quantify the operational state and the expected service of the network using functional metrics. We formalize resilience as transitions of the network state in a two-dimensional state space quantifying network characteristics, from which network service performance parameters can be derived. One dimension represents the network as normally operating, partially degraded, or severely degraded; the other represents network service as acceptable, impaired, or unacceptable. Our goal is initially to understand how to characterize network resilience, and ultimately to guide network design and engineering toward increased resilience. We apply the proposed framework to evaluate the resilience of various topologies and routing protocols, and we present several mechanisms to improve the resilience of networks to various challenges.
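    The two-dimensional state space described above can be sketched in a few lines. The functional metrics (link availability, packet delivery ratio) and the thresholds below are illustrative assumptions, not the dissertation's calibrated values:

```python
def operational_state(link_availability):
    """Map a functional metric (fraction of links up) to an operational level."""
    if link_availability >= 0.9:
        return "normal"
    if link_availability >= 0.6:
        return "partially degraded"
    return "severely degraded"

def service_state(packet_delivery_ratio):
    """Map a service metric to one of the three service levels."""
    if packet_delivery_ratio >= 0.95:
        return "acceptable"
    if packet_delivery_ratio >= 0.8:
        return "impaired"
    return "unacceptable"

def resilience_state(link_availability, packet_delivery_ratio):
    """A challenge moves the network through this 2-D space; a resilient
    design keeps the trajectory near ("normal", "acceptable")."""
    return (operational_state(link_availability),
            service_state(packet_delivery_ratio))
```

    Tracking the sequence of states a challenge induces gives the state-transition view of resilience that the framework formalizes.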

    Predictive Analytics Lead to Smarter Self-Organizing Directional Wireless Backbone Networks

    Directional wireless systems are becoming a cost-effective approach to providing high-speed, reliable, broadband connections for the ubiquitous mobile wireless devices in use today. The most common of these systems consist of narrow-beam radio frequency (RF) and free-space optical (FSO) links, which offer speeds between 100 Mbps and 100 Gbps with bit-error rates comparable to fixed fiber-optic installations. In addition, directional wireless systems achieve spatial and spectral efficiencies that cannot be matched by broadcast systems. Their compact designs permit the installation of directional antennas on board unmanned autonomous systems (UAS) to provide network availability to regions prone to natural disasters, in maritime situations, and in war-torn countries that lack infrastructure security. Furthermore, through the use of intelligent network-centric algorithms, a flexible airborne backbone network can be established to dodge the scalability limitations of traditional omnidirectional wireless networks. Assuring end-to-end connectivity and coverage is the main challenge in the design of directional wireless backbone (DWB) networks. Combining these dual objectives with the dynamic nature of the environment in which DWB networks are deployed, in addition to standard network objectives such as latency minimization and throughput maximization, demands a rigorous control process that encompasses all aspects of the system. This includes the mechanical steering of the directional point-to-point links and the monitoring of aggregate network performance (e.g. dropped packets). The inclusion of processes for topology control, mobility management, and pointing, acquisition, and tracking of the directional antennas, alongside traditional protocols (e.g. IPv6), provides a rigorous framework for next-generation mobile directional communication networks.
This dissertation provides a novel approach to increasing reliability in reconfigurable beam-steered directional wireless backbone networks by predicting optimal network reconfigurations. The network is modeled as a giant molecule in which the point-to-point links between two UASs grow and retract analogously to the bonds between atoms in a molecule. This cross-disciplinary methodology explores the application of potential energy surfaces and normal mode analysis as an extension of topology control optimization. Each of these methodologies provides a new and unique ability to predict unstable configurations of DWB networks through an understanding of the second-order dynamics inherent in the aggregate configuration of the system, insight that is not available from monitoring individual link performance. Together, the techniques used to model the DWB network through molecular dynamics are referred to as predictive analytics, and they provide reliable results that lead to smarter self-organizing reconfigurable beam-steered DWB networks. Furthermore, a comprehensive control architecture is proposed that complements traditional network science (e.g. the Internet protocol suite) with the unique design aspects of DWB networks. The distinct ability of a beam-steered DWB network to adjust the direction of its antennas (i.e. reconfigure) in response to atmospheric degradation or increased separation of nodes is not captured by traditional network processes such as re-routing mechanisms; processes for reconfiguration can therefore be abstracted that optimize the physical interconnections while maintaining interoperability with existing protocols. This control framework is validated using network metrics for latency and throughput and compared to existing architectures that use only standard re-routing mechanisms.
Results are shown that validate both the molecular modeling analogy for a reconfigurable beam-steered directional wireless backbone network and a comprehensive control architecture that combines the unique reconfiguration and mobility capabilities of mobile wireless backbone networks with existing network protocols such as IPv6.
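    The normal-mode idea above can be illustrated with a simplified stand-in: instead of a full elastic-network Hessian, this sketch uses the graph Laplacian of the link topology, whose smallest non-zero eigenvalue (the algebraic connectivity) plays the role of the softest mode. A value near zero flags a configuration close to partitioning, i.e. one a predictive controller would reconfigure away from. The Laplacian substitution is an assumption for illustration, not the dissertation's model:

```python
import numpy as np

def laplacian(n, links):
    """Graph Laplacian L = D - A for n nodes and undirected links."""
    L = np.zeros((n, n))
    for i, j in links:
        L[i, i] += 1
        L[j, j] += 1
        L[i, j] -= 1
        L[j, i] -= 1
    return L

def softest_mode(n, links):
    """Smallest non-zero Laplacian eigenvalue. Near-zero values indicate a
    'soft mode': a small perturbation can disconnect the network."""
    eigenvalues = np.linalg.eigvalsh(laplacian(n, links))
    return eigenvalues[1]  # eigenvalues[0] is ~0 for a connected graph
```

    A ring of four nodes (softest mode 2.0) is stiffer than a chain of four (softest mode about 0.59), matching the intuition that the chain is one link failure away from partition.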

    Finding and Mitigating Geographic Vulnerabilities in Mission Critical Multi-Layer Networks

    Title from PDF of title page, viewed on June 20, 2016. Dissertation advisor: Cory Beard. Vita. Includes bibliographical references (pages 232-257). Thesis (Ph.D.)--School of Computing and Engineering, University of Missouri--Kansas City, 2016. In Air Traffic Control (ATC), communications outages may lead to immediate loss of communications or radar contact with aircraft. In the short term, there may be safety-related issues, as important services including power systems, ATC, or communications for first responders during a disaster may be out of service; in the long term, significant financial damage may result from airline delays and cancellations. This highlights the different types of impact that may occur after a disaster or other geographic event. The question is: how do we evaluate and improve the ability of a mission-critical network to perform its mission during geographically correlated failures? To answer this question, we consider several large and small networks, including a multi-layer ATC Service Oriented Architecture (SOA) network known as SWIM. This research presents a number of tools to analyze and mitigate both long- and short-term geographic vulnerabilities in mission-critical networks. To provide context for the tools, a disaster-planning approach is presented that focuses on resiliency evaluation, provisioning of demands, topology design, and mitigation of vulnerabilities. For the resilience evaluation, we propose a novel metric known as the Network Impact Resilience (NIR) metric and a reduced-state algorithm to compute it, the Self-Pruning Network State Generation (SP-NSG) algorithm. These tools not only evaluate the resiliency of a network under a variety of possible network tests but also identify geographic vulnerabilities.
Related to demand provisioning and mitigation of vulnerabilities, we present methods that focus on provisioning in preparation for rerouting of demands immediately following an event, based on Service Level Agreements (SLAs), and on fast rerouting of demands around geographic vulnerabilities using Multi-Topology Routing (MTR). The topology design area focuses on adding nodes to make topologies more resistant to geographic vulnerabilities. Additionally, a set of network performance tools is proposed for use with mission-critical networks that can model at least up to second-order network delay statistics. The first is an extension of the Queueing Network Analyzer (QNA) to model multi-layer networks (and specifically SOA networks); the second is a network decomposition tool based on Linear Algebraic Queueing Theory (LAQT), one of the first extensive uses of LAQT for network modeling. Benefits, results, and limitations of both methods are described. Contents: Introduction -- SWIM network: an Air Traffic Control example -- Performance analysis of mission-critical multi-layer networks -- Evaluation of geographically correlated failures in multi-layer networks -- Provisioning and restoral of mission-critical services for disaster resilience -- Topology improvements to avoid high-impact geographic events -- Routing of mission-critical services during disasters -- Conclusions and future research -- Appendix A. Pub/Sub simulation model description -- Appendix B. ME random number generation.
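    QNA-style decomposition treats each station as a G/G/1 queue characterized by two moments: rates and squared coefficients of variation (SCVs) of interarrival and service times, which is what allows it to capture second-order delay statistics. A minimal sketch using the standard Kingman-style two-moment approximation (a textbook formula, offered here as an illustration, not the dissertation's multi-layer extension):

```python
def gg1_waiting_time(lam, mu, ca2, cs2):
    """Two-moment (Kingman-style) approximation of mean waiting time in a
    G/G/1 queue. lam: arrival rate; mu: service rate; ca2/cs2: squared
    coefficients of variation of interarrival and service times."""
    rho = lam / mu
    assert rho < 1, "queue must be stable (utilization < 1)"
    # W_q ~= [rho / (1 - rho)] * [(ca2 + cs2) / 2] * E[S]
    return (rho / (1 - rho)) * ((ca2 + cs2) / 2.0) * (1.0 / mu)
```

    For Poisson arrivals and exponential service (ca2 = cs2 = 1) the formula reduces to the exact M/M/1 waiting time; lowering the service-time variability (smaller cs2) proportionally lowers the predicted delay, which is the second-order effect a mean-only model cannot see.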

    Exploiting the power of multiplicity: a holistic survey of network-layer multipath

    The Internet is inherently a multipath network: an underlying network with only a single path connecting its various nodes would be debilitatingly fragile. Unfortunately, traditional Internet technologies were designed around the restrictive assumption of a single working path between a source and a destination. The lack of native multipath support constrains network performance even when the underlying network is richly connected and has multiple redundant paths. Computer networks can exploit the power of multiplicity, through which a diverse collection of paths is pooled as a single resource, to unlock the inherent redundancy of the Internet. This opens up a new vista of opportunities, promising increased throughput (through concurrent use of multiple paths) and increased reliability and fault tolerance (through the use of multiple paths in backup/redundant arrangements). Many emerging trends in networking signify that the Internet's future will be multipath, including the use of multipath technology in data-center computing; the ready availability of multiple heterogeneous radio interfaces (such as Wi-Fi and cellular) in wireless devices; the ubiquity of mobile devices that are multihomed across heterogeneous access networks; and the development and standardization of multipath transport protocols such as Multipath TCP. The aim of this paper is to provide a comprehensive survey of the literature on network-layer multipath solutions. We present a detailed investigation of two important design issues: the control-plane problem of how to compute and select routes, and the data-plane problem of how to split a flow across the computed paths. The main contribution of this paper is a systematic articulation of the main design issues in network-layer multipath routing, along with a broad-ranging survey of the vast literature on network-layer multipathing.
We also highlight open issues and identify directions for future work.
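    The data-plane splitting problem mentioned above is commonly addressed with hash-based, flow-level splitting, so that all packets of one flow follow the same path (avoiding reordering) while aggregate traffic is divided roughly in proportion to configured weights. A minimal sketch of that idea; the path names and 5-tuple format are illustrative assumptions:

```python
import hashlib

def pick_path(flow_tuple, paths, weights):
    """Map a flow's 5-tuple to one of several paths.

    The hash is deterministic per flow, so a flow never changes path
    (no packet reordering), yet across many flows traffic splits in
    proportion to the given weights."""
    digest = hashlib.sha256(repr(flow_tuple).encode()).digest()
    point = int.from_bytes(digest[:8], "big") / 2.0 ** 64  # uniform in [0, 1)
    total = float(sum(weights))
    acc = 0.0
    for path, w in zip(paths, weights):
        acc += w / total
        if point < acc:
            return path
    return paths[-1]  # guard against floating-point rounding
```

    A real ECMP/WCMP implementation hashes in hardware and handles path churn, but the weighted-interval idea is the same.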

    Technology-related disasters: a survey towards disaster-resilient software defined networks

    Resilience against disaster scenarios is essential to network operators, not only because of the potential economic impact of a disaster but also because communication networks form the basis of crisis management. COST RECODIS aims at studying measures, rules, techniques, and prediction mechanisms for different disaster scenarios. This paper gives an overview of different solutions in the context of technology-related disasters. After a general overview, the paper focuses on resilient Software Defined Networks.