2,280 research outputs found

    Spare capacity allocation using shared backup path protection for dual link failures

    This paper extends the spare capacity allocation (SCA) problem from single link failures [1] to dual link failures on mesh-like IP or WDM networks. The SCA problem pre-plans each traffic flow with one working path and two backup paths that are mutually disjoint, using the shared backup path protection (SBPP) scheme. An aggregated spare provision matrix (SPM) captures the spare capacity sharing under dual link failures. Compared to previous work by He and Somani [2], this method offers better scalability and flexibility. The SCA problem is formulated as a non-linear integer programming model and partitioned into two sequential linear sub-models: the first finds all primary backup paths, and the second finds all secondary backup paths. Results on five networks show that the network redundancy of dedicated 1+1+1 protection is in the range of 313-400%. It drops to 96-181% for 1:1:1 without loss of dual-link resiliency, at the cost of the more complicated spare capacity sharing among backup paths. The hybrid 1+1:1 scheme provides an intermediate redundancy ratio of 187-310% with moderate complexity. We also compare passive and active approaches, which consider spare capacity sharing after or during the backup path routing process, respectively. The active sharing approaches always achieve lower redundancy than the passive ones; the reductions are about 12% for 1+1:1 and 25% for 1:1:1.
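    A minimal sketch of the spare capacity sharing idea behind 1:1:1 SBPP for dual link failures: on each link, the spare capacity to reserve is the worst-case backup load over all dual-link failure scenarios. The flow dictionaries used here are hypothetical stand-ins for the paper's spare provision matrix.

```python
from itertools import combinations

def shared_spare_capacity(links, flows):
    """Per-link spare capacity under 1:1:1 sharing: the maximum backup load
    placed on the link over all dual-link failure scenarios.  Each flow is a
    dict with 'demand' and link sets 'working', 'backup1', 'backup2'
    (mutually disjoint).  Illustrative sketch only."""
    spare = {l: 0 for l in links}
    for f1, f2 in combinations(links, 2):          # every dual-link failure
        load = {l: 0 for l in links}
        for fl in flows:
            if f1 in fl['working'] or f2 in fl['working']:   # working path hit
                # take the first backup path untouched by both failures
                for bp in (fl['backup1'], fl['backup2']):
                    if f1 not in bp and f2 not in bp:
                        for l in bp:
                            load[l] += fl['demand']
                        break
        for l in links:
            spare[l] = max(spare[l], load[l])      # sharing: reserve the worst case
    return spare
```

    The redundancy figures quoted above correspond to the ratio of total spare capacity (summed over links) to total working capacity.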

    Survivability Analysis on Non-Triconnected Optical Networks under Dual-Link Failures

    Survivability of optical networks is among the most critical problems that telecommunications operators need to solve at a reasonable cost. Survivability can be enhanced by increasing the number of network links and their spare capacity; however, this deploys more resources that will be used only under failure scenarios. In other words, these spare resources do not generate any direct profit for network operators, as they are reserved to route only disrupted traffic. In particular, the case of dual link failures on fiber optic cables (i.e., fiber cuts) has recently received much attention, because repairing these cables typically requires considerable time, which increases the probability of a second failure on another link of the network. In this context, survivability schemes can be used to recover the network from a dual link failure scenario. In this work, we analyze protection and restoration schemes, two well-known recovery strategies. The former is simpler to implement as it considers a fixed set of backup paths for all failure scenarios; however, it cannot take into account the spare capacity released by disrupted connections. The latter computes the best recovery path considering not only the spare capacity but also the capacity released due to failures. Achieving 100% survivability (i.e., recovery from all possible dual link failures) requires a triconnected network, in which three disjoint paths exist for each connection. Since such networks can become extremely expensive, requiring a huge number of network links (i.e., fiber connections), the more realistic case of non-triconnected networks is assumed. In these networks, full network recovery is not feasible, but achieving the maximum possible survivability is desired. Spare capacity can then be allocated to existing network links, which represents the actual cost of survivability. We propose optimization models that take these different recovery strategies into account, and demonstrate that restoration has the potential to provide much better recovery capability with almost the same amount of spare capacity as protection schemes. Sociedad Argentina de Informática e Investigación Operativa (SADIO)
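    A toy illustration of the protection-versus-restoration gap under dual link failures: protection sticks to one pre-planned backup path per demand, while restoration may use any path that survives. Capacity is ignored, networkx and the data structures are assumptions, and the paper's optimization models additionally allocate spare capacity.

```python
import itertools
import networkx as nx

def dual_failure_survivability(G, demands, backup_path):
    """Fraction of (demand, dual-link-failure) pairs that remain connected
    under (a) protection with one fixed backup path per demand, and
    (b) restoration over whatever part of the network survives.
    G: nx.Graph; demands: list of (src, dst); backup_path: dict mapping
    (src, dst) to a pre-planned node list.  Connectivity only, no capacity."""
    failures = list(itertools.combinations(G.edges(), 2))
    prot_ok = rest_ok = 0
    for e1, e2 in failures:
        failed = {frozenset(e1), frozenset(e2)}
        H = G.copy()
        H.remove_edges_from([e1, e2])
        for (s, t) in demands:
            hops = zip(backup_path[(s, t)], backup_path[(s, t)][1:])
            if not any(frozenset(h) in failed for h in hops):
                prot_ok += 1               # fixed backup path untouched
            if nx.has_path(H, s, t):
                rest_ok += 1               # restoration finds some surviving path
    total = len(failures) * len(demands)
    return prot_ok / total, rest_ok / total
```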

    Risk based resilient network design

    This paper presents a risk-based approach to resilient network design. The basic design problem is: given a working network and a fixed budget, how best to allocate the budget for deploying a survivability technique in different parts of the network so as to manage risk. The term risk captures two related quantities: the likelihood of a failure or attack, and the amount of damage it causes. Various designs with different risk-based objectives are considered, for example, minimizing the expected damage, minimizing the maximum damage, and minimizing a measure of the variability of damage that could occur in the network. A design methodology for the proposed risk-based survivable network design approach is presented within an optimization model framework. Numerical results and analysis illustrating the different risk-based designs and the trade-offs among the schemes are presented. © 2011 Springer Science+Business Media, LLC
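    The three risk objectives named above can be made concrete over an explicit list of failure scenarios; this is a small sketch, not the paper's optimization model, and the scenario representation is hypothetical.

```python
def risk_measures(scenarios):
    """scenarios: list of (probability, damage) pairs, one per failure or
    attack scenario under a candidate budget allocation.  Returns the three
    objectives discussed above: expected damage, worst-case damage, and a
    variability measure (variance of damage)."""
    expected = sum(p * d for p, d in scenarios)             # min E[damage]
    worst = max(d for _, d in scenarios)                    # min-max damage
    variance = sum(p * (d - expected) ** 2 for p, d in scenarios)
    return {'expected': expected, 'worst_case': worst, 'variance': variance}
```

    A design procedure would evaluate these measures for each candidate allocation of the budget and keep the allocation that minimizes the chosen objective.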

    Dual-failure Survivability for Multi Quality Data using Single p-cycle

    Dual-failure scenarios are a real possibility in today's optical networks, and it is becoming increasingly important for carriers and network operators to consider them when designing their networks. The p-cycle is a recent approach to optical network protection: p-cycles use pre-connected cycles of spare capacity to restore affected working traffic. We propose new methods and strategies to support multiple quality-of-service classes in a static p-cycle-based network design, using the same global set of resources required to operate a network with only a single failure-protected service class. We also propose a new method to provide dual-failure survivability using p-cycles. A p-cycle is set up for each link; when a link fails, the system selects the best restoration route of the p-cycle online. Under single or dual failures, data is not diverted arbitrarily onto a p-cycle route. Packets are switched based on priority: higher-priority packets take the shortest-distance route of the p-cycle, and lower-priority packets take the longest-distance route.
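    A minimal sketch of the priority-based route selection described above: for a failed straddling link, the p-cycle offers two restoration arcs between the link's endpoints, and traffic is split by priority across the shorter and longer arc. Hop count stands in for physical distance, and the list-based cycle representation is an assumption.

```python
def pcycle_restoration_arcs(cycle, u, v):
    """Return the two restoration arcs a p-cycle offers between the endpoints
    u, v of a failed straddling link: the shorter arc (high-priority traffic)
    and the longer arc (low-priority traffic).  'cycle' is a list of nodes,
    closed implicitly; hop count is used in place of physical distance."""
    n = len(cycle)
    i, j = cycle.index(u), cycle.index(v)
    arc_fwd = [cycle[(i + k) % n] for k in range((j - i) % n + 1)]  # u -> v one way
    arc_bwd = [cycle[(i - k) % n] for k in range((i - j) % n + 1)]  # u -> v the other way
    short_arc, long_arc = sorted((arc_fwd, arc_bwd), key=len)
    return short_arc, long_arc
```

    For example, pcycle_restoration_arcs(['A', 'B', 'C', 'D', 'E'], 'A', 'C') returns (['A', 'B', 'C'], ['A', 'E', 'D', 'C']).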

    Joint dimensioning of server and network infrastructure for resilient optical grids/clouds

    We address the dimensioning of infrastructure, comprising both network and server resources, for large-scale decentralized distributed systems such as grids or clouds. We design the resulting grid/cloud to be resilient against network link or server failures. To this end, we exploit relocation: Under failure conditions, a grid job or cloud virtual machine may be served at an alternate destination (i.e., different from the one under failure-free conditions). We thus consider grid/cloud requests to have a known origin, but assume a degree of freedom as to where they end up being served, which is the case for grid applications of the bag-of-tasks (BoT) type or hosted virtual machines in the cloud case. We present a generic methodology based on integer linear programming (ILP) that: 1) chooses a given number of sites in a given network topology where to install server infrastructure; and 2) determines the amount of both network and server capacity to cater for both the failure-free scenario and failures of links or nodes. For the latter, we consider either failure-independent (FID) or failure-dependent (FD) recovery. Case studies on European-scale networks show that relocation allows considerable reduction of the total amount of network and server resources, especially in sparse topologies and for higher numbers of server sites. Adopting a failure-dependent backup routing strategy does lead to lower resource dimensions, but only when we adopt relocation (especially for a high number of server sites): Without exploiting relocation, potential savings of FD versus FID are not meaningful
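    A toy sketch of how relocation shapes server dimensioning, assuming failure-independent recovery with a fixed backup site per request; the request and site structures are hypothetical, and the paper's ILP additionally dimensions network capacity and covers link as well as node failures.

```python
def dimension_servers(requests, sites, primary, backup):
    """Server capacity per site = working load of requests homed there plus
    shared spare capacity sized for the worst single server-site failure,
    assuming each affected request relocates to its fixed backup site.
    requests: dict request -> demand; primary/backup: dicts request -> site."""
    working = {s: 0 for s in sites}
    for r, d in requests.items():
        working[primary[r]] += d
    spare = {s: 0 for s in sites}
    for failed in sites:                            # one server site down at a time
        relocated = {s: 0 for s in sites}
        for r, d in requests.items():
            if primary[r] == failed:
                relocated[backup[r]] += d           # job relocates to its backup site
        for s in sites:
            spare[s] = max(spare[s], relocated[s])  # spare is shared across failures
    return {s: working[s] + spare[s] for s in sites}
```

    Without relocation, disrupted requests must return to their original destination, so spare resources cannot be spread over alternate sites in this way.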

    Differentiated quality-of-recovery and quality-of-protection in survivable WDM mesh networks

    In the modern telecommunication business, there is a need to provide different Quality-of-Recovery (QoR) and Quality-of-Protection (QoP) classes in order to accommodate as many customers as possible and to optimize the protection capacity cost. Prevalent protection methods that provide specific QoS guarantees related to protection are based on protection structures (topologies) with pre-defined shapes, e.g., p-cycles and p-trees. Although some of these protection patterns are known to provide a good trade-off among the different protection parameters, their shapes can limit their deployment under specific network conditions, e.g., a constrained link spare capacity budget or traffic distribution. In this thesis, we propose to re-think the design process of protection schemes in survivable WDM networks by adopting a new design approach where the shapes of the protection structures are decided based on the targeted QoR and QoP guarantees, and not the reverse. We focus on the degree of pre-configuration of the protection topologies, and use fully and partially pre-cross-connected p-structures as well as dynamically cross-connected p-structures. For QoR differentiation, we develop different approaches for pre-configuring the protection capacity in order to strike different balances between the protection cost and the availability requirements in the network; for QoP differentiation, we focus on the shaping of the protection structures to provide different grades of protection, including single and dual-link failure protection. The new research directions proposed and developed in this thesis are intended to help network operators effectively support different Quality-of-Recovery and Quality-of-Protection classes. All new ideas have been translated into mathematical models for which we propose practical and efficient design methods in order to optimize the cost inherent to the different designs of protection schemes. Furthermore, we establish a quantitative relation between the degree of pre-configuration of the protection structures and their costs in terms of protection capacity. Our most significant contributions are the design and development of Pre-Configured Protection Structure (p-structure) and Pre-Configured Protection Extended-Tree (p-etree) based schemes. Thanks to column generation modeling and solution approaches, we propose a new design approach for protection schemes where we deploy just enough protection to provide different quality-of-recovery and quality-of-protection classes.
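    The column generation approach mentioned above alternates between a restricted master problem that selects protection structures and a pricing problem that proposes new candidate structures. The skeleton below shows only that loop; both sub-problems are left as hypothetical callables and are not taken from the thesis.

```python
def column_generation(initial_structures, solve_master, price_structure, max_iters=100):
    """Generic column-generation loop: solve_master(structures) returns the
    master objective (e.g., spare capacity cost) and dual values;
    price_structure(duals) returns a new protection structure with negative
    reduced cost, or None if none exists, at which point the current solution
    is optimal for the LP relaxation.  Skeleton only."""
    structures = list(initial_structures)
    cost = None
    for _ in range(max_iters):
        cost, duals = solve_master(structures)      # restricted master problem
        new_structure = price_structure(duals)      # pricing sub-problem
        if new_structure is None:                   # no improving column found
            break
        structures.append(new_structure)
    return cost, structures
```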

    Network protection with service guarantees

    Thesis (Ph.D.), Massachusetts Institute of Technology, Dept. of Aeronautics and Astronautics, 2013. This electronic version was submitted and approved by the author's academic department as part of an electronic thesis pilot project. The certified thesis is available in the Institute Archives and Special Collections. Cataloged from the department-submitted PDF version of the thesis. Includes bibliographical references (p. 167-174).

    With the increasing importance of communication networks comes an increasing need to protect against network failures. Traditional network protection has been an "all-or-nothing" approach: after any failure, all network traffic is restored. Due to the cost of providing this full protection, many network operators opt not to provide any protection. This is especially true in wireless networks, where reserving scarce resources for protection is often too costly. Furthermore, network protection often does not come with guarantees on recovery time, which becomes increasingly important with the widespread use of real-time applications that cannot tolerate long disruptions. This thesis investigates providing protection for mesh networks under a variety of service guarantees, offering significant resource savings over traditional protection schemes. First, we develop a network protection scheme that guarantees a quantifiable minimum grade of service upon a failure within the network. Our scheme guarantees that a fraction q of each demand remains after any single-link failure, at a fraction of the resources required for full protection. We develop both a linear program and algorithms to find the minimum-cost capacity allocation that meets both demand and protection requirements. Subsequently, we develop a novel network protection scheme that provides guarantees on both the fraction of time a flow has full connectivity and a quantifiable minimum grade of service during downtimes. In particular, a flow can be below the full demand for at most a maximum fraction of time; even then, it must still support at least a fraction q of the full demand. This is in contrast to current protection schemes that offer either availability guarantees with no bandwidth guarantees during the downtime, or full protection schemes that offer 100% availability after a single link failure. We show that the multiple-availability-guaranteed problem is NP-hard, and develop solutions using both a mixed integer linear program and heuristic algorithms. Next, we consider the problem of providing resource-efficient network protection that guarantees the maximum amount of time a flow can be interrupted after a failure. This is in contrast to schemes that offer no recovery time guarantees, such as IP rerouting, or the prevalent local recovery scheme of Fast ReRoute, which often over-provisions resources to meet recovery time constraints. To meet these recovery time guarantees, we provide a novel and flexible solution by partitioning the network into failure-independent "recovery domains", where within each domain the maximum time to recover from a failure is guaranteed. Finally, we study the problem of providing protection against failures in wireless networks subject to interference constraints. Typically, protection in wired networks is provided through the provisioning of backup paths. This approach has not previously been considered in the wireless setting due to the prohibitive cost of backup capacity. However, we show that in the presence of interference, protection can often be provided with no loss in throughput. This is because, after a failure, links that previously interfered with the failed link can be activated, leading to a "recapturing" of some of the lost capacity. We provide both an ILP formulation for the optimal solution and algorithms that perform close to optimal. By Gregory Kuperman. Ph.D.
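    A small sketch of the fraction-q guarantee described above: after every single-link failure, at least a fraction q of each demand must still be supportable. The check below treats each demand independently by max-flow, so it is optimistic about capacity sharing; the thesis allocates capacity jointly with a linear program. networkx and the capacitated-graph representation are assumptions.

```python
import networkx as nx

def meets_fraction_q(G, demands, q):
    """Check whether the capacitated graph G still supports at least a
    fraction q of every demand after any single-link failure.  G: nx.Graph
    with a 'capacity' attribute on each edge; demands: list of
    (src, dst, volume).  Each demand is checked in isolation by max-flow,
    which ignores contention between demands -- an optimistic sketch."""
    for e in list(G.edges()):
        H = G.copy()
        H.remove_edge(*e)                             # single-link failure
        for s, t, volume in demands:
            if not nx.has_path(H, s, t):
                return False
            flow = nx.maximum_flow_value(H, s, t, capacity='capacity')
            if flow < q * volume:
                return False                          # grade-of-service violated
    return True
```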