    A robust optimization approach to backup network design with random failures

    This paper presents a scheme in which a dedicated backup network is designed to provide protection from random link failures. Upon a link failure in the primary network, traffic is rerouted through a preplanned path in the backup network. We introduce a novel approach for dealing with random link failures, in which probabilistic survivability guarantees are provided to limit capacity over-provisioning. We show that the optimal backup routing strategy in this respect depends on the reliability of the primary network. Specifically, as primary links become less likely to fail, the optimal backup networks employ more resource sharing amongst backup paths. We apply results from the field of robust optimization to formulate an ILP for the design and capacity provisioning of these backup networks. We then propose a simulated annealing heuristic to solve this problem for large-scale networks, and present simulation results that verify our analysis and approach.

    National Science Foundation (U.S.) (grant CNS-0626781); National Science Foundation (U.S.) (grant CNS-0830961); United States. Defense Threat Reduction Agency (grant HDTRA1-07-1-0004); United States. Defense Threat Reduction Agency (grant HDTRA-09-1-005)
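    The abstract's key quantitative idea, that more reliable primary links justify more sharing among backup paths, can be illustrated with a small probabilistic sizing calculation. The Python sketch below is ours, not the paper's formulation: the function name is hypothetical, and independent link failures with equal per-link demands are simplifying assumptions. It computes the smallest capacity a shared backup link needs so that the probability of the simultaneously failed demand exceeding it stays below a survivability target epsilon.

        import math

        def shared_backup_capacity(n_primary, demand, p_fail, epsilon):
            """Smallest capacity on a backup link shared by n_primary primary
            links such that P(total failed demand > capacity) <= epsilon.
            Independent failures and equal per-link demands are assumed."""
            # P(exactly k of the n primary links fail simultaneously)
            pmf = [math.comb(n_primary, k) * p_fail**k * (1 - p_fail)**(n_primary - k)
                   for k in range(n_primary + 1)]
            tail = 1.0
            for k in range(n_primary + 1):
                tail -= pmf[k]            # tail is now P(more than k links fail)
                if tail <= epsilon:
                    return k * demand     # surviving k simultaneous failures suffices
            return n_primary * demand     # worst case: provision for every link

        # More reliable primary links -> less shared capacity needed,
        # i.e. more sharing among backup paths becomes safe:
        for p in (0.1, 0.01, 0.001):
            print(p, shared_backup_capacity(n_primary=10, demand=1.0,
                                            p_fail=p, epsilon=1e-4))

    With these inputs the required shared capacity drops from 6 to 3 to 1 units of demand as the failure probability falls, mirroring the paper's finding that more reliable primaries permit more sharing.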

    Link weight optimization models for link failures in Internet Protocol networks

    As Internet traffic grows while generating little additional revenue for service providers, maintaining the same service level agreements (SLAs) with limited capital expenditures (CAPEX) is challenging. The backbone traffic of major Internet service providers grows exponentially while revenue grows only logarithmically. In this situation, both CAPEX reduction and more efficient use of the existing infrastructure are needed.

    Link failures are common in Internet Protocol (IP) backbone networks and are an impediment to meeting the required quality of service (QoS). When a failure occurs, the affected traffic is rerouted over adjacent links. The resulting increase in network congestion reduces the amount of traffic that can be admitted and can increase the packet drop rate. In this thesis, network congestion refers to the highest link utilization over all links in the network. An increase in network congestion may disrupt services with critical SLAs as admissible traffic becomes restricted and the packet drop rate rises; from a network operator's point of view, keeping congestion manageable even under failure is therefore desirable.

    A possible way to deal with a congestion increase is to augment link capacity until the manageable congestion threshold is met. Since CAPEX must be reduced, however, the additional capacity has to be minimized. In IP networks, where OSPF is widely used as the routing protocol, traffic paths are determined by link weights configured in advance. Because link weights decide the paths, they decide which links become congested and thus determine the network congestion. Link weights can therefore be optimized to minimize the additional capacity required under the worst-case failure, i.e., the single-link failure that generates the highest congestion in the network.

    In the basic model of link weight optimization, a preventive start-time optimization (PSO) scheme was presented that determines a link weight set minimizing the worst congestion under any single-link failure. Unfortunately, when no link fails, that weight set may lead to a congestion higher than the manageable level. This penalty is carried permanently and becomes a burden, especially in networks with few failures. The first part of this thesis proposes a penalty-aware (PA) model that determines a link weight set reducing this penalty while also reducing the worst congestion under failure, by considering both failure and non-failure scenarios. The PA model comprises two simple and effective schemes: preventive start-time optimization without penalty (PSO-NP) and strengthened preventive start-time optimization (S-PSO). PSO-NP suppresses the penalty in the no-failure case while reducing the worst congestion under failure; S-PSO minimizes the worst congestion under failure and then tries to minimize the penalty relative to PSO in the no-failure case. Simulation results show that, in several networks, PSO-NP and S-PSO achieve substantial penalty reductions while yielding a worst-case congestion close to that of PSO. Even so, PSO-NP and S-PSO do not guarantee improving both the penalty and the worst congestion at the same time, because they rely on fixed optimization conditions that restrict the emergence of better solutions.
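    The quantities that PSO, PSO-NP, and S-PSO trade off can be made concrete with a short sketch. The Python code below (using networkx) is an illustrative reconstruction, not the thesis's implementation: the function names are ours, each edge is assumed to carry a "capacity" attribute, and every demand is routed on a single shortest path, ignoring OSPF's ECMP splitting. It returns both the no-failure congestion, whose increase over the optimum is the penalty, and the worst congestion over all single-link failures.

        import networkx as nx

        def congestion(g, traffic):
            """Highest link utilization when each demand follows a single
            shortest path under edge weight 'w' (ECMP splitting ignored).
            Each edge of g must carry a 'capacity' attribute."""
            load = dict.fromkeys(g.edges, 0.0)
            for (s, t), demand in traffic.items():
                path = nx.shortest_path(g, s, t, weight="w")
                for u, v in zip(path, path[1:]):
                    e = (u, v) if (u, v) in load else (v, u)
                    load[e] += demand
            return max(load[e] / g.edges[e]["capacity"] for e in load)

        def pso_objectives(g, weights, traffic):
            """Return (no-failure congestion, worst congestion over all
            single-link failures) for one candidate link weight set."""
            g = g.copy()
            nx.set_edge_attributes(g, weights, "w")
            no_failure = congestion(g, traffic)   # its excess over the optimum is the penalty
            worst = 0.0
            for e in list(g.edges):
                h = g.copy()
                h.remove_edge(*e)
                if nx.is_connected(h):            # skip failures that split the network
                    worst = max(worst, congestion(h, traffic))
            return no_failure, worst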
    Relaxing these fixed conditions may yield sub-optimal link weight sets that reduce the worst congestion under failure to nearly match that of PSO while keeping a controlled penalty in the no-failure case. To determine such sets, we extend the penalty-aware model of link weight optimization and design a scheme in which the network operator sets a manageable penalty and obtains the link weight set that most reduces the worst congestion while respecting that penalty. This enables network operators to choose link weight sets flexibly, according to their requirements under failure and non-failure scenarios. Since setting the penalty to zero yields the same results as PSO-NP, and setting no penalty condition yields S-PSO, the scheme subsumes both; we therefore call it general preventive start-time optimization (GPSO). Simulation results show that GPSO determines link weight sets whose worst-case congestion reduction is equivalent to that of PSO, with a reduced penalty in the no-failure case. GPSO is thus effective at finding a link weight set that reduces congestion under both failure and non-failure cases; however, because it tolerates a penalty, it does not guarantee the manageable congestion.

    In the second part of this thesis we propose a link-duplication (LD) model that aims to suppress link failures in the first place so that the manageable congestion is always met. For this purpose we consider the duplication, or reinforcement, of links, which is broadly used to make networks reliable. Link duplication provides fast recovery, since merely switching from the failed link to its backup hides the failure from the upper layers. Due to capital expenditure constraints, however, not every link can be duplicated, so it makes sense to give priority to selected links. As mentioned above, traffic routes are determined by link weights configured in advance, so choosing an appropriate link weight set may reduce the number of links that actually need to be duplicated to keep the congestion manageable under any single-link failure scenario. Moreover, PSO identifies the link whose failure creates the worst congestion; since duplicating that link lets us assume it no longer fails, PSO can be applied to find the smallest number of links to protect so as to guarantee the manageable congestion under any single-link failure. The LD model considers multiple protection scenarios before optimizing link weights to reduce the overall number of protected links, under the constraint of keeping the congestion below the manageable threshold. Simulation results show that the LD model delivers a link weight set that requires few link protections to keep the manageable congestion under any single-link failure scenario, at the cost of a computation time on the order of L times that of PSO, where L is the number of links in the network.

    Since the LD model consumes additional resources, a fair comparison with the PA model requires considering additional capacity in the PA model as well. In the third part of this thesis we therefore incorporate additional capacity into the PA model, introducing a mathematical formulation that determines the minimal additional capacity needed to maintain the manageable congestion under any single-link failure scenario. We then compare the LD model with this capacity-augmented PA model.
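    Before turning to the evaluation, the LD model's core selection step can be sketched as a greedy loop. This is an illustrative simplification of ours, not the thesis's procedure, which jointly optimizes link weights across protection scenarios rather than protecting links greedily for a fixed weight set; it reuses the congestion() helper from the previous sketch.

        import networkx as nx
        # assumes congestion() from the previous sketch is in scope

        def links_to_protect(g, weights, traffic, threshold):
            """Greedy LD-style sketch: duplicate (protect) the link whose
            failure causes the worst congestion until every unprotected
            single-link failure keeps congestion at or below the
            manageable threshold."""
            g = g.copy()
            nx.set_edge_attributes(g, weights, "w")
            protected = set()
            while True:
                worst_edge, worst = None, threshold
                for e in list(g.edges):
                    if e in protected:
                        continue              # a duplicated link is assumed never to fail
                    h = g.copy()
                    h.remove_edge(*e)
                    if nx.is_connected(h):
                        c = congestion(h, traffic)
                        if c > worst:
                            worst_edge, worst = e, c
                if worst_edge is None:        # every remaining failure is manageable
                    return protected
                protected.add(worst_edge)     # protect the worst offender and repeat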
    Evaluation results show that the performance difference between the LD model and the PA model, in terms of the required additional capacity, depends on the network characteristics. Latency and continuity requirements of the traffic, as well as geographical restrictions on services, should be taken into consideration when deciding which model to use.

    電気通信大学 (The University of Electro-Communications), 201

