
    Risk based resilient network design

    This paper presents a risk-based approach to resilient network design. The basic design problem is: given a working network and a fixed budget, how best to allocate that budget for deploying a survivability technique in different parts of the network so as to manage risk. Here, risk captures two related quantities: the likelihood of a failure or attack, and the amount of damage it causes. Designs with different risk-based objectives are considered, for example minimizing the expected damage, minimizing the maximum damage, and minimizing a measure of the variability of the damage that could occur in the network. A design methodology for the proposed risk-based survivable network design approach is presented within an optimization model framework. Numerical results and analysis illustrating the different risk-based designs and the tradeoffs among the schemes are presented. © 2011 Springer Science+Business Media, LLC
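    To make these objectives concrete, here is a minimal brute-force sketch in Python (all link names, probabilities, and damage values are invented for illustration; the paper itself works within an optimization model rather than enumeration):

```python
from itertools import combinations

# Hypothetical instance: per-link failure probability and damage if the
# link fails unprotected; protecting a link costs one budget unit and,
# in this toy model, eliminates its damage.
links = ["a", "b", "c", "d"]
p_fail = {"a": 0.10, "b": 0.05, "c": 0.20, "d": 0.02}
damage = {"a": 40.0, "b": 80.0, "c": 25.0, "d": 300.0}
budget = 2  # how many links we can afford to protect

def expected_damage(protected):
    return sum(p_fail[l] * damage[l] for l in links if l not in protected)

def worst_damage(protected):
    return max((damage[l] for l in links if l not in protected), default=0.0)

# Brute force is enough at toy sizes; the paper solves an optimization
# model instead of enumerating allocations.
choices = list(combinations(links, budget))
print("min expected damage protects:", min(choices, key=expected_damage))
print("min worst-case damage protects:", min(choices, key=worst_damage))
```

    On this toy data the two objectives protect different links: minimizing expected damage favors links with a high probability-damage product, while minimizing worst-case damage favors the largest damage values regardless of likelihood.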

    Joint dimensioning of server and network infrastructure for resilient optical grids/clouds

    We address the dimensioning of infrastructure, comprising both network and server resources, for large-scale decentralized distributed systems such as grids or clouds. We design the resulting grid/cloud to be resilient against network link or server failures. To this end, we exploit relocation: under failure conditions, a grid job or cloud virtual machine may be served at an alternate destination (i.e., different from the one used under failure-free conditions). We thus consider grid/cloud requests to have a known origin, but assume a degree of freedom as to where they end up being served, which is the case for grid applications of the bag-of-tasks (BoT) type or hosted virtual machines in the cloud case. We present a generic methodology based on integer linear programming (ILP) that: 1) chooses a given number of sites in a given network topology where server infrastructure is installed; and 2) determines the amount of both network and server capacity to cater for both the failure-free scenario and failures of links or nodes. For the latter, we consider either failure-independent (FID) or failure-dependent (FD) recovery. Case studies on European-scale networks show that relocation allows a considerable reduction of the total amount of network and server resources, especially in sparse topologies and for higher numbers of server sites. Adopting a failure-dependent backup routing strategy does lead to lower resource dimensions, but only when we adopt relocation (especially for a high number of server sites): without exploiting relocation, the potential savings of FD over FID are not meaningful.
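    As a toy illustration of why FD recovery can share spare server capacity better than FID (the instance below is invented; the paper solves this with an ILP over joint network and server resources):

```python
from itertools import product

# Hypothetical instance: three server sites and the primary load each
# one carries in the failure-free case.
sites = ["A", "B", "C"]
primary = {"A": 10.0, "B": 6.0, "C": 8.0}

def total_capacity(relocated):
    """relocated[f][s] = load moved to site s when site f fails.
    Spare at each site must cover the worst single-site failure."""
    spare = {s: max(relocated[f].get(s, 0.0) for f in sites if f != s)
             for s in sites}
    return sum(primary[s] + spare[s] for s in sites)

# Failure-independent (FID) recovery: each site has one fixed backup
# site, chosen in advance; enumerate all choices for the best one.
best_fid = float("inf")
for choice in product(*[[b for b in sites if b != f] for f in sites]):
    backup = dict(zip(sites, choice))
    relocated = {f: {backup[f]: primary[f]} for f in sites}
    best_fid = min(best_fid, total_capacity(relocated))

# Failure-dependent (FD) recovery: split the failed site's load evenly
# across the survivors (a stand-in heuristic for the paper's ILP).
relocated_fd = {f: {s: primary[f] / (len(sites) - 1)
                    for s in sites if s != f} for f in sites}

print("FID total capacity:", best_fid)                       # 40.0 here
print("FD  total capacity:", total_capacity(relocated_fd))   # 38.0 here
```

    Because FD recovery may split a failed site's load differently per scenario, the per-site worst case shrinks; FID must commit to one backup site per demand in advance.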

    Unidirectional Quorum-based Cycle Planning for Efficient Resource Utilization and Fault-Tolerance

    In this paper, we propose a greedy cycle-direction heuristic to improve the generalized R-redundancy quorum cycle technique. When applied using only single cycles rather than the standard paired cycles, the generalized R-redundancy technique has been shown to almost halve the light-trail resources needed in the network. Our greedy heuristic improves this cycle-based routing technique's fault tolerance and dependability. For efficiency and distributed control, it is common in distributed systems and algorithms to group nodes into intersecting sets referred to as quorum sets. Optimal communication quorum sets forming optical cycles based on light-trails have been shown to route both point-to-point and multipoint-to-multipoint traffic requests flexibly and efficiently. Cycle-routing techniques commonly use pairs of cycles to achieve both routing and fault tolerance, which consumes substantial resources and creates the potential for underutilization. Instead, we use a single cycle and intentionally exploit R redundancy within the quorum cycles such that every point-to-point communication pair occurs in at least R cycles. Without the paired cycles, the direction of the quorum cycles becomes critical to fault-tolerance performance. For this we developed a greedy cycle-direction heuristic, and our single-fault network simulations show a reduction of missing pairs by more than 30%, which translates to significant improvements in fault coverage.
    Comment: Computer Communication and Networks (ICCCN), 2016 25th International Conference on. arXiv admin note: substantial text overlap with arXiv:1608.05172, arXiv:1608.05168, arXiv:1608.0517
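    A minimal sketch of a greedy cycle-direction heuristic in the spirit described above (the cycles are invented and this is not the paper's exact algorithm): orient cycles one at a time, scoring how many (ordered pair, single-link fault) combinations remain served once a faulted directed cycle degrades to a one-way path.

```python
# Hypothetical quorum cycles over 6 nodes, each written as a node sequence
# (the link closing the cycle back to its first node is implicit).
cycles = [[0, 1, 2, 3], [0, 3, 4, 5], [1, 2, 4, 5], [0, 2, 3, 5]]

def edges(cycle):
    return [(cycle[i], cycle[(i + 1) % len(cycle)]) for i in range(len(cycle))]

def served_pairs(cycle, fault):
    """Ordered pairs (u, v) a directed cycle still serves when the
    undirected link `fault` fails: the cycle degrades to a one-way path."""
    for i, e in enumerate(edges(cycle)):
        if tuple(sorted(e)) == fault:
            path = cycle[i + 1:] + cycle[:i + 1]
            return {(path[a], path[b])
                    for a in range(len(path)) for b in range(a + 1, len(path))}
    return {(u, v) for u in cycle for v in cycle if u != v}  # fault off-cycle

def coverage(oriented):
    """(pair, fault) combinations served by at least one directed cycle."""
    faults = {tuple(sorted(e)) for c in oriented for e in edges(c)}
    return sum(len(set().union(*(served_pairs(c, f) for c in oriented)))
               for f in faults)

# Greedy direction heuristic: orient one cycle at a time, keeping whichever
# direction yields the better single-fault coverage so far.
oriented = []
for c in cycles:
    oriented.append(max((c, c[::-1]), key=lambda d: coverage(oriented + [d])))

print("greedy coverage:     ", coverage(oriented))
print("all-forward coverage:", coverage(cycles))
```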

    Spare capacity allocation using shared backup path protection for dual link failures

    This paper extends the spare capacity allocation (SCA) problem from single link failures [1] to dual link failures on mesh-like IP or WDM networks. The SCA problem pre-plans each traffic flow with one working path and two backup paths, all mutually disjoint, using the shared backup path protection (SBPP) scheme. An aggregated spare provision matrix (SPM) is used to capture spare capacity sharing for dual link failures. Compared to previous work by He and Somani [2], this method has better scalability and flexibility. The SCA problem is formulated as a non-linear integer programming model and partitioned into two sequential linear sub-models: the first finds all primary backup paths, and the second finds all secondary backup paths. Results on five networks show that network redundancy using dedicated 1+1+1 protection is in the range of 313-400%. It drops to 96-181% for 1:1:1 without loss of dual-link resiliency, at the cost of more complicated spare capacity sharing among backup paths. The hybrid 1+1:1 scheme provides an intermediate redundancy ratio of 187-310% with moderate complexity. We also compare passive and active approaches, which consider spare capacity sharing after or during the backup path routing process, respectively. The active sharing approaches always achieve lower redundancy than the passive ones: the reductions are about 12% for 1+1:1 and 25% for 1:1:1.
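    To illustrate the sharing that drives the redundancy drop, a minimal sketch (flows, link names, and routes below are invented, not the paper's SPM formulation): shared spare capacity on a link is the worst-case backup load over all dual-link failure scenarios, whereas dedicated protection reserves both backup paths permanently.

```python
from itertools import combinations

# Hypothetical flows: each has a bandwidth demand and precomputed
# working/backup routes given as sets of links, mutually disjoint per flow.
flows = [
    {"bw": 4, "work": {"e1"}, "backup1": {"e2", "e3"}, "backup2": {"e4"}},
    {"bw": 2, "work": {"e2"}, "backup1": {"e1", "e3"}, "backup2": {"e4", "e5"}},
    {"bw": 3, "work": {"e4"}, "backup1": {"e5"}, "backup2": {"e2", "e3"}},
]
links = {"e1", "e2", "e3", "e4", "e5"}

def backup_load(flow, failed):
    """Links carrying this flow after a dual-link failure: the first backup
    if only the working path is hit, the second if both are hit (per-flow
    disjointness keeps the second backup safe under two failed links)."""
    if not flow["work"] & failed:
        return set()
    if not flow["backup1"] & failed:
        return flow["backup1"]
    return flow["backup2"]

# Shared spare capacity (1:1:1 style): on each surviving link, reserve the
# worst-case backup load over all dual-link failure scenarios.
shared = {l: 0 for l in links}
for failed in map(set, combinations(links, 2)):
    for l in links - failed:
        load = sum(f["bw"] for f in flows if l in backup_load(f, failed))
        shared[l] = max(shared[l], load)

# Dedicated spare capacity (1+1+1 style): reserve both backups permanently.
dedicated = {l: sum(f["bw"] for f in flows
                    if l in f["backup1"] | f["backup2"]) for l in links}

print("shared spare total:   ", sum(shared.values()))
print("dedicated spare total:", sum(dedicated.values()))
```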

    Optimization in Telecommunication Networks

    Network design and network synthesis have long been the classical optimization problems in telecommunication. In the recent past, there have been many technological developments such as digitization of information, optical networks, the internet, and wireless networks. These developments have led to a series of new optimization problems. This manuscript gives an overview of the developments in solving both classical and modern telecom optimization problems.
    We start with a short historical overview of the technological developments. Then, the classical (still current) network design and synthesis problems are described, with an emphasis on the latest developments in modelling and solving them. Classical results such as Menger's disjoint paths theorem and Ford-Fulkerson's max-flow-min-cut theorem, but also Gomory-Hu trees and the Okamura-Seymour cut condition, are related to the models described. Finally, we describe recent optimization problems such as routing and wavelength assignment, and grooming in optical networks.
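    As a concrete instance of the max-flow-min-cut theorem cited above, a short Edmonds-Karp implementation (the BFS variant of Ford-Fulkerson) on a hypothetical four-node topology:

```python
from collections import deque

def max_flow(cap, s, t):
    """Edmonds-Karp max flow; `cap` maps node -> {neighbor: capacity}."""
    # Build residual capacities, adding zero-capacity reverse edges.
    res = {u: dict(vs) for u, vs in cap.items()}
    for u, vs in cap.items():
        for v in vs:
            res.setdefault(v, {}).setdefault(u, 0)
    flow = 0
    while True:
        # BFS for a shortest augmenting path in the residual graph.
        parent = {s: None}
        q = deque([s])
        while q and t not in parent:
            u = q.popleft()
            for v, c in res[u].items():
                if c > 0 and v not in parent:
                    parent[v] = u
                    q.append(v)
        if t not in parent:
            return flow  # no augmenting path left: flow equals the min cut
        # Recover the path, find its bottleneck, then augment along it.
        path, v = [], t
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        aug = min(res[u][v] for u, v in path)
        for u, v in path:
            res[u][v] -= aug
            res[v][u] += aug
        flow += aug

# Invented capacities: max flow is 5, matching the min cut {s->a, s->b}.
cap = {"s": {"a": 3, "b": 2}, "a": {"b": 1, "t": 2}, "b": {"t": 3}, "t": {}}
print(max_flow(cap, "s", "t"))
```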