6,396 research outputs found

    Spare capacity allocation using shared backup path protection for dual link failures

    This paper extends the spare capacity allocation (SCA) problem from single link failures [1] to dual link failures on mesh-like IP or WDM networks. The SCA problem pre-plans each traffic flow with one working path and two backup paths, all mutually disjoint, using the shared backup path protection (SBPP) scheme. The aggregated spare provision matrix (SPM) is used to capture spare capacity sharing under dual link failures. Compared with previous work by He and Somani [2], this method has better scalability and flexibility. The SCA problem is formulated as a non-linear integer programming model and partitioned into two sequential linear sub-models: the first finds all primary backup paths, and the second finds all secondary backup paths. Results on five networks show that the network redundancy of dedicated 1+1+1 protection is in the range of 313-400%. It drops to 96-181% for 1:1:1 without loss of dual-link resiliency, but at the cost of more complicated spare capacity sharing among backup paths. The hybrid 1+1:1 scheme provides an intermediate redundancy ratio of 187-310% with moderate complexity. We also compare passive and active approaches, which consider spare capacity sharing after or during the backup path routing process, respectively. The active sharing approaches always achieve lower redundancy than the passive ones, with reductions of about 12% for 1+1:1 and 25% for 1:1:1.
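    As a rough illustration of how shared spare capacity is sized for dual link failures (not the paper's ILP or SPM formulation; the flows, demands, and link names below are invented), the following sketch enumerates all dual-link failure scenarios and takes, per link, the maximum spare capacity needed across scenarios. Dedicated 1+1+1 protection would instead reserve each backup path's full demand unconditionally, which is what pushes redundancy above 300%.

```python
from itertools import combinations

# Toy flows, not from the paper: (demand, working path, primary backup, secondary backup),
# each path given as a list of undirected link identifiers.
flows = [
    (10, ["a-b", "b-c"], ["a-d", "d-c"], ["a-e", "e-c"]),
    (5,  ["b-c", "c-f"], ["b-d", "d-f"], ["b-e", "e-f"]),
]
links = sorted({l for _, w, b1, b2 in flows for l in w + b1 + b2})

def backup_links_used(flow, failed):
    """Links that carry this flow's traffic under the failed link set."""
    _, work, b1, b2 = flow
    if not failed & set(work):
        return set()        # working path survives: no spare capacity used
    if not failed & set(b1):
        return set(b1)      # reroute onto the primary backup path
    if not failed & set(b2):
        return set(b2)      # primary backup also hit: use the secondary backup
    return set()            # unreachable if the three paths are mutually disjoint

# Shared spare capacity per link = worst case over all dual-link failures
# (conceptually, the per-link maximum of an aggregated spare provision matrix).
spare = {l: 0 for l in links}
for f1, f2 in combinations(links, 2):
    need = {l: 0 for l in links}
    for flow in flows:
        for l in backup_links_used(flow, {f1, f2}):
            need[l] += flow[0]
    for l in links:
        spare[l] = max(spare[l], need[l])

print(spare)
```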

    Optimization of Free Space Optical Wireless Network for Cellular Backhauling

    With the densification of nodes in cellular networks, free space optical (FSO) connections are becoming an appealing low-cost and high-rate alternative to copper and fiber as the backhaul solution for wireless communication systems. To ensure a reliable cellular backhaul, provisions for redundant, disjoint paths between the nodes must be made in the design phase. This paper aims at finding a cost-effective solution to upgrade a cellular backhaul with pre-deployed optical fibers using FSO links and mirror components. Since the quality of FSO links depends on several factors, such as transmission distance, power, and weather conditions, we adopt an elaborate formulation to calculate link reliability. We present a novel integer linear programming model to approach optimal FSO backhaul design, guaranteeing K disjoint paths connecting each node pair. Next, we derive a column generation method for a path-oriented mathematical formulation. Applying the method in a sequential manner enables high computational scalability. We use realistic scenarios to demonstrate that our approaches efficiently provide optimal or near-optimal solutions, and thereby allow for accurately dealing with the trade-off between cost and reliability.
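    The sketch below only illustrates the design target named above, namely K edge-disjoint paths per node pair over a mix of fiber and reliability-weighted FSO links; it is not the paper's ILP or column generation method. The topology, coordinates, and the exponential attenuation constant are invented for the example.

```python
import math
import networkx as nx

K = 2
alpha = 0.05   # illustrative attenuation coefficient (per km), not from the paper

coords = {"A": (0, 0), "B": (3, 0), "C": (3, 4), "D": (0, 4)}
fiber = [("A", "B"), ("B", "C")]           # pre-deployed fiber links
fso = [("C", "D"), ("D", "A"), ("A", "C")]  # candidate FSO upgrades

G = nx.Graph()
G.add_edges_from(fiber, kind="fiber", reliability=1.0)
for u, v in fso:
    d = math.dist(coords[u], coords[v])
    # Toy distance-based reliability model for an FSO link.
    G.add_edge(u, v, kind="fso", reliability=math.exp(-alpha * d))

# Verify that the candidate design gives every node pair K edge-disjoint paths.
for u in G.nodes:
    for v in G.nodes:
        if u < v:
            paths = list(nx.edge_disjoint_paths(G, u, v))
            print(f"{u}-{v}: {len(paths)} disjoint paths, K={K} satisfied: {len(paths) >= K}")
```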

    Joint dimensioning of server and network infrastructure for resilient optical grids/clouds

    We address the dimensioning of infrastructure, comprising both network and server resources, for large-scale decentralized distributed systems such as grids or clouds. We design the resulting grid/cloud to be resilient against network link or server failures. To this end, we exploit relocation: under failure conditions, a grid job or cloud virtual machine may be served at an alternate destination (i.e., different from the one used under failure-free conditions). We thus consider grid/cloud requests to have a known origin, but assume a degree of freedom as to where they end up being served, which is the case for grid applications of the bag-of-tasks (BoT) type or hosted virtual machines in the cloud case. We present a generic methodology based on integer linear programming (ILP) that: 1) chooses a given number of sites in a given network topology where server infrastructure is installed; and 2) determines the amount of both network and server capacity to cater for the failure-free scenario as well as failures of links or nodes. For the latter, we consider either failure-independent (FID) or failure-dependent (FD) recovery. Case studies on European-scale networks show that relocation allows a considerable reduction of the total amount of network and server resources, especially in sparse topologies and for higher numbers of server sites. Adopting a failure-dependent backup routing strategy does lead to lower resource dimensions, but only when we also adopt relocation (especially for a high number of server sites): without exploiting relocation, the potential savings of FD over FID are not meaningful.
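    The toy computation below is not the paper's ILP; the job counts, site count, and the deliberately naive round-robin backup assignment are invented. It isolates one effect of relocation: if jobs can be redistributed freely over the surviving sites after a failure, each site can be dimensioned smaller than when every job is pinned in advance to a single backup site.

```python
import math

# Toy instance: N unit-size jobs must survive any single server-site failure.
n_sites = 4
jobs = 12

# With relocation: after any single failure, jobs may be spread freely over
# the surviving sites, so each site needs only ceil(jobs / (n_sites - 1)).
cap_reloc = math.ceil(jobs / (n_sites - 1))

# Without relocation: each job is pinned to one primary and one pre-chosen
# backup site (here a naive round-robin assignment).
primary = [j % n_sites for j in range(jobs)]
backup = [(p + 1) % n_sites for p in primary]

cap_fixed = 0
for failed in range(n_sites):
    load = [0] * n_sites
    for j in range(jobs):
        site = backup[j] if primary[j] == failed else primary[j]
        load[site] += 1
    cap_fixed = max(cap_fixed, max(load[s] for s in range(n_sites) if s != failed))

print("per-site capacity with relocation:", cap_reloc)    # 4
print("per-site capacity with fixed backup:", cap_fixed)  # 6
```

    A smarter fixed-backup assignment can narrow this gap, which is precisely what the FID/FD variants of the paper's ILP optimize; the snippet only shows the extra degree of freedom that relocation provides.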

    Resilience options for provisioning anycast cloud services with virtual optical networks

    Optical networks are crucial to support increasingly demanding cloud services. Delivering the requested quality of service (in particular latency) is key to successfully provisioning end-to-end services in clouds. Therefore, as for traditional optical network services, it is of utmost importance to guarantee that clouds are resilient to any failure of either the network infrastructure (links and/or nodes) or the data centers. A crucial concept in establishing cloud services is network virtualization: the physical infrastructure is logically partitioned into separate virtual networks. To guarantee end-to-end resilience for cloud services in such a set-up, we need to simultaneously route the services and map the virtual network, in such a way that an alternate route is always available in case of physical resource failures. Note that combined control of the network and data center resources is exploited, and the anycast routing concept applies: we can choose which data center provides the server resources requested by the customer, so as to optimize resource usage and/or resiliency. This paper investigates the design of scalable optimization models to perform the virtual network mapping resiliently. We compare various resilience options and analyze the trade-off between their bandwidth requirements and resiliency quality.
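    As a minimal illustration of the anycast idea mentioned above (not the paper's virtual network mapping model; the topology, link weights, and data center locations are invented), the sketch below routes a request to its cheapest data center and then computes a link-disjoint backup route that is free to land on a different data center.

```python
import networkx as nx

G = nx.Graph()
G.add_weighted_edges_from([
    ("u", "a", 1), ("a", "dc1", 1), ("u", "b", 1),
    ("b", "dc2", 2), ("a", "b", 1), ("b", "dc1", 3),
])
data_centers = ["dc1", "dc2"]
source = "u"

# Working path: shortest path to the closest data center (anycast choice).
working = min(
    (nx.shortest_path(G, source, dc, weight="weight") for dc in data_centers),
    key=lambda p: nx.path_weight(G, p, weight="weight"),
)

# Backup path: remove the working path's links, then route to any DC again;
# the backup is allowed to end at a different data center.
H = G.copy()
H.remove_edges_from(zip(working, working[1:]))
candidates = []
for dc in data_centers:
    try:
        candidates.append(nx.shortest_path(H, source, dc, weight="weight"))
    except nx.NetworkXNoPath:
        pass
backup = min(candidates, key=lambda p: nx.path_weight(H, p, weight="weight"))

print("working:", working)   # u -> a -> dc1
print("backup: ", backup)    # u -> b -> dc2 (a different DC: anycast)
```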

    Network Design with Coverage Costs

    We study network design with a cost structure motivated by redundancy in data traffic. We are given a graph, g groups of terminals, and a universe of data packets. Each group of terminals desires a subset of the packets from its respective source. The cost of routing traffic on any edge in the network is proportional to the total size of the distinct packets that the edge carries. Our goal is to find a minimum-cost routing. We focus on two settings. In the first, the collection of packet sets desired by source-sink pairs is laminar. For this setting, we present a primal-dual based 2-approximation, improving upon a logarithmic approximation due to Barman and Chawla (2012). In the second setting, packet sets can have non-trivial intersection. We focus on the case where each packet is desired either by a single terminal group or by all of the groups, and the graph is unweighted. For this setting we present an O(log g)-approximation. Our approximation for the second setting is based on a novel spanner-type construction in unweighted graphs that, given a collection of g vertex subsets, finds a subgraph whose cost is only a constant factor more than that of the minimum spanning tree of the graph, such that every subset in the collection has a Steiner tree in the subgraph of cost at most O(log g) times that of its minimum Steiner tree in the original graph. We call such a subgraph a group spanner.
    Comment: Updated version with additional result
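    The snippet below only illustrates the coverage cost structure defined above, not the primal-dual algorithm or the group spanner construction; the routes, packet names, and sizes are invented. Each edge pays once for every distinct packet it carries, so routes that overlap on edges and on packet sets share cost.

```python
from collections import defaultdict

packet_size = {"p1": 4, "p2": 2, "p3": 1}

# Each terminal group: (route as a list of edges, set of desired packets).
groups = [
    ([("s", "v"), ("v", "t1")], {"p1", "p2"}),
    ([("s", "v"), ("v", "t2")], {"p1", "p3"}),   # shares edge (s, v) and packet p1
]

# Collect the distinct packets carried on each edge across all routes.
carried = defaultdict(set)
for route, packets in groups:
    for edge in route:
        carried[edge] |= packets

# Coverage cost: each edge pays the total size of its distinct packets once.
cost = sum(sum(packet_size[p] for p in packets) for packets in carried.values())
print(dict(carried))
print("coverage cost:", cost)   # 7 (s-v) + 6 (v-t1) + 5 (v-t2) = 18
```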

    Overlay Protection Against Link Failures Using Network Coding

    This paper introduces a network coding-based protection scheme against single and multiple link failures. The proposed strategy ensures that, in each connection, the receiver obtains two copies of the same data unit: one copy on the working circuit, and a second copy that can be extracted from linear combinations of data units transmitted on a shared protection path. This guarantees instantaneous recovery of data units upon the failure of a working circuit. The strategy can be implemented at an overlay layer, which makes its deployment simple and scalable. While the proposed strategy is similar in spirit to the work of Kamal '07 & '10, there are significant differences. In particular, it provides protection against multiple link failures. The new scheme is simpler, less expensive, and does not require the synchronization needed by the original scheme. The sharing of the protection circuit by a number of connections is the key to reducing the cost of protection. The paper also compares the cost of the proposed scheme to that of the 1+1 and shared backup path protection (SBPP) strategies, and establishes the benefits of our strategy.
    Comment: 14 pages, 10 figures, accepted by IEEE/ACM Transactions on Networking
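    The following sketch is a heavily simplified stand-in for the scheme described above: it shows only the core idea of recovering a lost data unit from a linear combination (here a plain XOR over bytes) carried on a shared protection path. The circuits, topology, and actual coding coefficients of the paper's scheme are not modeled, and the data values are invented.

```python
def xor(a: bytes, b: bytes) -> bytes:
    """Bytewise XOR of two equal-length byte strings (linear combination over GF(2))."""
    return bytes(x ^ y for x, y in zip(a, b))

d1, d2, d3 = b"\x11", b"\x27", b"\x5c"      # data units of three connections

# Combination transmitted on the shared protection circuit.
protection = xor(xor(d1, d2), d3)

# Suppose connection 2's working circuit fails: its receiver can still obtain
# d1 and d3 and extract d2 from the protection signal.
recovered_d2 = xor(xor(protection, d1), d3)
assert recovered_d2 == d2
print("recovered:", recovered_d2.hex())
```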

    Exploring the benefit of rerouting multi-period traffic to multi-site data centers

    In cloud-like scenarios, demand is served at one of multiple possible data center (DC) destinations. Usually, the exact DC that is used can be freely chosen, which leads to an anycast routing problem. Furthermore, the demand volume is expected to change over time, e.g., following a diurnal pattern. Given that virtually all application domains today rely heavily on cloud-like services, it is important that the backbone networks connecting users to the DCs are resilient against failures. In this paper, we consider the problem of resiliently routing multi-period traffic: we need to find routes to both a primary DC and a backup DC (to be used in case of failure of the primary DC, or of the network connection to it), and also account for synchronization traffic between the primary and backup DCs. We formulate this as an optimization problem and adopt column generation, using a path formulation split into two sub-problems: the (restricted) master problem selects "configurations" to use for each demand in each of the time epochs it lasts, while the pricing problem (PP) constructs a new "configuration" that can lead to lower overall cost (which we express as the amount of network resources, i.e., bandwidth, required to serve the demand). Here, a "configuration" is defined by the network paths followed from the demand source to each of the two selected DCs, as well as the path of the synchronization traffic between the DCs. Our decomposition allows the PPs to be solved in parallel, and we quantitatively explore the resulting reduction in the time required to solve the overall routing problem. The key question that we address with our model is an exploration of the potential benefit of rerouting traffic from one time epoch to the next: we compare several (re)routing strategies, allowing traffic that spans multiple time periods to i) not be rerouted in different periods, ii) change only the backup DC and routes, or iii) freely change both the primary and backup DC choices and the routes toward them.
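    The skeleton below mirrors the column generation loop described above in structure only: a "configuration" is reduced to a (name, cost) pair, the restricted master problem is collapsed to picking the cheapest known configuration, and the value handed to the pricing step stands in for the LP dual values a real implementation would use. All names and numbers are invented.

```python
CANDIDATE_POOL = [("cfg-B", 7.0), ("cfg-C", 5.5), ("cfg-D", 5.0)]  # toy columns

def solve_restricted_master(configurations):
    """Toy stand-in for the RMP: best cost over the known configurations.
    A real RMP solves an LP per demand/epoch and returns its dual values."""
    objective = min(cost for _, cost in configurations)
    return objective, objective          # (objective, stand-in "dual")

def solve_pricing(dual):
    """Toy stand-in for the PP: return the candidate with the most negative
    reduced cost (cost minus dual), or (None, 0.0) if none improves."""
    best = min(CANDIDATE_POOL, key=lambda c: c[1] - dual)
    reduced_cost = best[1] - dual
    return (best, reduced_cost) if reduced_cost < 0 else (None, 0.0)

def column_generation(initial, tol=1e-6):
    configurations = list(initial)
    while True:
        objective, dual = solve_restricted_master(configurations)
        new_config, reduced_cost = solve_pricing(dual)  # PPs are parallelizable per demand
        if new_config is None or reduced_cost >= -tol:
            return objective, configurations            # no improving column remains
        configurations.append(new_config)

print(column_generation([("cfg-A", 9.0)]))   # converges to cost 5.0
```

    In the paper's setting, the pricing step would instead construct a full configuration (primary path, backup path, and DC synchronization path) for a given demand and epoch, and the independent pricing problems can be solved in parallel.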