
    Disaster-Resilient Control Plane Design and Mapping in Software-Defined Networks

    Communication networks, such as core optical networks, depend heavily on their physical infrastructure, and hence are vulnerable to man-made disasters, such as Electromagnetic Pulse (EMP) or Weapons of Mass Destruction (WMD) attacks, as well as to natural disasters. Large-scale disasters may cause huge data loss and connectivity disruption in these networks. As our dependence on network services increases, the need for novel survivability methods to mitigate the effects of disasters on communication networks becomes a major concern. Software-Defined Networking (SDN), by centralizing control logic and separating it from physical equipment, facilitates network programmability and opens up new ways to design disaster-resilient networks. On the other hand, to fully exploit the potential of SDN, along with data-plane survivability, the control plane must also be designed to be resilient enough to survive network failures caused by disasters. Several distributed SDN controller architectures have been proposed to mitigate the risks of overload and failure, but they are optimized for limited faults and do not address large-scale disaster failures. For disaster resiliency of the control plane, we propose to design it as a virtual network, which can be mapped using virtual network mapping techniques. We select an appropriate mapping of the controllers over the physical network such that connectivity among the controllers (controller-to-controller) and between switches and controllers (switch-to-controller) is not compromised by physical infrastructure failures caused by disasters. We formally model this disaster-aware control-plane design and mapping problem, and demonstrate a significant reduction in the disruption of controller-to-controller and switch-to-controller communication channels using our approach.
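
    The mapping criterion described here (controller-to-controller and switch-to-controller connectivity that survives disaster failures) can be illustrated with a small brute-force sketch. This is not the paper's formal model or algorithm; the graph, disaster zones, and controller count below are invented inputs, and real instances would use an optimization formulation rather than enumeration.

```python
# Hypothetical illustration: keep only those controller placements whose
# control-plane connectivity survives every single disaster zone.
import itertools
import networkx as nx

def survives(G, controllers, zone):
    """After removing the nodes in `zone`, do the surviving controllers
    stay mutually connected, and can every surviving switch still reach
    at least one controller?"""
    H = G.copy()
    H.remove_nodes_from(zone)
    alive = [c for c in controllers if c in H]
    if not alive:
        return False
    comp = nx.node_connected_component(H, alive[0])
    if any(c not in comp for c in alive[1:]):
        return False  # controller-to-controller connectivity broken
    reachable = set().union(
        *(nx.node_connected_component(H, c) for c in alive))
    return all(v in reachable for v in H.nodes)  # switch-to-controller

def disaster_aware_placements(G, zones, k):
    return [set(cs) for cs in itertools.combinations(G.nodes, k)
            if all(survives(G, cs, z) for z in zones)]

# toy physical topology: a 6-node ring and two made-up disaster zones
G = nx.cycle_graph(6)
print(disaster_aware_placements(G, zones=[{0, 1}, {3}], k=2))
```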

    Resilience options for provisioning anycast cloud services with virtual optical networks

    Optical networks are crucial to support increasingly demanding cloud services. Delivering the requested quality of service (in particular latency) is key to successfully provisioning end-to-end services in clouds. Therefore, as for traditional optical network services, it is of utmost importance to guarantee that clouds are resilient to any failure of either network infrastructure (links and/or nodes) or data centers. A crucial concept in establishing cloud services is network virtualization: the physical infrastructure is logically partitioned into separate virtual networks. To guarantee end-to-end resilience for cloud services in such a set-up, we need to simultaneously route the services and map the virtual network, in such a way that an alternate routing is always available in case of physical resource failures. Note that combined control of the network and data center resources is exploited, and the anycast routing concept applies: we can choose which data center provides the server resources requested by the customer, so as to optimize resource usage and/or resiliency. This paper investigates the design of scalable optimization models to perform the virtual network mapping resiliently. We compare various resilience options and analyze the trade-off they strike between bandwidth requirements and resilience quality.
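
    The anycast idea, choosing which data center serves a request and keeping a failure-disjoint alternative, can be sketched in a few lines. This is a simplification of the paper's joint routing-and-mapping models, not their actual formulation; the topology and data center locations are placeholders.

```python
# Hedged sketch: primary route to the nearest data center, plus a backup
# route, possibly to a *different* data center, avoiding primary links.
import networkx as nx

def anycast_with_backup(G, source, data_centers):
    # anycast choice: hop-count shortest path to the closest DC
    primary = min((nx.shortest_path(G, source, dc) for dc in data_centers),
                  key=len)
    # backup: remove the primary's links, then route to any reachable DC
    H = G.copy()
    H.remove_edges_from(zip(primary, primary[1:]))
    candidates = [nx.shortest_path(H, source, dc) for dc in data_centers
                  if nx.has_path(H, source, dc)]
    return primary, (min(candidates, key=len) if candidates else None)

# toy grid with two data centers; relocation means the backup may land
# on a different DC than the primary
G = nx.grid_2d_graph(3, 3)
print(anycast_with_backup(G, (0, 0), data_centers=[(2, 2), (0, 2)]))
```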

    Joint dimensioning of server and network infrastructure for resilient optical grids/clouds

    We address the dimensioning of infrastructure, comprising both network and server resources, for large-scale decentralized distributed systems such as grids or clouds. We design the resulting grid/cloud to be resilient against network link or server failures. To this end, we exploit relocation: under failure conditions, a grid job or cloud virtual machine may be served at an alternate destination (i.e., different from the one used under failure-free conditions). We thus consider grid/cloud requests to have a known origin, but assume a degree of freedom as to where they end up being served, which is the case for grid applications of the bag-of-tasks (BoT) type or hosted virtual machines in the cloud case. We present a generic methodology based on integer linear programming (ILP) that: 1) chooses a given number of sites in a given network topology where server infrastructure is installed; and 2) determines the amount of both network and server capacity needed to cater for both the failure-free scenario and failures of links or nodes. For the latter, we consider either failure-independent (FID) or failure-dependent (FD) recovery. Case studies on European-scale networks show that relocation allows a considerable reduction of the total amount of network and server resources, especially in sparse topologies and for higher numbers of server sites. Adopting a failure-dependent backup routing strategy does lead to lower resource dimensions, but only when we also adopt relocation (especially for a high number of server sites): without exploiting relocation, the potential savings of FD over FID are not meaningful.
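
    A stripped-down version of the relocation benefit in the ILP can be written compactly. The paper's model also dimensions link capacities and distinguishes FID from FD recovery; the sketch below only sizes server capacity against single-site failures, with invented sites, costs, and demand (and it assumes the PuLP solver package).

```python
# Deliberately tiny PuLP sketch of joint dimensioning with relocation:
# under any single-site failure, the surviving sites must absorb the
# full (relocatable) workload.
import pulp

sites = ["A", "B", "C"]
cost = {"A": 3, "B": 2, "C": 2}    # cost per unit of server capacity
demand = 10                        # total workload (relocatable jobs)

prob = pulp.LpProblem("dimensioning", pulp.LpMinimize)
cap = {s: pulp.LpVariable(f"cap_{s}", lowBound=0) for s in sites}
prob += pulp.lpSum(cost[s] * cap[s] for s in sites)

# failure-free: all demand must fit
prob += pulp.lpSum(cap.values()) >= demand
# single-site failures: thanks to relocation, the surviving sites
# together must still cover the full demand
for failed in sites:
    prob += pulp.lpSum(cap[s] for s in sites if s != failed) >= demand

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print({s: cap[s].value() for s in sites})
```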

    Energy efficiency considerations in integrated IT and optical network resilient infrastructures

    The European Integrated Project GEYSERS (Generalised Architecture for Dynamic Infrastructure Services) concentrates on infrastructures incorporating integrated optical network and IT resources in support of the Future Internet, with special emphasis on cloud computing. More specifically, GEYSERS proposes the concept of Virtual Infrastructures over one or more interconnected Physical Infrastructures comprising both network and IT resources. Considering the energy consumption levels associated with ICT today, and the expansion of the Internet in size and complexity, which incurs increased energy consumption of both IT and network resources, energy-efficient infrastructure design becomes critical. To address this need, in the framework of GEYSERS, we propose an energy-efficient design of infrastructures incorporating integrated optical network and IT resources, supporting resilient end-to-end services. Our modeling results quantify significant energy savings for the proposed solution, obtained by jointly optimizing the allocation of both network and IT resources.
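
    The joint network-plus-IT energy objective can be caricatured as an assignment problem: place each job so that server energy plus per-hop transport energy is minimized. GEYSERS' actual models are considerably richer; every node, demand, and energy figure below is a placeholder, and PuLP/NetworkX are assumed dependencies.

```python
# Hedged sketch of a joint energy objective: serve each job at the data
# center minimizing server energy plus per-hop network energy.
import networkx as nx
import pulp

G = nx.cycle_graph(5)
dcs = [0, 3]
server_energy = {0: 5.0, 3: 3.0}   # energy per job at each DC
hop_energy = 1.0                   # energy per traversed link
demands = [1, 2, 4]                # source node of each job

prob = pulp.LpProblem("energy", pulp.LpMinimize)
x = {(j, d): pulp.LpVariable(f"x_{j}_{d}", cat="Binary")
     for j in demands for d in dcs}
prob += pulp.lpSum(
    x[j, d] * (server_energy[d]
               + hop_energy * nx.shortest_path_length(G, j, d))
    for j in demands for d in dcs)
for j in demands:
    prob += pulp.lpSum(x[j, d] for d in dcs) == 1  # each job served once

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print({j: next(d for d in dcs if x[j, d].value() == 1) for j in demands})
```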

    Scalable algorithms for QoS-aware virtual network mapping for cloud services

    Both business and consumer applications increasingly depend on cloud solutions. Yet many users are still reluctant to move to cloud-based solutions, mainly due to concerns about service quality and reliability. Since cloud platforms depend both on IT resources (located in data centers, DCs) and on the network infrastructure connecting to them, both QoS and resilience should be offered with end-to-end guarantees, up to and including the server resources. The latter is currently largely impeded by the fact that the network and cloud DC domains are typically operated by disjoint entities. Network virtualization, together with combined control of network and IT resources, can solve that problem. Here, we formally state the combined network and IT provisioning problem for a set of virtual networks, incorporating resilience as well as QoS in the physical and virtual layers. We provide a scalable column generation model to address real-world network sizes. We analyze it in extensive case studies, to answer the question of at which layer to provision QoS and resilience in virtual networks for cloud services.
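
    Column generation itself follows a master/pricing loop that is easy to show on a toy routing instance. This is not the paper's model (which prices out whole virtual-network mappings with QoS and resilience constraints); below, the columns are simply paths for one demand, the pricing step is a shortest-path computation under dual link prices, and all inputs plus the PuLP/NetworkX dependencies are assumptions.

```python
# Hypothetical skeleton of column generation: restricted master LP over
# known paths, pricing via shortest path on reduced link weights.
import networkx as nx
import pulp

G = nx.cycle_graph(4)                             # 4-node ring
cap = {tuple(sorted(e)): 2.0 for e in G.edges}    # each link carries 2
src, dst, volume = 0, 2, 3.0                      # one demand of volume 3

def edges_of(path):
    return [tuple(sorted(e)) for e in zip(path, path[1:])]

BIG = 100  # artificial-variable penalty keeps the first master feasible
paths = [nx.shortest_path(G, src, dst)]           # initial column
while True:
    master = pulp.LpProblem("master", pulp.LpMinimize)
    f = [pulp.LpVariable(f"f_{i}", lowBound=0) for i in range(len(paths))]
    art = pulp.LpVariable("art", lowBound=0)
    master += pulp.lpSum((len(p) - 1) * v
                         for p, v in zip(paths, f)) + BIG * art
    master += pulp.lpSum(f) + art == volume, "demand"
    for (a, b), c in cap.items():
        master += (pulp.lpSum(v for p, v in zip(paths, f)
                              if (a, b) in edges_of(p)) <= c, f"cap_{a}_{b}")
    master.solve(pulp.PULP_CBC_CMD(msg=False))
    # duals: sigma for the demand row, pi (<= 0) for the capacity rows
    sigma = master.constraints["demand"].pi
    for u, v in G.edges:
        a, b = sorted((u, v))
        G[u][v]["w"] = 1 - master.constraints[f"cap_{a}_{b}"].pi
    new = nx.shortest_path(G, src, dst, weight="w")
    reduced_cost = sum(G[u][v]["w"] for u, v in zip(new, new[1:])) - sigma
    if new in paths or reduced_cost >= -1e-6:
        break                                     # no improving column
    paths.append(new)

print([(p, v.value()) for p, v in zip(paths, f)])
```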

    Server Placement with Shared Backups for Disaster-Resilient Clouds

    A key strategy to build disaster-resilient clouds is to employ backups of virtual machines in a geo-distributed infrastructure. Today, the continuous and acknowledged replication of virtual machines across different servers is a service provided by several hypervisors. This strategy guarantees that the virtual machines lose no disk or memory content if a disaster occurs, at the cost of strict bandwidth and latency requirements. Considering this kind of service, in this work we propose an optimization problem to place servers in a wide-area network. The goal is to guarantee that backup machines do not fail at the same time as their primary counterparts. In addition, by using virtualization, we also aim to reduce the number of backup servers required. The optimal results, achieved on real topologies, reduce the number of backup servers by at least 40%. Moreover, this work highlights several characteristics of the backup service that depend on the employed network, such as the fulfillment of latency requirements.
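
    The sharing effect, where one backup pool can cover primaries from different disaster zones because only one zone fails at a time, can be captured in a compact ILP sketch. This is not the paper's formulation (which also models bandwidth and latency constraints); zones, loads, and candidate sites are invented, and PuLP is an assumed dependency.

```python
# Rough sketch of shared-backup placement: each primary gets a backup
# site in a different disaster zone, and a site's capacity need only
# cover the worst single zone mapped to it.
import pulp

zone = {"p1": "Z1", "p2": "Z1", "p3": "Z2"}   # primaries -> disaster zone
load = {"p1": 2, "p2": 1, "p3": 2}            # servers to protect
sites = {"bA": "Z2", "bB": "Z3"}              # backup sites -> zone

prob = pulp.LpProblem("shared_backup", pulp.LpMinimize)
x = {(p, b): pulp.LpVariable(f"x_{p}_{b}", cat="Binary")
     for p in zone for b in sites if sites[b] != zone[p]}
cap = {b: pulp.LpVariable(f"cap_{b}", lowBound=0) for b in sites}
prob += pulp.lpSum(cap.values())              # minimize backup servers

for p in zone:                                # every primary protected
    prob += pulp.lpSum(x[p, b] for b in sites if (p, b) in x) == 1
for b in sites:                               # sharing across zones:
    for z in set(zone.values()):              # cover the worst zone only
        prob += cap[b] >= pulp.lpSum(load[p] * x[p, b] for p in zone
                                     if zone[p] == z and (p, b) in x)

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print({b: cap[b].value() for b in sites})
```

    In this toy instance, pooling all primaries on site bB needs only 3 backup servers (the worst zone's load) instead of the 5 a dedicated scheme would require, which is the kind of reduction the abstract reports.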

    Risk based resilient network design

    This paper presents a risk-based approach to resilient network design. The basic design problem considered is: given a working network and a fixed budget, how best to allocate that budget for deploying a survivability technique in different parts of the network so as to manage risk. The term risk here captures two related quantities: the likelihood of a failure or attack, and the amount of damage it would cause. Various designs with different risk-based objectives are considered, for example, minimizing the expected damage, minimizing the maximum damage, and minimizing a measure of the variability of the damage that could occur in the network. A design methodology for the proposed risk-based survivable network design approach is presented within an optimization model framework. Numerical results and analysis illustrating the different risk-based designs and the trade-offs among the schemes are presented.
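
    The expected-damage variant of this budget allocation reduces to a small knapsack-style ILP, sketched below with made-up probabilities, damages, and costs (PuLP assumed). The min-max and variability objectives mentioned in the abstract would swap in a different objective over the same variables.

```python
# Minimal sketch of risk-based budget allocation: protect the subset of
# parts that minimizes expected damage within the budget.
import pulp

parts = ["link1", "link2", "link3"]
p = {"link1": 0.10, "link2": 0.02, "link3": 0.05}  # failure likelihood
d = {"link1": 100, "link2": 400, "link3": 200}     # damage if it fails
c = {"link1": 3, "link2": 5, "link3": 4}           # cost to protect
budget = 7

prob = pulp.LpProblem("risk", pulp.LpMinimize)
y = {i: pulp.LpVariable(f"y_{i}", cat="Binary") for i in parts}
# expected damage over unprotected parts; a min-max variant would instead
# minimize t subject to t >= p[i] * d[i] * (1 - y[i]) for every part
prob += pulp.lpSum(p[i] * d[i] * (1 - y[i]) for i in parts)
prob += pulp.lpSum(c[i] * y[i] for i in parts) <= budget

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print({i: int(y[i].value()) for i in parts})
```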

    Resilient Backhaul Network Design Using Hybrid Radio/Free-Space Optical Technology

    Radio-frequency (RF) technology is a scalable solution for backhaul planning, but its performance is limited in terms of data rate and latency. Free-space optical (FSO) backhaul, on the other hand, offers a higher data rate but is sensitive to weather conditions. To combine the advantages of RF and FSO backhauls, this paper proposes a cost-efficient backhaul network using hybrid RF/FSO technology. To ensure a resilient backhaul, the paper imposes a given degree of redundancy by connecting each node through K link-disjoint paths, so as to cope with potential link failures. Hence, the network planning problem considered in this paper is that of minimizing the total deployment cost by choosing the appropriate link type, i.e., either hybrid RF/FSO or optical fiber (OF), between each pair of base stations, while guaranteeing K link-disjoint connections, a data rate target, and a reliability threshold. The paper solves the problem using graph theory techniques: it reformulates the problem as a maximum weight clique problem in the planning graph, under a realistic assumption about the cost of OF and hybrid RF/FSO links. Simulation results show the cost of the different planning options and suggest that the proposed heuristic solution has close-to-optimal performance at a significant gain in computational complexity.
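
    The clique reformulation depends on the paper's specific planning-graph construction, so the sketch below only checks the resilience requirement it encodes: that a candidate backhaul topology connects every pair of base stations through at least K link-disjoint paths (equivalently, that it survives any K-1 link failures). The topology is a toy; NetworkX also ships a max_weight_clique routine that could in principle serve the clique step.

```python
# Illustrative feasibility check for the K link-disjoint requirement.
import itertools
import networkx as nx

def is_k_link_resilient(G, k):
    """True if every node pair has at least k link-disjoint paths,
    i.e. the topology stays connected under any k-1 link failures."""
    return all(len(list(nx.edge_disjoint_paths(G, u, v))) >= k
               for u, v in itertools.combinations(G.nodes, 2))

G = nx.cycle_graph(5)               # a ring: exactly 2 disjoint paths
print(is_k_link_resilient(G, 2))    # True
print(is_k_link_resilient(G, 3))    # False
```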