
    Robust Energy Management for Green and Survivable IP Networks

    Despite the growing need to make the Internet greener, energy-aware strategies that minimize network energy consumption must not undermine normal network operation. In particular, two important issues may limit the application of green networking techniques: network survivability, i.e. the network's capability to react to device failures, and robustness to traffic variations. We propose novel modelling techniques to minimize the daily energy consumption of IP networks while explicitly guaranteeing, in addition to typical QoS requirements, both network survivability and robustness to traffic variations. The impact of these requirements on the final network consumption is thoroughly investigated. Daily traffic variations are modelled by dividing a single day into multiple time intervals (multi-period problem), and network consumption is reduced by putting idle line cards and chassis to sleep. To preserve network resiliency we consider two protection schemes, dedicated and shared protection, according to which a backup path is assigned to each demand and a certain amount of spare capacity must be available on each link. Robustness to traffic variations is provided by a modelling framework that allows the conservatism degree of the solutions to be tuned and accounts for load variations of different magnitudes. Furthermore, we impose inter-period constraints to guarantee network stability and preserve device lifetime. Both exact and heuristic methods are proposed. Experiments carried out on realistic networks operated with flow-based routing protocols (i.e. MPLS) show that significant savings, up to 30%, can be achieved even when both survivability and robustness are fully guaranteed.
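
    As a rough illustration of the multi-period structure just described (the symbols below are invented for exposition, not taken from the paper), the daily objective charges each time interval for the chassis and line cards left active, with the sleeping decisions coupled to a capacity constraint:

        \min \sum_{t \in T} \Big( \sum_{n \in N} P^{\mathrm{ch}}_{n}\, y_{n,t} + \sum_{l \in L} P^{\mathrm{lc}}_{l}\, x_{l,t} \Big)
        \quad \text{s.t.} \quad x_{l,t} \le y_{n(l),t}, \qquad \sum_{d \in D} f^{d}_{l,t} \le C_{l}\, x_{l,t} \quad \forall\, l \in L,\ t \in T,

    where y_{n,t} and x_{l,t} are binary variables indicating whether chassis n and line card l are active in interval t, and f^{d}_{l,t} is the traffic of demand d routed over card l. In the setting of the abstract, the capacity constraint would further be tightened to reserve spare (protection) capacity and to absorb the tunable level of traffic deviation, and inter-period constraints would limit how often devices change power state.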

    Energy management in communication networks: a journey through modelling and optimization glasses

    The widespread proliferation of Internet and wireless applications has produced a significant increase in the ICT energy footprint. In response, over the last five years significant efforts have been undertaken to include energy awareness in network management. Several green networking frameworks have been proposed that carefully manage network routing and the power state of network devices. Even though the proposed approaches differ in the network technologies and the sleep modes of nodes and interfaces they consider, they all aim at tailoring the active network resources to the varying traffic needs in order to minimize energy consumption. From a modelling point of view, this has several commonalities with classical network design and routing problems, albeit with different objectives and in a dynamic context. With most researchers focused on the complex and crucial technological aspects of green networking schemes, little attention has so far been paid to understanding the modelling similarities and differences of the proposed solutions. This paper fills that gap by surveying the literature through optimization-modelling glasses, following a tutorial approach that guides the reader through the different components of the models with a unified symbolism. A detailed classification of previous work based on the modelling issues involved is also proposed.

    Resilient network dimensioning for optical grid/clouds using relocation

    In this paper we address the problem of dimensioning infrastructure, comprising both network and server resources, for large-scale decentralized distributed systems such as grids or clouds. We provide an overview of our work in this area and, in particular, focus on how to design the resulting grid/cloud to be resilient against network link and/or server site failures. To this end, we exploit relocation: under failure conditions, a request may be sent to an alternate destination, different from the one used under failure-free conditions. We provide a comprehensive overview of related work in this area and focus in some detail on our own most recent work. The latter comprises a case study where traffic has a known origin, but we assume a degree of freedom as to where it ends up being processed, which is typically the case for, e.g., grid applications of the bag-of-tasks (BoT) type or for providing cloud services. In particular, we present a new integer linear programming (ILP) formulation to solve the resilient grid/cloud dimensioning problem using failure-dependent backup routes. Our algorithm simultaneously decides on server and network capacity. We find that, in the anycast routing problem we address, the benefit of failure-dependent (FD) rerouting is limited compared to failure-independent (FID) backup routing. We confirm our earlier findings in terms of the network capacity savings achieved by relocation compared to not exploiting relocation (on the order of 6-10% in the current case studies).
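
    As a rough sketch of the dimensioning objective (with invented symbols, not the paper's notation), link and server capacity are charged jointly:

        \min \sum_{l \in E} c_{l}\, \alpha_{l} + \sum_{k \in K} s_{k}\, \beta_{k},

    where \alpha_{l} is the capacity installed on link l, \beta_{k} the server capacity installed at candidate site k, and c_{l}, s_{k} the corresponding unit costs. The (omitted) constraints route every request to some server site both in the failure-free state and in each considered link/server failure state; with failure-dependent recovery the backup route, and under relocation even the destination site, may differ per failure.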

    Joint dimensioning of server and network infrastructure for resilient optical grids/clouds

    We address the dimensioning of infrastructure, comprising both network and server resources, for large-scale decentralized distributed systems such as grids or clouds. We design the resulting grid/cloud to be resilient against network link or server failures. To this end, we exploit relocation: under failure conditions, a grid job or cloud virtual machine may be served at an alternate destination (i.e., different from the one under failure-free conditions). We thus consider grid/cloud requests to have a known origin, but assume a degree of freedom as to where they end up being served, which is the case for grid applications of the bag-of-tasks (BoT) type or for hosted virtual machines in the cloud case. We present a generic methodology based on integer linear programming (ILP) that: 1) chooses a given number of sites in a given network topology where server infrastructure is installed; and 2) determines the amount of both network and server capacity needed to cater for the failure-free scenario as well as failures of links or nodes. For the latter, we consider either failure-independent (FID) or failure-dependent (FD) recovery. Case studies on European-scale networks show that relocation allows a considerable reduction of the total amount of network and server resources, especially in sparse topologies and for higher numbers of server sites. Adopting a failure-dependent backup routing strategy does lead to lower resource dimensions, but only when relocation is adopted (especially for a high number of server sites): without exploiting relocation, the potential savings of FD versus FID are not meaningful.
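
    The core idea of such an ILP methodology, i.e. jointly choosing server sites, server capacity, and (here crudely approximated) network capacity so that every request can still be served under any single site failure, can be sketched with an off-the-shelf modeller. The toy model below uses PuLP; the sites, hop distances, costs, and demands are invented, network dimensioning is reduced to a hop-distance proxy on the failure-free assignment, and only single server-site failures are covered, so it illustrates the relocation principle rather than reproducing the paper's formulation.

        # Toy resilient anycast dimensioning sketch (illustrative data, not from the paper).
        import pulp

        sites = ["A", "B", "C"]                                      # candidate server locations
        requests = {"r1": ("A", 2), "r2": ("B", 1), "r3": ("C", 3)}  # name -> (origin, demand)
        hop = {(u, v): (0 if u == v else 1) for u in sites for v in sites}
        hop[("A", "C")] = hop[("C", "A")] = 2                        # assumed hop distances
        states = ["ok"] + ["fail_" + k for k in sites]               # failure-free + site failures
        OPEN_COST, SERVER_COST, LINK_COST, BIG = 10, 1, 1, 100

        prob = pulp.LpProblem("resilient_anycast_dimensioning", pulp.LpMinimize)
        open_ = pulp.LpVariable.dicts("open", sites, cat="Binary")
        cap = pulp.LpVariable.dicts("cap", sites, lowBound=0, cat="Integer")
        assign = pulp.LpVariable.dicts(
            "assign", [(r, s, k) for r in requests for s in states for k in sites], cat="Binary")

        # Objective: site opening + server capacity + hop-based network proxy (failure-free state).
        prob += (pulp.lpSum(OPEN_COST * open_[k] for k in sites)
                 + pulp.lpSum(SERVER_COST * cap[k] for k in sites)
                 + pulp.lpSum(LINK_COST * requests[r][1] * hop[(requests[r][0], k)]
                              * assign[(r, "ok", k)] for r in requests for k in sites))

        for r in requests:
            for s in states:
                prob += pulp.lpSum(assign[(r, s, k)] for k in sites) == 1  # served in every state
                for k in sites:
                    prob += assign[(r, s, k)] <= open_[k]                  # only at open sites
                    if s == "fail_" + k:
                        prob += assign[(r, s, k)] == 0                     # failed site unusable
        for k in sites:
            prob += cap[k] <= BIG * open_[k]
            for s in states:  # server capacity must cover the relocated load in every state
                prob += pulp.lpSum(requests[r][1] * assign[(r, s, k)] for r in requests) <= cap[k]

        prob.solve(pulp.PULP_CBC_CMD(msg=False))
        for k in sites:
            print(k, "open" if open_[k].value() == 1 else "closed", "capacity =", cap[k].value())

    With these toy numbers the model is forced to open at least two sites, since in the failure state of any single site all of its requests must be relocatable to another open one.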

    Ant-based Survivable Routing in Dynamic WDM Networks with Shared Backup Paths

    Spare capacity allocation using shared backup path protection for dual link failures

    This paper extends the spare capacity allocation (SCA) problem from single link failures [1] to dual link failures on mesh-like IP or WDM networks. The SCA problem pre-plans each traffic flow with one working path and two backup paths that are mutually disjoint, using the shared backup path protection (SBPP) scheme. An aggregated spare provision matrix (SPM) is used to capture spare capacity sharing under dual link failures. Compared to the previous work by He and Somani [2], this method has better scalability and flexibility. The SCA problem is formulated as a non-linear integer programming model and partitioned into two sequential linear sub-models: the first finds all primary backup paths, and the second then finds all secondary backup paths. Results on five networks show that the network redundancy of dedicated 1+1+1 protection is in the range of 313-400%. It drops to 96-181% with 1:1:1 protection without loss of dual-link resiliency, at the cost of the more complicated spare capacity sharing among backup paths. The hybrid 1+1:1 scheme provides an intermediate redundancy ratio of 187-310% with moderate complexity. We also compare passive and active approaches, which consider spare capacity sharing after or during the backup path routing process, respectively. The active sharing approaches always achieve lower redundancy than the passive ones, with reductions of about 12% for 1+1:1 and 25% for 1:1:1.
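
    As a toy illustration of the sharing captured by the spare provision matrix (the flows, paths, and demands below are invented), under shared protection the spare capacity needed on each link is its worst-case backup load over all dual-link failure scenarios, rather than a sum over all flows:

        # Toy shared-spare-capacity computation for dual link failures (illustrative only).
        from itertools import combinations
        from collections import defaultdict

        # flow -> (demand, working path, primary backup, secondary backup), paths as link lists
        flows = {
            "f1": (10, ["e1", "e2"], ["e3", "e4"], ["e5", "e6"]),
            "f2": (5,  ["e3"],       ["e1", "e5"], ["e2", "e6"]),
        }
        links = {l for f in flows.values() for path in f[1:] for l in path}

        def spare_usage(flow, failed):
            """Links carrying this flow's rerouted traffic when `failed` links are down."""
            demand, work, bk1, bk2 = flow
            if not failed & set(work):
                return set()                      # working path survives: no spare used
            for path in (bk1, bk2):               # otherwise take the first surviving backup
                if not failed & set(path):
                    return set(path)
            return set()                          # both backups also hit: flow is lost

        spare = defaultdict(int)
        for failed in combinations(links, 2):     # every dual-link failure scenario
            load = defaultdict(int)
            for flow in flows.values():
                for l in spare_usage(flow, set(failed)):
                    load[l] += flow[0]
            for l, v in load.items():
                spare[l] = max(spare[l], v)       # sharing: size each link to its worst scenario

        working = defaultdict(int)
        for demand, work, _, _ in flows.values():
            for l in work:
                working[l] += demand
        print(dict(spare), "redundancy = %.0f%%" % (100 * sum(spare.values()) / sum(working.values())))

    Dedicated 1+1+1 protection would instead reserve the full demand on every backup link regardless of the failure scenario, which is why its redundancy reported in the abstract is far higher than the shared 1:1:1 case.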

    Selecting the best locations for data centers in resilient optical grid/cloud dimensioning

    For optical grid/cloud scenarios, the dimensioning problem comprises not only deciding on the network dimensions (i.e., link bandwidths), but also choosing appropriate locations to install server infrastructure (i.e., data centers), as well as determining the amount of required server resources (for storage and/or processing). Given that users of such grid/cloud systems generally do not care about the exact physical locations of the server resources, a degree of freedom arises in choosing the most appropriate server location for each of their requests. We exploit this anycast routing principle (i.e., the source of traffic is given, but the destination can be chosen rather freely) also to provide resilience: traffic may be relocated to an alternate destination in case of network/server failures. In this study, we propose to jointly optimize the link dimensioning and the location of the servers in an optical grid/cloud, where the anycast principle is applied for resiliency against either link or server node failures. While the data center location problem bears some resemblance to the classical p-center and k-means location problems, the anycast principle makes it considerably harder because of the requirement of link-disjoint paths for ensuring grid resiliency.
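
    A small sketch of why the link-disjointness requirement makes this placement harder than plain p-center/k-means clustering: a candidate set of data-center locations is only usable if every traffic source keeps two link-disjoint routes towards the chosen sites, so that a single link or server failure can be survived by relocation. The topology below and the virtual-sink construction are illustrative assumptions, not the paper's model.

        # Toy feasibility filter for candidate data-center sets (illustrative only).
        import itertools
        import networkx as nx

        G = nx.Graph()
        G.add_edges_from([("a", "b"), ("b", "c"), ("c", "d"), ("d", "a"), ("b", "d")])
        sources = ["a", "c"]      # nodes that originate traffic
        num_dcs = 2               # number of data centers to place

        def resilient(dc_set):
            """Each source needs 2 link-disjoint paths to the DC set: contract the set
            into a virtual sink and require edge connectivity of at least 2."""
            H = G.copy()
            H.add_node("sink")
            for dc in dc_set:
                H.add_edge(dc, "sink")
            return all(nx.edge_connectivity(H, s, "sink") >= 2 for s in sources)

        feasible = [c for c in itertools.combinations(G.nodes, num_dcs) if resilient(c)]
        print(feasible)   # candidate DC pairs satisfying the disjoint-path requirement

    A full dimensioning model would then pick, among such feasible sets, the one minimizing the combined link and server cost, as in the joint ILP approach described above.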