
    Logical topology design for IP rerouting: ASONs versus static OTNs

    IP-based backbone networks are gradually moving to a network model consisting of high-speed routers that are flexibly interconnected by a mesh of light paths set up by an optical transport network built from wavelength division multiplexing (WDM) links and optical cross-connects. In such a model, the generalized MPLS protocol suite could provide the IP-centric control plane used to deliver rapid and dynamic provisioning of end-to-end optical light paths between the routers. This is called an automatically switched optical (transport) network (ASON). An ASON enables reconfiguration of the logical IP topology by setting up and tearing down light paths, which makes it possible to up- or downgrade link capacities during a router failure to the capacities needed by the new routing of the affected traffic. Such survivability against (single) IP router failures is cost-effective, as capacity can be provided to the IP layer flexibly, only when needed. We present and investigate a logical topology optimization problem that minimizes the total amount or cost of the resources (interfaces, wavelengths, WDM line systems, amplifiers, etc.) needed in both the IP and the optical layer. A novel optimization aspect of this problem is the possibility, thanks to the ASON, of reusing physical resources (such as interface cards and WDM line systems) across the different network states (the failure-free state and all router failure scenarios). We devise a simple optimization strategy to investigate the cost of the ASON approach and compare it with other schemes that survive single router failures.
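    To make the resource-reuse idea concrete, the sketch below (illustrative only; link names, wavelength counts and the static provisioning bound are invented assumptions, not taken from the paper) contrasts an ASON, where each link only needs to be dimensioned for its worst single network state, with a static design that must keep the failure-free capacity and the worst-case rerouting capacity provisioned at the same time.

```python
# Illustrative sketch: resource reuse across network states in an ASON versus
# a static provisioning bound. All per-state wavelength demands are invented.
wavelengths_per_state = {
    "failure_free":  {"A-B": 4, "B-C": 3, "A-C": 2},
    "router_B_down": {"A-B": 0, "B-C": 0, "A-C": 6},
    "router_C_down": {"A-B": 5, "B-C": 0, "A-C": 0},
}

links = {l for demands in wavelengths_per_state.values() for l in demands}

# ASON: light paths are set up on demand, so each link only needs the
# resources of its most demanding single state (resources are reused).
ason = {l: max(d.get(l, 0) for d in wavelengths_per_state.values()) for l in links}

# Static bound (assumption for illustration): the failure-free capacity stays
# in place and the worst-case rerouting capacity must be added on top of it.
static = {
    l: wavelengths_per_state["failure_free"].get(l, 0)
       + max(d.get(l, 0) for s, d in wavelengths_per_state.items()
             if s != "failure_free")
    for l in links
}

print("ASON  :", ason, "total:", sum(ason.values()))
print("static:", static, "total:", sum(static.values()))
```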

    Joint dimensioning of server and network infrastructure for resilient optical grids/clouds

    We address the dimensioning of infrastructure, comprising both network and server resources, for large-scale decentralized distributed systems such as grids or clouds. We design the resulting grid/cloud to be resilient against network link or server failures. To this end, we exploit relocation: under failure conditions, a grid job or cloud virtual machine may be served at an alternate destination (i.e., different from the one used under failure-free conditions). We thus consider grid/cloud requests to have a known origin, but assume a degree of freedom as to where they end up being served, which is the case for grid applications of the bag-of-tasks (BoT) type or hosted virtual machines in the cloud case. We present a generic methodology based on integer linear programming (ILP) that: 1) chooses a given number of sites in a given network topology where to install server infrastructure; and 2) determines the amount of both network and server capacity needed to cater for the failure-free scenario as well as for failures of links or nodes. For the latter, we consider either failure-independent (FID) or failure-dependent (FD) recovery. Case studies on European-scale networks show that relocation allows a considerable reduction of the total amount of network and server resources, especially in sparse topologies and for higher numbers of server sites. Adopting a failure-dependent backup routing strategy does lead to lower resource dimensions, but only when we also adopt relocation (especially for a high number of server sites): without exploiting relocation, the potential savings of FD over FID are not meaningful.
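    As a rough illustration of the dimensioning idea, the toy sketch below (not the paper's ILP; the topology, distances, demands and cost function are all invented) brute-forces the choice of K server sites so that every request is served in the failure-free case and relocated to a surviving site when its primary site fails, while server and network capacity are reused across scenarios.

```python
# Toy brute-force analogue of joint server/network dimensioning with
# relocation (illustrative only; not the paper's ILP or data).
from itertools import combinations

nodes = ["A", "B", "C", "D", "E"]
# hop-count "distance" between nodes (symmetric, invented)
dist = {("A", "B"): 1, ("A", "C"): 2, ("A", "D"): 3, ("A", "E"): 2,
        ("B", "C"): 1, ("B", "D"): 2, ("B", "E"): 3,
        ("C", "D"): 1, ("C", "E"): 2, ("D", "E"): 1}

def d(u, v):
    return 0 if u == v else dist.get((u, v), dist.get((v, u)))

demand = {"A": 3, "B": 1, "C": 2, "D": 1, "E": 2}  # jobs originating per node
K = 2                                              # number of server sites

def dimensioning_cost(sites):
    """Server units plus a crude network cost (jobs * hops), dimensioned for
    the failure-free case and for every single-site failure (relocation)."""
    scenarios = [None] + list(sites)               # None = failure-free
    server_peak = {s: 0 for s in sites}
    network_peak = 0
    for failed in scenarios:
        load = {s: 0 for s in sites}
        hops = 0
        for src, jobs in demand.items():
            alive = [s for s in sites if s != failed]
            target = min(alive, key=lambda s: d(src, s))  # relocate if needed
            load[target] += jobs
            hops += jobs * d(src, target)
        for s in sites:
            server_peak[s] = max(server_peak[s], load[s])  # capacity reuse
        network_peak = max(network_peak, hops)
    return sum(server_peak.values()) + network_peak

best = min(combinations(nodes, K), key=dimensioning_cost)
print("chosen sites:", best, "cost:", dimensioning_cost(best))
```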

    Analysis of resource sharing in transparent networks

    Transparent optical networking promises a cost-efficient solution for future core and metro networks because high-granularity trunk traffic can be switched without opto-electronic conversion. Network availability is an important performance parameter for network operators, who incorporate protection and restoration mechanisms in the network to achieve competitive advantages. This paper focuses on the reduction in Capital Expenditures (CapEx) expected from sharing backup resources in path-protected transparent networks. We dimension a nationwide network topology for different protection mechanisms using transparent and opaque architectures. We investigate the CapEx reductions obtained through protection sharing on a population of 1000 randomly generated biconnected planar topologies with 14 nodes. We show that the gain for transparent networks depends heavily on the offered load, with almost no relative gain at low load (when no parallel line systems are required). We also show that for opaque networks the CapEx reduction through protection sharing is independent of the traffic load and shows only a small dependency on the number of links in the network. The node CapEx reduction at high load (relative to the number of channels in a line system) is comparable to the CapEx reduction in opaque OTN systems. This is rather surprising, as in OTN systems the number of transceivers and line cards and the size of the OTN switching matrix all decrease, whereas in transparent networks only the degree of the ROADM (the number and size of the WSSs in the node) decreases while the number of transponders remains the same.
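    The core CapEx argument can be illustrated with a minimal sketch (channel counts and line-system size are invented for the example): dedicated protection must reserve backup channels for every failure scenario at once, whereas shared protection only needs the worst single-failure requirement, and the saving matters precisely when it avoids an extra parallel line system.

```python
# Minimal sketch of shared versus dedicated backup capacity on one link.
# Numbers are illustrative, not from the paper.
import math

backup_channels_needed = {   # failure scenario -> backup channels on this link
    "fail_link_1": 3,
    "fail_link_2": 5,
    "fail_link_3": 2,
}

dedicated = sum(backup_channels_needed.values())  # reserve all simultaneously
shared = max(backup_channels_needed.values())     # only the worst single failure

line_system_channels = 8                          # channels per WDM line system
print("dedicated:", dedicated, "channels ->",
      math.ceil(dedicated / line_system_channels), "line system(s)")
print("shared   :", shared, "channels ->",
      math.ceil(shared / line_system_channels), "line system(s)")
```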

    Energy saving market for mobile operators

    Ensuring seamless coverage accounts for the lion's share of the energy consumed in a mobile network. Overlapping coverage by three to five mobile network operators (MNOs) results in an enormous, and avoidable, amount of wasted energy. The traffic demands of the mobile networks vary significantly throughout the day. As the offered loads of the networks are not the same at a given time, and the differences in energy consumption at different loads are significant, multi-MNO capacity/coverage sharing can dramatically reduce the energy consumption of mobile networks and provide the MNOs with a cost-effective means to cope with the exponential growth of traffic. In this paper, we propose an energy saving market for a multi-MNO network scenario. As competing MNOs are not comfortable with sharing information, we propose a double auction clearinghouse market mechanism in which MNOs sell and buy capacity in order to minimize energy consumption. In our setting, each MNO simultaneously submits bids and asks for buying and selling multi-unit capacities, respectively, to an independent auctioneer (the clearinghouse) and ends up either as a buyer or as a seller in each round. We show that the mechanism allows the MNOs to save a significant percentage of energy cost over a wide range of network loads. Unlike other energy saving features, such as cell sleep or antenna muting, which cannot be enabled at heavy traffic load, dynamic capacity sharing allows MNOs to handle traffic bursts while still saving energy.
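    A simplified single-unit sketch of one clearinghouse round is given below (the paper's mechanism is multi-unit and its pricing rule may differ; operators and prices are invented): each MNO submits a bid and an ask, and the auctioneer matches the highest bids with the lowest asks.

```python
# Hedged sketch of one double auction clearinghouse round (simplified).
bids = [("MNO-1", 8.0), ("MNO-2", 6.5), ("MNO-3", 5.0)]  # price to buy one unit
asks = [("MNO-3", 4.0), ("MNO-1", 7.0), ("MNO-2", 9.0)]  # price to sell one unit

bids.sort(key=lambda x: x[1], reverse=True)  # best buyers first
asks.sort(key=lambda x: x[1])                # cheapest sellers first

trades = []
for (buyer, bid), (seller, ask) in zip(bids, asks):
    if bid >= ask and buyer != seller:
        price = (bid + ask) / 2              # simple midpoint clearing price
        trades.append((buyer, seller, price))
    else:
        break                                # no further profitable matches

for buyer, seller, price in trades:
    print(f"{buyer} buys one capacity unit from {seller} at {price:.2f}")
```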

    Dimensioning backbone networks for multi-site data centers: exploiting anycast routing for resilience

    In the current era of big data, applications increasingly rely on powerful computing infrastructure residing in large data centers (DCs), often adopting cloud computing technology. Clearly, this necessitates efficient and resilient networking infrastructure to connect the users of these applications with the data centers hosting them. In this paper, we focus on backbone network infrastructure on large geographical scales (i.e., so-called wide area networks), which typically adopts optical network technology. In particular, we study the problem of dimensioning such backbone networks: what bandwidth should each of the links provide for the traffic, originating at known sources, to reach the data centers? And possibly even: how many such DCs should we deploy, and at what locations? More concretely, we summarize our recent work that addresses the following fundamental research questions: (1) Does the anycast routing strategy influence the amount of required network resources? (2) Can we exploit anycast routing for resilience purposes, i.e., relocate to a different DC under failure conditions, to reduce resource capacity requirements? (3) Is it advantageous to change anycast request destinations from one DC location to another, from one time period to the next, if service requests vary over time?
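    As a toy illustration of question (3) (the demands, sources and DC names are invented, and this is not the summarized work's model), the sketch below compares pinning each source to a fixed DC with re-choosing anycast destinations per time period, showing how the latter can lower the peak capacity each DC must be dimensioned for.

```python
# Illustrative sketch: static versus time-varying anycast destinations.
demand_per_period = [            # requests per source, per time period (invented)
    {"src1": 8, "src2": 2},
    {"src1": 2, "src2": 8},
]
dcs = ["DC-A", "DC-B"]

# Static anycast: each source is pinned to one DC for all periods.
static_assign = {"src1": "DC-A", "src2": "DC-B"}
static_peak = {dc: 0 for dc in dcs}
for period in demand_per_period:
    load = {dc: 0 for dc in dcs}
    for src, req in period.items():
        load[static_assign[src]] += req
    for dc in dcs:
        static_peak[dc] = max(static_peak[dc], load[dc])

# Dynamic anycast: each period, sources are greedily spread over the DCs.
dynamic_peak = {dc: 0 for dc in dcs}
for period in demand_per_period:
    load = {dc: 0 for dc in dcs}
    for src, req in sorted(period.items(), key=lambda x: -x[1]):
        target = min(dcs, key=lambda dc: load[dc])   # least-loaded DC
        load[target] += req
    for dc in dcs:
        dynamic_peak[dc] = max(dynamic_peak[dc], load[dc])

print("static peaks :", static_peak)    # here: {'DC-A': 8, 'DC-B': 8}
print("dynamic peaks:", dynamic_peak)   # here: {'DC-A': 8, 'DC-B': 2}
```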

    Optimizations in Heterogeneous Mobile Networks


    Dynamic Resource Management in Clouds: A Probabilistic Approach

    Dynamic resource management has become an active area of research in the cloud computing paradigm. The cost of resources varies significantly depending on the configuration used, so efficient resource management is of prime interest to both cloud providers and cloud users. In this work we suggest a probabilistic resource provisioning approach that can be exploited as the input of a dynamic resource management scheme. Using a Video on Demand (VoD) use case to justify our claims, we propose an analytical model, inspired by standard models developed for epidemic spreading, to represent sudden and intense workload variations. We show that the resulting model satisfies a Large Deviation Principle that statistically characterizes extreme, rare events such as those produced by "buzz/flash crowd effects", which may cause workload overflow in the VoD context. This analysis provides valuable insight into the abnormal system behaviors that can be expected. We exploit the information obtained from the Large Deviation Principle in the Video on Demand use case to define policies (Service Level Agreements). We believe these policies for elastic resource provisioning and usage may be of interest to all stakeholders in the emerging context of cloud networking.
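    To give a flavor of the epidemic-inspired workload model (the discrete-time SIR-like form and all parameters below are assumptions for illustration, not the paper's exact formulation), the sketch simulates a buzz: viewers recruit new viewers by word of mouth, producing the sudden demand peak that provisioning policies have to absorb.

```python
# Illustrative epidemic-style "buzz" workload sketch (invented parameters).
N = 10_000        # potential audience size
beta = 5e-5       # contagion (word-of-mouth) rate
gamma = 0.2       # rate at which viewers lose interest
susceptible, viewing = N - 10, 10

trace = []
for t in range(100):
    new_viewers = min(susceptible, int(beta * susceptible * viewing))
    stopped = int(gamma * viewing)
    susceptible -= new_viewers
    viewing += new_viewers - stopped
    trace.append(viewing)

peak = max(trace)
print(f"peak concurrent demand: {peak} streams at step {trace.index(peak)}")
```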

    Hybrid Wavelength Routed and Optical Packet Switched Ring Networks for the Metropolitan Area Network


    A Survey of Green Networking Research

    Reduction of unnecessary energy consumption is becoming a major concern in wired networking, because of the potential economic benefits and the expected environmental impact. These issues, usually referred to as "green networking", relate to embedding energy awareness in the design, the devices and the protocols of networks. In this work, we first formulate a more precise definition of the "green" attribute. We furthermore identify a few paradigms that are the key enablers of energy-aware networking research. We then overview the current state of the art and provide a taxonomy of the relevant work, with a special focus on wired networking. At a high level, we identify four branches of green networking research that stem from different observations on the root causes of energy waste, namely (i) adaptive link rate, (ii) interface proxying, (iii) energy-aware infrastructures and (iv) energy-aware applications. We not only explore specific proposals pertaining to each of these branches, but also offer a perspective for future research.
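    As a small illustration of the adaptive link rate branch (rates, power figures and the headroom threshold are invented assumptions), the sketch below picks the lowest link rate that still covers the observed load, which is the basic trade-off an ALR scheme exploits.

```python
# Hedged sketch of the adaptive link rate idea (all figures invented).
RATES_MBPS = [100, 1000, 10_000]               # supported link rates
POWER_W = {100: 0.4, 1000: 0.9, 10_000: 5.0}   # assumed per-rate port power
HEADROOM = 0.8                                 # keep utilization below 80%

def pick_rate(offered_mbps: float) -> int:
    """Lowest rate whose capacity * HEADROOM covers the offered load."""
    for rate in RATES_MBPS:
        if offered_mbps <= rate * HEADROOM:
            return rate
    return RATES_MBPS[-1]

for load in (30, 400, 6000):
    r = pick_rate(load)
    print(f"load {load:>5} Mb/s -> run port at {r} Mb/s ({POWER_W[r]} W)")
```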