
    Spare capacity allocation using shared backup path protection for dual link failures

    This paper extends the spare capacity allocation (SCA) problem from single link failures [1] to dual link failures on mesh-like IP or WDM networks. The SCA problem pre-plans each traffic flow with one working path and two backup paths, all mutually disjoint, using the shared backup path protection (SBPP) scheme. The aggregated spare provision matrix (SPM) is used to capture the spare capacity sharing for dual link failures. Compared to previous work by He and Somani [2], this method offers better scalability and flexibility. The SCA problem is formulated as a non-linear integer programming model and partitioned into two sequential linear sub-models: the first finds all primary backup paths, and the second then finds all secondary backup paths. Results on five networks show that the network redundancy of dedicated 1+1+1 protection is in the range of 313-400%. It drops to 96-181% for 1:1:1 without loss of dual-link resiliency, at the cost of more complicated spare capacity sharing among backup paths. The hybrid 1+1:1 scheme provides an intermediate redundancy ratio of 187-310% with moderate complexity. We also compare passive and active approaches, which consider spare capacity sharing after or during the backup path routing process, respectively. The active sharing approaches always achieve lower redundancy than the passive ones; the reductions are about 12% for 1+1:1 and 25% for 1:1:1.
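    As a rough illustration of the spare capacity sharing idea behind the 1:1:1 scheme (not the paper's actual SPM or integer programming formulation), the sketch below enumerates dual-link failure scenarios for a few invented flows, takes the worst-case rerouted load on each link as its shared spare capacity, and reports the resulting redundancy ratio. All flow data, link names, and the rerouted_path helper are hypothetical.

```python
from itertools import combinations

# Hypothetical per-flow data: demand plus the links used by the working path
# and the two backup paths (all link-disjoint, as in SBPP).
flows = [
    {"demand": 10, "work": {"a-b", "b-c"}, "bkp1": {"a-d", "d-c"}, "bkp2": {"a-e", "e-c"}},
    {"demand": 5,  "work": {"a-d", "d-c"}, "bkp1": {"a-b", "b-c"}, "bkp2": {"a-e", "e-c"}},
]
links = set().union(*(f["work"] | f["bkp1"] | f["bkp2"] for f in flows))

def rerouted_path(flow, failed):
    """Return the backup path carrying the flow under a given dual-link failure."""
    if not (flow["work"] & failed):
        return set()            # working path survives, no spare capacity needed
    if not (flow["bkp1"] & failed):
        return flow["bkp1"]     # primary backup takes over
    if not (flow["bkp2"] & failed):
        return flow["bkp2"]     # secondary backup takes over
    return set()                # this failure pair is not covered

# Shared spare capacity per link = worst-case rerouted load over all dual-link failures.
spare = {l: 0 for l in links}
for pair in combinations(links, 2):
    failed = set(pair)
    usage = {l: 0 for l in links}
    for f in flows:
        for l in rerouted_path(f, failed):
            usage[l] += f["demand"]
    for l in links:
        spare[l] = max(spare[l], usage[l])

working = sum(f["demand"] * len(f["work"]) for f in flows)
print("redundancy = %.0f%%" % (100 * sum(spare.values()) / working))
```

    A dedicated 1+1+1 design would instead reserve both backup paths at full demand for every flow, which is why its redundancy is much higher than in the shared 1:1:1 case.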

    Joint dimensioning of server and network infrastructure for resilient optical grids/clouds

    We address the dimensioning of infrastructure, comprising both network and server resources, for large-scale decentralized distributed systems such as grids or clouds. We design the resulting grid/cloud to be resilient against network link or server failures. To this end, we exploit relocation: under failure conditions, a grid job or cloud virtual machine may be served at an alternate destination (i.e., different from the one used under failure-free conditions). We thus consider grid/cloud requests to have a known origin, but assume a degree of freedom as to where they end up being served, which is the case for grid applications of the bag-of-tasks (BoT) type or hosted virtual machines in the cloud case. We present a generic methodology based on integer linear programming (ILP) that 1) chooses a given number of sites in a given network topology at which to install server infrastructure, and 2) determines the amount of both network and server capacity needed to cater for the failure-free scenario and for failures of links or nodes. For the latter, we consider either failure-independent (FID) or failure-dependent (FD) recovery. Case studies on European-scale networks show that relocation allows a considerable reduction of the total amount of network and server resources, especially in sparse topologies and for higher numbers of server sites. Adopting a failure-dependent backup routing strategy does lead to lower resource dimensions, but only when we also adopt relocation (especially for a high number of server sites): without exploiting relocation, the potential savings of FD over FID are not meaningful.
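    The flavour of such an ILP can be illustrated, in heavily simplified form, by the PuLP sketch below: it uses invented request and site data, sizes server capacity only (network capacity and routing are omitted), opens K sites, and lets assignments differ per failure scenario, which corresponds to failure-dependent recovery with relocation. All names and data are hypothetical.

```python
import pulp

# Hypothetical data: request origins, candidate server sites, and scenarios
# (failure-free plus one failure scenario per server site).
requests = ["r1", "r2", "r3", "r4"]
sites = ["A", "B", "C"]
scenarios = ["ok"] + ["fail_" + s for s in sites]
K = 2  # number of server sites to open

prob = pulp.LpProblem("joint_dimensioning_sketch", pulp.LpMinimize)
open_site = pulp.LpVariable.dicts("open", sites, cat="Binary")
cap = pulp.LpVariable.dicts("cap", sites, lowBound=0, cat="Integer")
x = pulp.LpVariable.dicts("x", (requests, sites, scenarios), cat="Binary")

# Objective: minimise installed server capacity (network capacity omitted here).
prob += pulp.lpSum(cap[s] for s in sites)

prob += pulp.lpSum(open_site[s] for s in sites) == K
for sc in scenarios:
    for r in requests:
        prob += pulp.lpSum(x[r][s][sc] for s in sites) == 1          # serve every request
    for s in sites:
        prob += pulp.lpSum(x[r][s][sc] for r in requests) <= cap[s]  # capacity covers load
        for r in requests:
            prob += x[r][s][sc] <= open_site[s]                      # only open sites serve
            if sc == "fail_" + s:
                prob += x[r][s][sc] == 0                             # failed site unusable;
                                                                     # jobs relocate elsewhere

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print({s: pulp.value(cap[s]) for s in sites})
```

    Forcing the same assignment in every scenario in which the chosen site survives would model the failure-independent (FID) variant instead.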

    Optimization in Telecommunication Networks

    Network design and network synthesis have been the classical optimization problems in telecommunication for a long time. In the recent past, there have been many technological developments such as digitization of information, optical networks, the Internet, and wireless networks. These developments have led to a series of new optimization problems. This manuscript gives an overview of the developments in solving both classical and modern telecom optimization problems. We start with a short historical overview of the technological developments. Then, the classical (and still relevant) network design and synthesis problems are described, with an emphasis on the latest developments in modelling and solving them. Classical results such as Menger's disjoint paths theorem and Ford-Fulkerson's max-flow-min-cut theorem, but also Gomory-Hu trees and the Okamura-Seymour cut condition, are related to the models described. Finally, we describe recent optimization problems such as routing and wavelength assignment, and grooming in optical networks.
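    As a small, self-contained illustration of the classical results mentioned above, the sketch below (using the networkx library on an invented capacitated graph) checks the max-flow-min-cut equality; with unit capacities the same flow value would also count edge-disjoint paths, as in Menger's theorem.

```python
import networkx as nx

# Invented capacitated network for illustration.
G = nx.DiGraph()
G.add_edge("s", "a", capacity=3)
G.add_edge("s", "b", capacity=2)
G.add_edge("a", "b", capacity=1)
G.add_edge("a", "t", capacity=2)
G.add_edge("b", "t", capacity=3)

# Max-flow-min-cut: the maximum s-t flow equals the capacity of a minimum s-t cut.
flow_value, flow_dict = nx.maximum_flow(G, "s", "t")
cut_value, (reachable, non_reachable) = nx.minimum_cut(G, "s", "t")
assert flow_value == cut_value
print(flow_value, sorted(reachable), sorted(non_reachable))
```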

    Next-generation optical access seamless Evolution: concluding results of the European FP7 project OASE

    Increasing bandwidth demand drives the need for next-generation optical access (NGOA) networks that can meet future end-user service requirements. This paper gives an overview of NGOA solutions, the enabling optical access network technologies, architecture principles, and related economics and business models. NGOA requirements (including peak and sustainable data rate, reach, cost, node consolidation, and open access) are proposed, and the different solutions are compared against such requirements in different scenarios (in terms of population density and system migration). Unsurprisingly, it is found that different solutions are best suited for different scenarios. The conclusions drawn from such findings allow us to formulate recommendations in terms of technology, strategy, and policy. The paper is based on the main results of the European FP7 OASE Integrated Project, which ran from January 1, 2010 to February 28, 2013.

    Next-Generation Transport Networks Leveraging Universal Traffic Switching and Flexible Optical Transponders

    Recent developments in communication technology have contributed to exponential growth in network traffic. The cost per bit must necessarily follow an inverse trend, posing several challenges to network operators. Optical transport networks are no exception. On one hand, they have to keep up with expectations of data speed, volume, and growth at the agreed quality of service (QoS); on the other hand, the steep downward trend of the cost per bit is a matter of concern. Thus, the proper selection of network architecture, technology, resiliency schemes, and traffic handling contributes to the total cost of ownership (TCO). In this context, this chapter looks into network architectures, including the optical transport network (OTN) switch (both traditional and universal), resiliency schemes (protection and restoration), flexible-rate line interfaces, and an overall strategy for handover between metro and core networks. A design framework is also described and used to support the case studies reported in this chapter.

    Research challenges on energy-efficient networking design

    The networking research community has started looking into key questions on the energy efficiency of communication networks. Under FP7, the European Commission activated the TREND Network of Excellence with the goal of integrating the EU research community in green networking, with the long-term perspective of consolidating European leadership in the field. TREND integrates the activities of major European players in networking, including manufacturers, operators, and research centers, to quantitatively assess the energy demand of current and future telecom infrastructures, and to design energy-efficient, scalable, and sustainable future networks. This paper describes the main results of the TREND research community and concludes with a roadmap describing the next steps for standardization, regulation agencies, and research in both academia and industry. The research leading to these results has received funding from the EU 7th Framework Programme (FP7/2007–2013) under Grant Agreement No. 257740 (NoE TREND).

    Design and optimization of optical grids and clouds


    Survivability aspects of future optical backbone networks

    In today's optical fiber networks, a single fiber can carry an enormous amount of data, roughly the equivalent of 25 million simultaneous telephone calls. As a consequence, network failures, such as a break in a fiber cable, disrupt the communication of a large number of end users. Network operators therefore choose to build their networks so that such large failures are recovered from automatically. This dissertation focuses on two aspects of survivability in future optical networks. The first objective is to establish robust data connections spanning multiple networks. By provisioning sufficiently reliable connections over an infrastructure that is not managed by a single entity, one can, for example, offer high-quality Internet television worldwide. The studied solution aims not only to compute such highly reliable connections, but also to do so with a minimum of network capacity. The second objective was to answer the question of how applying optical switching systems based on reconfigurable optical multiplexers affects the survivability of an optical network. At lower traffic volumes, optically switched networks gain little from such sophisticated methods. Electronically switched networks show no dependence on the data volume and always benefit from optimization.
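    A minimal sketch of the disjoint-path idea behind such survivable connections (not the dissertation's actual multi-domain algorithm; the topology and node names are invented): networkx computes link-disjoint paths between two endpoints, which can serve as a working/backup pair that survives any single link failure.

```python
import networkx as nx
from networkx.algorithms.connectivity import edge_disjoint_paths

# Invented multi-domain topology; nodes are prefixed by their domain (A, B, C).
G = nx.Graph()
G.add_edges_from([
    ("A1", "A2"), ("A2", "B1"), ("A1", "B2"),
    ("B1", "B2"), ("B1", "C1"), ("B2", "C2"), ("C1", "C2"),
])

# Link-disjoint paths between the endpoints; by Menger's theorem at least two
# exist whenever the minimum edge cut between them is at least two.
paths = list(edge_disjoint_paths(G, "A1", "C2"))
print(paths)
```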