
    Scalable dimensioning of resilient Lambda Grids

    Get PDF

    Scalable algorithms for QoS-aware virtual network mapping for cloud services

    Get PDF
    Both business and consumer applications increasingly depend on cloud solutions, yet many users are still reluctant to move to cloud-based solutions, mainly due to concerns about service quality and reliability. Since cloud platforms depend both on IT resources (located in data centers, DCs) and on the network infrastructure connecting to them, both QoS and resilience should be offered with end-to-end guarantees up to and including the server resources. This is currently largely impeded by the fact that the network and cloud DC domains are typically operated by disjoint entities. Network virtualization, together with combined control of network and IT resources, can solve that problem. Here, we formally state the combined network and IT provisioning problem for a set of virtual networks, incorporating resilience as well as QoS in both the physical and virtual layers. We provide a scalable column generation model to address real-world network sizes. We analyze the latter in extensive case studies to answer the question at which layer to provision QoS and resilience in virtual networks for cloud services.
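
    The abstract above describes a column generation formulation for jointly mapping virtual networks onto network and data-center resources. The paper's actual model is not reproduced here; the following is a minimal restricted-master-problem sketch in Python/PuLP over a few hypothetical precomputed mapping columns, link capacities and DC capacities, intended only to illustrate the shape of such a formulation.

```python
# Minimal restricted-master-problem sketch for combined network + IT provisioning.
# The candidate "columns" (precomputed virtual-network mappings) are hypothetical;
# a real column-generation loop would add new columns from a pricing problem.
import pulp

# Hypothetical data: each column maps one virtual network onto physical links and a DC.
columns = {
    "vn1_mapA": {"cost": 10, "links": {"e1": 2, "e2": 1}, "dc": {"dc1": 4}},
    "vn1_mapB": {"cost": 14, "links": {"e3": 2},          "dc": {"dc2": 4}},
    "vn2_mapA": {"cost": 8,  "links": {"e2": 1, "e3": 1}, "dc": {"dc1": 2}},
}
vnets = {"vn1": ["vn1_mapA", "vn1_mapB"], "vn2": ["vn2_mapA"]}
link_cap = {"e1": 10, "e2": 10, "e3": 10}
dc_cap = {"dc1": 6, "dc2": 6}

rmp = pulp.LpProblem("restricted_master", pulp.LpMinimize)
x = {c: pulp.LpVariable(f"x_{c}", lowBound=0, upBound=1) for c in columns}

# Objective: total mapping cost.
rmp += pulp.lpSum(columns[c]["cost"] * x[c] for c in columns)

# Each virtual network must be mapped exactly once (convexity constraints).
for vn, cols in vnets.items():
    rmp += pulp.lpSum(x[c] for c in cols) == 1, f"map_{vn}"

# Physical link and data-center capacity constraints.
for e, cap in link_cap.items():
    rmp += pulp.lpSum(columns[c]["links"].get(e, 0) * x[c] for c in columns) <= cap, f"cap_{e}"
for d, cap in dc_cap.items():
    rmp += pulp.lpSum(columns[c]["dc"].get(d, 0) * x[c] for c in columns) <= cap, f"dc_{d}"

rmp.solve(pulp.PULP_CBC_CMD(msg=0))
print({c: x[c].value() for c in columns})
```

    In a full column generation loop, the duals of the mapping and capacity constraints would drive a pricing problem that generates additional candidate mappings with negative reduced cost; resilience and QoS would enter as extra requirements on which columns are feasible.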

    Combined Intra- and Inter-domain Traffic Engineering using Hot-Potato Aware Link Weights Optimization

    Full text link
    A well-known approach to intradomain traffic engineering consists in finding the set of link weights that minimizes a network-wide objective function for a given intradomain traffic matrix. This approach is inadequate because it ignores the potential impact on interdomain routing. Indeed, the resulting set of link weights may trigger BGP to change the BGP next hop for some destination prefixes in order to enforce hot-potato routing policies. In turn, this results in changes in the intradomain traffic matrix that have not been anticipated by the link weights optimizer, possibly leading to degraded network performance. We propose a BGP-aware link weights optimization method that takes these effects into account, and even turns them into an advantage. This method uses the interdomain traffic matrix and other available BGP data to extend the intradomain topology with external virtual nodes and links, on which all the well-tuned heuristics of a classical link weights optimizer can be applied. A key innovative asset of our method is its ability to also optimize the traffic on the interdomain peering links. We show, using an operational network as a case study, that our approach does so efficiently at almost no extra computational cost. Comment: 12 pages; short version published in ACM SIGMETRICS 2008, International Conference on Measurement and Modeling of Computer Systems, June 2-6, 2008, Annapolis, Maryland, US.
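
    As a rough illustration of the topology-extension idea described above, the sketch below (not the authors' implementation) attaches a hypothetical external destination prefix as a virtual node behind two peering links in a networkx graph and evaluates one IGP weight setting by routing demands on shortest paths. A link-weight optimizer would search over weights against such an objective, with the hot-potato egress choice falling out of the extended shortest-path computation.

```python
# Hedged sketch of the "extended topology" idea: an external destination becomes
# a virtual node attached through peering links, so a classical link-weight
# evaluator also sees interdomain traffic. Topology and demands are hypothetical.
import networkx as nx

G = nx.DiGraph()
# Intradomain links: (u, v, IGP weight, capacity in Mb/s).
for u, v, w, cap in [("A", "B", 10, 1000), ("B", "C", 10, 1000), ("A", "C", 30, 1000)]:
    G.add_edge(u, v, weight=w, capacity=cap, load=0.0)
    G.add_edge(v, u, weight=w, capacity=cap, load=0.0)

# Virtual external node for a destination prefix, reachable via two peering links.
# Hot-potato routing means the chosen egress depends on the IGP weights above.
G.add_edge("B", "prefix_P", weight=1, capacity=1000, load=0.0)  # peering at B
G.add_edge("C", "prefix_P", weight=1, capacity=1000, load=0.0)  # peering at C

# Interdomain demands (from the interdomain traffic matrix), in Mb/s.
demands = {("A", "prefix_P"): 400.0}

def max_utilization(graph, traffic):
    """Route every demand on its shortest IGP path and return the max link utilization."""
    for data in (d for _, _, d in graph.edges(data=True)):
        data["load"] = 0.0
    for (src, dst), volume in traffic.items():
        path = nx.shortest_path(graph, src, dst, weight="weight")
        for u, v in zip(path, path[1:]):
            graph[u][v]["load"] += volume
    return max(d["load"] / d["capacity"] for _, _, d in graph.edges(data=True))

# A weight optimizer would search over IGP weights and keep the setting that
# minimizes this objective, now including the peering links.
print(max_utilization(G, demands))
```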

    A New Method for Assessing the Resiliency of Large, Complex Networks

    Get PDF
    Designing resilient and reliable networks is a principal concern of planners and private firms. Traffic congestion, whether recurring or the result of some aperiodic event, is extremely costly. This paper describes an alternative process and a model for analyzing the resiliency of networks that address some of the shortcomings of more traditional approaches, e.g., the four-step modeling process used in transportation planning. It should be noted that the authors do not view this as a replacement for current approaches but rather as a complementary tool designed to augment analysis capabilities. The process described in this paper for analyzing the resiliency of a network involves at least three steps: (1) assessment or identification of important nodes and links according to different criteria, (2) verification of critical nodes and links based on failure simulations, and (3) consequence analysis. Raster analysis, graph-theory principles and GIS are used to develop a model for carrying out each of these steps. The methods are demonstrated using two large, interdependent networks for a metropolitan area in the United States.
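
    The graph-theory portion of the first two steps might look roughly like the sketch below, which ranks nodes by betweenness centrality (identification) and then verifies them by simulating failures on a toy network. The paper's raster/GIS analysis and its consequence step are not reproduced here, and the impact metric used is only an illustrative proxy.

```python
# Minimal graph-theory sketch of "identify, then verify by failure simulation"
# on a toy network. Consequence is proxied by the loss of connected node pairs.
import networkx as nx

G = nx.Graph()
G.add_edges_from([("A", "B"), ("B", "C"), ("C", "D"), ("B", "E"), ("E", "D"), ("D", "F")])

def connected_pairs(graph):
    """Number of node pairs that can still reach each other."""
    return sum(
        len(comp) * (len(comp) - 1) // 2 for comp in nx.connected_components(graph)
    )

# Step 1: rank candidate critical nodes, e.g. by betweenness centrality.
ranking = sorted(nx.betweenness_centrality(G).items(), key=lambda kv: -kv[1])

# Step 2: verify by simulating the failure of each candidate and measuring impact.
baseline = connected_pairs(G)
for node, score in ranking[:3]:
    H = G.copy()
    H.remove_node(node)
    lost = baseline - connected_pairs(H)
    print(f"{node}: centrality={score:.2f}, pairs disconnected if it fails={lost}")
```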

    Estimating Dynamic Traffic Matrices by using Viable Routing Changes

    Get PDF
    In this paper we propose a new approach for dealing with the ill-posed nature of traffic matrix estimation. We present three solution enhancers: an algorithm for deliberately changing link weights to obtain additional information that can make the underlying linear system full rank; a cyclo-stationary model to capture both long-term and short-term traffic variability; and a method for estimating the variance of origin-destination (OD) flows. We show how these three elements can be combined into a comprehensive traffic matrix estimation procedure that dramatically reduces the errors compared to existing methods. We demonstrate that our variance estimates can be used to identify the elephant OD flows, and we thus propose a variant of our algorithm that addresses the problem of estimating only the heavy flows in a traffic matrix. One of our key findings is that by focusing only on heavy flows, we can simplify the measurement and estimation procedure so as to render it more practical. Although there is a tradeoff between practicality and accuracy, we find that increasing the rank is so helpful that we can nevertheless keep the average errors consistently below the 10% carrier target error rate. We validate the effectiveness of our methodology and the intuition behind it using commercial traffic matrix data from Sprint's Tier-1 backbone.
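
    The rank-increase idea is the easiest of the three enhancers to show in isolation. The toy numpy sketch below (with hypothetical routing matrices and OD flows, not Sprint data) stacks link-load measurements taken before and after a deliberate weight change so that the linear system y = A x becomes full rank and the OD flows can be recovered by least squares.

```python
# Toy sketch of the rank-increase idea: one link-load snapshot leaves y = A x
# under-determined, but stacking a second snapshot taken after a deliberate
# link-weight change can make it full rank. All values below are hypothetical.
import numpy as np

x_true = np.array([3.0, 5.0, 2.0])           # three OD flows

# Routing matrix before the weight change: two monitored links, flow 3 uses both.
A1 = np.array([[1.0, 0.0, 1.0],
               [0.0, 1.0, 1.0]])
# After the weight change, flow 3 is rerouted off the first monitored link.
A2 = np.array([[1.0, 0.0, 0.0],
               [0.0, 1.0, 1.0]])

y1 = A1 @ x_true                              # measured link loads, snapshot 1
y2 = A2 @ x_true                              # measured link loads, snapshot 2

A = np.vstack([A1, A2])
y = np.concatenate([y1, y2])

print("rank of one snapshot:", np.linalg.matrix_rank(A1))   # 2 -> ill-posed
print("rank of stacked system:", np.linalg.matrix_rank(A))  # 3 -> solvable

x_hat, *_ = np.linalg.lstsq(A, y, rcond=None)
print("estimated OD flows:", x_hat)           # recovers x_true in this noise-free toy
```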