
    On load balancing via switch migration in software-defined networking

    Switch-controller assignment is an essential task in multi-controller software-defined networking. Static assignments are not practical because network dynamics are complex and difficult to predetermine. Since network load varies in both space and time, the mapping of switches to controllers should adapt to sudden changes in the network. To that end, switch migration plays an important role in maintaining a dynamic switch-controller mapping. Migrating switches from overloaded to underloaded controllers brings flexibility and adaptability to the network, but deciding which switches should be migrated to which controllers, while maintaining a balanced load across the network, is a challenging task. This work presents a heuristic approach with solution shaking to solve the switch migration problem. Shift and swap moves are incorporated within a search scheme, and every move is evaluated by how much benefit it gives to both the in-migration and out-migration controllers. The experimental results show that the proposed approach outperforms state-of-the-art approaches and improves load balancing results by up to ≈14% in some scenarios when compared to the most recent approach. In addition, the results show that the proposed work is more robust to controller failure than the state-of-the-art methods. This work was supported by the Portuguese Science and Technology Foundation (FCT) under UID/MULTI/00631/2019.
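
    The shift/swap search described in the abstract lends itself to a simple local-search sketch. The snippet below is only an illustration of the general idea, not the paper's algorithm: the controller and switch names, the benefit measure (reduction in load imbalance), and the stopping rule are all assumptions made for this example, and swap moves and solution shaking are only noted in comments.

    # Illustrative local search for switch migration (shift moves only).
    # The benefit function and data layout below are assumptions for this
    # sketch, not the formulation used in the paper.

    def imbalance(loads):
        """Benefit proxy: spread between the most and least loaded controller."""
        return max(loads.values()) - min(loads.values())

    def best_shift(assign, switch_load, loads):
        """Find the single switch move that most reduces the imbalance."""
        best = None
        for sw, src in assign.items():
            for dst in loads:
                if dst == src:
                    continue
                trial = dict(loads)
                trial[src] -= switch_load[sw]   # relief for the out-migration controller
                trial[dst] += switch_load[sw]   # extra load on the in-migration controller
                gain = imbalance(loads) - imbalance(trial)
                if gain > 1e-9 and (best is None or gain > best[0]):
                    best = (gain, sw, dst)
        return best

    def migrate(assign, switch_load, controllers, max_iters=100):
        loads = {c: 0.0 for c in controllers}
        for sw, c in assign.items():
            loads[c] += switch_load[sw]
        for _ in range(max_iters):
            move = best_shift(assign, switch_load, loads)
            if move is None:
                break  # a fuller version would also try swap moves and shake the solution here
            _, sw, dst = move
            loads[assign[sw]] -= switch_load[sw]
            loads[dst] += switch_load[sw]
            assign[sw] = dst
        return assign, loads

    Starting from the current assignment, the loop repeatedly applies the most beneficial shift until no improving move remains; the approach in the paper additionally uses swap moves and solution shaking to escape local optima.
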

    Datacenter Traffic Control: Understanding Techniques and Trade-offs

    Datacenters provide cost-effective and flexible access to the scalable compute and storage resources required for today's cloud computing needs. A typical datacenter is made up of thousands of servers connected by a large network and usually managed by one operator. To provide quality access to the variety of applications and services hosted on datacenters and to maximize performance, it is necessary to use datacenter networks effectively and efficiently. Datacenter traffic is often a mix of several classes with different priorities and requirements, including user-generated interactive traffic, traffic with deadlines, and long-running traffic. To this end, custom transport protocols and traffic management techniques have been developed to improve datacenter network performance. In this tutorial paper, we review the general architecture of datacenter networks, various topologies proposed for them, their traffic properties, general traffic control challenges in datacenters, and general traffic control objectives. The purpose of this paper is to bring out the important characteristics of traffic control in datacenters, not to survey all existing solutions (which is virtually impossible given the massive body of existing research). We hope to provide readers with a wide range of options and factors to consider when evaluating traffic control mechanisms. We discuss various characteristics of datacenter traffic control, including management schemes, transmission control, traffic shaping, prioritization, load balancing, multipathing, and traffic scheduling. Next, we point to several open challenges as well as new and interesting networking paradigms. At the end of this paper, we briefly review inter-datacenter networks, which connect geographically dispersed datacenters, have been receiving increasing attention recently, and pose interesting and novel research problems. Comment: Accepted for publication in IEEE Communications Surveys and Tutorials.
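
    As a concrete illustration of the class-based view of datacenter traffic, the sketch below implements a simple strict-priority dispatcher over the three classes named in the abstract. The class names, the priority ordering, and the scheduler itself are assumptions chosen for illustration; they are not a mechanism proposed in the paper.

    import heapq

    # Assumed priority ordering: interactive traffic first, deadline traffic
    # next, long-running (throughput-oriented) traffic last.
    PRIORITY = {"interactive": 0, "deadline": 1, "long_running": 2}

    class FlowScheduler:
        def __init__(self):
            self._queue = []
            self._seq = 0  # tie-breaker that preserves FIFO order within a class

        def enqueue(self, flow_id, traffic_class):
            heapq.heappush(self._queue, (PRIORITY[traffic_class], self._seq, flow_id))
            self._seq += 1

        def dequeue(self):
            """Return the next flow to serve, highest-priority class first."""
            if not self._queue:
                return None
            return heapq.heappop(self._queue)[2]

    sched = FlowScheduler()
    sched.enqueue("backup-42", "long_running")
    sched.enqueue("rpc-7", "interactive")
    print(sched.dequeue())  # "rpc-7": interactive traffic is served before the backup flow
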

    Energy-aware Load Balancing Policies for the Cloud Ecosystem

    The energy consumption of computer and communication systems does not scale linearly with the workload; a system uses a significant amount of energy even when idle or lightly loaded. A widely reported solution to resource management in large data centers is to concentrate the load on a subset of servers and, whenever possible, switch the remaining servers to one of the available sleep states. We propose a reformulation of the traditional concept of load balancing that aims to optimize the energy consumption of a large-scale system: distribute the workload evenly to the smallest set of servers operating at an optimal energy level, while observing QoS constraints such as the response time. Our model applies to clustered systems; it also requires that the demand for system resources increase at a bounded rate in each reallocation interval. In this paper we report the VM migration costs for application scaling. Comment: 10 pages.
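
    The stated policy, packing the workload onto the smallest set of servers running at an optimal energy level, can be summarized with a back-of-the-envelope calculation. The sketch below uses assumed numbers (identical servers and an optimal utilization of 80%); it is not the paper's model, only a sketch of the consolidation idea.

    import math

    # Pack the total load onto the fewest servers running near an assumed
    # optimal utilization, spread it evenly over them, and mark the rest
    # for a sleep state. QoS constraints such as response time are not
    # modeled here.
    def consolidate(total_load, capacity, n_servers, optimal_util=0.8):
        per_server = capacity * optimal_util
        active = min(n_servers, max(1, math.ceil(total_load / per_server)))
        share = total_load / active      # even share of load per active server
        sleeping = n_servers - active
        return active, share, sleeping

    # Example: 120 units of load, 10 servers of capacity 20 each.
    active, share, sleeping = consolidate(120, 20, 10)
    print(active, share, sleeping)  # 8 active servers at 15 units each, 2 put to sleep
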

    Exploiting Traffic Balancing and Multicast Efficiency in Distributed Video-on-Demand Architectures

    Distributed Video-on-Demand (DVoD) systems are proposed as a solution to the limited streaming capacity and null scalability of centralized systems. In a previous work, we proposed a fully distributed large-scale VoD architecture, called Double P-Tree, which has shown itself to be a good approach to the design of flexible and scalable DVoD systems. In this paper, we present relevant design aspects related to video mapping and traffic balancing in order to improve Double P-Tree architecture performance. Our simulation results demonstrate that these techniques yield a more efficient system and considerably increase its streaming capacity. The results also show the crucial importance of topology connectivity in improving multicasting performance in DVoD systems. Finally, a comparison among several DVoD architectures was performed using simulation, and the results show that the Double P-Tree architecture incorporating mapping and load balancing policies outperforms similar DVoD architectures. This work was supported by the MCyT-Spain under contract TIC 2001-2592 and partially supported by the Generalitat de Catalunya - Grup de Recerca Consolidat 2001SGR-00218.