
    Load Balancing Mechanisms in the Software Defined Networks: A Systematic and Comprehensive Review of the Literature

    With the expansion of networks and the growth in their number of users, as well as emerging technologies such as cloud computing and big data, managing traditional networks has become difficult. Therefore, it is necessary to change the traditional network architecture. To address this issue, a notion named software-defined networking (SDN) has recently been proposed, which makes network management more flexible. Because network resources are limited and quality-of-service requirements must be met, one issue that must be considered is load balancing, which distributes data traffic among multiple resources in order to maximize the efficiency and reliability of network resources. In conventional networks, load balancing is based on local network information and is therefore not very precise. SDN controllers, however, have a global view of the network and can produce more optimized load balancing. Although load balancing mechanisms are important in SDN, to the best of our knowledge there exists no precise and systematic review or survey investigating them. Hence, this paper systematically reviews the load balancing mechanisms used in SDN according to two categories, deterministic and non-deterministic. It also presents the benefits and weaknesses of the selected load balancing algorithms and examines the metrics they use. In addition, the important challenges of these algorithms are reviewed so that researchers can apply better load balancing techniques in the future. © 2018 IEEE
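
    To make the surveyed idea concrete, the following is a minimal sketch of a deterministic, controller-side strategy: each new flow is assigned to the currently least-loaded backend, a decision made possible by the controller's global view. This is an illustrative example only, not an algorithm from the surveyed papers; the Backend, Controller and assign_flow names and the load metric are assumptions.

    # Minimal sketch (not from the surveyed papers): a controller with a global
    # view picks the least-loaded backend for each new flow, illustrating the
    # kind of decision a deterministic SDN load-balancing mechanism can make.
    from dataclasses import dataclass, field

    @dataclass
    class Backend:
        name: str
        load: float = 0.0  # hypothetical utilization metric (e.g., active flow demand)

    @dataclass
    class Controller:
        backends: list = field(default_factory=list)

        def assign_flow(self, flow_demand: float) -> Backend:
            # Global view: choose the backend with the lowest current load.
            target = min(self.backends, key=lambda b: b.load)
            target.load += flow_demand
            return target

    if __name__ == "__main__":
        ctrl = Controller([Backend("s1"), Backend("s2"), Backend("s3")])
        for demand in [1.0, 0.5, 2.0, 1.5]:
            chosen = ctrl.assign_flow(demand)
            print(f"flow ({demand}) -> {chosen.name}, load now {chosen.load}")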

    A multi-domain VNE algorithm based on load balancing in the IoT networks

    The coordinated development of big data, the Internet of Things, cloud computing and other technologies has led to exponential growth in Internet services. However, the traditional Internet architecture has gradually become rigid because the network structure is bound to the hardware. In high-traffic environments, it is no longer sufficient to meet increasing quality-of-service requirements. Network virtualization is considered an effective way to overcome this rigidity, and virtual network embedding is one of its key problems. Since virtual network mapping is an NP-hard problem, a large body of research has focused on evolutionary approaches, most notably the genetic algorithm. However, parameter settings in traditional methods depend too heavily on experience, and their low flexibility makes them unable to adapt to increasingly complex network environments. In addition, link-mapping strategies that do not consider load balancing can easily cause link blocking in high-traffic environments. In IoT settings involving medical, disaster relief, life support and other equipment, network performance and stability are particularly important. Therefore, how to provide a more flexible virtual network mapping service in a heterogeneous, high-traffic network environment is an urgent problem. To address this problem, a virtual network mapping strategy based on a hybrid genetic algorithm is proposed. The strategy uses a dynamically calculated crossover probability and a pheromone-based mutation gene selection strategy to improve the flexibility of the algorithm. In addition, a weight update mechanism based on load balancing is introduced to reduce the probability of mapping failure while balancing the load. Simulation results show that the proposed method performs well on a number of metrics, including average mapping quotation, link load balancing, mapping cost-benefit ratio, acceptance rate and running time.
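
    The following sketch illustrates the general shape of a load-balance-aware hybrid genetic algorithm for node embedding, under stated assumptions: fitness penalizes substrate-load imbalance and capacity violations, and the crossover probability is recomputed each generation from population diversity. It is not the authors' algorithm; the diversity rule and fitness function are assumptions, and the pheromone-based mutation gene selection is simplified here to uniform random mutation.

    # Illustrative sketch of a load-balance-aware hybrid GA for node embedding.
    # Each gene assigns one virtual node to a substrate node; lower fitness is better.
    import random

    VIRTUAL_DEMANDS = [2, 3, 1, 4, 2]      # CPU demand of each virtual node (toy data)
    SUBSTRATE_CAPACITY = [8, 8, 8]         # capacity of each substrate node (toy data)
    POP_SIZE, GENERATIONS = 30, 60

    def random_individual():
        return [random.randrange(len(SUBSTRATE_CAPACITY)) for _ in VIRTUAL_DEMANDS]

    def fitness(ind):
        # Load per substrate node; penalize imbalance and capacity violations.
        load = [0.0] * len(SUBSTRATE_CAPACITY)
        for vnode, snode in enumerate(ind):
            load[snode] += VIRTUAL_DEMANDS[vnode]
        util = [l / c for l, c in zip(load, SUBSTRATE_CAPACITY)]
        penalty = sum(max(0.0, u - 1.0) for u in util) * 10.0
        return (max(util) - min(util)) + penalty

    def dynamic_crossover_probability(pop, p_min=0.4, p_max=0.9):
        # Assumed rule: the less diverse the population, the more crossover.
        diversity = len({tuple(i) for i in pop}) / len(pop)
        return p_max - (p_max - p_min) * diversity

    def crossover(a, b):
        cut = random.randint(1, len(a) - 1)
        return a[:cut] + b[cut:]

    def mutate(ind):
        # Simplified stand-in for pheromone-based gene selection.
        child = ind[:]
        child[random.randrange(len(child))] = random.randrange(len(SUBSTRATE_CAPACITY))
        return child

    population = [random_individual() for _ in range(POP_SIZE)]
    for _ in range(GENERATIONS):
        population.sort(key=fitness)                     # best (lowest) first
        p_cross = dynamic_crossover_probability(population)
        survivors = population[: POP_SIZE // 2]
        children = []
        while len(survivors) + len(children) < POP_SIZE:
            a, b = random.sample(survivors, 2)
            child = crossover(a, b) if random.random() < p_cross else a[:]
            children.append(mutate(child))
        population = survivors + children

    best = min(population, key=fitness)
    print("best mapping:", best, "imbalance score:", round(fitness(best), 3))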

    Big Data for Traffic Engineering in Software-Defined Networks

    Software-defined networking overcomes the limitations of traditional networks by splitting the control plane from the data plane. The logic of the network is moved to a component called the controller, which manages the devices in the data plane. To implement this architecture, it has become the norm to use the OpenFlow (OF) protocol, which defines several counters maintained by network devices. These counters are the starting point for Traffic Engineering (TE) activities. TE monitors several network parameters, including network bandwidth utilization. A great challenge for TE is to collect and generate statistics about bandwidth utilization for monitoring and traffic analysis; this becomes even more challenging if fine-grained monitoring is required. Network management tasks such as network provisioning, capacity planning, load balancing, and anomaly detection can benefit from such fine-grained monitoring. Because the counters are updated for every packet that crosses a switch, they must be retrieved in a streaming fashion. This scenario suggests the use of Big Data streaming techniques to collect and process counter values. Therefore, this paper proposes an approach based on a fine-grained Big Data monitoring method to collect and generate traffic statistics from counter values, which can significantly support TE. The approach provides a more detailed view of network resource utilization because it can deliver both individual and aggregated statistical analyses of bandwidth consumption. Experimental results show the effectiveness of the proposed method.
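
    The following is a minimal sketch of the counter-based monitoring idea: cumulative byte counters sampled from switches arrive as a stream of records, and successive samples are turned into per-port bandwidth estimates plus a simple aggregate. The record layout, the bandwidth_stream helper and the synthetic samples are illustrative assumptions; the paper's actual pipeline builds on Big Data streaming frameworks and real OpenFlow statistics.

    # Hedged sketch: turn a stream of cumulative byte-counter samples into
    # per-port bandwidth estimates and a per-switch aggregate view.
    def bandwidth_stream(counter_stream):
        """Yield (switch, port, bits_per_second) from cumulative byte counters."""
        last = {}  # (switch, port) -> (byte_count, timestamp)
        for switch, port, byte_count, ts in counter_stream:
            key = (switch, port)
            if key in last:
                prev_bytes, prev_ts = last[key]
                interval = ts - prev_ts
                if interval > 0:
                    yield switch, port, (byte_count - prev_bytes) * 8 / interval
            last[key] = (byte_count, ts)

    if __name__ == "__main__":
        # Synthetic counter samples standing in for OpenFlow port-stats replies.
        samples = [
            ("s1", 1, 1_000_000, 0.0), ("s1", 2, 500_000, 0.0),
            ("s1", 1, 2_250_000, 1.0), ("s1", 2, 900_000, 1.0),
            ("s1", 1, 3_250_000, 2.0), ("s1", 2, 1_800_000, 2.0),
        ]
        last_rate = {}
        for switch, port, bps in bandwidth_stream(samples):
            last_rate[(switch, port)] = bps
            print(f"{switch} port {port}: {bps / 1e6:.2f} Mbit/s")
        # Aggregate view: sum of the most recent per-port rates on switch s1.
        total = sum(v for (sw, _), v in last_rate.items() if sw == "s1")
        print(f"s1 aggregate: {total / 1e6:.2f} Mbit/s")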

    The Simulation Model Partitioning Problem: an Adaptive Solution Based on Self-Clustering (Extended Version)

    This paper is about partitioning in parallel and distributed simulation, that is, decomposing the simulation model into a number of components and properly allocating them to the execution units. An adaptive solution based on self-clustering, which considers both communication reduction and computational load balancing, is proposed. The implementation of the proposed mechanism is tested using a simulation model that is challenging in terms of both structure and dynamicity. Various configurations of the simulation model and the execution environment have been considered, and the obtained performance results are analyzed using a reference cost model. The results demonstrate that the proposed approach is promising and that it can reduce the simulation execution time on both parallel and distributed architectures.
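
    A hedged sketch of the general self-clustering idea, not the paper's implementation: in each adaptive step an entity considers migrating to the execution unit that hosts most of its communication partners, but only if the move keeps that unit's load below an imbalance threshold, capturing the trade-off between communication reduction and computational load balancing. The self_clustering_step signature, the imbalance limit and the toy data are assumptions.

    # Illustrative self-clustering step: migrate entities toward their
    # communication partners while respecting a load-imbalance limit.
    from collections import Counter

    def self_clustering_step(placement, interactions, num_units, imbalance_limit=1.3):
        """One adaptive step.

        placement: entity -> execution unit, interactions: entity -> list of partners.
        """
        loads = Counter(placement.values())
        avg_load = len(placement) / num_units
        new_placement = dict(placement)
        for entity, partners in interactions.items():
            if not partners:
                continue
            # Preferred unit: where most communication partners currently live.
            partner_units = Counter(new_placement[p] for p in partners)
            target, _ = partner_units.most_common(1)[0]
            current = new_placement[entity]
            if target != current and loads[target] + 1 <= avg_load * imbalance_limit:
                new_placement[entity] = target   # migrate entity toward its partners
                loads[target] += 1
                loads[current] -= 1
        return new_placement

    if __name__ == "__main__":
        placement = {"a": 0, "b": 1, "c": 1, "d": 0, "e": 1}
        interactions = {"a": ["b", "c"], "b": ["a"], "c": ["a"], "d": ["e"], "e": ["d"]}
        print(self_clustering_step(placement, interactions, num_units=2))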