
    A dynamic approach for load balancing

    We study how to reach a Nash equilibrium in a load balancing scenario where each task is managed by a selfish agent that attempts to migrate to the machine which will minimize its cost. The cost of a machine is a function of the load on it, and the load on a machine is the sum of the weights of the jobs running on it. We prove that Nash equilibria can be learned in such games with incomplete information, using Lyapunov techniques
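
    A minimal sketch of the migration dynamics described above, assuming the simplest cost model (a machine's cost equals its total load) and sequential best-response moves; the function and variable names are illustrative, not taken from the paper:

    # Best-response migration dynamics for selfish load balancing.
    # Assumption (not from the paper): machine cost = total load on it, and
    # agents move one at a time until no migration lowers any task's cost,
    # i.e. a pure Nash equilibrium for this simple cost model.
    import random

    def best_response_dynamics(weights, num_machines, seed=0):
        rng = random.Random(seed)
        assignment = [rng.randrange(num_machines) for _ in weights]
        load = [0.0] * num_machines
        for task, machine in enumerate(assignment):
            load[machine] += weights[task]

        moved = True
        while moved:
            moved = False
            for task, current in enumerate(assignment):
                w = weights[task]
                # Cost on a candidate machine = its load after migrating there.
                best = min(range(num_machines),
                           key=lambda m: load[m] + (0 if m == current else w))
                if best != current:
                    load[current] -= w
                    load[best] += w
                    assignment[task] = best
                    moved = True
        return assignment, load

    if __name__ == "__main__":
        tasks = [4.0, 3.0, 3.0, 2.0, 2.0, 1.0]
        print(best_response_dynamics(tasks, num_machines=3))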

    Joint Dynamic Radio Resource Allocation and Mobility Load Balancing in 3GPP LTE Multi-Cell Network

    Load imbalance, together with inefficient utilization of system resources, is a major cause of poor overall performance in Long Term Evolution (LTE) networks. In this paper, a novel scheme of joint dynamic resource allocation and load balancing is proposed to achieve a balanced performance improvement in 3rd Generation Partnership Project (3GPP) LTE Self-Organizing Networks (SON). The new method, which aims at maximizing network resource efficiency subject to inter-cell interference and intra-cell resource constraints, is implemented in two steps. In the first step, an efficient resource allocation, including user scheduling and power assignment, is conducted in a distributed manner to serve as many users in the whole network as possible. In the second step, based on the resource allocation scheme, the optimization objective, namely network resource efficiency, is calculated and load balancing is implemented by switching the user that maximizes the objective function. The method of Lagrange multipliers and a heuristic algorithm are used to solve the formulated optimization problem. Simulation results show that our algorithm achieves better performance in terms of user throughput, fairness, load balancing index and number of unsatisfied users compared with the traditional approach, which handles resource allocation and load balancing separately
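
    The second step lends itself to a short sketch: greedily hand over the single user whose switch to a neighbouring cell most increases the network resource-efficiency objective. The efficiency callback and the cell/user representation below are placeholders, not the paper's exact formulation:

    # Greedy user-switching step of a joint resource-allocation / load-balancing
    # scheme. The objective function is supplied by the caller; everything here
    # is an illustrative sketch, not the paper's algorithm.

    def best_user_switch(users, cells, efficiency):
        """users: {user_id: serving_cell}; efficiency(users) -> float (higher is better)."""
        base = efficiency(users)
        best_gain, best_move = 0.0, None
        for user, serving in users.items():
            for target in cells:
                if target == serving:
                    continue
                trial = dict(users)
                trial[user] = target
                gain = efficiency(trial) - base
                if gain > best_gain:
                    best_gain, best_move = gain, (user, target)
        return best_move  # None if no single switch improves the objective

    def load_balance(users, cells, efficiency):
        # Repeatedly apply the best single-user switch until no improvement remains.
        while True:
            move = best_user_switch(users, cells, efficiency)
            if move is None:
                return users
            user, target = move
            users[user] = target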

    Dynamic Weighted Round Robin Approach in Software-Defined Networks Using Pox Controller

    Load balancing is important for solving traffic overload problems in a network and has therefore been among the first appealing applications of Software-Defined Networking (SDN). Numerous SDN-based load-balancing approaches have been proposed to enhance the performance of SDN networks. However, network control becomes difficult to manage in large networks with hundreds of switches and routers. SDN offers a new way of building, controlling, and developing networks to address this situation: its central concept is to logically centralize network management in an SDN controller, which manages and observes the behaviour of the network. Several load-balancing policies are known, such as round robin (RR), random policy, and weighted randomized policy (WRP), each with its own benefits and drawbacks. This paper develops an advanced load-balancing algorithm, dynamic weighted round robin (DWRR), and runs it on top of the SDN controller. We then evaluate the proposed approach by comparing it with the existing round-robin (RR) and weighted round-robin (WRR) approaches. The Mininet tool is used for the investigation, and the POX controller serves as the control plane
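
    A rough sketch of a dynamic weighted round-robin selector of the kind described above, with weights recomputed from observed server load; the load metric and weight formula are assumptions, not details of the POX-based implementation:

    # Dynamic weighted round robin (DWRR): weights are periodically recomputed
    # from measured server load, so lightly loaded servers receive more turns.
    import itertools

    class DynamicWRR:
        def __init__(self, servers):
            self.servers = list(servers)
            self.weights = {s: 1 for s in self.servers}
            self._cycle = self._build_cycle()

        def _build_cycle(self):
            # Expand each server according to its integer weight, then cycle.
            expanded = [s for s in self.servers for _ in range(self.weights[s])]
            return itertools.cycle(expanded)

        def update_weights(self, load):
            # Assumed rule: weight grows with the gap to the most loaded server.
            max_load = max(load.values()) or 1
            self.weights = {s: max(1, round(1 + (max_load - load[s])))
                            for s in self.servers}
            self._cycle = self._build_cycle()

        def pick(self):
            return next(self._cycle)

    if __name__ == "__main__":
        lb = DynamicWRR(["h1", "h2", "h3"])
        lb.update_weights({"h1": 2, "h2": 8, "h3": 5})  # e.g. active connections
        print([lb.pick() for _ in range(10)])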

    rDLB: A Novel Approach for Robust Dynamic Load Balancing of Scientific Applications with Parallel Independent Tasks

    Scientific applications often contain large and computationally intensive parallel loops. Dynamic loop self-scheduling (DLS) is used to achieve a load-balanced execution of such applications on high performance computing (HPC) systems. Large HPC systems are vulnerable to processor or node failures and to perturbations in the availability of resources. Most self-scheduling approaches do not consider fault-tolerant scheduling, or they depend on failure or perturbation detection and react by rescheduling failed tasks. In this work, a robust dynamic load balancing (rDLB) approach is proposed for the robust self-scheduling of independent tasks. The proposed approach is proactive and does not depend on failure or perturbation detection. The theoretical analysis shows that it is linearly scalable and that its cost decreases quadratically with increasing system size. rDLB is integrated into an MPI DLS library to evaluate its performance experimentally with two computationally intensive scientific applications. Results show that rDLB tolerates up to P − 1 processor failures, where P is the number of processors executing an application. In the presence of perturbations, rDLB boosted the robustness of DLS techniques by up to a factor of 30 and decreased application execution time by up to a factor of 7 compared with their counterparts without rDLB
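
    The proactive idea can be sketched in a few lines: a worker that runs out of unscheduled tasks re-executes tasks whose results have not yet been reported, so no failure detection is needed. The data structures below are illustrative, not the MPI library's actual interface:

    # Sketch of proactive rescheduling without failure detection. If the worker
    # originally assigned an unconfirmed task has failed or slowed down, the
    # duplicate covers it; otherwise the first completion to arrive simply wins.

    def next_task(unscheduled, in_flight, completed):
        """Return a task id for an idle worker, or None when everything is done."""
        if unscheduled:
            return unscheduled.pop(0)              # normal self-scheduling path
        pending = [t for t in in_flight if t not in completed]
        if pending:
            return pending[0]                      # proactive re-execution
        return None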

    Load balancing of communication channels with the use of routing protocols

    In this article the authors propose a method for load balancing of network resources for the case in which routing protocols are used. The first part of the article presents currently used algorithms for load balancing and the possibilities for modifying them. Through the introduction of additional hardware components for each node, an agent and a probe, it is possible to monitor and control the current system performance. The whole analyzed network is treated as a complex system. This makes it possible to eliminate overloading of route nodes through ongoing analysis of the optimal operating point for a given node. Load balancing can be achieved using a modified ECMP mechanism. The proposed approach allows for dynamic adjustment of load to network resources and thus effectively balances network traffic
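
    One way to picture the modified ECMP mechanism is a next-hop selector whose weights come from the probe-reported utilisation collected by each node's agent; the inverse-load weighting rule below is an assumption, not the article's exact formula:

    # Load-aware ECMP sketch: flows are hashed onto equal-cost next hops with
    # weights inversely proportional to reported link utilisation, so one flow
    # stays on the same path between weight updates.
    import hashlib

    def pick_next_hop(flow_id, next_hops, utilisation):
        """next_hops: equal-cost next hops; utilisation: {hop: 0.0..1.0}."""
        weights = {h: max(0.05, 1.0 - utilisation.get(h, 0.0)) for h in next_hops}
        total = sum(weights.values())
        # Hash the flow id to a stable point in [0, total).
        digest = int(hashlib.sha256(flow_id.encode()).hexdigest(), 16)
        point = (digest % 10_000) / 10_000 * total
        for hop in next_hops:
            point -= weights[hop]
            if point < 0:
                return hop
        return next_hops[-1]

    if __name__ == "__main__":
        print(pick_next_hop("10.0.0.1:5000->10.0.1.7:80",
                            ["r2", "r3", "r4"],
                            {"r2": 0.9, "r3": 0.4, "r4": 0.1}))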

    Dynamic distributed clustering in wireless sensor networks via Voronoi tessellation control

    This paper presents two dynamic and distributed clustering algorithms for Wireless Sensor Networks (WSNs). Clustering approaches are used in WSNs to improve the network lifetime and scalability by balancing the workload among the clusters. Each cluster is managed by a cluster head (CH) node. The first algorithm requires the CH nodes to be mobile: by dynamically varying the CH node positions, the algorithm is proved to converge to a specific partition of the mission area, the generalised Voronoi tessellation, in which the loads of the CH nodes are balanced. Conversely, if the CH nodes are fixed, a weighted Voronoi clustering approach is proposed with the same load-balancing objective: a reinforcement learning approach is used to dynamically vary the mission space partition by controlling the weights of the Voronoi regions. Numerical simulations are provided to validate the approaches
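
    For the fixed-CH case, a compact sketch of weighted Voronoi clustering: sensors are assigned to the head minimising distance minus the head's weight, and weights are nudged so that overloaded clusters shrink. The simple proportional weight update below stands in for the paper's reinforcement-learning controller:

    # Weighted Voronoi clustering with fixed cluster heads (CHs). The weight
    # update rule is an illustrative placeholder, not the paper's method.
    import math

    def assign(sensors, heads, weights):
        """Assign each sensor to the head minimising distance minus its weight."""
        clusters = {h: [] for h in heads}
        for s in sensors:
            best = min(heads, key=lambda h: math.dist(s, heads[h]) - weights[h])
            clusters[best].append(s)
        return clusters

    def update_weights(clusters, weights, target, step=0.1):
        # Grow the region of under-loaded heads, shrink over-loaded ones,
        # driving all cluster loads toward the common target.
        return {h: weights[h] + step * (target - len(clusters[h])) for h in weights}

    if __name__ == "__main__":
        heads = {"A": (0.0, 0.0), "B": (10.0, 0.0)}
        sensors = [(x, 0.0) for x in range(10)]
        w = {"A": 0.0, "B": 0.0}
        for _ in range(20):
            c = assign(sensors, heads, w)
            w = update_weights(c, w, target=len(sensors) / len(heads))
        print({h: len(c[h]) for h in c})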

    Design of robust scheduling methodologies for high performance computing

    Scientific applications are often large, complex, computationally intensive, and irregular. Loops are often an abundant source of parallelism in scientific applications. Due to the ever-increasing computational needs of scientific applications, high performance computing (HPC) systems have become larger and more complex, offering increased parallelism at multiple hardware levels. Load imbalance, caused by irregular computational load per task and unpredictable computing system characteristics (system variability), often degrades the performance of applications. Moreover, perturbations, such as reduced computing power, increased network latency, reduced resource availability, or failures, can severely impact application performance. System variability and perturbations are only expected to increase in future extreme-scale computing systems: extrapolating the current failure rate to Exascale would result in a failure every 20 minutes, a rate that would render such systems unusable.

    This doctoral thesis improves the performance of computationally intensive scientific applications on HPC systems via robust load balancing. Robust scheduling ensures and maintains an improved, load-balanced execution under unpredictable application and system characteristics. A number of dynamic loop self-scheduling (DLS) techniques were introduced and successfully used in scientific applications between the 1980s and 2000s; as originally introduced, these DLS techniques are not fault-tolerant. In this thesis, we identify three major research questions towards robust scheduling: (1) How can we ensure that the DLS techniques employed in scientific applications today adhere to their original design goals and specifications? (2) How can we select a DLS technique that will achieve improved performance under perturbations? (3) How can we tolerate perturbations during execution and maintain a load-balanced execution on HPC systems?

    To answer the first question, we reproduced the original experiments that introduced the DLS techniques in order to verify their present implementation. Simulation is used to reproduce experiments on systems from the past, and realistic simulation yields analysis and conclusions similar to those drawn from native results. To this end, we devised an approach for bridging the native and simulative executions of parallel applications on HPC systems; this simulation approach is used to reproduce scheduling experiments on past and present systems to verify the implementation of DLS techniques. Given the multiple levels of parallelism offered by present HPC systems, we analyzed the load imbalance in scientific applications, from computer vision, astrophysics, and mathematical kernels, at both the thread and process levels. This analysis revealed a significant interplay between thread-level and process-level load balancing: dynamic load balancing at the thread level propagates to the process level and vice versa, yet the best application performance is only achieved by two-level dynamic load balancing. Next, we examined the performance of applications under perturbations and found that the most robust DLS technique does not deliver the best performance under all perturbations; the most efficient DLS technique changes with the application, the system, or the perturbations during execution. This signifies the algorithm selection problem in DLS.

    We leveraged realistic simulations to address the algorithm selection problem of scheduling under perturbations via a simulation-assisted approach (SimAS), which answers the second question. SimAS dynamically selects DLS techniques that improve the performance depending on the application, the system, and the perturbations during execution. To answer the third question, we introduced a robust dynamic load balancing (rDLB) approach for the robust self-scheduling of scientific applications under failures. rDLB proactively reschedules already allocated tasks and requires no detection of perturbations. It tolerates up to P − 1 processor failures (P is the number of processors allocated to the application) and boosts the flexibility of applications against nonfatal perturbations, such as reduced availability of resources.

    This thesis is the first to provide insights into the interplay between thread- and process-level dynamic load balancing in scientific applications. The verified DLS techniques, SimAS, and rDLB are integrated into an MPI-based dynamic load balancing library (DLS4LB), which supports thirteen DLS techniques, for robust dynamic load balancing of scientific applications on HPC systems. Using the methods devised in this thesis, we improved the performance of scientific applications by up to 21% via two-level dynamic load balancing; under perturbations, we enhanced their performance by a factor of 7 and their flexibility by a factor of 30. This thesis opens up horizons in understanding the interplay of load balancing between various levels of software parallelism and lays the groundwork for robust multilevel scheduling for the upcoming Exascale HPC systems and beyond
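
    The SimAS idea can be illustrated with a small selection routine: at a scheduling point, the remaining work is simulated under each candidate DLS technique using the currently observed system state, and the technique with the lowest predicted finish time is adopted. The simulate callback and technique names below are placeholders, not the thesis's actual tooling:

    # Simulation-assisted technique selection (illustrative sketch).

    def select_technique(candidates, remaining_tasks, system_state, simulate):
        """simulate(technique, tasks, state) -> predicted parallel finish time."""
        predictions = {t: simulate(t, remaining_tasks, system_state) for t in candidates}
        return min(predictions, key=predictions.get)

    # Toy simulator that only models per-chunk scheduling overhead; a real
    # simulator would model the system and perturbations in detail.
    def toy_simulate(technique, tasks, state):
        overhead = {"STATIC": 0.0, "SS": 1.0, "FAC": 0.3, "GSS": 0.4}[technique]
        work = sum(tasks) / (state["procs"] * state["speed_factor"])
        return work + overhead * len(tasks) / state["procs"]

    if __name__ == "__main__":
        best = select_technique(["STATIC", "SS", "FAC", "GSS"],
                                remaining_tasks=[1.0] * 1000,
                                system_state={"procs": 16, "speed_factor": 0.8},
                                simulate=toy_simulate)
        print(best)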

    Dynamic load balancing via thread migration

    Light-weight threads are becoming increasingly useful for parallel processing, particularly when they run in a distributed memory environment. Light-weight threads can be used to support latency-hiding techniques, overlap of communication and computation, and functional parallelism. Additionally, dynamic migration of light-weight threads supports both data locality and load balancing. Designing a thread migration mechanism presents some unique and interesting challenges. One such challenge is maintaining communication between mobile threads. A potentially more difficult challenge involves maintaining the correctness of pointers within mobile threads: since traditional pointers have no concept of address space, moving threads from processor to processor has a strong impact on the use of pointers. Options for dealing with pointers include restricting their use, adding a layer of software to support pointers that reference non-local data, and binding data to threads such that referenced data is always local to the thread. This dissertation presents the design and implementation of Chant, an efficient light-weight threads package which runs in a distributed memory environment. Chant was designed and implemented as a runtime system using MPI-like and Pthreads-like calls, and it supports point-to-point message passing between threads executing in distributed address spaces. We focus on the use of Chant as a framework to support dynamic load balancing based on thread migration. We explore many of the issues which arise when designing and implementing a thread migration mechanism, as well as the issues which arise when considering thread migration as a means for performing dynamic load balancing. This load balancing framework uses both system state information, including communication history, and user input; one of its basic functionalities is the ability of the user to customize the load balancing to fit particular classes of problems. The dissertation provides implementation details as well as discussion and justification of design choices. We go on to show that the overhead associated with our approach is within an acceptable range, and that significant performance gains can be achieved through the use of thread migration as a means of performing dynamic load balancing
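
    A sketch of the kind of migration decision the dissertation describes, combining processor load with per-thread communication history to pick a thread worth moving from the most to the least loaded processor; the scoring rule and threshold are illustrative user policy, not Chant's built-in mechanism:

    # Thread-migration decision using load and communication history.

    def choose_migration(proc_load, thread_home, comm_volume, threshold=1.5):
        """proc_load: {proc: load}; thread_home: {thread: proc};
        comm_volume: {(thread, proc): bytes recently exchanged with that proc}."""
        src = max(proc_load, key=proc_load.get)
        dst = min(proc_load, key=proc_load.get)
        if src == dst:
            return None
        if proc_load[dst] > 0 and proc_load[src] / proc_load[dst] < threshold:
            return None  # imbalance too small to justify a migration
        candidates = [t for t, home in thread_home.items() if home == src]
        if not candidates:
            return None
        # Prefer threads that already communicate mostly with the destination,
        # so migration improves both balance and data locality.
        best = max(candidates,
                   key=lambda t: comm_volume.get((t, dst), 0) - comm_volume.get((t, src), 0))
        return best, src, dst

    if __name__ == "__main__":
        print(choose_migration({"p0": 10, "p1": 2},
                               {"t1": "p0", "t2": "p0", "t3": "p1"},
                               {("t1", "p1"): 800, ("t2", "p1"): 50}))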
