49 research outputs found

    Task swapping networks in distributed systems

    In this paper we propose task swapping networks for task reassignment in distributed systems. Some classes of task reassignments are achieved by iterative local task swaps between software agents. We use group-theoretic methods to find a minimum-length sequence of adjacent task swaps needed to go from a source task assignment to a target task assignment in task swapping networks of several well-known topologies. Comment: This is a preprint of a paper whose final and definite form is published in: Int. J. Comput. Math. 90 (2013), 2221-2243 (DOI: 10.1080/00207160.2013.772985).
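
    A minimal sketch of the combinatorial idea behind this abstract, restricted to a path (line) topology with unique task labels: there, transforming a source assignment into a target one by adjacent swaps amounts to sorting a permutation by adjacent transpositions, so a bubble-sort pass yields a minimum-length swap sequence. The helper name and data below are illustrative, not the paper's notation or its group-theoretic method.

        # Python sketch: minimum-length adjacent-swap sequence on a line of agents.
        def adjacent_swap_sequence(source, target):
            """Return a shortest list of adjacent swaps (i, i+1) turning `source`
            into `target`; both are lists of the same unique task labels."""
            pos_in_target = {task: i for i, task in enumerate(target)}
            perm = [pos_in_target[task] for task in source]   # target slot of each task
            swaps = []
            changed = True
            while changed:                                    # bubble sort: its swap count
                changed = False                               # equals the inversion count,
                for i in range(len(perm) - 1):                # which is the known minimum
                    if perm[i] > perm[i + 1]:
                        perm[i], perm[i + 1] = perm[i + 1], perm[i]
                        swaps.append((i, i + 1))
                        changed = True
            return swaps

        # Example: reach ['A', 'B', 'C'] from ['C', 'A', 'B'] in two adjacent swaps.
        print(adjacent_swap_sequence(['C', 'A', 'B'], ['A', 'B', 'C']))   # [(0, 1), (1, 2)]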

    Load Balancing in Cloud Computing: A Survey on Popular Techniques and Comparative Analysis

    Cloud computing is widely regarded as one of the fastest-growing fields in web technologies today. With the increasing popularity of the cloud, popular websites' servers are becoming overloaded with high request volumes from users. One of the main challenges in cloud computing is load balancing on servers. Load balancing is the procedure of sharing load between multiple processors in a distributed environment to minimize the turnaround time taken by servers to handle service requests and to make better use of the available resources. It helps greatly in scenarios where workload is unevenly distributed, with some machines heavily loaded while others remain under-loaded or idle. Load balancing methods ensure that every VM or server in the network maintains workload equilibrium and carries load according to its capacity at any instant. Static and dynamic load balancing are the main techniques for balancing load on servers. This paper presents a brief discussion of different load balancing schemes and a comparison of the principal techniques.
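
    To make the static/dynamic distinction in the survey concrete, here is a minimal sketch with made-up VM names: a static policy (round-robin) fixes the dispatch order in advance, while a dynamic policy (least-loaded) consults the current load before every dispatch. Real cloud balancers add health checks, weights, and migration on top of this.

        # Python sketch: static round-robin vs. dynamic least-loaded dispatch.
        import itertools

        servers = ["vm-1", "vm-2", "vm-3"]            # hypothetical VM pool
        current_load = {s: 0 for s in servers}        # active work per VM

        rr = itertools.cycle(servers)                 # static: ignores runtime load

        def assign_static():
            """Round-robin: same order no matter how busy each VM is."""
            return next(rr)

        def assign_dynamic():
            """Least-loaded: pick the VM with the smallest current load."""
            return min(servers, key=lambda s: current_load[s])

        def dispatch(request_cost):
            target = assign_dynamic()
            current_load[target] += request_cost
            return target

        print(dispatch(5), dispatch(1), dispatch(1))  # vm-1 vm-2 vm-3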

    Heuristics for Client Assignment and Load Balancing Problems in Online Games

    Massively multiplayer online games (MMOGs) have been very popular over the past decade. The infrastructure necessary to support a large number of players simultaneously playing these games raises interesting problems. Since the computations involved in solving those problems must be done while the game is being played, they should not be so expensive that they cause noticeable slowdown, as this would lead to poor player perception of the game. Many of the problems in MMOGs are NP-Hard or NP-Complete, so we must develop heuristics that do not degrade the player experience through excessive computation. In this dissertation, we focus on a few of the problems encountered in MMOGs, namely the Client Assignment Problem (CAP) and both centralized and distributed load balancing, and develop heuristics for each. For the CAP we investigate how best to assign players to servers while meeting several conditions for satisfactory play, while in load balancing we investigate how best to distribute load among game servers subject to several criteria. In particular, we develop three heuristics: a heuristic for a variant of the CAP called Offline CAP-Z, a heuristic for centralized load balancing called BreakpointLB, and a heuristic for distributed load balancing called PLGR. We develop a simulator that models the operation of an MMOG and implement our heuristics to measure their performance against heuristics adapted from the literature. We find that in many cases we produce better results than those adapted heuristics, showing promise for deployment in production environments. We believe these ideas could also be adapted to the many other problems that arise in MMOGs, and they merit further consideration and augmentation in future research.
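
    As a rough illustration of the client-assignment side of this problem (and not the dissertation's CAP-Z, BreakpointLB, or PLGR heuristics), the sketch below greedily sends each player to the lowest-latency server that still has capacity; all names and numbers are invented.

        # Python sketch: greedy client-to-server assignment under capacity limits.
        def greedy_assign(players, servers, latency, capacity):
            """latency[(p, s)] is the measured latency of player p to server s;
            capacity[s] is the maximum number of players server s may host."""
            assignment = {}
            load = {s: 0 for s in servers}
            for p in players:
                candidates = [s for s in servers if load[s] < capacity[s]]
                if not candidates:
                    raise RuntimeError("no server capacity left")
                best = min(candidates, key=lambda s: latency[(p, s)])
                assignment[p] = best
                load[best] += 1
            return assignment

        latency = {("p1", "us-east"): 20, ("p1", "eu-west"): 90,
                   ("p2", "us-east"): 85, ("p2", "eu-west"): 25,
                   ("p3", "us-east"): 30, ("p3", "eu-west"): 95}
        print(greedy_assign(["p1", "p2", "p3"], ["us-east", "eu-west"],
                            latency, {"us-east": 2, "eu-west": 2}))
        # {'p1': 'us-east', 'p2': 'eu-west', 'p3': 'us-east'}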

    Load Balancing Scientific Applications

    The largest supercomputers have millions of independent processors, and concurrency levels are rapidly increasing. For ideal efficiency, developers of the simulations that run on these machines must ensure that computational work is evenly balanced among processors. Assigning work evenly is challenging because many large modern parallel codes simulate the behavior of physical systems that evolve over time, so their workloads change as the simulation progresses. Furthermore, the cost of imbalanced load increases with scale because most large-scale scientific simulations today use a Single Program Multiple Data (SPMD) parallel programming model, and an increasing number of processors will wait for the slowest one at the synchronization points. To address load imbalance, many large-scale parallel applications use dynamic load balance algorithms to redistribute work evenly. The research objective of this dissertation is to develop methods to decide when and how to load balance the application, and to balance it effectively and affordably. We measure and evaluate the computational load of the application, and develop strategies to decide when and how to correct the imbalance. Depending on the simulation, a fast, local load balance algorithm may be suitable, or a more sophisticated and expensive algorithm may be required. We developed a model that compares load balance algorithms for a specific state of the simulation and enables selection of the balancing algorithm that will minimize overall runtime. Dynamic load balancing of parallel applications becomes more critical at scale, but it also becomes more expensive. To make the load balance correction affordable at scale, we propose a lazy load balancing strategy that evaluates the imbalance and computes the new assignment of work to processes asynchronously with respect to the main application computation. We decouple the load balance algorithm from the application and run it on potentially fewer, separate processors. In this Multiple Program Multiple Data (MPMD) configuration, the load balance algorithm can execute concurrently with the application and with higher parallel efficiency than if it were run on the same processors as the simulation. Work is reassigned lazily as directions become available, and the application need not wait for the load balance algorithm to complete. We show that we can save resources by running a load balance algorithm at higher parallel efficiency on a smaller number of processors. Using our framework, we explore the trade-offs of load balancing configurations and demonstrate a performance improvement of up to 46%.
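
    The "when to rebalance" decision described above can be pictured with a toy cost model (much simpler than the dissertation's): rebalance only if the time the slowest process will cost the others before the next check exceeds the price of running the balancer. The numbers and threshold below are illustrative.

        # Python sketch: a toy imbalance metric and rebalancing decision for SPMD codes.
        def imbalance(loads):
            """Ratio of the slowest rank to the average; 1.0 means perfect balance."""
            return max(loads) / (sum(loads) / len(loads))

        def should_rebalance(loads, steps_until_next_check, balance_cost):
            avg = sum(loads) / len(loads)
            # Every SPMD step waits for the slowest rank, so roughly (max - avg)
            # seconds are wasted per step until the imbalance is corrected.
            projected_waste = (max(loads) - avg) * steps_until_next_check
            return projected_waste > balance_cost

        loads = [9.0, 10.0, 10.5, 14.0]   # seconds of work per step, per rank
        print(round(imbalance(loads), 2))                                           # 1.29
        print(should_rebalance(loads, steps_until_next_check=50, balance_cost=60))  # True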

    Production Scheduling

    Generally speaking, scheduling is the procedure of efficiently mapping a set of tasks or jobs (the studied objects) to a set of target resources. More specifically, as part of a larger planning and scheduling process, production scheduling is essential for the proper functioning of a manufacturing enterprise. This book presents ten chapters divided into five sections. Section 1 discusses rescheduling strategies, policies, and methods for production scheduling. Section 2 presents two chapters on flow shop scheduling. Section 3 describes heuristic and metaheuristic methods for treating the scheduling problem efficiently. In addition, two test cases are presented in Section 4: the first uses simulation, while the second shows a real implementation of a production scheduling system. Finally, Section 5 presents some modeling strategies for building production scheduling systems. This book will be of interest to those working in the decision-making branches of production, in various operational research areas, and in computational methods design. Readers from diverse backgrounds, ranging from academia and research to industry, can take advantage of this volume.
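
    For readers new to the flow shop chapters, the classic two-machine case has an exact polynomial-time method, Johnson's rule, sketched below with invented job data; the book's own chapters treat larger and harder variants heuristically.

        # Python sketch: Johnson's rule for the two-machine flow shop (minimises makespan).
        def johnson_two_machine(jobs):
            """jobs maps a job name to (time on machine 1, time on machine 2)."""
            front = sorted((j for j in jobs if jobs[j][0] <= jobs[j][1]),
                           key=lambda j: jobs[j][0])               # quick on M1 go first
            back = sorted((j for j in jobs if jobs[j][0] > jobs[j][1]),
                          key=lambda j: jobs[j][1], reverse=True)  # quick on M2 go last
            return front + back

        def makespan(order, jobs):
            m1_free = m2_free = 0
            for j in order:
                m1_free += jobs[j][0]                     # job leaves machine 1
                m2_free = max(m2_free, m1_free) + jobs[j][1]
            return m2_free

        jobs = {"J1": (3, 6), "J2": (5, 2), "J3": (1, 2), "J4": (6, 6), "J5": (7, 5)}
        order = johnson_two_machine(jobs)
        print(order, makespan(order, jobs))   # ['J3', 'J1', 'J4', 'J5', 'J2'] 24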

    Data Replication and Its Alignment with Fault Management in the Cloud Environment

    Nowadays, exponential data growth has become one of the major challenges worldwide. It may cause a series of negative impacts such as network overloading, high system complexity, and inadequate data security. Cloud computing has been developed as a paradigm that alleviates massive data processing challenges through its on-demand services and distributed architecture. Data replication has been proposed to distribute the data access load strategically by creating multiple copies of the data across cloud data centres. A replica-enabled cloud environment not only achieves lower response times, higher data availability, and a more balanced resource load, but also protects the cloud environment against upcoming faults. A reactive fault tolerance strategy is also required to handle faults once they have occurred. As a result, data replication strategies should be aligned with reactive fault tolerance strategies to achieve a complete management chain in the cloud environment. In this thesis, a data replication and fault management framework is proposed to establish decentralised overarching management of the cloud environment. Three data replication strategies are first proposed based on this framework. A replica creation strategy is proposed to reduce total cost by jointly considering data dependency and access frequency in the replica creation decision-making process. In addition, a cloud-map-oriented and cost-efficiency-driven replica creation strategy is proposed to achieve the optimal cost reduction per replica in the cloud environment. Local and remote data relationships are further analysed by introducing two novel data dependency types, Within-DataCentre Data Dependency and Between-DataCentre Data Dependency, according to data location. Furthermore, a network-performance-based replica selection strategy is proposed to avoid potential network overloading problems and to increase the number of concurrently running instances at the same time.
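
    A minimal sketch of the kind of decisions such replica strategies automate, under simplified assumptions rather than the thesis's cost model: create a replica where the expected savings on remote transfers outweigh the storage cost, and serve reads from the best-performing, non-overloaded replica site. All names and figures are invented.

        # Python sketch: toy replica-creation and replica-selection decisions.
        def worth_replicating(access_freq, remote_transfer_cost, storage_cost):
            """Replicate if the remote transfers saved would pay for local storage."""
            return access_freq * remote_transfer_cost > storage_cost

        def select_replica(replica_sites, latency_ms, utilisation):
            """Pick the lowest-latency replica site that is not close to overload."""
            usable = [s for s in replica_sites if utilisation[s] < 0.9]
            return min(usable, key=lambda s: latency_ms[s])

        print(worth_replicating(access_freq=120, remote_transfer_cost=0.4,
                                storage_cost=10))                     # True
        print(select_replica(["dc-eu", "dc-us", "dc-ap"],
                             latency_ms={"dc-eu": 35, "dc-us": 80, "dc-ap": 140},
                             utilisation={"dc-eu": 0.95, "dc-us": 0.4, "dc-ap": 0.5}))  # dc-us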