
    Investigation of Cloud Scheduling Algorithms for Resource Utilization Using CloudSim

    A Compute Cloud comprises a distributed set of High-Performance Computing (HPC) machines that provide on-demand computing services to remote users over the internet. Clouds can offer an optimal solution to the ever-increasing computation and storage demands of large scientific HPC applications. To attain good computing performance, the mapping of Cloud jobs to compute resources is a crucial process. Several efficient Cloud scheduling heuristics are available; however, selecting an appropriate scheduler for a given environment (i.e., job and machine heterogeneity) and for given scheduling objectives (such as minimized makespan, higher throughput, increased resource utilization, load-balanced mapping, etc.) is still a difficult task. In this paper, we consider ten important scheduling heuristics (i.e., opportunistic load balancing, proactive simulation-based scheduling and load balancing, proactive simulation-based scheduling and enhanced load balancing, minimum completion time, Min-Min, load-balance-improved Min-Min, Max-Min, resource-aware scheduling, task-aware scheduling, and Sufferage) and perform an extensive empirical study to gain insight into their scheduling mechanisms and how well they attain the major scheduling objectives. The study assumes that the Cloud job pool consists of independent, compute-intensive tasks that are statically scheduled to minimize the total execution time of a workload. The experiments are performed with two synthetic workloads and the benchmark GoCJ workload on the well-known Cloud simulator CloudSim. The study presents a detailed analysis of, and insight into, the circumstances that require a load-balanced scheduling mechanism to improve overall execution performance in terms of makespan, throughput, and resource utilization. The outcomes reveal that Sufferage and the task-aware scheduling algorithm produce the minimum makespan for Cloud jobs; however, these two heuristics are not efficient enough to exploit the full computing capabilities of the Cloud virtual machines.
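
    The Sufferage heuristic mentioned above can be illustrated with a short sketch. Below is a minimal, simplified Python rendering of the idea, assuming an expected-time-to-compute matrix etc[task][vm] and per-VM ready times (illustrative names, not taken from the paper): in each step, the task that would "suffer" most from not getting its best VM, i.e. the one with the largest gap between its second-best and best completion times, is scheduled first.

```python
# Simplified Sufferage sketch for independent tasks (illustrative only).
def sufferage(etc, num_vms):
    ready = [0.0] * num_vms            # earliest time each VM becomes free
    unscheduled = set(range(len(etc)))
    schedule = {}                      # task -> (vm, start, finish)

    while unscheduled:
        best_task, best_vm, best_suff = None, None, -1.0
        for t in unscheduled:
            # completion time of task t on every VM
            completions = [ready[m] + etc[t][m] for m in range(num_vms)]
            order = sorted(range(num_vms), key=lambda m: completions[m])
            first = order[0]
            second = order[1] if num_vms > 1 else order[0]
            suff = completions[second] - completions[first]   # "sufferage" value
            if suff > best_suff:
                best_task, best_vm, best_suff = t, first, suff
        start = ready[best_vm]
        finish = start + etc[best_task][best_vm]
        schedule[best_task] = (best_vm, start, finish)
        ready[best_vm] = finish
        unscheduled.remove(best_task)

    return schedule, max(ready)        # mapping and resulting makespan
```

    The classic formulation resolves conflicts per machine within each batch round; this per-task variant keeps the sketch short.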

    A Machine Assignment Mechanism for Compile-Time List-Scheduling Heuristics

    The general scheduling problem is NP-complete, so finding an optimal schedule is intractable; it is therefore necessary to use heuristics to find a good schedule rather than evaluating all possible schedules. List scheduling is generally accepted as an attractive approach, since it combines low complexity with good results. List scheduling consists of two phases: a task prioritization phase, in which a priority is computed and assigned to each task, and a machine assignment phase, in which each task (in order of its priority) is assigned to the machine that minimizes a suitable cost function. This paper presents a machine assignment mechanism, called the Reverse Duplicator Mechanism, that can be used with any list-scheduling algorithm and that outperforms the current mechanisms.
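
    For context, a minimal sketch of the baseline machine assignment phase is shown below: each task, taken in priority order, is assigned to the processor that minimizes its earliest finish time. The Reverse Duplicator Mechanism itself is not reproduced; the data structures (comp, comm, preds) are illustrative assumptions.

```python
# Baseline earliest-finish-time machine assignment for list scheduling.
def assign_machines(priority_list, comp, comm, preds, num_procs):
    """comp[t][p]: computation cost of task t on processor p,
       comm[(u, t)]: communication cost on edge u -> t (0 if co-located),
       preds[t]: predecessors of task t."""
    proc_ready = [0.0] * num_procs          # when each processor becomes free
    finish = {}                             # task -> finish time
    placement = {}                          # task -> processor

    for t in priority_list:                 # tasks in priority order
        best_p, best_eft = None, float("inf")
        for p in range(num_procs):
            # data from a predecessor arrives later if it ran elsewhere
            data_ready = max(
                (finish[u] + (0 if placement[u] == p else comm[(u, t)])
                 for u in preds[t]),
                default=0.0,
            )
            est = max(proc_ready[p], data_ready)
            eft = est + comp[t][p]
            if eft < best_eft:
                best_p, best_eft = p, eft
        placement[t], finish[t] = best_p, best_eft
        proc_ready[best_p] = best_eft

    return placement, finish
```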

    Energy-aware scheduling under reliability and makespan constraints

    We consider a task graph mapped onto a set of homogeneous processors. We aim at minimizing the energy consumption while enforcing two constraints: a prescribed bound on the execution time (or makespan), and a reliability threshold. Dynamic voltage and frequency scaling (DVFS) is an approach frequently used to reduce the energy consumption of a schedule, but slowing down the execution of a task to save energy decreases the reliability of that execution. In this work, to improve the reliability of a schedule while reducing its energy consumption, we allow some tasks to be re-executed. We assess the complexity of the tri-criteria scheduling problem (makespan, reliability, energy) of deciding which tasks to re-execute, and at which speed each execution of a task should run, under two speed models: either processors can take arbitrary speeds (continuous model), or a processor can run at a finite number of different speeds and change its speed during a computation (VDD model). We propose several novel tri-criteria scheduling heuristics under the continuous speed model and evaluate them through a set of simulations. The two best heuristics turn out to be very efficient and complementary.
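
    To make the trade-off concrete, the sketch below evaluates a single task under a commonly used exponential transient-fault model, in which the fault rate grows as the processor is slowed down and dynamic energy scales roughly with the square of the speed. The constants and the exact formulas used in the paper may differ, so this is purely illustrative.

```python
import math

def reliability(w, s, lam0=1e-5, d=3.0, s_min=0.5, s_max=1.0):
    """Reliability of one execution of a task with work w at speed s:
    the transient-fault rate grows when the speed is lowered."""
    lam = lam0 * 10 ** (d * (s_max - s) / (s_max - s_min))
    return math.exp(-lam * w / s)

def energy(w, s):
    """Dynamic energy of one execution: w/s seconds at power ~ s^3."""
    return w * s ** 2

w = 100.0
cases = [
    ("single @ s=1.0",      reliability(w, 1.0),                   energy(w, 1.0)),
    ("single @ s=0.7",      reliability(w, 0.7),                   energy(w, 0.7)),
    # the task only fails if BOTH copies fail, so re-execution recovers
    # most of the reliability lost by slowing down, at extra energy cost
    ("re-executed @ s=0.7", 1 - (1 - reliability(w, 0.7)) ** 2, 2 * energy(w, 0.7)),
]
for label, rel, en in cases:
    print(f"{label:20s} R={rel:.6f} E={en:.1f}")
```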

    Static vs. Dynamic List-Scheduling Performance Comparison

    The problem of efficient task scheduling is one of the most important and most difficult issues in homogeneous computing environments. Finding an optimal solution to the scheduling problem is NP-complete; it is therefore necessary to use heuristics to find a reasonably good schedule rather than evaluating all possible schedules. List scheduling is generally accepted as an attractive approach, since it pairs low complexity with good results. List-scheduling algorithms schedule tasks in order of priority, and this priority can be computed either statically (before scheduling) or dynamically (during scheduling). This paper presents the characteristics of the two main static and the two main dynamic list-scheduling algorithms and compares their performance on randomly generated graphs with various characteristics.
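
    As an example of a static priority, the sketch below computes a simple bottom level (the length of the longest computation path from a task to an exit node) once, before scheduling. Whether this matches the exact priority functions compared in the paper is an assumption, and communication costs are ignored for brevity.

```python
# Static bottom-level priorities for a toy DAG (illustrative only).
def bottom_levels(succs, cost):
    """succs[t]: successor tasks of t; cost[t]: computation cost of t."""
    memo = {}

    def bl(t):
        if t not in memo:
            memo[t] = cost[t] + max((bl(s) for s in succs[t]), default=0.0)
        return memo[t]

    return {t: bl(t) for t in succs}

# Toy DAG: A -> B, A -> C, B -> D, C -> D
succs = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}
cost = {"A": 2.0, "B": 3.0, "C": 1.0, "D": 2.0}
prio = bottom_levels(succs, cost)
# tasks are then handed to the scheduler in decreasing priority order
print(sorted(prio, key=prio.get, reverse=True))   # ['A', 'B', 'C', 'D']
```

    A dynamic list scheduler would instead re-evaluate such a priority (e.g., based on current machine availability) each time a task is picked.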

    A maximal chain approach for scheduling tasks in a multiprocessor system.

    The scheduling problem has been of interest for quite some time, and in the era of parallel and distributed computing it has seen renewed activity, with many researchers focusing their attention on it once again. Task scheduling is one of the most challenging problems in parallel and distributed computing: it is known to be NP-complete in its general form as well as in several restricted cases. Researchers have studied restricted forms of the problem by constraining either the task graph representing the parallel tasks or the computer model; for example, the 2-processor problem has a polynomial-time algorithm. In an attempt to solve the problem in the general case, a number of heuristics have been developed. These heuristics do not guarantee an optimal solution, but they attempt to find near-optimal solutions most of the time. In this thesis, we study the scheduling problem for a fixed number of processors m. The proposed approach generates a maximal chain and reduces the problem from m processors to (m-1) processors, repeating this step until the two-processor scheduling algorithm can be applied to the remaining tasks; the final schedule is then obtained by merging the maximal chains with the derived schedules. The motivation for reducing the problem to a 2-processor problem is that well-known polynomial algorithms exist for this case; the two-processor algorithm we use is one of the well-known algorithms by Coffman and Graham, Sethi, and Gabow. The proposed heuristic is compared with other well-known heuristics such as list-scheduling heuristics. A user-friendly graphical user interface (GUI) is developed to simplify the use of the algorithm: it allows the user to create a task graph by plotting nodes and edges, and menu items help generate the labels, maximal chains, and schedules for the plotted graph. The user can save the plotted graph, and functionality is provided to copy, cut, and paste entire graphs or portions of a graph.
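
    One plausible reading of the maximal-chain step is extracting a longest chain from the precedence DAG, peeling it onto one processor, and recursing on the remainder with one processor fewer; the sketch below extracts such a chain. The exact chain-selection rule used in the thesis may differ.

```python
# Extract one longest chain (by task count) from a precedence DAG.
def longest_chain(succs):
    """succs[t]: successors of task t. Returns one longest chain as a
    list of tasks, in precedence order."""
    memo = {}

    def best_from(t):
        if t not in memo:
            tails = [best_from(s) for s in succs[t]]
            longest_tail = max(tails, key=len, default=[])
            memo[t] = [t] + longest_tail
        return memo[t]

    return max((best_from(t) for t in succs), key=len)

succs = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}
print(longest_chain(succs))   # e.g. ['A', 'B', 'D']
```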

    Hybrid scheduling algorithms in cloud computing: a review

    Cloud computing is one of the emerging fields in computer science due to advancements such as on-demand processing, resource sharing, and pay-per-use. Cloud computing faces several issues, including security, quality of service (QoS) management, data center energy consumption, and scaling. Scheduling, in which tasks must be assigned to resources so as to optimize the quality of service parameters, is one of the most challenging of these problems; it is well known to be NP-hard, so a suitable scheduling algorithm is required. Several heuristic and meta-heuristic algorithms have been proposed for scheduling users' tasks onto the available cloud resources in an optimal way, and hybrid scheduling algorithms have become popular. In this paper, we review the hybrid algorithms, i.e., combinations of two or more algorithms, used for scheduling in cloud computing. The basic idea behind hybridization is to combine the useful features of the constituent algorithms. This article also classifies the hybrid algorithms and analyzes their objectives, their quality of service (QoS) parameters, and future directions for hybrid scheduling algorithms.

    New Dynamic Heuristics in the Client-Agent-Server Model

    MCT is a widely used heuristic for scheduling tasks onto grid platforms. However, when dealing with many tasks, MCT tends to dramatically delay the completion times of already mapped tasks while scheduling a new task. In this paper, we propose heuristics based on two features: a historical trace manager, which simulates the environment, and the perturbation, which quantifies the impact a newly allocated task has on already mapped tasks. Our simulations and experiments on a real environment show that the proposed heuristics outperform MCT.
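
    For reference, a minimal sketch of the MCT baseline the paper improves upon is shown below: each arriving task is immediately mapped to the server promising the earliest completion time. The historical trace manager and the perturbation measure proposed in the paper are not reproduced; names and data are illustrative.

```python
# Minimum Completion Time (MCT) assignment for an online stream of tasks.
def mct_assign(task_cost, ready):
    """task_cost[s]: estimated run time of the new task on server s;
       ready[s]: time at which server s finishes its already-mapped work."""
    best = min(range(len(ready)), key=lambda s: ready[s] + task_cost[s])
    ready[best] += task_cost[best]   # the new task delays later work on 'best'
    return best

ready = [0.0, 0.0, 0.0]
for cost in ([5, 9, 7], [4, 3, 8], [6, 2, 2], [1, 8, 9]):
    print(mct_assign(cost, ready), ready)
```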

    A Novel Heuristic for a Class of Independent Tasks in Computational Grids

    Scheduling is an essential layer in a grid environment. Nowadays, computational grids are an important platform for job scheduling, and their performance can be improved using an efficient scheduling heuristic. A user submits a job to the grid resource broker, which is responsible for dividing the job into a number of tasks and for mapping tasks to resources to find a good match. The main goal is to minimize the processing time and maximize resource utilization. The mode of scheduling plays a key role in grid scheduling: in immediate mode, each task is mapped as soon as it arrives, one after another in a serial sequence, while in batch mode tasks are collected and mapped together as a batch. In this thesis, we introduce three immediate mode heuristics, First-DualMake, Best-DualMake, and Worst-DualMake (collectively referred to as X-DualMake), and a new scheduling mode called intermediate mode (or multi-batch mode). In our immediate mode heuristics, jobs are scheduled based on resource idle time. Intermediate mode considers the random arrival of tasks in a multi-batch sequence, i.e., task arrivals are not known in advance; for simplicity, we assume a range of task arrivals. This mode is introduced to better reflect real-life conditions. The eight immediate mode heuristics are simulated and the experimental results are discussed, and the two existing batch approaches, Min-Min and Max-Min, are evaluated with intermediate mode scheduling. We use two performance measures, makespan and resource utilization, to evaluate the heuristics.
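
    As a point of reference for the batch mode discussed above, the sketch below shows the standard Max-Min heuristic used as a baseline in the thesis; the proposed X-DualMake heuristics and the intermediate (multi-batch) mode are not reproduced, and the data-structure names are illustrative.

```python
# Batch-mode Max-Min: repeatedly schedule the pending task whose best
# (minimum) completion time is the largest, on the resource giving it.
def max_min(etc, ready):
    """etc[t][r]: expected time to compute task t on resource r;
       ready[r]: time at which resource r becomes idle."""
    pending = set(range(len(etc)))
    mapping = {}
    while pending:
        # best resource (by completion time) for every pending task
        best = {t: min(range(len(ready)), key=lambda r: ready[r] + etc[t][r])
                for t in pending}
        # Max-Min rule: pick the task whose best completion time is largest
        t = max(pending, key=lambda t: ready[best[t]] + etc[t][best[t]])
        r = best[t]
        ready[r] += etc[t][r]
        mapping[t] = r
        pending.remove(t)
    return mapping, max(ready)   # mapping and resulting makespan
```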

    Scheduling of real time embedded systems for resource and energy minimization by voltage scaling

    This thesis explores real-time embedded computing with a focus on novel real-time scheduling policies appropriate for low-power devices. Meeting real-time deadlines with pre-emptive scheduling policies requires the investigation of intelligent scheduling heuristics; these aspects, across real-time embedded system (RTES) models such as multiprocessor systems, dynamic voltage scaling, and dynamic scheduling, are the focus of the thesis. Deadline-based scheduling of task graphs representative of real-time systems is performed on a multiprocessor system: a set of aperiodic, dependent tasks in the form of a task graph is taken as input, and all the required task parameters are calculated. The tasks are then partitioned into two or more clusters, allowing each cluster to run at a different voltage; voltage-scaling each cluster reduces the overall power consumed by the system. Once each task is mapped to a particular voltage, the tasks are scheduled on a multiprocessor system consisting of processors that can run at different voltages and frequencies, in such a way that all timing constraints are satisfied.
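
    The core voltage-scaling idea can be sketched as follows: a task with slack before its deadline can run at a lower voltage/frequency level, since dynamic power scales roughly with V^2 * f while execution time stretches by 1/f. The voltage levels and task data below are illustrative assumptions, not the thesis's experimental setup.

```python
# Pick the lowest-energy (voltage, frequency) level that still meets timing.
LEVELS = [(1.2, 1.0), (1.0, 0.8), (0.8, 0.6)]   # (voltage V, relative frequency f)

def pick_level(wcet_at_fmax, slack):
    """Choose the lowest-energy (V, f) level whose stretched execution
    time still fits within the available time (wcet + slack)."""
    budget = wcet_at_fmax + slack
    feasible = [(v, f) for v, f in LEVELS if wcet_at_fmax / f <= budget]
    # energy per task ~ C * V^2 * f * (wcet/f) = C * V^2 * wcet, so minimize V^2
    return min(feasible, key=lambda vf: vf[0] ** 2)

print(pick_level(10.0, 0.0))   # no slack   -> (1.2, 1.0)
print(pick_level(10.0, 8.0))   # ample slack -> (0.8, 0.6)
```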