
    Tasks scheduling technique using league championship algorithm for makespan minimization in IaaS cloud

    Makespan minimization in task scheduling for infrastructure as a service (IaaS) clouds is an NP-hard problem. A number of techniques have been used in the past to optimize the makespan time of scheduled tasks in IaaS clouds, which is proportional to the execution cost billed to customers. In this paper, we propose a League Championship Algorithm (LCA) based makespan minimization scheduling technique for IaaS clouds. The LCA is a sports-inspired, population-based algorithmic framework for global optimization over a continuous search space. Three existing algorithms, namely First Come First Served (FCFS), Last Job First (LJF) and Best Effort First (BEF), were used to evaluate the performance of the proposed algorithm. All algorithms under consideration are assumed to be non-preemptive. The results obtained show that the LCA scheduling technique performs moderately better than the other algorithms in minimizing the makespan time of scheduled tasks in IaaS clouds.
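
    As a rough illustration of the optimization target, the sketch below (not the authors' implementation) computes the makespan of a non-preemptive task-to-VM assignment from an expected-execution-time matrix and runs a toy population-based search loosely inspired by the LCA; the team size, number of seasons and perturbation scheme are assumptions made purely for the example.

        import random

        def makespan(assignment, etc):
            # assignment[i] = VM chosen for task i; etc[i][j] = time of task i on VM j
            loads = [0.0] * len(etc[0])
            for task, vm in enumerate(assignment):
                loads[vm] += etc[task][vm]
            return max(loads)

        def lca_like_search(etc, teams=10, seasons=100, seed=0):
            rng = random.Random(seed)
            n_tasks, n_vms = len(etc), len(etc[0])
            # each "team" is a candidate schedule (one VM index per task)
            league = [[rng.randrange(n_vms) for _ in range(n_tasks)] for _ in range(teams)]
            best = min(league, key=lambda a: makespan(a, etc))
            for _ in range(seasons):
                for i in range(teams):
                    # a "match": the weaker of two schedules imitates the stronger
                    # one on a few tasks, with some random exploration
                    j = rng.randrange(teams)
                    wi, wj = makespan(league[i], etc), makespan(league[j], etc)
                    loser, winner = (i, j) if wi > wj else (j, i)
                    candidate = league[loser][:]
                    for t in rng.sample(range(n_tasks), k=max(1, n_tasks // 10)):
                        candidate[t] = league[winner][t] if rng.random() < 0.7 else rng.randrange(n_vms)
                    if makespan(candidate, etc) <= makespan(league[loser], etc):
                        league[loser] = candidate
                best = min(league + [best], key=lambda a: makespan(a, etc))
            return best, makespan(best, etc)

        # Example: 5 tasks on 2 VMs
        print(lca_like_search([[4, 6], [3, 5], [8, 2], [5, 5], [2, 7]]))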

    A Survey of League Championship Algorithm: Prospects and Challenges

    The League Championship Algorithm (LCA) is a sports-inspired optimization algorithm that was introduced by Ali Husseinzadeh Kashan in 2009. It has since drawn enormous interest among researchers because of its potential efficiency in solving many optimization problems and real-world applications. The LCA has also shown great potential in solving non-deterministic polynomial time (NP-complete) problems. This survey presents a brief synopsis of the LCA literature in peer-reviewed journals, conferences and book chapters. These research articles are then categorized according to their indexing in the major academic databases (Web of Science, Scopus, IEEE Xplore and Google Scholar). An analysis was also carried out to explore the prospects and challenges of the algorithm and its acceptability among researchers. This systematic categorization can be used as a basis for future studies. Comment: 10 pages, 2 figures, 2 tables, Indian Journal of Science and Technology, 201

    Fault aware task scheduling in cloud using min-min and DBSCAN

    Cloud computing leverages computing resources by managing them globally in a more efficient manner than individual resource services. It requires delivering resources in a heterogeneous and highly dynamic environment. Hence, there is always a risk of resource allocation failure that can increase the delay in task execution. Such adverse impacts in the cloud environment also raise questions about quality of service (QoS). Resource management for cloud applications and services poses significant challenges, and although many researchers have proposed solutions, there is still room for improvement. Clustering the resources and mapping them according to the task can also be an option for dealing with such task failures or mismanaged resource allocation. Density-based spatial clustering of applications with noise (DBSCAN) is a density-based clustering algorithm which has the capability to cluster the resources in a cloud environment. The proposed algorithm considers powerful, high-execution data centers with the least fault probability during resource allocation, which reduces the probability of faults and increases tolerance. The simulation is done using the CloudSim 5.0 toolkit. The results show a 25% average improvement in execution time, a 6.5% improvement in the number of tasks completed and a 3.48% improvement in the count of failed tasks compared to ACO, PSO, BB-BC (Big Bang Big Crunch) and WHO (Whale Optimization Algorithm).
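
    To make the clustering step concrete, the following sketch (an illustration built on assumptions, not the paper's code) uses scikit-learn's DBSCAN to group candidate hosts by normalised capacity and empirical fault probability and then prefers the cluster with the lowest average fault probability for allocation; the feature choice and the eps/min_samples values are illustrative only, and the min-min assignment stage is not shown.

        import numpy as np
        from sklearn.cluster import DBSCAN

        def low_fault_hosts(capacity_mips, fault_prob, eps=0.15, min_samples=2):
            # describe each host by normalised capacity and empirical fault probability
            X = np.column_stack([capacity_mips / capacity_mips.max(), fault_prob])
            labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(X)
            best_label, best_mean = None, float("inf")
            for label in set(labels) - {-1}:          # -1 marks DBSCAN noise points
                mean_fault = fault_prob[labels == label].mean()
                if mean_fault < best_mean:
                    best_label, best_mean = label, mean_fault
            if best_label is None:                    # every host was labelled as noise
                return np.arange(len(fault_prob))
            return np.where(labels == best_label)[0]  # indices of the preferred hosts

        # Example: six hosts; the low-fault cluster (hosts 0-2) is preferred
        caps = np.array([2000.0, 2100.0, 1800.0, 900.0, 950.0, 1000.0])
        faults = np.array([0.02, 0.03, 0.02, 0.20, 0.25, 0.22])
        print(low_fault_hosts(caps, faults))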

    Performance comparison of heuristic algorithms for task scheduling in IaaS cloud computing environment

    Cloud computing infrastructure is suitable for meeting the computational needs of large task sizes. Optimal scheduling of tasks in a cloud computing environment has been proved to be an NP-complete problem, hence the need for heuristic methods. Several heuristic algorithms have been developed and used to address this problem, but choosing the appropriate algorithm for a task assignment problem of a particular nature is difficult since the methods are developed under different assumptions. Therefore, six rule-based heuristic algorithms are implemented and used to schedule autonomous tasks in homogeneous and heterogeneous environments with the aim of comparing their performance in terms of cost, degree of imbalance, makespan and throughput. First Come First Serve (FCFS), Minimum Completion Time (MCT), Minimum Execution Time (MET), Max-min, Min-min and Sufferage are the heuristic algorithms considered for the performance comparison and analysis of task scheduling in cloud computing.
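
    For reference, the Min-min heuristic mentioned above can be summarised by the textbook sketch below, which repeatedly commits the task with the smallest achievable completion time over an expected-time-to-compute (ETC) matrix; this is a generic formulation for illustration, not the study's exact implementation.

        def min_min(etc):
            # etc[i][j] = expected execution time of task i on machine j
            n_tasks, n_machines = len(etc), len(etc[0])
            ready = [0.0] * n_machines              # machine ready times
            unscheduled = set(range(n_tasks))
            schedule = {}
            while unscheduled:
                # find the (task, machine) pair with the smallest completion time
                best_task, best_machine, best_ct = None, None, float("inf")
                for t in unscheduled:
                    for m in range(n_machines):
                        ct = ready[m] + etc[t][m]
                        if ct < best_ct:
                            best_task, best_machine, best_ct = t, m, ct
                schedule[best_task] = best_machine  # commit that assignment
                ready[best_machine] = best_ct
                unscheduled.remove(best_task)
            return schedule, max(ready)             # assignment and resulting makespan

        # Example: 3 tasks on 2 machines -> makespan 7.0
        print(min_min([[4.0, 6.0], [3.0, 5.0], [8.0, 2.0]]))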

    Development of a Hybrid Algorithm for efficient Task Scheduling in Cloud Computing environment using Artificial Intelligence

    Cloud computing is developing as a platform for next-generation systems where users pay as they use cloud computing facilities, like any other utility. A cloud environment involves a set of virtual machines that share the same computation facility and storage. Due to the rapid rise in demand for cloud computing services, several algorithms are being developed and evaluated by researchers in order to enhance the task scheduling process of the machines, thereby offering an optimal solution by which users can process the maximum number of tasks with minimal utilization of resources. Task scheduling denotes a set of policies to regulate the tasks processed by a system. Virtual machine scheduling is essential for effective operations in a distributed environment. The aim of this paper is to achieve efficient task scheduling of virtual machines; to this end, the study proposes a hybrid algorithm integrating two prominent heuristic algorithms, namely the BAT Algorithm and the Ant Colony Optimization (ACO) algorithm, to optimize the virtual machine scheduling process. The performance evaluation of the three algorithms (BAT, ACO and Hybrid) reveals that the hybrid algorithm performs better than the other two algorithms.
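
    The abstract does not detail how the two metaheuristics are combined, so the sketch below is only one plausible wiring, offered as an illustration rather than the authors' method: an ACO-style pheromone-guided task-to-VM assignment in which a BAT-style loudness parameter controls how often a purely random, exploratory choice is made. All parameter values and the makespan fitness are assumptions.

        import random

        def hybrid_step(etc, pheromone, alpha=1.0, beta=2.0, loudness=0.3, rng=random):
            # etc[t][v]: expected time of task t on VM v; pheromone[t][v]: learned desirability
            n_tasks, n_vms = len(etc), len(etc[0])
            loads = [0.0] * n_vms
            assignment = []
            for t in range(n_tasks):
                if rng.random() < loudness:          # BAT-style random exploration
                    vm = rng.randrange(n_vms)
                else:                                # ACO-style roulette-wheel choice
                    weights = [(pheromone[t][v] ** alpha) * ((1.0 / etc[t][v]) ** beta)
                               for v in range(n_vms)]
                    r, acc, vm = rng.random() * sum(weights), 0.0, n_vms - 1
                    for v, w in enumerate(weights):
                        acc += w
                        if r <= acc:
                            vm = v
                            break
                assignment.append(vm)
                loads[vm] += etc[t][vm]
            makespan = max(loads)
            for t, vm in enumerate(assignment):      # reinforce the edges that were used
                pheromone[t][vm] += 1.0 / makespan
            return assignment, makespan

        # Example: one construction step for 3 tasks on 2 VMs
        tau = [[1.0, 1.0] for _ in range(3)]
        print(hybrid_step([[4.0, 6.0], [3.0, 5.0], [8.0, 2.0]], tau))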

    SHARING WITH LIVE MIGRATION ENERGY OPTIMIZATION TASK SCHEDULER FOR CLOUD COMPUTING DATACENTRES

    The use of cloud computing is expanding, and it is becoming the driver for innovation in all companies seeking to serve their customers around the world. Much attention has recently been drawn to the huge amount of energy consumed within datacentres, while the energy consumption of the remaining cloud components has been neglected. Energy consumption should therefore be reduced in order to minimize performance losses, achieve the target battery lifetime, satisfy performance requirements, minimize power consumption, minimize CO2 emissions, maximize profit and maximize resource utilization. Reducing power consumption in cloud computing datacentres can be achieved in many ways, such as managing or utilizing the resources, controlling redundancy, relocating datacentres, improving applications, or dynamic voltage and frequency scaling. One of the most efficient ways to reduce power is to use a scheduling technique that finds the best task execution order based on the users' demands, with the minimum execution time and cloud resources. Designing an effective and efficient task scheduling technique based on user requirements is quite a challenge in a cloud environment. The scheduling process is not easy because a datacentre contains dissimilar hardware with different capacities; to improve resource utilization, an efficient scheduling algorithm must be applied to the incoming tasks to achieve efficient allocation of computing resources and power optimization. The scheduler must maintain the balance between Quality of Service and fairness among the jobs so that efficiency may be increased.

    The aim of this project is to propose a novel method for optimizing energy usage in cloud computing environments that satisfies Quality of Service (QoS) and the regulations of the Service Level Agreement (SLA). Applying a power- and resource-optimised scheduling algorithm will help to control and improve the mapping between datacentre servers and incoming tasks, and to achieve the optimal deployment of datacentre resources for good computing efficiency, network load minimization and reduced energy consumption in the datacentre. This thesis explores energy-aware cloud computing datacentre structures with diverse scheduling heuristics and proposes a novel job scheduling technique with sharing and live migration based on file locality (SLM), aiming to maximize efficiency and save the power consumed in the datacentre due to bandwidth utilization, while minimizing the processing time and the system's total makespan.

    The proposed SLM energy-efficient scheduling strategy has four basic algorithms: 1) job classifier, 2) SLM job scheduler, 3) dual-fold VM virtualization and 4) VM threshold margins and consolidation. The SLM job classifier categorises the incoming set of user requests to the datacentre into two different queues based on the request type and the source file needed to process them. The processing time of each job fluctuates based on the job type and the number of instructions in the job. The second algorithm, the SLM scheduler, dispatches jobs from both queues according to job arrival time and controls the allocation process to the most appropriate available VM based on job similarity, according to a predefined synchronized job characteristic table (SJC). The SLM scheduler uses a replicated hosts infrastructure to save the energy wasted by idle hosts, maximizing the utilization of the basic hosts as long as the system can handle the workflow while keeping the replicated hosts switched off. The third algorithm, the dual-fold VM algorithm, divides the active VMs into top- and low-level slots to allocate similar jobs concurrently, which maximizes host utilization at high workload and reduces the total makespan. The VM threshold margins and consolidation algorithm sets upper and lower threshold margins as triggers for VM consolidation and load balancing among running VMs, and deploys a continuous detection scheme for overloaded and underutilized VMs to maintain and control the workload balance of the system. Consolidation and load balancing are achieved by performing a series of dynamic live migrations, which provides auto-scaling for the servers within the datacentres.

    This thesis begins with an overview of cloud computing, then reviews conceptual cloud resource management strategies with a classification of scheduling heuristics. Following this, a competitive analysis of energy-efficient scheduling algorithms and related work is presented. The novel SLM algorithm is proposed and evaluated using the CloudSim toolkit under a number of scenarios; compared to the Particle Swarm Optimization (PSO) and Ant Colony Optimization (ACO) algorithms, the results show a significant improvement in energy usage and in the total makespan, which is the total time needed to finish processing all the tasks.
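
    A minimal sketch of the threshold-based detection that drives the consolidation step is given below, assuming simple per-host CPU utilisation values and illustrative 0.2/0.8 margins; the actual SLM thresholds and utilisation model are not taken from the thesis.

        def classify_hosts(host_utilisation, lower=0.2, upper=0.8):
            overloaded, underutilised, balanced = [], [], []
            for host, u in host_utilisation.items():
                if u > upper:
                    overloaded.append(host)      # relieve via live migration of some VMs
                elif u < lower:
                    underutilised.append(host)   # drain completely, then switch off
                else:
                    balanced.append(host)
            return overloaded, underutilised, balanced

        # Example: 'h1' should shed load, 'h3' should be consolidated away
        print(classify_hosts({"h1": 0.93, "h2": 0.55, "h3": 0.08}))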

    A Task Scheduling Algorithm with Improved Makespan Based on Prediction of Tasks Computation Time algorithm for Cloud Computing

    Cloud computing is extensively used in a variety of applications and domains; however, task and resource scheduling remains an area that requires improvement. Put simply, in a heterogeneous computing system, task scheduling algorithms, which map incoming tasks to machines, are needed to satisfy high-performance data mapping requirements. An appropriate mapping between resources and tasks reduces makespan and maximises resource utilisation. In this contribution, we present a novel scheduling algorithm using a Directed Acyclic Graph (DAG) based on the Prediction of Tasks Computation Time (PTCT) algorithm to estimate the preeminent scheduling algorithm for prominent cloud data. In addition, the proposed algorithm provides a significant improvement with respect to makespan and reduces computation and complexity by employing Principal Component Analysis (PCA) and reducing the Expected Time to Compute (ETC) matrix. Simulation results confirm the superior performance of the algorithm for heterogeneous systems in terms of efficiency, speedup and schedule length ratio, compared to the state-of-the-art Min-Min, Max-Min, QoS-Guide and MiM-MaM scheduling algorithms.
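
    The PCA-based reduction of the ETC matrix can be pictured with the short sketch below; the matrix values and the number of retained components are assumed purely for illustration and are not taken from the paper.

        import numpy as np
        from sklearn.decomposition import PCA

        # etc[i][j]: expected time of task i on machine j (made-up example values)
        etc = np.array([
            [12.0, 30.0, 18.0],
            [11.0, 28.0, 17.0],
            [40.0, 95.0, 60.0],
            [38.0, 90.0, 58.0],
        ])

        pca = PCA(n_components=2)                   # keep the two strongest components
        reduced = pca.fit_transform(etc)
        print(reduced.shape)                        # (4, 2): each task now has 2 features
        print(pca.explained_variance_ratio_)        # variance retained per component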

    Energy-efficient Nature-Inspired techniques in Cloud computing datacenters

    Cloud computing is the systematic delivery of computing resources as services to consumers via the Internet. Infrastructure as a Service (IaaS) is the capability provided to the consumer through access to processing, storage, networks and other fundamental computing resources, on which the consumer can deploy and run arbitrary software, including operating systems and applications. The resources are often made available in the form of Virtual Machines (VMs). Cloud services are provided to consumers on demand and are billed accordingly. The VMs typically run in various datacenters, which comprise several computing resources that consume large amounts of energy, resulting in hazardous levels of carbon emissions into the atmosphere. Several researchers have proposed energy-efficient methods for reducing the energy consumption of datacenters; one such class of solutions is Nature-Inspired algorithms. Towards this end, this paper presents a comprehensive review of the state-of-the-art Nature-Inspired algorithms suggested for solving the energy issues in Cloud datacenters. A taxonomy is presented focusing on three key dimensions in the literature: virtualization, consolidation and energy-awareness. A qualitative review of each technique is carried out considering its key goal, method, advantages and limitations. The Nature-Inspired algorithms are compared based on their features to indicate their utilization of resources and their level of energy-efficiency. Finally, potential research directions for energy optimization in datacenters are identified. This review enables researchers and professionals in Cloud computing datacenters to understand the evolution of the literature and to explore better energy-efficient methods for Cloud computing datacenters.
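
    Much of the surveyed energy-aware work builds on a linear host power model, which the small sketch below illustrates; the idle and peak wattages used are assumed example figures, not measurements from the reviewed papers.

        def host_power(utilisation, p_idle=100.0, p_max=250.0):
            # utilisation in [0, 1]; power grows linearly from idle to peak draw
            return p_idle + (p_max - p_idle) * utilisation

        def interval_energy_kwh(utilisation, hours):
            return host_power(utilisation) * hours / 1000.0

        print(host_power(0.0), host_power(0.5), host_power(1.0))   # 100.0 175.0 250.0
        print(interval_energy_kwh(0.5, 24))                        # 4.2 kWh per day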