
    Investigation of Cloud Scheduling Algorithms for Resource Utilization Using CloudSim

    A compute Cloud comprises a distributed set of High-Performance Computing (HPC) machines that provide on-demand computing services to remote users over the Internet. Clouds are capable of providing an optimal solution to the ever-increasing computation and storage demands of large scientific HPC applications. To attain good computing performance, the mapping of Cloud jobs to compute resources is a crucial process. Several efficient Cloud scheduling heuristics are available; however, selecting an appropriate scheduler for a given environment (i.e., job and machine heterogeneity) and set of scheduling objectives (such as minimized makespan, higher throughput, increased resource utilization, load-balanced mapping, etc.) is still a difficult task. In this paper, we consider ten important scheduling heuristics (i.e., opportunistic load balancing algorithm, proactive simulation-based scheduling and load balancing, proactive simulation-based scheduling and enhanced load balancing, minimum completion time, Min-Min, load balance improved Min-Min, Max-Min, resource-aware scheduling algorithm, task-aware scheduling algorithm, and Sufferage) and perform an extensive empirical study to gain insight into their scheduling mechanisms and the attainment of the major scheduling objectives. This study assumes that the Cloud job pool consists of a collection of independent, compute-intensive tasks that are statically scheduled to minimize the total execution time of a workload. The experiments are performed using two synthetic workloads and the benchmark GoCJ workload on the well-known Cloud simulator CloudSim. This empirical study presents a detailed analysis of, and insights into, the circumstances requiring a load-balanced scheduling mechanism to improve overall execution performance in terms of makespan, throughput, and resource utilization. The outcomes reveal that the Sufferage and task-aware scheduling algorithms produce the minimum makespan for the Cloud jobs. However, these two scheduling heuristics are not efficient enough to exploit the full computing capabilities of the Cloud virtual machines.
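
    As an illustration of the batch-mode heuristics compared above, the following minimal Python sketch implements Min-Min and Sufferage over an expected-time-to-compute (ETC) matrix. The data layout and function names are illustrative assumptions, not code from the paper, and CloudSim-specific details are omitted.

    # Minimal sketches of the Min-Min and Sufferage batch-mode heuristics.
    # Both map a set of independent tasks onto VMs using an ETC matrix:
    # etc[t][v] is the expected runtime of task t on VM v.

    def min_min(etc):
        n_tasks, n_vms = len(etc), len(etc[0])
        ready = [0.0] * n_vms            # current ready time of each VM
        unmapped = set(range(n_tasks))
        mapping = {}
        while unmapped:
            # Find the (task, VM) pair with the overall minimum completion time.
            best_task, best_vm, best_ct = None, None, float("inf")
            for t in unmapped:
                for v in range(n_vms):
                    ct = ready[v] + etc[t][v]
                    if ct < best_ct:
                        best_task, best_vm, best_ct = t, v, ct
            mapping[best_task] = best_vm
            ready[best_vm] = best_ct
            unmapped.remove(best_task)
        return mapping, max(ready)       # task-to-VM mapping and resulting makespan

    def sufferage(etc):
        n_tasks, n_vms = len(etc), len(etc[0])
        ready = [0.0] * n_vms
        unmapped = set(range(n_tasks))
        mapping = {}
        while unmapped:
            best_task, best_vm, best_suff, best_ct = None, None, -1.0, None
            for t in unmapped:
                cts = sorted((ready[v] + etc[t][v], v) for v in range(n_vms))
                # Sufferage value: how much the task loses if denied its best VM.
                suff = (cts[1][0] - cts[0][0]) if n_vms > 1 else 0.0
                if suff > best_suff:
                    best_task, best_vm, best_suff, best_ct = t, cts[0][1], suff, cts[0][0]
            mapping[best_task] = best_vm
            ready[best_vm] = best_ct
            unmapped.remove(best_task)
        return mapping, max(ready)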

    Classification and Performance Study of Task Scheduling Algorithms in Cloud Computing Environment

    Cloud computing has become very common in recent years and is growing rapidly due to its attractive benefits and features such as resource pooling, accessibility, availability, scalability, reliability, cost saving, security, flexibility, on-demand services, pay-per-use services, use from anywhere, quality of service, and resilience. With this rapid growth of cloud computing, there may be many users that require services or need to execute their tasks simultaneously on resources provided by service providers. To deliver these services with the best performance, minimum cost, minimum response time and makespan, and effective use of resources, an intelligent and efficient task scheduling technique is required; task scheduling is considered one of the main and essential issues in the cloud computing environment. It is necessary for allocating tasks to the proper cloud resources and optimizing overall system performance. To this end, researchers have put huge effort into developing several classes of scheduling algorithms suitable for various computing environments and for satisfying the needs of various types of individuals and organizations. This research article provides a classification of proposed scheduling strategies and developed algorithms in the cloud computing environment, along with an evaluation of their performance. A comparison of the performance of these algorithms with existing ones is also given. Additionally, the future research work in the reviewed articles (if available) is also pointed out. This research work includes a review of 88 task scheduling algorithms in the cloud computing environment, distributed over the seven scheduling classes suggested in this study. Each article deals with a novel scheduling technique and the performance improvement it introduces compared with previously existing task scheduling algorithms.
    Keywords: Cloud computing, Task scheduling, Load balancing, Makespan, Energy-aware, Turnaround time, Response time, Cost of task, QoS, Multi-objective. DOI: 10.7176/IKM/12-5-03. Publication date: September 30th, 2022

    Energy Efficient Algorithms based on VM Consolidation for Cloud Computing: Comparisons and Evaluations

    The Cloud computing paradigm has revolutionized the IT industry and is able to offer computing as the fifth utility. With the pay-as-you-go model, cloud computing can offer resources dynamically to customers at any time. Drawing attention from both academia and industry, cloud computing is viewed as one of the backbones of the modern economy. However, the high energy consumption of cloud data centers contributes to high operational costs and carbon emissions to the environment. Therefore, Green cloud computing is required to ensure energy efficiency and sustainability, which can be achieved via energy-efficient techniques. One of the dominant approaches is to apply energy-efficient algorithms to optimize resource usage and energy consumption. Various virtual machine consolidation-based energy-efficient algorithms have been proposed to reduce the energy consumption of cloud computing environments. However, most of them are not compared comprehensively under the same scenario, and their performance is not evaluated with the same experimental settings. This makes it hard for users to select the appropriate algorithm for their objectives. To provide insights into existing energy-efficient algorithms and help researchers choose the most suitable algorithm, in this paper we compare several state-of-the-art energy-efficient algorithms in depth from multiple perspectives, including architecture, modelling, and metrics. In addition, we implement and evaluate these algorithms with the same experimental settings in the CloudSim toolkit. The experimental results provide a comprehensive performance comparison of these algorithms. Finally, detailed discussions of these algorithms are provided.
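
    For orientation, the Python sketch below shows the kind of threshold-based consolidation step that such algorithms typically build on: a linear host power model, an upper utilization threshold that triggers VM migrations off overloaded hosts, and a lower threshold that empties under-loaded hosts so they can be switched off. The thresholds, power figures, and data structures are assumptions for illustration, not those of any specific algorithm evaluated in the paper.

    # Sketch of a single threshold-based consolidation step: overloaded hosts
    # shed their smallest VMs, nearly idle hosts are emptied so they can be
    # powered down. Power model and thresholds are illustrative assumptions.

    def power(util, idle_w=70.0, max_w=250.0):
        """Linear host power model: idle power plus a utilization-proportional part."""
        return 0.0 if util == 0 else idle_w + (max_w - idle_w) * util

    def consolidate(hosts, upper=0.8, lower=0.2):
        """hosts: {host_id: {vm_id: cpu_share}}, with cpu_share in [0, 1]."""
        migrations = []
        load = lambda h: sum(hosts[h].values())
        for h in list(hosts):
            # Overloaded host: migrate its smallest VMs until utilization <= upper.
            while load(h) > upper and hosts[h]:
                vm = min(hosts[h], key=hosts[h].get)
                target = min((t for t in hosts if t != h and load(t) + hosts[h][vm] <= upper),
                             key=load, default=None)
                if target is None:
                    break
                hosts[target][vm] = hosts[h].pop(vm)
                migrations.append((vm, h, target))
        for h in list(hosts):
            # Under-loaded host: try to empty it completely.
            if 0 < load(h) <= lower:
                for vm in list(hosts[h]):
                    target = min((t for t in hosts if t != h and load(t) + hosts[h][vm] <= upper),
                                 key=load, default=None)
                    if target is not None:
                        hosts[target][vm] = hosts[h].pop(vm)
                        migrations.append((vm, h, target))
        total_power_w = sum(power(load(h)) for h in hosts)
        return migrations, total_power_w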

    An In-Depth Empirical Investigation of State-of-the-Art Scheduling Approaches for Cloud Computing

    Recently, Cloud computing has emerged as one of the most widely used platforms to provide compute, storage, and analytics services to end-users and organizations on a pay-as-you-use basis, with high agility, availability, scalability, and resiliency. This enables individuals and organizations to access a large pool of high-performance resources without the need to establish a high-performance computing (HPC) platform. Over the past few years, task scheduling in Cloud computing has been regarded as a prominent research area; however, task scheduling is an NP-hard problem. In this research work, we investigate and empirically compare some of the most prominent state-of-the-art scheduling heuristics in terms of Makespan, Average Resource Utilization Ratio (ARUR), Throughput, and Energy consumption. The comparison is then extended by evaluating the approaches in terms of individual VM-level load imbalance. After extensive simulation, the comparative analysis reveals that the Task Aware Scheduling Algorithm (TASA) and Proactive Simulation-based Scheduling and Load Balancing (PSSLB) outperformed the rest of the approaches and seem to be the optimal choice in view of the trade-off between the complexity involved and the performance achieved concerning Makespan, Throughput, resource utilization, and Energy consumption.
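
    The reported metrics can be computed from a finished schedule as in the Python sketch below; the ARUR formulation (mean VM busy time divided by makespan) and the linear energy model are assumptions based on common usage in this line of work, not necessarily the paper's exact definitions.

    # Evaluation metrics computed from a completed schedule.
    # vm_busy[v] is the total busy time of VM v; power figures are illustrative.

    def metrics(vm_busy, n_tasks, idle_w=70.0, max_w=250.0):
        makespan = max(vm_busy)                                  # finish time of the last VM
        arur = (sum(vm_busy) / len(vm_busy)) / makespan          # mean utilization over the makespan
        throughput = n_tasks / makespan                          # tasks completed per unit time
        # Energy: each VM draws max_w while busy and idle_w while waiting
        # for the whole workload to finish (i.e., until the makespan).
        energy = sum(b * max_w + (makespan - b) * idle_w for b in vm_busy)
        return {"makespan": makespan, "ARUR": arur,
                "throughput": throughput, "energy": energy}

    # Example: three VMs whose busy times come from any of the compared heuristics.
    print(metrics(vm_busy=[120.0, 95.0, 110.0], n_tasks=300))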

    InterCloud: Utility-Oriented Federation of Cloud Computing Environments for Scaling of Application Services

    Cloud computing providers have set up several data centers at different geographical locations over the Internet in order to optimally serve the needs of their customers around the world. However, existing systems do not support mechanisms and policies for dynamically coordinating load distribution among different Cloud-based data centers in order to determine the optimal location for hosting application services to achieve reasonable QoS levels. Further, Cloud computing providers are unable to predict the geographic distribution of users consuming their services, hence load coordination must happen automatically, and the distribution of services must change in response to changes in the load. To counter this problem, we advocate the creation of a federated Cloud computing environment (InterCloud) that facilitates just-in-time, opportunistic, and scalable provisioning of application services, consistently achieving QoS targets under variable workload, resource, and network conditions. The overall goal is to create a computing environment that supports dynamic expansion or contraction of capabilities (VMs, services, storage, and database) for handling sudden variations in service demands. This paper presents the vision, challenges, and architectural elements of InterCloud for the utility-oriented federation of Cloud computing environments. The proposed InterCloud environment supports scaling of applications across multiple vendor clouds. We have validated our approach by conducting a set of rigorous performance evaluation studies using the CloudSim toolkit. The results demonstrate that the federated Cloud computing model has immense potential, offering significant performance gains in response time and cost savings under dynamic workload scenarios.
    Comment: 20 pages, 4 figures, 3 tables, conference paper
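
    A federation-level placement decision of the kind described here could look like the following Python sketch, in which a coordinator picks the cheapest member data center that still has spare capacity and meets a latency target. The DataCenter fields and the selection rule are hypothetical and are not part of the InterCloud or CloudSim APIs.

    # Hypothetical federation-level placement policy, not the InterCloud API.
    from dataclasses import dataclass

    @dataclass
    class DataCenter:
        name: str
        capacity_vms: int       # total VM slots
        used_vms: int           # currently allocated slots
        base_latency_ms: float  # network latency from the requesting region
        cost_per_vm_hour: float

    def place_request(datacenters, vms_needed, max_latency_ms):
        """Return the cheapest data center with spare capacity under the QoS target."""
        candidates = [dc for dc in datacenters
                      if dc.capacity_vms - dc.used_vms >= vms_needed
                      and dc.base_latency_ms <= max_latency_ms]
        if not candidates:
            return None   # the federation would trigger scale-out or renegotiation here
        return min(candidates, key=lambda dc: dc.cost_per_vm_hour)

    dcs = [DataCenter("eu-west", 1000, 950, 40.0, 0.09),
           DataCenter("us-east", 1200, 400, 120.0, 0.07),
           DataCenter("ap-south", 800, 300, 90.0, 0.05)]
    print(place_request(dcs, vms_needed=60, max_latency_ms=100.0))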

    Dynamic Task Migration for Enhanced Load Balancing in Cloud Computing using K-means Clustering and Ant Colony Optimization

    In cloud computing, efficient resource allocation and timely execution of user tasks are pivotal for ensuring seamless service delivery. Central to this endeavour is the dynamic orchestration of task scheduling and migration, which collectively contribute to load balancing across virtual machines (VMs). Load balancing is a cornerstone, empowering clouds to fulfill user requirements promptly. To facilitate the migration of tasks, we propose a novel method that exploits the synergistic potential of K-means clustering and Ant Colony Optimization (ACO). Our approach aims to optimize the cloud ecosystem by improving several critical factors: the system's makespan, resource utilization efficiency, and the degree of workload imbalance. The core objective of our work revolves around the reduction of makespan, a metric directly tied to overall system performance. By strategically employing K-means clustering, we group tasks with similar attributes, enabling the identification of prime candidates for migration. Subsequently, the ACO algorithm takes over, orchestrating the migration process with a focus on achieving global optimization. The benefits of our approach are quantitatively assessed through comprehensive comparisons with established algorithms, namely Round Robin (RR), First-Come-First-Serve (FCFS), Shortest Job First (SJF), and a genetic load balancing algorithm. To facilitate this evaluation, we use the CloudSim simulation tool, which provides a platform for realistic and accurate performance analysis. Our research enhances cloud computing paradigms by harmonizing task migration with innovative optimization techniques. The proposed approach reduces makespan, elevates resource utilization efficiency, and attenuates the degree of workload imbalance. These outcomes collectively pave the way for a more responsive and dependable cloud infrastructure, primed to cater to user needs with heightened efficacy. Our study delves into the domain of cloud-based task scheduling and migration. By synergizing the K-means clustering and ACO algorithms, we introduce a dynamic methodology that refines cloud resource management and bolsters the quintessential facet of load balancing. Through rigorous comparisons and meticulous analysis, we underscore the superior attributes of our approach, showcasing its potential to reshape the landscape of cloud computing optimization.
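
    The two building blocks named in this abstract can be sketched in Python as follows: a one-dimensional K-means pass that groups tasks by length to identify migration candidates, and a pheromone-guided (ACO-style) assignment of those tasks to VMs. Parameters, names, and structure are illustrative; this is not the paper's exact algorithm.

    # Illustrative building blocks: 1-D K-means over task lengths, then an
    # ACO-style assignment of tasks to VMs that favours a low makespan.
    import random

    def kmeans_1d(lengths, k=3, iters=20):
        """Cluster task lengths (e.g., in MI) into k groups; returns cluster labels."""
        centers = random.sample(lengths, k)
        labels = [0] * len(lengths)
        for _ in range(iters):
            labels = [min(range(k), key=lambda c: abs(x - centers[c])) for x in lengths]
            for c in range(k):
                members = [x for x, lab in zip(lengths, labels) if lab == c]
                if members:
                    centers[c] = sum(members) / len(members)
        return labels

    def aco_assign(task_lengths, vm_speeds, ants=20, rounds=30, evap=0.5, q=100.0):
        """Assign tasks to VMs, reinforcing assignments that keep the makespan low."""
        n_t, n_v = len(task_lengths), len(vm_speeds)
        pher = [[1.0] * n_v for _ in range(n_t)]
        best, best_ms = None, float("inf")
        for _ in range(rounds):
            for _ in range(ants):
                finish = [0.0] * n_v
                assign = []
                for t in range(n_t):
                    # Desirability: pheromone x heuristic (faster, less loaded VM).
                    weights = [pher[t][v] / (1.0 + finish[v] + task_lengths[t] / vm_speeds[v])
                               for v in range(n_v)]
                    v = random.choices(range(n_v), weights=weights)[0]
                    finish[v] += task_lengths[t] / vm_speeds[v]
                    assign.append(v)
                ms = max(finish)
                if ms < best_ms:
                    best, best_ms = assign, ms
            # Evaporate pheromone, then reinforce the best assignment found so far.
            pher = [[p * (1 - evap) for p in row] for row in pher]
            for t, v in enumerate(best):
                pher[t][v] += q / best_ms
        return best, best_ms

    # Example: cluster 8 task lengths into 3 groups, then map them onto 3 VMs.
    tasks = [400, 1200, 350, 5000, 4800, 900, 1100, 300]
    print(kmeans_1d(tasks, k=3))
    print(aco_assign(tasks, vm_speeds=[1000, 2000, 4000]))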