2,300 research outputs found

    Cloud computing resource scheduling and a survey of its evolutionary approaches

    A disruptive technology that is fundamentally transforming how computing services are delivered, cloud computing offers information and communication technology users convenient access to resources as services over the Internet. Because the cloud provides a finite pool of virtualized, on-demand resources, scheduling them optimally has become an essential and rewarding research topic, and a trend toward Evolutionary Computation (EC) algorithms is emerging rapidly. Starting from an analysis of the cloud computing architecture, this survey first presents a two-level taxonomy of cloud resource scheduling. It then paints a landscape of the scheduling problem and its solutions. Following the taxonomy, a comprehensive and systematic survey of state-of-the-art approaches is presented. Looking forward, challenges and potential future research directions are examined, including real-time scheduling, adaptive dynamic scheduling, large-scale scheduling, multiobjective scheduling, and distributed and parallel scheduling. At the dawn of Industry 4.0, cloud computing scheduling for cyber-physical integration in the presence of big data is also discussed. Research in this area is still in its infancy, but with the rapid fusion of information and data technology, more exciting and agenda-setting topics are likely to emerge on the horizon.

    A Review on Energy Consumption Optimization Techniques in IoT Based Smart Building Environments

    In recent years, due to the unnecessary wastage of electrical energy in residential buildings, energy optimization and user comfort have gained vital importance. Various techniques addressing the energy optimization problem have been proposed in the literature. The goal of each technique is to maintain a balance between user comfort and energy requirements, so that the user achieves the desired comfort level with the minimum amount of energy consumption. Researchers have addressed the issue with different optimization algorithms and with variations in their parameters to reduce energy consumption. To the best of our knowledge, the problem is not yet solved because of its challenging nature. The gap in the literature stems from advances in technology, drawbacks of existing optimization algorithms, and the introduction of new optimization algorithms. Further, many newly proposed optimization algorithms that achieve better accuracy on benchmark instances have not yet been applied to the optimization of energy consumption in smart homes. In this paper, we carry out a detailed literature review of the techniques used for the optimization of energy consumption and scheduling in smart homes, with a detailed discussion of the factors contributing to thermal comfort, visual comfort, and air-quality comfort. We also review the fog and edge computing techniques used in smart homes.

    Virtual Machine Deployment Strategy Based on Improved PSO in Cloud Computing

    Energy consumption is an important cost driven by the growth of computing power, so energy conservation has become one of the major problems faced by cloud systems. Maximizing the utilization of physical machines, reducing the number of virtual machine migrations, and maintaining load balance under physical-machine resource thresholds are effective ways to save energy in a data center. In this paper, we propose a multi-objective physical model for virtual machine deployment and apply an improved multi-objective particle swarm optimization (TPSO) to the deployment problem. Compared with other algorithms, the proposed algorithm has better ergodicity in the initial stage and improves the optimization precision and efficiency of the particle swarm. Experimental results based on the CloudSim simulation platform show that the algorithm is effective at improving physical machine resource utilization, reducing resource waste, and improving system load balance.
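    As a rough illustration of the kind of multi-objective fitness such a deployment model combines (utilization and load balance under a resource threshold), the following minimal sketch scores a candidate VM-to-host mapping. The weights, the 80% threshold, and the data layout are illustrative assumptions, not the TPSO formulation from the paper.

    ```python
    # Minimal sketch of a multi-objective fitness for VM-to-host placement.
    # Weights, the 80% resource threshold, and the data layout are assumptions.

    def placement_fitness(assignment, vm_cpu, host_cpu, w_util=0.6, w_balance=0.4, threshold=0.8):
        """assignment[i] = index of the host that VM i is placed on."""
        load = [0.0] * len(host_cpu)
        for vm, host in enumerate(assignment):
            load[host] += vm_cpu[vm]

        utilization = [l / c for l, c in zip(load, host_cpu)]
        if any(u > threshold for u in utilization):
            return float("inf")                    # infeasible: threshold violated

        active = [u for u in utilization if u > 0]
        mean_util = sum(active) / len(active)
        variance = sum((u - mean_util) ** 2 for u in active) / len(active)
        # Lower is better: reward high average utilization, penalize imbalance.
        return w_util * (1.0 - mean_util) + w_balance * variance ** 0.5


    if __name__ == "__main__":
        vm_cpu = [2, 4, 1, 3]          # CPU demand per VM
        host_cpu = [8, 8, 8]           # CPU capacity per physical machine
        print(placement_fitness([0, 1, 0, 2], vm_cpu, host_cpu))
    ```

    A particle swarm would then evolve such assignment vectors, with each particle's position decoded to a mapping and ranked by this fitness.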

    A WOA-based optimization approach for task scheduling in cloud Computing systems

    Task scheduling in cloud computing directly affects the resource usage and operational cost of a system. To improve the efficiency of task execution in a cloud, various metaheuristic algorithms, as well as their variants, have been proposed to optimize scheduling. In this work, for the first time, we apply the recent Whale Optimization Algorithm (WOA) to cloud task scheduling with a multiobjective optimization model, aiming to improve the performance of a cloud system with given computing resources. On that basis, we propose an advanced approach called IWC (Improved WOA for Cloud task scheduling) to further improve the optimal-solution search capability of the WOA-based method. We present the detailed implementation of IWC, and our simulation-based experiments show that the proposed IWC has better convergence speed and accuracy in searching for optimal task scheduling plans than current metaheuristic algorithms. Moreover, it also achieves better system resource utilization for both small- and large-scale tasks.
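    For readers unfamiliar with WOA, the sketch below shows the core position update (shrinking encircling vs. spiral move toward the best solution) applied to a continuous encoding of a task-to-VM assignment. The makespan fitness, the rounding-based discretization, and all parameter choices are illustrative assumptions, not the IWC algorithm from the paper.

    ```python
    # Minimal sketch of a WOA-style update for task-to-VM assignment.
    # Fitness, discretization, and parameters are illustrative assumptions.
    import math
    import random

    def makespan(position, task_len, vm_speed):
        """Decode a continuous position into a task-to-VM mapping and return its makespan."""
        finish = [0.0] * len(vm_speed)
        for task, x in enumerate(position):
            vm = int(round(x)) % len(vm_speed)
            finish[vm] += task_len[task] / vm_speed[vm]
        return max(finish)

    def woa_step(whale, best, a, n_vm):
        """One WOA update: shrinking encircling or spiral move toward the best whale."""
        new = []
        for x, x_best in zip(whale, best):
            if random.random() < 0.5:
                A = 2 * a * random.random() - a           # encircling coefficient
                C = 2 * random.random()
                new_x = x_best - A * abs(C * x_best - x)
            else:
                l = random.uniform(-1, 1)                 # logarithmic spiral move
                new_x = abs(x_best - x) * math.exp(l) * math.cos(2 * math.pi * l) + x_best
            new.append(min(max(new_x, 0), n_vm - 1))      # keep within the VM index range
        return new
    ```

    In a full optimizer, `a` would decay over iterations and each updated whale would be re-evaluated with `makespan` to track the global best.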

    Deadline Constrained Cloud Computing Resources Scheduling through an Ant Colony System Approach

    Cloud computing resource scheduling is essential for executing workflows on a cloud platform because it determines both execution time and execution cost. In this paper, we adopt a model that optimizes execution cost while meeting deadline constraints. To solve this problem, we propose an Improved Ant Colony System (IACS) approach featuring two novel strategies. First, a dynamic heuristic strategy calculates the heuristic value during the evolutionary process by taking the workflow's topological structure into consideration. Second, a double search strategy initializes the pheromone and calculates the heuristic value according to execution time at the beginning, and then re-initializes the pheromone and calculates the heuristic value according to execution cost once a feasible solution is found. The proposed IACS is therefore adaptive to the search environment and to different objectives. We have conducted extensive experiments on workflows of different scales with different cloud resources and compared the results with a particle swarm optimization (PSO) approach and a dynamic objective genetic algorithm (DOGA) approach. Experimental results show that IACS finds better solutions with lower cost than both PSO and DOGA across various scheduling scales and deadline conditions.
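    To make the ant colony mechanics concrete, the sketch below shows the standard ACS pseudo-random proportional rule for picking a VM for one workflow task from pheromone and heuristic information. The inverse-cost heuristic, the q0 and beta parameters, and the nested-dictionary data layout are illustrative assumptions, not the IACS strategies described in the paper.

    ```python
    # Minimal sketch of the ACS pseudo-random proportional rule for one task.
    # Heuristic choice, q0, beta, and the data layout are assumptions.
    import random

    def choose_vm(task, vms, pheromone, exec_cost, q0=0.9, beta=2.0):
        """pheromone[task][vm] and exec_cost[task][vm] drive the selection."""
        scores = {vm: pheromone[task][vm] * (1.0 / exec_cost[task][vm]) ** beta for vm in vms}
        if random.random() < q0:                      # exploitation: pick the best-scoring VM
            return max(scores, key=scores.get)
        total = sum(scores.values())                  # biased exploration: roulette wheel
        r, acc = random.uniform(0, total), 0.0
        for vm, s in scores.items():
            acc += s
            if acc >= r:
                return vm
        return vms[-1]
    ```

    The paper's double search strategy would, in effect, swap the quantity behind `exec_cost` (execution time first, execution cost after a feasible schedule is found) and re-initialize the pheromone at that switch.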

    Virtual machine-based task scheduling algorithm in a cloud computing environment

    Virtualization technology has been widely used to partition a single server into multiple virtual servers, which not only creates an operating environment for virtual machine-based cloud computing platforms but can also improve their efficiency. Currently, most task scheduling algorithms used in cloud computing environments converge slowly or easily fall into a local optimum. This paper introduces a Greedy Particle Swarm Optimization (G&PSO) based algorithm to solve the task scheduling problem. It uses a greedy algorithm to quickly generate the initial particle values of a particle swarm optimization algorithm for a virtual machine-based cloud platform. The experimental results show that the algorithm exhibits better performance, such as a faster convergence rate, stronger local and global search capabilities, and a more balanced workload on each virtual machine. The G&PSO algorithm therefore improves virtual machine efficiency and resource utilization compared with the traditional particle swarm optimization algorithm.
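    The key idea, seeding the swarm with a greedy solution, can be illustrated with the minimal sketch below: one particle is initialized with an earliest-finish-time assignment and the rest at random. The data layout and the single-seed choice are assumptions for illustration, not the exact G&PSO initialization from the paper.

    ```python
    # Minimal sketch of a greedy seed for PSO-based task scheduling.
    # Data layout and the single greedy seed are illustrative assumptions.
    import random

    def greedy_assignment(task_len, vm_speed):
        """Assign each task to the VM that would finish it earliest."""
        finish = [0.0] * len(vm_speed)
        assignment = []
        for length in task_len:
            vm = min(range(len(vm_speed)), key=lambda v: finish[v] + length / vm_speed[v])
            finish[vm] += length / vm_speed[vm]
            assignment.append(vm)
        return assignment

    def init_swarm(n_particles, task_len, vm_speed):
        swarm = [greedy_assignment(task_len, vm_speed)]       # greedy seed particle
        for _ in range(n_particles - 1):
            swarm.append([random.randrange(len(vm_speed)) for _ in task_len])
        return swarm
    ```

    Starting the swarm near a reasonable schedule is what gives the hybrid its faster convergence while the random particles preserve global exploration.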

    A Hybrid Optimization Algorithm for Efficient Virtual Machine Migration and Task Scheduling Using a Cloud-Based Adaptive Multi-Agent Deep Deterministic Policy Gradient Technique

    To achieve optimal system performance in the rapidly developing field of cloud computing, efficient resource management, including accurate job scheduling and optimized Virtual Machine (VM) migration, is essential. This study proposes a hybrid optimization algorithm for efficient virtual machine migration and task scheduling based on the Adaptive Multi-Agent System with Deep Deterministic Policy Gradient (AMS-DDPG) algorithm. At its core is the Iterative Concept of War and Rat Swarm (ICWRS) algorithm, a combination of the War Strategy Optimization (WSO) and Rat Swarm Optimizer (RSO) algorithms. Notably, ICWRS optimizes the system with 93% accuracy for load balancing, job scheduling, and virtual machine migration. The AMS-DDPG technique, which combines a deterministic policy gradient with deep reinforcement learning, substantially improves the flexibility and efficiency of VM migration and task scheduling, and the adaptive multi-agent approach further enhances decision-making by ensuring appropriate resource allocation. Our hybrid method, which combines deep learning and multi-agent coordination, significantly enhances performance in cloud-based virtualized systems. Extensive tests, including a detailed comparison with conventional techniques, verify the effectiveness of the proposed strategy. The findings show significant improvements in system efficiency, shorter job completion times, and better resource utilization. The integration of ICWRS within the AMS-DDPG framework demonstrates the potential for synergistic optimization in cloud-based systems, and this strategic resource allocation enables a high-performing and sustainable cloud computing infrastructure that can adapt to the changing needs of modern computing paradigms.
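    As background on the DDPG component, the sketch below shows a generic single-agent DDPG update for a VM-migration agent: an actor maps a host-utilization state to a continuous migration action, and a critic scores (state, action) pairs. The network sizes, the state/action encoding, and the omission of target networks and a replay buffer are all simplifying assumptions; this is not the AMS-DDPG design from the paper.

    ```python
    # Minimal sketch of a DDPG-style update for a VM-migration agent (PyTorch).
    # Network sizes, state/action encoding, and the reward are assumptions;
    # target networks and replay buffers are omitted for brevity.
    import torch
    import torch.nn as nn

    state_dim, action_dim = 8, 2          # e.g. host utilizations -> (source, target) scores
    actor = nn.Sequential(nn.Linear(state_dim, 64), nn.ReLU(), nn.Linear(64, action_dim), nn.Tanh())
    critic = nn.Sequential(nn.Linear(state_dim + action_dim, 64), nn.ReLU(), nn.Linear(64, 1))
    actor_opt = torch.optim.Adam(actor.parameters(), lr=1e-3)
    critic_opt = torch.optim.Adam(critic.parameters(), lr=1e-3)

    def ddpg_update(state, action, reward, next_state, gamma=0.99):
        # Critic: regress Q(s, a) toward r + gamma * Q(s', actor(s')).
        with torch.no_grad():
            target_q = reward + gamma * critic(torch.cat([next_state, actor(next_state)], dim=-1))
        critic_loss = nn.functional.mse_loss(critic(torch.cat([state, action], dim=-1)), target_q)
        critic_opt.zero_grad(); critic_loss.backward(); critic_opt.step()

        # Actor: deterministic policy gradient, maximize Q(s, actor(s)).
        actor_loss = -critic(torch.cat([state, actor(state)], dim=-1)).mean()
        actor_opt.zero_grad(); actor_loss.backward(); actor_opt.step()
    ```

    A multi-agent variant would run one such actor-critic pair per host or per scheduling agent and coordinate their actions through the shared environment state.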

    Hybrid scheduling algorithms in cloud computing: a review

    Cloud computing is one of the emerging fields in computer science due to advancements such as on-demand processing, resource sharing, and pay-per-use. It still faces several issues, including security, quality of service (QoS) management, data center energy consumption, and scaling. Scheduling, in which tasks must be assigned to resources so as to optimize quality of service parameters, is one of these challenging problems and is well known to be NP-hard, so a suitable scheduling algorithm is required. Several heuristic and meta-heuristic algorithms have been proposed for scheduling users' tasks onto the available cloud resources in an optimal way, and hybrid scheduling algorithms have become popular in cloud computing. In this paper, we review the hybrid algorithms, i.e., combinations of two or more algorithms, used for scheduling in cloud computing. The basic idea behind hybridization is to combine the useful features of the constituent algorithms. This article also classifies the hybrid algorithms and analyzes their objectives, quality of service (QoS) parameters, and future directions for hybrid scheduling algorithms.

    Capuchin Search Particle Swarm Optimization (CS-PSO) based Optimized Approach to Improve the QoS Provisioning in Cloud Computing Environment

    This paper introduces a method for further improving resource allocation in cloud computing environments under QoS constraints. Because resource allocation directly affects the quality of service (QoS) of cloud deployments, QoS constraints such as response time, throughput, waiting time, and makespan are key factors to take into account. The approach uses the Capuchin Search Particle Swarm Optimization (CS-PSO) framework to streamline resource allocation while respecting these QoS constraints, and it targets several objectives, including throughput, response time, makespan, waiting time, and resource utilization. Resources are partitioned optimally using a K-medoids clustering scheme: during clustering, tasks are divided into groups, and the resource partitioning is refined to obtain the optimal configuration. The experimental setup uses a JAVA implementation and the GWA-T-12 Bitbrains dataset for simulation, and the extreme-value optimization problem of the multivariable objective function is solved with the improved algorithm. The simulation results show that the baseline Cloud Particle Swarm Optimization (CPSO) algorithm repeatedly fails to converge within 500 generations, and the comparative analysis reveals that the developed model outperforms state-of-the-art approaches. Overall, the approach provides a strong and effective procedure for improving resource allocation in cloud computing environments and can be applied to a variety of resource partitioning challenges, such as virtual machine placement, job scheduling, and resource allocation. The Capuchin Search Particle Swarm Optimization (CS-PSO) algorithm thus offers desirable optimization properties, such as simple update mathematics, fast convergence, high efficiency, and good population diversity.
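    The K-medoids pre-clustering step can be illustrated with the minimal sketch below, which groups tasks by a single resource-demand feature before allocation. The one-dimensional feature, the greedy medoid-swap loop, and k=2 are illustrative assumptions, not the exact scheme from the paper.

    ```python
    # Minimal sketch of K-medoids grouping of tasks by resource demand.
    # The 1-D demand feature, the swap heuristic, and k=2 are assumptions.

    def k_medoids(demands, k=2, iters=20):
        """Cluster task demands around k medoids by greedy medoid swapping."""
        medoids = sorted(demands)[:: max(1, len(demands) // k)][:k]
        for _ in range(iters):
            clusters = {m: [] for m in medoids}
            for d in demands:
                nearest = min(medoids, key=lambda m: abs(d - m))
                clusters[nearest].append(d)
            new_medoids = []
            for m, members in clusters.items():
                if not members:                  # keep an empty cluster's medoid unchanged
                    new_medoids.append(m)
                    continue
                # New medoid = member minimizing total distance within its cluster.
                new_medoids.append(min(members, key=lambda c: sum(abs(c - x) for x in members)))
            if sorted(new_medoids) == sorted(medoids):
                break
            medoids = new_medoids
        return clusters


    if __name__ == "__main__":
        task_demands = [1.0, 1.2, 0.9, 5.0, 4.8, 5.3]   # e.g. CPU demand per task
        print(k_medoids(task_demands, k=2))
    ```

    The resulting groups would then be handed to the swarm-based allocator, which searches for the best resource configuration within and across clusters.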