5,527 research outputs found

    Cost-Effective Scheduling and Load Balancing Algorithms in Cloud Computing Using Learning Automata

    Cloud computing is a distributed computing model in which access to resources is provided on demand. A cloud computing environment includes a wide variety of resource suppliers and consumers; hence, efficient and effective methods for task scheduling and load balancing are required. This paper presents a new approach to task scheduling and load balancing in the cloud computing environment, with an emphasis on the cost-efficiency of executing tasks on resources. The proposed algorithms are based on a fair distribution of jobs between machines, which prevents any single machine's cost from rising disproportionately while other machines sit idle. Two parameters, Total Cost and Final Cost, are designed to achieve this goal; applying them creates a fair basis for job scheduling and load balancing. To implement the proposed approach, learning automata are used as an effective and efficient reinforcement learning technique. Finally, to show the effectiveness of the proposed algorithms, we conducted simulations using the CloudSim toolkit and compared the proposed algorithms with existing algorithms such as BCO, PES, CJS, PPO, and MCT. The proposed algorithms can balance the Final Cost and Total Cost of machines, and they outperform the best existing algorithms in terms of efficiency and imbalance degree.
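A learning automaton of the kind this abstract relies on can be sketched in a few lines. The following is a minimal linear reward-inaction (L_R-I) automaton choosing among machines; the machine costs, reward rule, and learning rate are illustrative assumptions, not the paper's actual design.

```python
import random

random.seed(0)  # fixed seed so the toy run is reproducible

class LearningAutomaton:
    """Minimal L_R-I automaton over a fixed action set."""
    def __init__(self, n_actions, alpha=0.1):
        self.p = [1.0 / n_actions] * n_actions  # action probabilities
        self.alpha = alpha                      # learning rate

    def choose(self):
        # Sample an action index according to the current probabilities.
        r, acc = random.random(), 0.0
        for i, pi in enumerate(self.p):
            acc += pi
            if r <= acc:
                return i
        return len(self.p) - 1

    def reward(self, i):
        # L_R-I update: reinforce action i, shrink the others; the
        # probabilities stay normalized because the update preserves the sum.
        for j in range(len(self.p)):
            if j == i:
                self.p[j] += self.alpha * (1.0 - self.p[j])
            else:
                self.p[j] *= (1.0 - self.alpha)

# Toy use: reward the cheapest machine so its selection probability grows.
costs = [5.0, 2.0, 8.0]                 # hypothetical per-job machine costs
la = LearningAutomaton(len(costs))
for _ in range(200):
    m = la.choose()
    if costs[m] == min(costs):          # cheap machine chosen -> reward
        la.reward(m)
```

After the loop, the automaton's probability mass has shifted toward the cheapest machine; in the paper's setting the reward signal would instead come from the Total Cost / Final Cost balance.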

    A Schedule of Duties in the Cloud Space Using a Modified Salp Swarm Algorithm

    Cloud computing is a concept introduced in the information technology era, with its main components being grid, distributed, and utility computing. The cloud is being developed continuously and, naturally, faces many challenges, one of which is scheduling. A schedule or timeline is a mechanism used to optimize the time for performing a duty or set of duties, and a scheduling process is responsible for choosing the best resources for performing a duty. The main goal of a scheduling algorithm is to improve the efficiency and quality of service while ensuring the acceptability and effectiveness of the targets. The task scheduling problem is one of the most important NP-hard problems in the cloud domain, and many techniques have been proposed as solutions, including genetic algorithms (GAs), particle swarm optimization (PSO), and ant colony optimization (ACO). To address this problem, this paper expands, improves, and applies one of the collective intelligence algorithms, the Salp Swarm Algorithm (SSA). The performance of the proposed algorithm has been compared with that of GAs, PSO, continuous ACO, and the basic SSA. The results show that our algorithm generally outperforms the others; for example, compared to the basic SSA, the proposed method reduces makespan by approximately 21% on average.
    Comment: 15 pages, 6 figures; 2023 IFIP International Internet of Things Conference, Dallas-Fort Worth Metroplex, Texas, US
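The SSA mechanics referenced here can be sketched for task scheduling: each salp position encodes a task-to-VM assignment (rounded to VM indices) and fitness is the makespan. The task lengths, VM speeds, and parameters below are illustrative assumptions, not the paper's modified SSA.

```python
import math
import random

random.seed(1)
tasks = [8, 3, 5, 7, 2, 6]      # hypothetical task lengths
speeds = [1.0, 2.0]             # hypothetical VM processing speeds
n_vms, n_salps, iters = len(speeds), 10, 60

def makespan(pos):
    # Round each continuous coordinate to a VM index, then take the
    # finishing time of the most loaded VM.
    load = [0.0] * n_vms
    for t, x in zip(tasks, pos):
        vm = min(n_vms - 1, max(0, int(round(x))))
        load[vm] += t / speeds[vm]
    return max(load)

swarm = [[random.uniform(0, n_vms - 1) for _ in tasks] for _ in range(n_salps)]
best = min(swarm, key=makespan)[:]      # the "food source"
for it in range(1, iters + 1):
    c1 = 2 * math.exp(-(4 * it / iters) ** 2)   # exploration decays over time
    for i in range(n_salps):
        if i == 0:  # leader moves around the food source
            swarm[i] = [min(n_vms - 1.0,
                            max(0.0, f + random.choice([-1, 1])
                                * c1 * random.uniform(0, n_vms - 1)))
                        for f in best]
        else:       # followers average with the salp ahead of them
            swarm[i] = [(a + b) / 2 for a, b in zip(swarm[i], swarm[i - 1])]
    cand = min(swarm, key=makespan)
    if makespan(cand) < makespan(best):
        best = cand[:]
```

For these toy inputs the optimal makespan is 10.5 (tasks summing to 10 on the slow VM, the rest on the fast one), so any returned schedule lies between that bound and the all-on-one-VM worst case.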

    Overcommitment in Cloud Services -- Bin packing with Chance Constraints

    This paper considers a traditional problem of resource allocation: scheduling jobs on machines. One recent application is cloud computing, where jobs arrive in an online fashion with capacity requirements and must be immediately scheduled on physical machines in data centers. It is often observed that the requested capacities are not fully utilized, offering an opportunity to employ an overcommitment policy, i.e., selling resources beyond capacity. Setting the right overcommitment level can yield a significant cost reduction for the cloud provider while inducing only a very low risk of violating capacity constraints. We introduce and study a model that quantifies the value of overcommitment by casting the problem as bin packing with chance constraints. We then propose an alternative formulation that transforms each chance constraint into a submodular function. We show that our model captures the risk-pooling effect and can guide scheduling and overcommitment decisions. We also develop a family of online algorithms that are intuitive, easy to implement, and provide a constant-factor guarantee relative to the optimum. Finally, we calibrate our model using realistic workload data and test our approach in a practical setting. Our analysis and experiments illustrate the benefit of overcommitment in cloud services and suggest a cost reduction of 1.5% to 17%, depending on the provider's risk tolerance.
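The chance-constrained packing idea can be sketched concretely. Assuming each job's demand is an independent Gaussian, the constraint P(total demand > capacity) <= epsilon reduces to sum(mu) + z * sqrt(sum(var)) <= capacity; the z-value, workloads, and first-fit rule below are illustrative, and the paper's submodular reformulation is not shown.

```python
import math

Z = 1.645  # ~95th percentile of the standard normal (epsilon = 0.05)

def fits(machine, job, capacity):
    # machine: list of (mean, variance) jobs already placed on it.
    mus = sum(m for m, _ in machine) + job[0]
    var = sum(v for _, v in machine) + job[1]
    return mus + Z * math.sqrt(var) <= capacity

def first_fit(jobs, capacity):
    # Jobs arrive online as (mean, variance) pairs; place each on the
    # first machine whose chance constraint still holds.
    machines = []
    for job in jobs:
        for m in machines:
            if fits(m, job, capacity):
                m.append(job)
                break
        else:
            machines.append([job])      # no machine fits: open a new one
    return machines

# Risk pooling: ten uncertain jobs share slack, so one machine suffices
# where a worst-case (mean + 3*sigma per job) packing would need more.
jobs = [(2.0, 0.25)] * 10
placement = first_fit(jobs, capacity=25.0)
```

Here ten jobs with mean 2.0 and variance 0.25 yield a pooled requirement of 20 + 1.645·√2.5 ≈ 22.6, which fits a single machine of capacity 25 even though a per-job 3σ buffer would not.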

    A Genetic Algorithm Scheduling Approach for Virtual Machine Resources in a Cloud Computing Environment

    In the present cloud computing environment, scheduling approaches for VM (virtual machine) resources focus only on the current state of the entire system. Most often they fail to consider system variation and historical behavioral data, which causes system load imbalance. To better solve the problem of VM resource scheduling in a cloud computing environment, this project demonstrates a genetic algorithm based VM resource scheduling strategy that focuses on system load balancing. Using historical data and the current state of the system, the genetic algorithm computes in advance the impact that deploying a new VM resource will have on the system, and then picks the solution with the least effect. This ensures better load balancing and reduces the number of dynamic VM migrations, thereby solving the problems of load imbalance and high migration costs that usually occur when scheduling is performed using traditional algorithms.
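The "compute the impact in advance" step can be sketched as a GA whose fitness is the load imbalance a candidate placement would cause. The host loads, VM demands, and GA parameters are illustrative assumptions, not the project's actual strategy.

```python
import random

random.seed(0)
host_load = [0.5, 0.2, 0.7, 0.3]   # hypothetical current host utilizations
vm_demand = [0.2, 0.1, 0.3]        # hypothetical new VMs to place

def fitness(chrom):
    # chrom[i] = host chosen for VM i; score a placement by the load
    # variance it would cause after deployment (lower = better balance).
    load = list(host_load)
    for vm, h in enumerate(chrom):
        load[h] += vm_demand[vm]
    mean = sum(load) / len(load)
    return sum((x - mean) ** 2 for x in load)

def evolve(pop_size=20, gens=40, mut=0.2):
    pop = [[random.randrange(len(host_load)) for _ in vm_demand]
           for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness)
        parents = pop[:pop_size // 2]           # elitist selection
        children = []
        while len(parents) + len(children) < pop_size:
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, len(vm_demand))
            child = a[:cut] + b[cut:]           # one-point crossover
            if random.random() < mut:           # mutation: move one VM
                child[random.randrange(len(child))] = \
                    random.randrange(len(host_load))
            children.append(child)
        pop = parents + children
    return min(pop, key=fitness)

best = evolve()
```

Because the fitness is evaluated on a simulated post-deployment state, no VM is actually migrated during the search, which is what lets the approach avoid the migration churn of reactive schedulers.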

    An Analysis of Efficient and ECO-Friendly Green Cloud Computing Techniques

    A greater effort is needed to increase the electrical energy efficiency of cloud data centers, given the rising demand for cloud computing services brought on by digital transformation and the high scalability of the cloud. This study proposes and evaluates an energy-efficient (EE) framework for increasing the effectiveness of electrical energy use in data centers. The proposed architecture relies on both request scheduling and server consolidation, rather than on a single mechanism as in previously published work. Before scheduling, the EE framework classifies user requests (tasks) according to their time and power requirements. It includes a scheduling algorithm that makes scheduling decisions while accounting for power consumption. It also includes a consolidation algorithm that identifies which servers are overloaded, which servers are underloaded and should be switched off or put to sleep, which VMs should be migrated, and which servers will accept the migrated VMs. A migration mechanism for moving migrated virtual machines to new servers is also part of the EE framework. Simulation results show that, with respect to power usage effectiveness (PUE), data centre energy productivity (DCEP), average execution time, throughput, and cost savings, the EE framework outperforms approaches that rely on only a single mechanism to reduce power use.
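The classification stage that precedes scheduling can be sketched as a simple rule over each request's time and power requirements; the class labels and cut-offs below are illustrative assumptions, not the framework's actual criteria.

```python
# Classify incoming requests by deadline and power draw before scheduling.
# Labels and thresholds are hypothetical, for illustration only.
def classify(task):
    deadline, power = task["deadline_s"], task["power_w"]
    if deadline < 60 and power > 100:
        return "urgent-heavy"      # schedule first, on efficient servers
    if deadline < 60:
        return "urgent-light"      # schedule first, anywhere
    return "deferrable"            # candidate for delayed, consolidated runs

queue = [{"deadline_s": 30, "power_w": 150},
         {"deadline_s": 45, "power_w": 20},
         {"deadline_s": 600, "power_w": 80}]
labels = [classify(t) for t in queue]
```

A scheduler can then pack "deferrable" tasks onto a small set of servers and put the rest to sleep, which is the consolidation half of the framework.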

    A SECURE ENERGY EFFICIENT VM PREDICTION AND MIGRATION FRAMEWORK FOR OVERCOMMITTED CLOUDS

    We propose an integrated, energy-efficient resource allocation framework for overcommitted clouds. The framework achieves substantial energy savings by 1) minimizing physical machine (PM) overload occurrences via virtual machine (VM) resource usage monitoring and prediction, and 2) reducing the number of active PMs via efficient VM migration and placement. Using real Google data consisting of 29-day traces collected from a cluster containing more than 12K PMs, we show that our proposed framework outperforms existing overload-avoidance techniques and prior VM migration strategies by reducing the number of unpredicted overloads, minimizing migration overhead, increasing resource utilization, and reducing cloud energy consumption.
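The prediction half of such a framework can be sketched with a weighted moving average that forecasts each PM's next utilization and flags likely overloads as migration sources. The weights, threshold, and PM names are illustrative assumptions, not the paper's predictor.

```python
# Forecast the next utilization sample of a PM from its recent history.
def predict(history, weights=(0.5, 0.3, 0.2)):
    # Newest sample gets the largest weight.
    recent = history[-len(weights):][::-1]
    return sum(w * u for w, u in zip(weights, recent))

def overloaded(pms, threshold=0.9):
    # Return PMs whose predicted utilization exceeds the threshold;
    # these become sources for proactive VM migration.
    return [name for name, hist in pms.items() if predict(hist) > threshold]

pms = {"pm1": [0.8, 0.9, 0.98],    # trending up: predicted overload
       "pm2": [0.3, 0.35, 0.4]}    # lightly loaded
```

Acting on the prediction rather than the current reading is what lets migrations start before the overload actually occurs.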

    A Multimedia Cloud Computing Model for Combinatorial Virtual Machine Placement

    Cloud computing, which allows users to access subscription-based services on a pay-as-you-go basis, has recently transformed IT departments. Today, a variety of media services are offered through the Internet owing to the development of multimedia cloud computing, which is based on cloud computing. However, as multimedia cloud computing spreads, its high energy consumption has a negative influence on greenhouse gas emissions and raises costs for cloud users. Therefore, while still providing consumers with the resources they require and maintaining a high level of service, multimedia cloud service providers should make every effort to consume as little energy as possible. This paper proposes residual usage-aware (RUA) and performance-aware (PA) methods for virtual machine placement, which find suitable hosts to switch off in order to save energy. These two techniques were merged and applied to cloud data centers to complete the VM consolidation process. The simulation results demonstrate a trade-off between energy consumption and SLA violations. Additionally, during VM deployment, the approach can manage shifting workloads to prevent host overload, dramatically lowering SLA violations.
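A residual usage-aware placement rule can be sketched as a tightest-fit choice: place each VM on the active host whose remaining capacity after placement is smallest, so lightly used hosts drain and can be switched off. The capacities and demands below are illustrative assumptions, not the paper's exact heuristics.

```python
# hosts: list of [used, capacity] pairs; returns the index of the host
# with the smallest non-negative residual after placement, or -1 if none.
def rua_place(hosts, demand):
    best_i, best_residual = -1, None
    for i, (used, cap) in enumerate(hosts):
        residual = cap - used - demand
        if residual >= 0 and (best_residual is None
                              or residual < best_residual):
            best_i, best_residual = i, residual
    return best_i

hosts = [[0.6, 1.0], [0.1, 1.0], [0.8, 1.0]]
i = rua_place(hosts, 0.15)   # tightest feasible fit is host 2 (0.05 left)
```

A performance-aware variant would additionally reject hosts whose post-placement load risks SLA violations, which is the trade-off the simulation results describe.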

    ASETS: A SDN Empowered Task Scheduling System for HPCaaS on the Cloud

    With increasing demand for High Performance Computing (HPC), new ideas and methods have emerged to utilize computing resources more efficiently. Cloud computing appears to provide benefits such as resource pooling, broad network access, and cost efficiency for HPC applications. However, moving HPC applications to the cloud faces several key challenges, primarily virtualization overhead, multi-tenancy, and network latency. Software-Defined Networking (SDN), an emerging technology, appears to pave the road by providing dynamic manipulation of cloud networking properties such as topology, routing, and bandwidth allocation. This paper presents a new scheme called ASETS, which targets dynamic configuration and monitoring of cloud networking using SDN to improve the performance of HPC applications, and in particular task scheduling for HPC as a Service on the cloud (HPCaaS). Further, SETSA (SDN-Empowered Task Scheduler Algorithm) is proposed as a novel task scheduling algorithm for the ASETS architecture. SETSA monitors network bandwidth to take advantage of its changes when submitting tasks to the virtual machines. Empirical analysis of the algorithm in different scenarios shows that SETSA has significant potential to improve the performance of HPCaaS platforms by increasing bandwidth efficiency and decreasing task turnaround time. In addition, SETSAW (SETSA Window) is proposed as an improvement of the SETSA algorithm.
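Bandwidth-aware dispatching in the spirit of SETSA can be sketched as follows: before each task is submitted, pick the VM with the highest currently measured bandwidth. The measurements, VM names, and the crude bandwidth-decay model are illustrative assumptions, not ASETS's SDN interface.

```python
# bandwidth: dict vm -> current Mb/s, as a hypothetical SDN controller
# might report it; dispatch each task to the currently fastest VM.
def dispatch(tasks, bandwidth):
    placement = {}
    for task in tasks:
        vm = max(bandwidth, key=bandwidth.get)
        placement[task] = vm
        bandwidth[vm] *= 0.5   # crude model: a dispatch consumes bandwidth
    return placement

plan = dispatch(["t1", "t2", "t3"], {"vm-a": 100.0, "vm-b": 60.0})
```

Because the bandwidth view is refreshed between submissions, the dispatcher spreads tasks as links saturate instead of committing to a stale snapshot.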

    Capuchin Search Particle Swarm Optimization (CS-PSO) based Optimized Approach to Improve the QoS Provisioning in Cloud Computing Environment

    This paper introduces a method for enhancing resource allocation in cloud computing environments subject to QoS constraints. Since resource allocation directly affects the quality of service (QoS) of cloud platforms, QoS constraints such as response time, throughput, waiting time, and makespan are key factors to take into account. The approach uses the Capuchin Search Particle Swarm Optimization (CS-PSO) algorithm to optimize resource allocation while respecting QoS constraints, targeting objectives including throughput, response time, makespan, waiting time, and resource utilization. Resources are partitioned using K-medoids clustering: during clustering, tasks are divided into groups, and the resource allocation is refined to obtain the optimal configuration. The experimental setup uses a Java implementation and the GWA-T-12 Bitbrains dataset for simulation, and the extreme-value optimization problem of the multivariable objective function is solved with the improved algorithm. The simulation findings show that the baseline Cloud Particle Swarm Optimization (CPSO) algorithm repeatedly fails to converge within 500 generations, and the comparative analysis reveals that the developed model outperforms state-of-the-art approaches. Overall, this approach provides an effective procedure for improving resource allocation in cloud computing environments and can be applied to a variety of resource allocation challenges, such as virtual machine placement, job scheduling, and resource provisioning. The CS-PSO algorithm thus offers desirable optimization properties, including simple mathematics, fast convergence, high efficiency, and good population diversity.
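The PSO half of CS-PSO can be sketched on a toy objective; the capuchin-search hybrid step, the parameters, and the sphere-function objective are illustrative assumptions, not the paper's algorithm.

```python
import random

random.seed(2)

def f(x):                     # toy multivariable objective: sphere function
    return sum(v * v for v in x)

dim, n, iters, w, c1, c2 = 2, 15, 80, 0.7, 1.5, 1.5
X = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(n)]
V = [[0.0] * dim for _ in range(n)]       # velocities
P = [x[:] for x in X]                     # personal bests
g = min(P, key=f)[:]                      # global best
for _ in range(iters):
    for i in range(n):
        for d in range(dim):
            # Standard PSO velocity update: inertia + cognitive + social.
            V[i][d] = (w * V[i][d]
                       + c1 * random.random() * (P[i][d] - X[i][d])
                       + c2 * random.random() * (g[d] - X[i][d]))
            X[i][d] += V[i][d]
        if f(X[i]) < f(P[i]):             # improved personal best
            P[i] = X[i][:]
            if f(P[i]) < f(g):            # improved global best
                g = P[i][:]
```

The global best only ever improves, which is the monotone behavior the hybrid capuchin step is meant to accelerate when plain PSO stalls.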