
    Security Aware Virtual Machine Allocation Policy to Improve QoS

    Cloud service providers regard managing the energy consumption of datacentres as a critical operation, as the rising number of data centres consumes significant energy. To address this challenge, datacentres attempt to reduce the number of active physical servers through virtual machine consolidation. However, because of inadequate security measures for verifying hostile cloud users, security threats on multitenant cloud platforms have escalated. In this paper we propose energy-efficient virtual machine consolidation using a priority-based, security-aware virtual machine allocation policy to improve datacentre security. The proposed solution considers the host threat score before virtual machine placement, which reduces the risk of co-residency attacks without impacting datacentre energy consumption.
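    A minimal sketch of how such a priority-based, security-aware placement step could look, assuming each host exposes a threat score and a power-increase estimator (illustrative field names, not the paper's implementation):

        # Minimal sketch (not the paper's code): filter hosts by an assumed
        # threat score, then prefer the host whose estimated power increase
        # after placing the VM is smallest.

        def select_host(hosts, vm, threat_threshold=0.5):
            """hosts: dicts with 'threat_score', 'free_mips', 'power_increase' (callable)."""
            # Keep only hosts below the acceptable threat threshold
            # that still have enough spare capacity for the VM.
            candidates = [h for h in hosts
                          if h["threat_score"] <= threat_threshold
                          and h["free_mips"] >= vm["mips"]]
            if not candidates:
                return None  # no sufficiently secure host; placement is deferred
            # Among secure hosts, pick the one with the lowest estimated power increase.
            return min(candidates, key=lambda h: h["power_increase"](vm))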

    EPOBF: Energy Efficient Allocation of Virtual Machines in High Performance Computing Cloud

    Cloud computing has become a popular way to provision computing resources under the virtual machine (VM) abstraction for high performance computing (HPC) users to run their applications; an HPC cloud is such an environment. One of the challenges of energy-efficient resource allocation for VMs in an HPC cloud is the tradeoff between minimizing the total energy consumption of physical machines (PMs) and satisfying quality of service (e.g. performance). On one hand, cloud providers want to maximize their profit by reducing power costs (e.g. by running the smallest number of PMs). On the other hand, cloud customers (users) want the highest performance for their applications. In this paper, we focus on the scenario in which the scheduler has no global information about future user jobs and applications. Users request short-term resources at fixed start times and with non-interrupted durations. We then propose a new allocation heuristic, Energy-aware and Performance-per-watt oriented Best-fit (EPOBF), that uses a performance-per-watt metric (e.g. maximum MIPS per Watt) to choose the most energy-efficient PM for each VM. Using information from Feitelson's Parallel Workload Archive to model HPC jobs, we compare the proposed EPOBF to state-of-the-art heuristics on heterogeneous PMs (each PM has a multicore CPU). Simulations show that EPOBF can significantly reduce total energy consumption in comparison with state-of-the-art allocation heuristics. Comment: 10 pages, in Proceedings of the International Conference on Advanced Computing and Applications, Journal of Science and Technology, Vietnamese Academy of Science and Technology, ISSN 0866-708X, Vol. 51, No. 4B, 201
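    An illustrative sketch of the performance-per-watt best-fit rule EPOBF is built around, assuming each PM exposes its total MIPS and its power draw at full load (field names are ours, not the paper's):

        # Sketch of a MIPS-per-Watt best-fit selection in the spirit of EPOBF.

        def epobf_select_pm(pms, vm):
            """Return the feasible PM with the highest MIPS-per-Watt ratio, or None."""
            feasible = [pm for pm in pms
                        if pm["free_mips"] >= vm["mips"] and pm["free_ram"] >= vm["ram"]]
            if not feasible:
                return None
            # Performance per watt: total MIPS of the (multicore) PM divided by its
            # power draw at full load; the most energy-efficient PM hosts the VM.
            return max(feasible,
                       key=lambda pm: pm["total_mips"] / pm["power_at_full_load"])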

    Cloud Host Selection using Iterative Particle-Swarm Optimization for Dynamic Container Consolidation

    A significant portion of the energy consumption in cloud data centres can be attributed to inefficient utilization of available resources, owing to the lack of dynamic resource allocation techniques such as virtual machine migration and workload consolidation strategies. We present a new method for optimizing cloud data centre management that combines virtual machine migration with workload consolidation. Our proposed Energy Efficient Particle Swarm Optimization (EE-PSO) algorithm improves resource utilization and reduces energy consumption. We carried out experimental evaluations with the Container CloudSim toolkit to demonstrate the effectiveness of the proposed EE-PSO algorithm in terms of energy consumption, quality of service guarantees, the number of newly created VMs, and container migrations.
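    A rough sketch of the kind of particle-swarm step such a consolidation pass could use, assuming each particle encodes a container-to-host assignment and the fitness is supplied as a cost callback combining energy and QoS penalties (encoding and parameters are illustrative, not the paper's implementation):

        import random

        def pso_consolidate(num_containers, num_hosts, cost, iters=50, swarm=20,
                            w=0.7, c1=1.5, c2=1.5):
            """cost(assignment) -> float, lower is better; assignment maps container -> host id."""
            pos = [[random.uniform(0, num_hosts - 1) for _ in range(num_containers)]
                   for _ in range(swarm)]
            vel = [[0.0] * num_containers for _ in range(swarm)]
            pbest = [p[:] for p in pos]
            pbest_cost = [cost([int(round(x)) for x in p]) for p in pos]
            g = min(range(swarm), key=lambda i: pbest_cost[i])
            gbest, gbest_cost = pbest[g][:], pbest_cost[g]

            for _ in range(iters):
                for i in range(swarm):
                    for d in range(num_containers):
                        r1, r2 = random.random(), random.random()
                        # Standard PSO velocity update toward personal and global bests.
                        vel[i][d] = (w * vel[i][d]
                                     + c1 * r1 * (pbest[i][d] - pos[i][d])
                                     + c2 * r2 * (gbest[d] - pos[i][d]))
                        pos[i][d] = min(max(pos[i][d] + vel[i][d], 0), num_hosts - 1)
                    c = cost([int(round(x)) for x in pos[i]])
                    if c < pbest_cost[i]:
                        pbest[i], pbest_cost[i] = pos[i][:], c
                        if c < gbest_cost:
                            gbest, gbest_cost = pos[i][:], c
            return [int(round(x)) for x in gbest], gbest_cost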

    A smart resource management mechanism with trust access control for cloud computing environment

    The core of the computing business now offers subscription-based, on-demand services with the help of cloud computing. Resources can be shared among multiple users through virtualization, which creates a virtual instance of a computer system running on an abstracted hardware layer. In contrast to early distributed computing models, cloud computing provides seemingly unlimited computing capacity through massive cloud datacentres, and it has been extremely popular in recent years owing to its continually growing infrastructure, user base, and hosted data volume. This article proposes a conceptual framework for a workload management paradigm in cloud settings that is both secure and performance-efficient. In this paradigm, a resource management unit performs energy- and performance-efficient virtual machine allocation, assures the safe execution of users' applications, and protects against data breaches caused by unauthorised virtual machine access in real time. A secure virtual machine management unit controls the resource management unit and is designed to report unlawful access or intercommunication. Additionally, a workload analyzer unit works in parallel to estimate resource consumption, helping the resource management unit allocate virtual machines more effectively. The proposed model also performs data encryption and decryption prior to transfer and uses a trust access mechanism to prevent unauthorised access to virtual machines, which introduces extra computational overhead.
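    A minimal sketch of the trust-access gate described above, assuming a numeric trust score per user and a required trust level per VM; the names and logging behaviour are illustrative, not the article's API:

        import logging

        logger = logging.getLogger("secure_vm_manager")

        def grant_vm_access(user_id: str, user_trust: float,
                            vm_id: str, required_trust: float) -> bool:
            """Allow a request to reach a VM only if the user's trust score suffices."""
            if user_trust >= required_trust:
                return True
            # Unauthorized attempt: record it so the secure VM management unit can react.
            logger.warning("denied access: user=%s vm=%s trust=%.2f < required=%.2f",
                           user_id, vm_id, user_trust, required_trust)
            return False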

    Energy Efficient Multiresource Allocation of Virtual Machine Based on PSO in Cloud Data Center

    Presently, the massive energy consumption of cloud data centers is an escalating threat to the environment. To reduce energy consumption in the cloud data center, this paper proposes an energy-efficient virtual machine allocation algorithm based on a proposed energy-efficient multiresource allocation model and the particle swarm optimization (PSO) method. In this algorithm, the fitness function of PSO is defined as the total Euclidean distance to the optimal point between resource utilization and energy consumption. The algorithm can avoid falling into the local optima that are common in traditional heuristic algorithms. Compared to the traditional heuristics MBFD and MBFH, our algorithm shows significant energy savings in the cloud data center while also keeping the utilization of system resources reasonable.
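    A small sketch of the kind of Euclidean-distance fitness this describes, assuming an illustrative target utilization point and three resource dimensions (CPU, RAM, bandwidth); the actual optimal point in the paper may differ:

        import math

        OPTIMAL = (0.8, 0.8, 0.8)  # assumed target utilization for CPU, RAM, bandwidth

        def fitness(host_utilizations):
            """host_utilizations: list of (cpu, ram, bw) tuples in [0, 1] per active host.
            Total Euclidean distance from the target point; lower is better."""
            return sum(math.dist(u, OPTIMAL) for u in host_utilizations)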

    A Study Resource Optimization Techniques Based Job Scheduling in Cloud Computing

    Cloud computing has revolutionized the way businesses and individuals utilize computing resources. It offers on-demand access to a vast pool of virtualized resources, such as processing power, storage, and networking, through the Internet. One of the key challenges in cloud computing is scheduling jobs efficiently so as to maximize resource utilization and minimize costs. Job scheduling in cloud computing involves allocating tasks or jobs to available resources in an optimal manner, with the objective of minimizing job completion time, maximizing resource utilization, and meeting performance metrics such as response time, throughput, and energy consumption. Resource optimization techniques play a crucial role in achieving these objectives: they allocate resources to jobs efficiently, taking into account factors such as resource availability, job priorities, and constraints, and they rely on various algorithms and optimization approaches to make intelligent decisions about resource allocation. Research on resource optimization techniques for job scheduling in cloud computing is important for the following reasons.
    - Efficient resource utilization: cloud computing environments consist of a large number of resources that must be utilized effectively to maximize cost savings and overall system performance. By optimizing job scheduling, researchers can develop algorithms and techniques that ensure efficient utilization of resources, leading to improved productivity and reduced costs.
    - Performance improvement: job scheduling plays a crucial role in meeting performance metrics such as response time, throughput, and reliability. By designing intelligent scheduling algorithms, researchers can improve overall system performance, leading to a better user experience and higher customer satisfaction.
    - Scalability: cloud computing environments are highly scalable, allowing users to scale resources dynamically based on their needs. Effective job scheduling techniques enable efficient resource allocation and scaling, ensuring that the system can handle varying workloads without compromising performance.
    - Energy efficiency: cloud data centres consume significant amounts of energy, and optimizing resource allocation can contribute to energy conservation. By scheduling jobs intelligently, researchers can reduce energy consumption, leading to environmental benefits and cost savings for cloud service providers.
    - Quality of service (QoS): cloud service providers often have service-level agreements (SLAs) that define the QoS users expect. Resource optimization techniques for job scheduling can help meet these SLAs by allocating resources to jobs in a timely manner, meeting performance guarantees, and maintaining high service availability.
    In this research we use the weighted product model (WPM) to calculate the values of the alternatives and evaluation parameters for resource optimization techniques based job scheduling in cloud computing. The weighted product method (WPM) is a variation of the weighted sum method (WSM) proposed to address some of the weaknesses of the WSM that preceded it; the main distinction is that multiplication is used in place of addition. WSM and WPM are frequently described as "scoring methods". The evaluation criteria are execution time on the virtual machine, transmission time (delay) on the virtual machine, and processing cost of a task on the virtual machine. Resource optimization techniques based on job scheduling play a crucial role in maximizing the efficiency and performance of cloud computing systems: by effectively managing and allocating resources, they help minimize costs, reduce energy consumption, and improve overall system throughput. One of the key findings is that intelligent job scheduling algorithms, such as genetic algorithms, ant colony optimization
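    A brief sketch of WPM scoring over the three cost criteria named in the abstract (execution time, transmission delay, processing cost); the alternatives, values, and weights below are placeholders, not the study's data:

        def wpm_score(values, weights):
            """values: criterion measurements for one alternative; weights sum to 1."""
            score = 1.0
            for v, w in zip(values, weights):
                score *= (1.0 / v) ** w   # invert because all three criteria are costs
            return score

        alternatives = {                   # (execution time, transmission delay, cost)
            "VM-A": (12.0, 3.0, 0.8),
            "VM-B": (10.0, 4.5, 1.0),
        }
        weights = (0.5, 0.3, 0.2)
        best = max(alternatives, key=lambda a: wpm_score(alternatives[a], weights))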

    Hybrid Approach for Resource Allocation in Cloud Infrastructure Using Random Forest and Genetic Algorithm

    In cloud computing, virtualization is a key technology for optimizing the power consumption of cloud data centers. Today, most services are moving to the cloud, increasing the load on data centers; as a result, data centers grow in size and consume more energy. Resolving this issue requires an efficient optimization algorithm for resource allocation. In this work, a hybrid approach for virtual machine allocation is proposed based on a genetic algorithm (GA) and the random forest (RF), a supervised machine learning technique. The aim of the work is to minimize power consumption while maintaining good load balance among available resources and maximizing resource utilization. The proposed model uses the genetic algorithm to generate a training dataset for the random forest model, which is then trained on it. Real-time workload traces from PlanetLab are used to evaluate the approach. The results show that the proposed GA-RF model improves the energy consumption, execution time, and resource utilization of the data center and hosts compared to existing models. The work uses power consumption, execution time, resource utilization, average start time, and average finish time as performance metrics.
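    A conceptual sketch of the GA-RF pipeline as described, in which GA-evaluated placements serve as training data for a random forest; the feature encoding and scikit-learn usage here are assumptions for illustration:

        from sklearn.ensemble import RandomForestRegressor

        def train_placement_model(ga_samples):
            """ga_samples: list of (feature_vector, fitness) pairs produced by the GA,
            where the fitness combines power consumption and load balance."""
            X = [features for features, _ in ga_samples]
            y = [fitness for _, fitness in ga_samples]
            model = RandomForestRegressor(n_estimators=100, random_state=0)
            model.fit(X, y)
            return model

        def best_host(model, candidate_features):
            """Pick the candidate placement the trained forest scores lowest (least power)."""
            predictions = model.predict(candidate_features)
            return min(range(len(candidate_features)), key=lambda i: predictions[i])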

    Allocation and migration of virtual machines using machine learning

    Cloud computing promises a new era of services enabled by virtualization technology. Virtualization is the creation of the virtual infrastructure, devices, servers, and computing resources needed to deploy an application smoothly. This widely practised technology involves selecting an efficient Virtual Machine (VM) to complete a task by transferring applications from Physical Machines (PMs) to VMs or from one VM to another. The whole process is challenging not only in terms of computation but also in terms of energy and memory. This paper presents an energy-aware VM allocation and migration approach to meet the challenges faced by the growing number of cloud data centres. A Machine Learning (ML) based Artificial Bee Colony (ABC) algorithm is used to rank the VMs with respect to their load while treating energy efficiency as a crucial parameter. The most efficient virtual machines are then selected, and, depending on the dynamics of load and energy, applications are migrated from one VM to another. The simulation analysis is performed in Matlab and shows that this work achieves a greater reduction in energy consumption than existing studies.
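    A rough sketch of the ranking idea, assuming VMs expose a load level and an energy-per-work figure; the paper derives the ranking via an Artificial Bee Colony search rather than the fixed weighted score used here:

        def rank_vms(vms, w_load=0.6, w_energy=0.4):
            """vms: list of dicts with 'load' in [0, 1] and 'energy_per_mips' (J per MI)."""
            def score(vm):
                # More free capacity and less energy per unit of work rank higher.
                return (w_load * (1.0 - vm["load"])
                        + w_energy * (1.0 / (1.0 + vm["energy_per_mips"])))
            return sorted(vms, key=score, reverse=True)

        def pick_migration(vms):
            ranked = rank_vms(vms)
            # Migrate from the lowest-ranked (loaded, inefficient) VM to the top-ranked one.
            return ranked[-1], ranked[0]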

    PIASA: A power and interference aware resource management strategy for heterogeneous workloads in cloud data centers

    Cloud data centers have been progressively adopted in different scenarios, as reflected in the execution of heterogeneous applications with diverse workloads and diverse quality of service (QoS) requirements. Virtual machine (VM) technology eases resource management in physical servers and helps cloud providers achieve goals such as optimization of energy consumption. However, the performance of an application running inside a VM is not guaranteed due to the interference among co-hosted workloads sharing the same physical resources. Moreover, the different types of co-hosted applications with diverse QoS requirements, as well as the dynamic behavior of the cloud, make efficient provisioning of resources even more difficult and a challenging problem in cloud data centers. In this paper, we address the problem of resource allocation within a data center that runs different types of application workloads, particularly CPU- and network-intensive applications. To address these challenges, we propose an interference- and power-aware management mechanism that combines a performance deviation estimator and a scheduling algorithm to guide the resource allocation in virtualized environments. We conduct simulations by injecting synthetic workloads whose characteristics follow the latest version of the Google Cloud tracelogs. The results indicate that our performance-enforcing strategy is able to fulfill contracted SLAs of real-world environments while reducing energy costs by as much as 21%.
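    A sketch of an interference- and power-aware placement score in the spirit of PIASA, assuming a performance-deviation estimator and a power-increase estimator are available; both estimators and the trade-off weight are illustrative, not the paper's exact models:

        def placement_score(server, workload, deviation_estimator, power_increase, alpha=0.5):
            """Lower is better: alpha weighs predicted interference against extra power."""
            interference = deviation_estimator(server, workload)   # predicted slowdown
            extra_power = power_increase(server, workload)          # predicted extra watts
            return alpha * interference + (1 - alpha) * extra_power

        def place(servers, workload, deviation_estimator, power_increase):
            # Choose the server whose combined interference/power penalty is smallest.
            return min(servers,
                       key=lambda s: placement_score(s, workload,
                                                     deviation_estimator, power_increase))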