Cloud Servers: Resource Optimization Using Different Energy Saving Techniques
Researchers are actively contributing to the emerging fields of cloud computing, edge computing, and distributed systems, and a major area of interest is examining and understanding their performance. Globally leading companies such as Google, Amazon, OnLive, Gaikai, and eBay are seriously concerned about the impact of energy consumption. These cloud computing companies operate huge data centers, consisting of virtual machines positioned worldwide, that incur exceptionally high power costs to maintain. The rising energy requirements of IT firms pose many challenges for cloud computing companies with respect to power expenses. Energy utilization depends on numerous factors, for example the service level agreement, the technique for selecting virtual machines, the applied optimization strategies and policies, and the kind of workload. The present paper addresses energy-saving challenges in gaming data centers with the assistance of dynamic voltage and frequency scaling (DVFS) techniques, and evaluates DVFS against non-power-aware and static threshold detection techniques. The findings will help service providers meet quality-of-service and quality-of-experience constraints while fulfilling service level agreements. For this purpose, the CloudSim platform is used to implement a scenario in which game traces serve as the workload for the analysis. The findings show that well-chosen techniques can help gaming servers reduce energy expenditure while sustaining the best quality of service for consumers located worldwide. The originality of this research lies in examining which approach performs best (dynamic, static, or non-power-aware).
The findings confirm that the dynamic voltage and frequency scaling method uses less energy, with fewer service level agreement violations and better quality of service and experience, than static threshold consolidation or the non-power-aware technique.
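As a rough illustration of why DVFS can beat a non-power-aware configuration, the sketch below compares total energy under a linear utilization-to-power model of the kind commonly used in CloudSim-style simulations. The wattage figures and the utilization trace are illustrative assumptions, not values from the paper:

```python
def energy_non_power_aware(util_trace, p_max=250.0, dt=1.0):
    """Non-power-aware server: always draws peak power, regardless of load."""
    return sum(p_max * dt for _ in util_trace)

def energy_dvfs(util_trace, p_idle=100.0, p_max=250.0, dt=1.0):
    """Linear power model: with DVFS, frequency and voltage track
    utilization, so power interpolates between idle and peak draw."""
    return sum((p_idle + (p_max - p_idle) * u) * dt for u in util_trace)

# Hypothetical CPU utilization samples (fractions), one per second
trace = [0.2, 0.5, 0.9, 0.1]
print(energy_non_power_aware(trace))  # 1000 J: peak draw for all 4 s
print(energy_dvfs(trace))             # roughly 655 J under this trace
```

With a bursty gaming workload that idles between request spikes, the gap between the two models widens further, which is the effect the abstract reports.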
Energy-efficient Nature-Inspired techniques in Cloud computing datacenters
Cloud computing is the systematic delivery of computing resources as services to consumers via the Internet. Infrastructure as a Service (IaaS) is the capability provided to the consumer through smarter access to processing, storage, networks, and other fundamental computing resources, on which the consumer can deploy and run arbitrary software, including operating systems and applications. The resources are often made available in the form of Virtual Machines (VMs). Cloud services are provided to consumers on demand and are billed accordingly. The VMs typically run in datacenters comprising many computing resources that consume large amounts of energy, resulting in hazardous levels of carbon emissions into the atmosphere. Several researchers have proposed energy-efficient methods for reducing the energy consumption of datacenters; one such class of solutions is the Nature-Inspired algorithms. Towards this end, this paper presents a comprehensive review of the state-of-the-art Nature-Inspired algorithms proposed for solving energy issues in Cloud datacenters. A taxonomy is presented focusing on three key dimensions in the literature: virtualization, consolidation, and energy-awareness. A qualitative review of each technique is carried out, considering its key goal, method, advantages, and limitations. The Nature-Inspired algorithms are compared based on their features to indicate their resource utilization and their level of energy efficiency. Finally, potential research directions for energy optimization in datacenters are identified. This review enables researchers and professionals in Cloud computing datacenters to understand the evolution of the literature and to explore better energy-efficient methods for Cloud computing datacenters.
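As one concrete example of the kind of Nature-Inspired technique surveyed here, the sketch below uses a minimal genetic algorithm to pack VMs onto as few hosts as possible. The fitness function, operators, and all parameter values are illustrative assumptions, not any specific algorithm from the reviewed literature:

```python
import random

def fitness(placement, demands, capacity):
    """Number of active hosts, with a heavy penalty per over-capacity host."""
    load = {}
    for vm, host in enumerate(placement):
        load[host] = load.get(host, 0) + demands[vm]
    penalty = sum(1 for l in load.values() if l > capacity)
    return len(load) + 10 * penalty

def evolve(demands, capacity, n_hosts, pop=30, gens=200, seed=0):
    rng = random.Random(seed)
    # Each individual maps VM index -> host index
    population = [[rng.randrange(n_hosts) for _ in demands] for _ in range(pop)]
    for _ in range(gens):
        population.sort(key=lambda p: fitness(p, demands, capacity))
        parents = population[:pop // 2]          # elitist selection
        children = []
        for _ in range(pop - len(parents)):
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, len(demands))  # one-point crossover
            child = a[:cut] + b[cut:]
            if rng.random() < 0.3:                # mutation: move one VM
                child[rng.randrange(len(child))] = rng.randrange(n_hosts)
            children.append(child)
        population = parents + children
    return min(population, key=lambda p: fitness(p, demands, capacity))

demands = [30, 40, 20, 10, 50, 25]   # hypothetical CPU demand per VM (%)
best = evolve(demands, capacity=100, n_hosts=6)
print(fitness(best, demands, 100))   # active hosts used (lower is better)
```

The same evaluate/select/vary loop underlies the ant-colony, particle-swarm, and other bio-inspired consolidation algorithms that the review compares; they differ mainly in how candidate placements are generated.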
Energy and Performance Management of Virtual Machines: Provisioning, Placement, and Consolidation
Cloud computing is a new computing paradigm that offers scalable storage and compute resources to users on demand through the Internet. Public cloud providers operate large-scale data centers around the world to handle a large number of user requests. However, data centers consume an immense amount of electrical energy, which can lead to high operating costs and carbon emissions. One of the most common and effective methods for reducing energy consumption is Dynamic Virtual Machine Consolidation (DVMC), enabled by virtualization technology. DVMC dynamically consolidates Virtual Machines (VMs) onto the minimum number of active servers and then switches the idle servers into a power-saving mode to save energy. However, maintaining the desired level of Quality-of-Service (QoS) between data centers and their users is critical for satisfying users’ expectations concerning performance. Therefore, the main challenge is to minimize data center energy consumption while maintaining the required QoS.
This thesis addresses this challenge by presenting novel DVMC approaches that reduce the energy consumption of data centers and improve resource utilization under workload-independent quality-of-service constraints. These approaches fall into three main categories: heuristic, meta-heuristic, and machine learning.
Our first contribution is a heuristic algorithm for solving the DVMC problem. The algorithm uses a linear regression-based prediction model to detect overloaded servers from historical utilization data, then migrates some VMs away from the overloaded servers to avoid further performance degradation. Moreover, our algorithm consolidates VMs onto fewer servers to save energy. The second and third contributions are two novel DVMC algorithms based on Reinforcement Learning (RL). RL is attractive for highly adaptive and autonomous management in dynamic environments. For this reason, we use RL to solve two main sub-problems in VM consolidation: detecting the server power mode (sleep or active), and detecting the server status (overloaded or non-overloaded). The fourth contribution of this thesis is an online meta-heuristic optimization algorithm called Ant Colony System-based Placement Optimization (ACS-PO). ACS is a suitable approach for VM consolidation because it parallelizes easily, comes close to the optimal solution, and has polynomial worst-case time complexity. The simulation results show that ACS-PO provides substantial improvements over other heuristic algorithms in reducing energy consumption, the number of VM migrations, and performance degradation.
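The linear regression-based overload detection in the first contribution can be sketched roughly as follows: fit an ordinary least-squares line through recent utilization samples and flag the server if the one-step-ahead extrapolation crosses a threshold. The window length and threshold below are illustrative, not the thesis's tuned values:

```python
def predict_next_utilization(history):
    """OLS line through samples at t = 0..n-1, extrapolated to t = n."""
    n = len(history)
    mean_x = (n - 1) / 2
    mean_y = sum(history) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in enumerate(history))
    var = sum((x - mean_x) ** 2 for x in range(n))
    slope = cov / var if var else 0.0
    return mean_y + slope * (n - mean_x)  # predicted utilization at t = n

def is_overloaded(history, threshold=0.8):
    """Flag a server whose trend is about to exceed the safe threshold."""
    return predict_next_utilization(history) > threshold

print(is_overloaded([0.50, 0.60, 0.70, 0.78]))  # rising trend -> True
print(is_overloaded([0.50, 0.50, 0.50, 0.50]))  # flat trend -> False
```

Acting on the predicted rather than the instantaneous utilization is what lets the heuristic migrate VMs before SLA-visible degradation occurs.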
Our fifth contribution is a Hierarchical VM management (HiVM) architecture based on the three-tier topology that is very commonly used in data centers. HiVM can scale across many thousands of servers with energy efficiency. Our sixth contribution is a Utilization Prediction-aware Best Fit Decreasing (UP-BFD) algorithm. UP-BFD avoids SLA violations and needless migrations by taking into consideration both the current and the predicted future resource requirements for the allocation, consolidation, and placement of VMs.
Finally, the seventh and last contribution is a novel Self-Adaptive Resource Management System (SARMS) for data centers. To achieve scalability, SARMS uses a hierarchical architecture that is partially inspired by HiVM. Moreover, SARMS provides self-adaptive resource management by dynamically adjusting the utilization thresholds for each server in the data center.
Efficient Energy Management in Cloud Data center using VM Consolidation
Cloud computing is a model in which computing resources can be rapidly provisioned and released with minimal management effort, without the user having to communicate with the cloud service providers. Clouds provide pooled computing resources with on-demand network access, and the resources can be provisioned dynamically according to user needs. Large applications require many computing nodes, and establishing the resulting data centers consumes a large amount of electrical energy, raising problems of carbon dioxide emissions and increasing operating costs. A virtual machine consolidation technique is proposed in this thesis to reduce energy consumption and maximize the utilization of the computing resources in the data center. In the consolidation technique, several virtual machines are packed together onto a single physical machine, which helps decrease energy consumption by putting idle servers into an inactive mode. The number of active hosts is minimized by continuously reallocating VMs using live migration. Each migration may cause Service Level Agreement (SLA) violations, so the number of migrations must be kept low. To satisfy quality of service in a cloud computing environment, our proposed techniques mainly perform the following functions: (i) reducing energy consumption, (ii) minimizing the number of migrations, and (iii) minimizing the percentage of SLA violations. Initially, we detect whether any host is overloaded, using CPU utilization against a threshold value. If an overloaded host is detected, some virtual machines are migrated from it using a VM selection policy. After the VMs are selected, the next step is to place them on new hosts.
For VM placement, greedy algorithms, namely Best Fit Decreasing (BFD) and Modified First Fit Decreasing (MFFD), are used in this thesis. The proposed techniques are compared with the existing EEDVM and PALVM techniques. The proposed AUTREC technique improves on EEDVM by 8% in energy consumption, 3% in number of migrations, 10% in SLA violations, and 12% in host shutdowns. The proposed DUTREC technique improves on PALVM by 9% in energy consumption, 6% in number of migrations, 20% in SLA violations, and 13% in host shutdowns.
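A minimal sketch of the Best Fit Decreasing placement step used in such greedy schemes: VMs are sorted by descending demand, and each is placed on the active host with the least remaining capacity that can still hold it, powering on a new host only when none fits. The single-dimensional CPU demand and the numbers below are illustrative simplifications, not the thesis's setup:

```python
def best_fit_decreasing(vm_demands, host_capacity):
    """Return a VM -> host placement and the number of hosts powered on."""
    hosts = []       # remaining capacity of each active host
    placement = {}
    for vm, demand in sorted(enumerate(vm_demands), key=lambda kv: -kv[1]):
        candidates = [i for i, free in enumerate(hosts) if free >= demand]
        if candidates:
            best = min(candidates, key=lambda i: hosts[i])  # tightest fit
        else:
            hosts.append(host_capacity)   # power on a new host
            best = len(hosts) - 1
        hosts[best] -= demand
        placement[vm] = best
    return placement, len(hosts)

# Hypothetical CPU demands (%) against hosts with 100% capacity each
placement, active = best_fit_decreasing([30, 40, 20, 10, 50, 25], 100)
print(active)  # -> 2: the 175% total demand packs onto two hosts
```

Choosing the tightest-fitting host keeps the remaining free capacity concentrated on as few machines as possible, which is exactly what allows the idle servers to be shut down afterwards.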
Machine Learning Centered Energy Optimization In Cloud Computing: A Review
The rapid growth of cloud computing has led to a significant increase in energy consumption, which is a major concern for the environment and the economy. To address this issue, researchers have proposed various techniques to improve the energy efficiency of cloud computing, including the use of machine learning (ML) algorithms. This research provides a comprehensive review of energy efficiency in cloud computing using ML techniques and extensively compares different ML approaches in terms of the learning model adopted, the ML tools used, model strengths and limitations, the datasets used, the evaluation metrics, and performance. The review categorizes existing approaches into Virtual Machine (VM) selection, VM placement, VM migration, and consolidation methods. It highlights that, among the array of ML models, Deep Reinforcement Learning as a model, TensorFlow as a platform, and CloudSim for dataset generation are the most widely adopted in the literature and emerge as the best choices for constructing ML-driven models that optimize energy consumption in cloud computing.
A Literature Survey on Resource Management Techniques, Issues and Challenges in Cloud Computing
Cloud computing is a large-scale distributed computing paradigm that provides on-demand services for clients. Cloud clients use web browsers, mobile apps, thin clients, or terminal emulators to request and control their cloud resources at any time and from anywhere through the network. As many companies shift their data to the cloud, and as more people become aware of the advantages of storing data in the cloud, the growing number of cloud computing infrastructures and the large amounts of data lead to management complexity for cloud providers. We survey the state-of-the-art resource management techniques for IaaS (Infrastructure as a Service) in cloud computing, and then put forward the major issues in the deployment of cloud infrastructure that must be addressed to avoid poor service delivery.
Allocation of Virtual Machines in Cloud Data Centers - A Survey of Problem Models and Optimization Algorithms
Data centers in public, private, and hybrid cloud settings make it possible to provision virtual machines
(VMs) with unprecedented flexibility. However, purchasing, operating, and maintaining the underlying physical
resources incurs significant monetary costs and also environmental impact. Therefore, cloud providers must
optimize the usage of physical resources by a careful allocation of VMs to hosts, continuously balancing between
the conflicting requirements on performance and operational costs. In recent years, several algorithms have been
proposed for this important optimization problem. Unfortunately, the proposed approaches are hard to compare
because of subtle differences in the underlying problem models. This paper surveys the problem formulations and
optimization algorithms in use, highlighting their strengths and limitations, and pointing out the areas that need
further research.