Energy-aware scheduling in virtualized datacenters
The reduction of energy consumption in large-scale datacenters is being accomplished through an extensive use of virtualization, which enables the consolidation of multiple workloads
in a smaller number of machines. Nevertheless, virtualization also
incurs some additional overheads (e.g. virtual machine creation and migration) that can influence what is the best consolidated
configuration, and thus, they must be taken into account. In this paper, we present a dynamic job scheduling policy for
power-aware resource allocation in a virtualized datacenter. Our policy tries to consolidate workloads from separate machines
into a smaller number of nodes, while fulfilling the amount of hardware resources needed to preserve the quality of service
of each job. This allows turning off the spare servers, thus reducing the overall datacenter power consumption. As a novelty,
this policy incorporates all the virtualization overheads in the decision process. In addition, our policy is prepared to consider other important parameters for a datacenter, such as reliability or dynamic SLA enforcement, in a synergistic way with power consumption. The introduced policy is evaluated comparing it against common policies in a simulated environment that
accurately models HPC jobs execution in a virtualized datacenter including power consumption modeling and obtains a power
consumption reduction of 15% with respect to typical policies.Peer ReviewedPostprint (published version
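The core trade-off described above, consolidating jobs onto fewer nodes only when the power saved by switching servers off outweighs the migration overhead, can be illustrated with a minimal sketch. This is not the paper's actual policy; the packing heuristic (first-fit decreasing) and all parameter names (`idle_power_w`, `migration_energy_j`, etc.) are illustrative assumptions.

```python
# Hedged sketch of an overhead-aware consolidation decision.
# First-fit-decreasing packing stands in for the (unspecified)
# placement algorithm; every numeric parameter is hypothetical.

def pack_first_fit_decreasing(demands, capacity):
    """Pack per-job CPU demands into the fewest nodes (first-fit decreasing).
    Returns the number of nodes needed."""
    nodes = []  # residual free capacity per open node
    for d in sorted(demands, reverse=True):
        for i, free in enumerate(nodes):
            if d <= free:
                nodes[i] = free - d
                break
        else:
            nodes.append(capacity - d)  # open a new node
    return len(nodes)

def worth_consolidating(current_nodes, demands, capacity,
                        idle_power_w, horizon_s, migration_energy_j):
    """Consolidate only if the energy saved by turning off freed servers
    over the planning horizon exceeds a worst-case migration overhead
    (every VM moves once)."""
    target = pack_first_fit_decreasing(demands, capacity)
    freed = current_nodes - target
    saved_j = freed * idle_power_w * horizon_s
    overhead_j = len(demands) * migration_energy_j
    return freed > 0 and saved_j > overhead_j
```

For example, four jobs with CPU demands 0.6, 0.5, 0.4, and 0.3 of a node fit on two nodes, so consolidating from four nodes frees two; whether that is worthwhile then depends on the idle power draw, the horizon, and the per-migration energy cost.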
Resource-level QoS metric for CPU-based guarantees in cloud providers
The success of Cloud computing requires that both customers and providers can be confident that signed Service Level Agreements (SLAs) support their respective business activities to the best extent. Currently used SLAs fail to provide such confidence, especially when providers outsource resources to other providers. These resource providers typically support very simple metrics, or metrics that hinder an efficient exploitation of their resources. In this paper, we propose a resource-level metric for specifying fine-grained guarantees on CPU performance. This metric allows resource providers to dynamically allocate their resources among the running services depending on their demand. This is accomplished by incorporating the customer's CPU usage into the metric definition, while avoiding fake SLA violations when the customer's task does not use all of its allocated resources. As demonstrated in our evaluation, conducted in a virtualized provider where we implemented the infrastructure needed to use our metric, our solution yields fewer SLA violations than other CPU-related metrics.
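The key idea of the metric, that under-allocation only counts as a violation when the customer's task was actually constrained by it, can be sketched as a simple predicate. This is an illustrative reading of the abstract, not the paper's formal metric definition; the function name and the saturation tolerance `eps` are assumptions.

```python
# Hedged sketch of a usage-aware CPU SLA check (illustrative only).

def sla_violated(agreed_share, allocated_share, used_share, eps=0.01):
    """Flag a violation only when the provider allocated less CPU than
    agreed AND the task's usage saturated that allocation (i.e., it
    would have used more). If the task left CPU idle, under-allocation
    is not a violation -- this avoids the "fake violations" a pure
    allocation-based metric would report."""
    under_allocated = allocated_share < agreed_share
    saturated = used_share >= allocated_share - eps
    return under_allocated and saturated
```

Under this reading, allocating 50% of an agreed 100% is a violation when the task consumes the full 50%, but not when it only uses 20%, since the task was not constrained by the shortfall.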
Energy-efficient and multifaceted resource management for profit-driven virtualized data centers
Since virtualization was introduced in data centers, it has opened new opportunities for resource management. Nowadays, it is not used only as a tool for consolidating underused nodes and saving power; it also enables new solutions to well-known challenges, such as heterogeneity management. Virtualization encapsulates Web-based applications or HPC jobs in virtual machines (VMs), so that each can be managed as a single entity in an easier and more efficient way. We propose a new scheduling policy that models and manages a virtualized data center. It focuses on the allocation of VMs to data center nodes according to multiple facets in order to optimize the provider's profit. In particular, it considers energy efficiency, virtualization overheads, and SLA violation penalties, and supports outsourcing to external providers. The proposed approach is compared to other common scheduling policies, demonstrating that a provider can improve its benefit by 30% and save power while handling other challenges, such as resource outsourcing, in a better and more intuitive way than other typical approaches.
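The profit-driven placement decision described above, weighing revenue against energy cost, SLA penalties, and outsourcing fees, can be sketched as follows. This is a minimal illustration of the trade-off, not the paper's model; all function names, parameters, and prices are hypothetical.

```python
# Hedged sketch of a profit comparison between hosting a VM locally
# and outsourcing it (all figures and names are illustrative).

def placement_profit(revenue, power_w, price_per_kwh, hours,
                     sla_penalty, outsourcing_fee=0.0):
    """Profit of one placement over `hours`: revenue minus energy cost,
    expected SLA violation penalties, and any outsourcing fee."""
    energy_cost = power_w / 1000.0 * hours * price_per_kwh
    return revenue - energy_cost - sla_penalty - outsourcing_fee

def best_option(options):
    """Pick the (name, profit) pair with the highest profit."""
    return max(options, key=lambda kv: kv[1])
```

For instance, hosting locally for a day at 200 W and 0.2 per kWh with a small expected SLA penalty can be compared directly against outsourcing for a flat fee with no local energy cost, and the scheduler picks whichever yields more profit.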