Toward sustainable data centers: a comprehensive energy management strategy
Data centers are major contributors to carbon dioxide emissions, and this contribution is expected to increase in the coming years. This has encouraged the development of techniques to reduce the energy consumption and environmental footprint of data centers. Whereas some of these techniques have succeeded in reducing the energy consumption of data-center hardware (including IT, cooling, and power supply systems), we claim that sustainable data centers will only be possible if the problem is faced through a holistic approach that includes not only the aforementioned techniques but also intelligent, unifying solutions that enable a synergistic and energy-aware management of data centers.
In this paper, we propose a comprehensive strategy to reduce the carbon footprint of data centers that uses energy as the driver of their management procedures. In addition, we present a holistic management architecture for sustainable data centers that implements the aforementioned strategy, and we propose design guidelines for accomplishing each step of the proposed strategy, referring to related achievements and enumerating the main challenges that must still be solved.
Allocation of Virtual Machines in Cloud Data Centers - A Survey of Problem Models and Optimization Algorithms
Data centers in public, private, and hybrid cloud settings make it possible to provision virtual machines
(VMs) with unprecedented flexibility. However, purchasing, operating, and maintaining the underlying physical
resources incurs significant monetary costs and also environmental impact. Therefore, cloud providers must
optimize the usage of physical resources by a careful allocation of VMs to hosts, continuously balancing between
the conflicting requirements on performance and operational costs. In recent years, several algorithms have been
proposed for this important optimization problem. Unfortunately, the proposed approaches are hardly comparable
because of subtle differences in the problem models used. This paper surveys the problem formulations and
optimization algorithms in use, highlighting their strengths and limitations, and points out areas that need
further research.
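Most of the surveyed problem models reduce to a variant of bin packing. As a concrete baseline, here is a minimal First-Fit Decreasing allocation sketch; the single CPU-demand dimension and uniform host capacity are simplifying assumptions for illustration, not taken from any particular surveyed paper.

```python
# Hypothetical sketch: VM-to-host allocation as one-dimensional bin packing,
# solved with First-Fit Decreasing. Demands are integer CPU units.

def allocate_vms(vm_demands, host_capacity):
    """Assign each VM (by CPU demand) to the first host with room,
    opening a new host when none fits. Returns one demand list per host."""
    hosts = []  # each entry: [remaining_capacity, [demands...]]
    for demand in sorted(vm_demands, reverse=True):  # largest first
        for host in hosts:
            if host[0] >= demand:
                host[0] -= demand
                host[1].append(demand)
                break
        else:
            hosts.append([host_capacity - demand, [demand]])
    return [h[1] for h in hosts]

print(allocate_vms([50, 70, 20, 40, 10], 100))
```

Packing onto the fewest hosts is what lets idle servers be powered down, which is where the energy saving of consolidation comes from.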
Performance-oriented Cloud Provisioning: Taxonomy and Survey
Cloud computing is being viewed as the technology of today and the future.
Through this paradigm, the customers gain access to shared computing resources
located in remote data centers that are hosted by cloud providers (CP). This
technology allows for provisioning of various resources such as virtual
machines (VM), physical machines, processors, memory, network, storage and
software as per the needs of customers. Application providers (AP), who are
customers of the CP, deploy applications on the cloud infrastructure and then
these applications are used by the end-users. To meet the fluctuating
application workload demands, dynamic provisioning is essential and this
article provides a detailed literature survey of dynamic provisioning within
cloud systems with focus on application performance. The well-known types of
provisioning and the associated problems are clearly and pictorially explained
and the provisioning terminology is clarified. A very detailed and general
cloud provisioning classification is presented, which views provisioning from
different perspectives, aiding in understanding the process inside-out. Cloud
dynamic provisioning is explained by considering resources, stakeholders,
techniques, technologies, algorithms, problems, goals, and more.
Energy and Performance Management of Virtual Machines: Provisioning, Placement, and Consolidation
Cloud computing is a new computing paradigm that offers scalable storage and compute resources to users on demand over the Internet. Public cloud providers operate large-scale data centers around the world to handle a large number of user requests. However, data centers consume an immense amount of electrical energy, which can lead to high operating costs and carbon emissions. One of the most common and effective methods for reducing energy consumption is Dynamic Virtual Machine Consolidation (DVMC), enabled by virtualization technology. DVMC dynamically consolidates Virtual Machines (VMs) onto the minimum number of active servers and then switches the idle servers into a power-saving mode to save energy. However, maintaining the desired level of Quality of Service (QoS) between data centers and their users is critical for satisfying users' expectations concerning performance. Therefore, the main challenge is to minimize data center energy consumption while maintaining the required QoS.
This thesis addresses this challenge by presenting novel DVMC approaches that reduce the energy consumption of data centers and improve resource utilization under workload-independent quality-of-service constraints. These approaches fall into three main categories: heuristic, meta-heuristic, and machine learning.
Our first contribution is a heuristic algorithm for solving the DVMC problem. The algorithm uses a linear regression-based prediction model to detect over-loaded servers from historical utilization data, and then migrates some VMs away from over-loaded servers to avoid further performance degradation. Moreover, our algorithm consolidates VMs onto fewer servers to save energy. The second and third contributions are two novel DVMC algorithms based on Reinforcement Learning (RL). RL is attractive for highly adaptive and autonomous management in dynamic environments. For this reason, we use RL to solve two main sub-problems in VM consolidation: detecting the server power mode (sleep or active), and detecting the server status (overloaded or non-overloaded). The fourth contribution of this thesis is an online optimization meta-heuristic algorithm called Ant Colony System-based Placement Optimization (ACS-PO). ACS is a suitable approach for VM consolidation because it is easy to parallelize, produces solutions close to the optimum, and has polynomial worst-case time complexity. The simulation results show that ACS-PO provides substantial improvements over other heuristic algorithms in reducing energy consumption, the number of VM migrations, and performance degradation.
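The overload-detection step of the first contribution can be sketched as follows; the four-sample window and the 0.8 threshold are illustrative assumptions, not the thesis's tuned parameters.

```python
# Sketch: fit a linear trend to a server's recent CPU-utilization history
# (ordinary least squares over time steps 0..n-1) and flag the server as
# over-loaded when the extrapolated next utilization crosses a threshold.

def predict_next(history):
    """Least-squares linear fit over (t, utilization); extrapolate one step."""
    n = len(history)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(history) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, history))
    var = sum((x - mean_x) ** 2 for x in xs)
    slope = cov / var if var else 0.0
    return mean_y + slope * (n - mean_x)  # prediction at t = n

def is_overloaded(history, threshold=0.8):
    return predict_next(history) > threshold

print(is_overloaded([0.50, 0.60, 0.70, 0.75]))
```

A server flagged this way becomes a migration source before the overload actually materializes, which is the point of prediction-based detection.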
Our fifth contribution is a Hierarchical VM management (HiVM) architecture based on the three-tier topology that is very common in data centers. HiVM can scale across many thousands of servers while remaining energy efficient. Our sixth contribution is a Utilization Prediction-aware Best Fit Decreasing (UP-BFD) algorithm. UP-BFD avoids SLA violations and needless migrations by taking into consideration both the current and the predicted future resource requirements during the allocation, consolidation, and placement of VMs.
Finally, the seventh and last contribution is a novel Self-Adaptive Resource Management System (SARMS) for data centers. To achieve scalability, SARMS uses a hierarchical architecture that is partially inspired by HiVM. Moreover, SARMS provides self-adaptive resource management by dynamically adjusting the utilization thresholds of each server in the data center.
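As a rough illustration of per-server adaptive thresholds (not SARMS's actual rule), one simple policy lowers the overload threshold for servers whose utilization is volatile:

```python
# Illustrative policy: a server with volatile utilization gets a lower, more
# conservative overload threshold. The formula (1 - k * stddev) and the 0.5
# floor are made-up choices for this sketch.
import statistics

def adaptive_threshold(history, k=1.0, floor=0.5):
    """Lower the static 100% threshold by k standard deviations of the
    recent utilization, never below `floor`."""
    spread = statistics.pstdev(history)
    return max(floor, 1.0 - k * spread)

print(adaptive_threshold([0.4, 0.9, 0.3, 0.8]))
```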
Empirical Potential Function for Simplified Protein Models: Combining Contact and Local Sequence-Structure Descriptors
An effective potential function is critical for protein structure prediction
and folding simulation. Simplified protein models such as those requiring only
Cα or backbone atoms are attractive because they enable efficient
search of the conformational space. We show that residue-specific reduced
discrete-state models can represent the backbone conformations of proteins with small
RMSD values. However, no potential functions exist that are designed for such
simplified protein models. In this study, we develop optimal potential
functions by combining contact interaction descriptors and local
sequence-structure descriptors. The form of the potential function is a
weighted linear sum of all descriptors, and the optimal weight coefficients are
obtained through optimization using both native and decoy structures. The
performance of the potential function in discriminating native protein
structures from decoys is evaluated using several benchmark decoy sets. Our
potential function, which requires only backbone atoms or Cα atoms, has
comparable or better performance than several residue-based potential functions
that require additional coordinates of side-chain centers or of all
side-chain atoms. By reducing the residue alphabet down to size 5 for the local
structure-sequence relationship, the performance of the potential function can
be further improved. Our results also suggest that local sequence-structure
correlation may play an important role in reducing the entropic cost of protein
folding.
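The weighted linear sum described above can be sketched in a few lines; the weights and descriptor values below are toy numbers, not the optimized coefficients from the paper.

```python
# Sketch of a potential function that is a weighted linear sum of descriptors:
# the score of a structure is the dot product of fixed weights with its
# descriptor vector, and the decoy-discrimination test asks whether the native
# structure gets the strictly lowest score.

def potential(weights, descriptors):
    return sum(w * d for w, d in zip(weights, descriptors))

def native_discriminated(weights, native, decoys):
    """True if the native descriptor vector scores strictly below every decoy."""
    e_native = potential(weights, native)
    return all(potential(weights, d) > e_native for d in decoys)

w = [-1.0, 0.5, 2.0]           # toy learned weight per descriptor
native = [10.0, 2.0, 1.0]      # toy contact + local sequence-structure counts
decoys = [[6.0, 3.0, 2.0], [8.0, 1.0, 3.0]]
print(native_discriminated(w, native, decoys))
```

In the paper's setting the weights are obtained by optimization over native and decoy structures so that exactly this discrimination test succeeds as often as possible.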
Hindsight Learning for MDPs with Exogenous Inputs
Many resource management problems require sequential decision-making under
uncertainty, where the only uncertainty affecting the decision outcomes is
exogenous variables outside the control of the decision-maker. We model these
problems as Exo-MDPs (Markov Decision Processes with Exogenous Inputs) and
design a class of data-efficient algorithms for them termed Hindsight Learning
(HL). Our HL algorithms achieve data efficiency by leveraging a key insight:
having samples of the exogenous variables, past decisions can be revisited in
hindsight to infer counterfactual consequences that can accelerate policy
improvements. We compare HL against classic baselines in the multi-secretary
and airline revenue management problems. We also scale our algorithms to a
business-critical cloud resource management problem -- allocating Virtual
Machines (VMs) to physical machines, and simulate their performance with real
datasets from a large public cloud provider. We find that HL algorithms
outperform domain-specific heuristics, as well as state-of-the-art
reinforcement learning methods.
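The core hindsight insight can be illustrated with a toy accept/reject problem (not the paper's HL algorithm): once the exogenous trace is known, the best achievable return is computable exactly, and past decisions can be scored against it.

```python
# Toy counterfactual evaluation with an exogenous reward trace: with k
# acceptance slots, the hindsight-optimal return is the sum of the k largest
# rewards, which yields an exact regret for whatever the online policy earned.

def hindsight_optimal_return(rewards, k):
    """Best achievable return in hindsight: accept the k largest rewards."""
    return sum(sorted(rewards, reverse=True)[:k])

def regret(accepted, rewards, k):
    """Gap between what the online policy accepted and the hindsight optimum."""
    return hindsight_optimal_return(rewards, k) - sum(accepted)

trace = [3, 9, 1, 7, 5]   # exogenous rewards revealed over time
print(regret(accepted=[3, 9], rewards=trace, k=2))
```

This kind of hindsight yardstick is what makes the counterfactual revisiting of past decisions data-efficient: no extra environment interaction is needed to evaluate alternatives.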
Holistic resource allocation for multicore real-time systems
This paper presents CaM, a holistic cache and memory bandwidth resource allocation strategy for multicore real-time systems. CaM is designed for partitioned scheduling, where tasks are mapped onto cores, and the shared cache and memory bandwidth resources are partitioned among cores to reduce resource interference due to concurrent accesses. Based on our extension of LITMUS^RT with Intel's Cache Allocation Technology and MemGuard, we present an experimental evaluation of the relationship between the allocation of cache and memory bandwidth resources and a task's WCET. Our resource allocation strategy exploits this relationship to map tasks onto cores and to compute the resource allocation for each core. By grouping tasks with similar characteristics (in terms of resource demands) on the same core, it enables tasks on each core to fully utilize the assigned resources. In addition, based on the tasks' execution-time behavior with respect to their assigned resources, we can determine a desirable allocation that maximizes schedulability under resource constraints. Extensive evaluations using real-world benchmarks show that CaM offers near-optimal schedulability performance while being highly efficient, and that it substantially outperforms existing solutions.
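The grouping step can be caricatured as follows; chunking sorted cache demands into contiguous groups is an illustrative stand-in for the idea of co-locating similar tasks, not CaM's actual mapping algorithm.

```python
# Toy grouping: tasks with similar cache demands are sorted and chunked onto
# the same core, so each core's cache partition can be sized for a
# homogeneous group rather than for one outlier task.

def group_by_demand(cache_demands, n_cores):
    """Sort task cache demands and split them into n_cores contiguous groups."""
    ordered = sorted(cache_demands)
    size = -(-len(ordered) // n_cores)  # ceiling division
    return [ordered[i:i + size] for i in range(0, len(ordered), size)]

print(group_by_demand([8, 1, 7, 2, 9, 3], n_cores=3))
```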
Resolving data center power bill disputes: the energy-performance trade-offs of consolidation
This is the author accepted manuscript; the final version is available from ACM via http://dx.doi.org/10.1145/2768510.2770933. In this paper we challenge the common evaluation practices used for Virtual Machine (VM) consolidation, such as simulation and small testbeds, which fail to capture the fundamental trade-off between energy consumption and performance. We identify a number of over-simplifying assumptions that are typically made about the energy consumption and performance characteristics of modern networked systems. In response, we describe how more accurate models of data-center systems can be designed and used to create an evaluation framework that allows more reliable exploration of the energy-performance trade-off for VM consolidation strategies. This work was jointly supported by MINECO (grant TEC2014-55713-R), the EPSRC INTERNET Project EP/H040536/1, and the Defense Advanced Research Projects Agency (DARPA) and the Air Force Research Laboratory (AFRL) under contract FA8750-11-C-0249. The views, opinions, and/or findings contained in this article/presentation are those of the author/presenter and should not be interpreted as representing the official views or policies, either expressed or implied, of the Defense Advanced Research Projects Agency or the Department of Defense.
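One of the over-simplifying assumptions commonly made is the server power model. A minimal sketch of the widely used linear utilization-to-power model (an assumption stated here for illustration, not the refined model the paper advocates):

```python
# Standard linear server power model: power scales linearly with CPU
# utilization between idle and peak draw. The 100 W / 200 W figures are
# illustrative, not measurements from the paper.

def server_power(utilization, p_idle=100.0, p_max=200.0):
    """Power draw in watts for a CPU utilization in [0, 1]."""
    return p_idle + (p_max - p_idle) * utilization

print(server_power(0.5))
```

The high idle draw is exactly why consolidation saves energy (empty servers can be switched off), and why models that ignore migration and network costs overstate those savings.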