5 research outputs found
Exploiting user provided information in dynamic consolidation of virtual machines to minimize energy consumption of cloud data centers
Dynamic consolidation of Virtual Machines (VMs) can effectively enhance the resource utilization and energy efficiency of Cloud Data Centers (CDCs). Existing research on Cloud resource reservation and scheduling indicates that Cloud Service Users (CSUs) can play a crucial role in improving resource utilization by providing valuable information to Cloud service providers. However, using CSU-provided information to minimize the energy consumption of a CDC is a novel research direction. The challenges herein are twofold: first, identifying the right information to request from a CSU that can complement the energy efficiency of the CDC; second, applying such information smartly so that it significantly reduces the CDC's energy consumption. To address these research challenges, we propose a novel heuristic Dynamic VM Consolidation algorithm, RTDVMC, which minimizes the energy consumption of a CDC by exploiting CSU-provided information. Our research exemplifies the fact that if VMs are dynamically consolidated based on the time at which each VM can be removed from the CDC (information readily obtainable from the respective CSU), then more physical machines can be put into sleep state, yielding lower energy consumption. We have simulated the performance of RTDVMC with real Cloud workload traces originating from more than 800 PlanetLab VMs. The empirical figures affirm the superiority of RTDVMC over prominent existing Static and Adaptive Threshold based DVMC algorithms.
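The core idea, co-locating VMs whose user-provided removal times are close so that whole physical machines drain and can sleep sooner, can be sketched as follows. This is an illustrative first-fit-style placement over release-time-sorted VMs, not the actual RTDVMC algorithm; the class fields and capacity model are assumptions for the sketch.

```python
from dataclasses import dataclass, field

@dataclass
class VM:
    name: str
    cpu: float          # normalized CPU demand (fraction of one PM)
    release_time: int   # user-provided time at which the VM leaves the CDC

@dataclass
class PM:
    capacity: float
    vms: list = field(default_factory=list)

    def used(self):
        return sum(v.cpu for v in self.vms)

    def fits(self, vm):
        return self.used() + vm.cpu <= self.capacity

def place_by_release_time(vms, pm_capacity=1.0):
    """Co-locate VMs with similar release times: sort by release time,
    then fill PMs in order, so each PM empties (and can sleep) as soon
    as its similarly-timed VMs depart."""
    pms = []
    for vm in sorted(vms, key=lambda v: v.release_time):
        target = next((p for p in pms if p.fits(vm)), None)
        if target is None:
            target = PM(pm_capacity)
            pms.append(target)
        target.vms.append(vm)
    return pms
```

With this grouping, VMs departing at the same time end up on the same PMs, so a PM is not kept awake by one long-lived straggler.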
CoLocateMe: Aggregation-based, energy, performance and cost aware VM placement and consolidation in heterogeneous IaaS clouds
In many production clouds, with the notable exception of Google, aggregation-based VM placement policies are used to provision datacenter resources in an energy- and performance-efficient way. However, if VMs with similar workloads are placed onto the same machines, they might suffer from contention, particularly if they compete for similar resources. High levels of resource contention may degrade VM performance and could therefore increase users' costs and the infrastructure's energy consumption. Furthermore, segregation-based methods result in stranded resources and, therefore, lower economic efficiency. The recent industrial interest in segregating workloads opens new directions for research. In this article, we demonstrate how aggregation- and segregation-based VM placement policies lead to variability in energy efficiency, workload performance, and users' costs. We then propose various approaches to aggregation-based placement and migration. Through a number of experiments using Microsoft Azure and Google workload traces covering more than twelve thousand hosts and a million VMs, we investigate the impact of placement decisions on energy, performance, and costs. Our extensive simulations and empirical evaluation demonstrate that, for certain workloads, aggregation-based allocation and consolidation is ∼9.61% more energy efficient and ∼20.0% more performance efficient than segregation-based policies. Moreover, various aggregation metrics, such as runtimes and workload types, yield variations in energy consumption and performance, and therefore in users' costs.
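A minimal sketch of an aggregation-aware placement policy of this flavor is shown below, assuming normalized per-resource demands. The scoring rule (prefer fuller hosts, but penalize co-locating VMs that share the new VM's dominant resource) and the penalty weight are hypothetical illustrations, not the article's actual method.

```python
def dominant_resource(vm):
    # vm is a dict of normalized demands, e.g. {"cpu": 0.6, "mem": 0.2}
    return max(vm, key=vm.get)

def pick_host(vm, hosts, penalty=0.1):
    """Aggregation-aware placement sketch: among feasible hosts, prefer
    the fullest one (aggregation), but discount hosts whose resident VMs
    compete for the new VM's dominant resource (contention avoidance).
    Each host is a list of resident-VM demand dicts."""
    dom = dominant_resource(vm)
    best, best_score = None, None
    for host in hosts:
        totals = {r: sum(v.get(r, 0.0) for v in host) + vm.get(r, 0.0)
                  for r in vm}
        if any(t > 1.0 for t in totals.values()):
            continue  # infeasible: placement would overload some resource
        load = sum(sum(v.values()) for v in host)       # fuller host -> higher score
        clash = sum(dominant_resource(v) == dom for v in host)
        score = load - penalty * clash                  # penalize same-resource neighbors
        if best_score is None or score > best_score:
            best, best_score = host, score
    return best
```

Tuning `penalty` moves the policy along the aggregation-segregation spectrum: zero yields pure aggregation, while a large value effectively segregates similar workloads.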
Internet of Things From Hype to Reality
The Internet of Things (IoT) has gained significant mindshare and attention in academia and industry, especially over the past few years. The reason behind this interest is the potential capabilities that IoT promises to offer. On the personal level, it paints a picture of a future world where all the things in our ambient environment are connected to the Internet and seamlessly communicate with each other to operate intelligently. The ultimate goal is to enable objects around us to efficiently sense our surroundings, inexpensively communicate, and ultimately create a better environment for us: one where everyday objects act based on what we need and like without explicit instructions.
Source-Routed Multicast Schemes for Large-Scale Cloud Data Center Networks
Data centers (DCs) have been witnessing unprecedented growth in size, number, and complexity in recent years. They consist of tens of thousands of servers interconnected by fast network switches, hosting and enabling numerous applications with various traffic characteristics and requirements. As a result, DC networks face several unique challenges pertaining to the scaling and allocation of network resources when forwarding and moving data across the different DC servers. Traffic routing in general, and multicast routing in particular, are important functions in DC networks, especially since modern cloud DCs tend to exhibit one-to-many communication traffic patterns. Unfortunately, recent multicast routing approaches that adopt IP multicast suffer from scalability and load balancing issues, and do not scale well with the number of supported multicast groups when used for cloud DC networks. In this thesis, we propose a set of new, complementary schemes that overcome these challenges. More specifically, we first study existing DC network topologies and propose the Circulant Fat-Tree topology, an improvement over the traditional Fat-Tree topology with properties better suited to today's DC networks. Then, we review and classify recent studies that investigate and measure the traffic behavior of operational DC networks, focusing on the way they collect the traffic as well as on the key findings made in these studies.
Secondly, we propose Bert, a source-initiated multicast routing scheme for DCs. Bert scales well with both the number and the size of multicast groups, and does so through clustering: it divides the members of a multicast group into a set of clusters, with each cluster employing its own forwarding rules. In essence, Bert yields much lower multicast traffic overhead than state-of-the-art schemes.
Thirdly, we propose Ernie, a scalable and load-balanced multicast source routing scheme. Ernie introduces a novel method for scaling out the number of supported multicast groups. In particular, it appropriately constructs and organizes multicast header information inside packets in a manner that allows core/root switches to forward down only the needed information. Ernie also introduces an effective multicast traffic load balancing technique across downstream links. Specifically, it prudently assigns multicast groups to core switches to ensure the evenness of load distribution across the downstream links.
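The clustering idea behind Bert, splitting a multicast group's members so each cluster can carry its own compact forwarding rule, can be illustrated roughly as follows. The pod-based grouping and the `(pod, switch, server)` addressing are assumptions for a Fat-Tree-like topology, not Bert's actual mechanism.

```python
def cluster_members(members, cluster_size):
    """Partition a multicast group's members into clusters, each of
    which would carry its own forwarding rule in the source route.
    Members are (pod, switch, server) tuples; grouping by pod keeps
    each cluster's rule local to one part of the topology."""
    by_pod = {}
    for m in sorted(members):
        by_pod.setdefault(m[0], []).append(m)
    clusters = []
    for pod_members in by_pod.values():
        # split large pods into bounded-size clusters so each
        # cluster's forwarding state stays small
        for i in range(0, len(pod_members), cluster_size):
            clusters.append(pod_members[i:i + cluster_size])
    return clusters
```

Bounding the cluster size bounds the per-cluster header state, which is what lets such a scheme grow with both the number and the size of multicast groups.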
Release-time aware VM placement
Consolidating virtual machines (VMs) onto as few physical machines (PMs) as possible, so that as many PMs as possible can be put to sleep, can yield significant energy savings in cloud data centers. Although traditional online bin packing heuristics, such as Best Fit (BF), have been used to reduce the number of active PMs, they share one common limitation: they do not account for VM release times, which can lead to inefficient use of energy resources. In this paper, we propose several extensions to the original BF heuristic that account for VMs' release times when making VM placement decisions. Our comparative studies conducted on Google traces show that, when compared to existing heuristics, the proposed heuristic reduces energy consumption and enhances the utilization of cloud servers. This work was supported by Cisco (CG-573228) and the National Science Foundation (CAREER award CNS-0846044).
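A release-time-aware variant of Best Fit of the kind this abstract describes might look like the following sketch. The tie-breaking rule (align the new VM's release time with the PM's latest resident release time) is one illustrative choice, not necessarily one of the paper's exact extensions; the PM representation is an assumption.

```python
def best_fit_release_aware(vm_size, vm_release, pms):
    """Among feasible PMs, pick the tightest fit (classic Best Fit),
    breaking ties in favor of the PM whose latest resident release
    time is closest to the new VM's, so the PM empties and can be
    put to sleep sooner. Each PM is a dict with keys
    "cap" (capacity), "used" (current load), "latest" (latest
    release time among resident VMs)."""
    feasible = [p for p in pms if p["cap"] - p["used"] >= vm_size]
    if not feasible:
        return None  # caller would open (wake up) a new PM
    return min(
        feasible,
        key=lambda p: (p["cap"] - p["used"] - vm_size,   # Best Fit: smallest residual
                       abs(p["latest"] - vm_release)))   # tie-break: align release times
```

Compared to plain BF, the only change is the secondary sort key, which is why such extensions keep BF's online, constant-decision character while avoiding PMs kept awake by a single late-departing VM.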