A Survey on Load Balancing Algorithms for VM Placement in Cloud Computing
The emergence of cloud computing based on virtualization technologies brings
huge opportunities to host virtual resources at low cost without the need to
own any infrastructure. Virtualization technologies enable users to acquire
and configure resources and to be charged on a pay-per-use basis. However,
cloud data centers mostly comprise heterogeneous commodity servers hosting
multiple virtual machines (VMs) with potentially varying specifications and
fluctuating resource usage, which can cause imbalanced resource utilization
across servers and lead to performance degradation and violations of service
level agreements (SLAs). To achieve efficient scheduling, these challenges
should be addressed with load balancing strategies; the underlying placement
problem has been proven NP-hard. From multiple perspectives, this work
identifies the challenges and analyzes existing algorithms for allocating VMs
to physical machines (PMs) in infrastructure clouds, with a particular focus
on load balancing. A detailed classification targeting load balancing
algorithms for VM placement in cloud data centers is proposed, and the
surveyed algorithms are grouped according to it. The goal of this paper is to
provide a comprehensive and comparative understanding of the existing
literature and to aid researchers by providing insights for potential future
enhancements.
Comment: 22 pages, 4 figures, 4 tables, in press
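One classic family of strategies the survey covers can be illustrated with a
minimal greedy "least-loaded first" sketch. Everything here is an assumption
for illustration (the single-dimension CPU model, the function names, the
sort-by-demand tie-breaking); the survey itself classifies many more
sophisticated algorithms.

```python
# Hypothetical sketch of a greedy load-balancing VM placement heuristic:
# assign each VM to the physical machine (PM) with the most spare capacity.
# Single-resource (CPU units) model is an illustrative simplification.

def place_vms(vm_demands, pm_capacities):
    """Assign each VM (by CPU demand) to the least-loaded PM.

    Returns ({vm_index: pm_index}, final per-PM loads); raises if a VM
    cannot fit anywhere.
    """
    loads = [0] * len(pm_capacities)
    placement = {}
    # Placing larger VMs first tends to balance better (greedy heuristic).
    for vm, demand in sorted(enumerate(vm_demands), key=lambda x: -x[1]):
        # Choose the PM with the largest remaining capacity.
        pm = max(range(len(loads)), key=lambda i: pm_capacities[i] - loads[i])
        if loads[pm] + demand > pm_capacities[pm]:
            raise ValueError(f"VM {vm} (demand {demand}) does not fit")
        loads[pm] += demand
        placement[vm] = pm
    return placement, loads

# Four VMs, two PMs of capacity 10: the heuristic evens out the load.
placement, loads = place_vms([4, 3, 2, 5], [10, 10])
```

The sketch is NP-hardness in miniature: greedy choices give a balanced but
not necessarily optimal placement, which is why the surveyed literature spans
heuristics, metaheuristics, and approximation schemes.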
A Multilevel I/O Tracer for Timing and Performance Analysis of Storage Systems in IaaS Cloud
REACTION 2014. 3rd International Workshop on Real-time and Distributed Computing in Emerging Applications. Rome, Italy. December 2nd, 2014.
Data centers increasingly rely on hybrid storage systems consisting of
flash-memory-based storage devices and traditional hard disk drives. Optimal
data placement in such hybrid storage systems is an important issue in cloud
computing and virtualization, especially when users require storage systems
to enforce Quality of Service (QoS) requirements on the I/Os performed, for
example for multimedia applications. To characterize Virtual Machine (VM) I/O
workload properties such as timing predictability or throughput, monitoring
services are necessary on such architectures. This article presents a
multilevel I/O tracer for virtual machines that relies on, and complements,
different state-of-the-art tools. It produces I/O traces at different levels
of the Linux I/O software stack. The tracer provides exhaustive information
that allows administrators to precisely characterize virtual machine I/O
behavior in terms of the percentage of read/write I/Os, the percentage of
random/sequential accesses, I/O request inter-arrival times, etc. This tool
is the first piece of a middleware whose purpose is to meet user QoS
requirements through optimal data placement and migration policies in a
hybrid storage system in the context of an IaaS cloud.
This work has been funded by the French government through the National Research Agency (ANR) investment program referenced ANR-A0-AIRT-07.
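The workload metrics the abstract names (read/write mix, random vs.
sequential accesses, inter-arrival times) can be sketched as a small
post-processing pass over trace records. The record format
`(timestamp, op, start_sector, num_sectors)` is an assumption for
illustration; the actual tool emits traces at several levels of the Linux
I/O stack, each with its own fields.

```python
# Hypothetical post-processing of block-level I/O trace records into the
# summary metrics mentioned in the abstract. The tuple layout
# (timestamp_s, op, start_sector, num_sectors) is an assumed format.

from statistics import mean

def summarize_trace(records):
    """records: list of (timestamp_s, op, start_sector, num_sectors)."""
    reads = sum(1 for _, op, _, _ in records if op == "R")
    # An access is "sequential" if it starts where the previous one ended.
    sequential = 0
    prev_end = None
    for _, _, start, length in records:
        if prev_end is not None and start == prev_end:
            sequential += 1
        prev_end = start + length
    times = [t for t, _, _, _ in records]
    inter_arrival = [b - a for a, b in zip(times, times[1:])]
    n = len(records)
    return {
        "read_pct": 100.0 * reads / n,
        "sequential_pct": 100.0 * sequential / max(n - 1, 1),
        "mean_inter_arrival_s": mean(inter_arrival) if inter_arrival else 0.0,
    }

trace = [(0.00, "R", 100, 8), (0.01, "R", 108, 8),
         (0.03, "W", 500, 16), (0.06, "W", 516, 16)]
stats = summarize_trace(trace)
```

On this toy trace, half the requests are reads, two of the three transitions
are sequential, and requests arrive 20 ms apart on average; the same shape of
summary is what a placement middleware would consume to decide between flash
and disk.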
An Algorithm for Network and Data-aware Placement of Multi-Tier Applications in Cloud Data Centers
Today's cloud applications are dominated by composite applications comprising
multiple computing and data components with strong communication correlations
among them. Although cloud providers are deploying large numbers of computing
and storage devices to address the ever-increasing demand for computing and
storage resources, network resource demands are emerging as a key performance
bottleneck. This paper addresses network-aware placement of the virtual
components (computing and data) of multi-tier applications in data centers
and formally defines the placement as an optimization problem. The
simultaneous placement of virtual machines and data blocks aims at reducing
the network overhead of the data center network infrastructure. A greedy
heuristic is proposed for on-demand placement of application components that
localizes network traffic in the data center interconnect. Such optimization
reduces communication overhead in upper-layer network switches, which in turn
reduces the overall traffic volume across the data center, reduces packet
transmission delay, increases network performance, and minimizes the energy
consumption of network components. Experimental results demonstrate the
performance superiority of the proposed algorithm, which outperforms the
state-of-the-art network-aware application placement algorithm across all
performance metrics: it reduces the average network cost by up to 67% and
network usage at core switches by up to 84%, and increases the average number
of application deployments by up to 18%.
Comment: Submitted for publication consideration to the Journal of Network
and Computer Applications (JNCA). Total pages: 28. Number of figures: 15.
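The core idea of traffic localization can be sketched as a greedy pass that
co-locates heavily communicating (VM, data block) pairs low in the topology.
This is an illustrative assumption, not the paper's actual algorithm: the
two-level host/rack model, the cost values, and all names are hypothetical.

```python
# Hypothetical greedy sketch in the spirit of network-aware placement:
# place each communicating (VM, data block) pair so heavy traffic avoids
# core switches. Two-level topology (host -> rack) is an assumed model.

def network_cost(host_a, host_b, rack_of):
    """0 if same host, 1 if same rack, 2 if traffic crosses core switches."""
    if host_a == host_b:
        return 0
    return 1 if rack_of[host_a] == rack_of[host_b] else 2

def place_pairs(traffic, hosts, capacity, rack_of):
    """Greedily place (vm, block, volume) triples, heaviest traffic first."""
    used = {h: 0 for h in hosts}
    placement = {}
    for vm, block, volume in sorted(traffic, key=lambda t: -t[2]):
        # Pick the feasible (host_vm, host_block) pair with the lowest cost.
        # (hv == hb) counts as 1 extra slot when both land on the same host.
        best = min(
            ((hv, hb) for hv in hosts if used[hv] < capacity[hv]
                      for hb in hosts if used[hb] + (hv == hb) < capacity[hb]),
            key=lambda p: network_cost(p[0], p[1], rack_of),
        )
        hv, hb = best
        used[hv] += 1
        used[hb] += 1
        placement[vm] = hv
        placement[block] = hb
    return placement

# Two racks of two hosts each, two slots per host: each pair is co-located,
# and the second pair spills to a different host once the first fills up.
rack_of = {"h1": "r1", "h2": "r1", "h3": "r2", "h4": "r2"}
capacity = {h: 2 for h in rack_of}
placement = place_pairs([("vm1", "b1", 10), ("vm2", "b2", 5)],
                        list(rack_of), capacity, rack_of)
```

Keeping each heavy pair at cost 0 or 1 is exactly what drives the reported
drop in core-switch usage: traffic that never leaves a host or rack never
reaches the upper-layer switches.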