Characterizing the power cost of virtualization environments
Virtualization is a key building block of next-generation mobile networks. It can be implemented through two main approaches: traditional virtual machines (VMs) and lighter-weight containers. Our objective in this paper is to compare these approaches and study the power consumption associated with them. To this end, we perform a large set of real-world measurements, using both synthetic workloads and real-world applications, and use them to model the relationship between the resource usage of the hosted application and the power consumption of the VMs and containers hosting it. We find that containers incur substantially lower power consumption than VMs, and that this consumption increases more slowly with the application load. This work is supported by the European Commission through the H2020 5G-TRANSFORMER project (Project ID 761536).
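As a minimal illustration of the kind of modeling described (with made-up numbers and a simple linear form, not the paper's actual model or measurements), a power model P = P_idle + k · load can be fitted per environment from (CPU load, watts) samples and the fitted parameters compared:

```python
# Illustrative sketch, NOT the paper's model or data: fit P = P_idle + k*load
# by ordinary least squares for a VM and a container, then compare.

def fit_linear(samples):
    """OLS fit of y = a + b*x over (x, y) pairs; returns (a, b)."""
    n = len(samples)
    mean_x = sum(x for x, _ in samples) / n
    mean_y = sum(y for _, y in samples) / n
    b = (sum((x - mean_x) * (y - mean_y) for x, y in samples)
         / sum((x - mean_x) ** 2 for x, _ in samples))
    a = mean_y - b * mean_x
    return a, b  # idle power (W), watts per unit of load

# Hypothetical measurements: (CPU load fraction, host power draw in W)
vm_samples        = [(0.0, 12.0), (0.25, 16.5), (0.5, 21.0), (1.0, 30.0)]
container_samples = [(0.0,  8.0), (0.25, 10.0), (0.5, 12.0), (1.0, 16.0)]

vm_idle, vm_slope = fit_linear(vm_samples)          # -> 12.0, 18.0
ct_idle, ct_slope = fit_linear(container_samples)   # ->  8.0,  8.0

print(f"VM:        idle={vm_idle:.1f} W, slope={vm_slope:.1f} W/load")
print(f"Container: idle={ct_idle:.1f} W, slope={ct_slope:.1f} W/load")
```

With these hypothetical samples, both the idle power and the slope are lower for the container, matching the qualitative finding that container power consumption grows more slowly with load.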
Strategies for Successfully Implementing a Virtualization Project: A Case with Vmware
Virtualization has become one of the hottest information technologies of the past few years. Yet, despite the proclaimed cost savings and efficiency improvements, implementing virtualization involves a high degree of uncertainty and, consequently, a considerable possibility of failure. Experience from managing VMware-based project activities at several companies is reported, with examples illustrating how to increase the chances of successfully implementing a virtualization project.
The Cost of Virtualization for Scientific Computing
The goal of this thesis is to research the downsides of using infrastructure virtualization for deploying applications in the cloud, and to accurately measure its effect on parallel scientific computing algorithms.
Virtualization provides numerous benefits for clouds, such as ease of configuration, decoupling of the machine and the software stack, rapid deployment and configuration changes, and elasticity. However, the additional virtualization layer introduces several disadvantages, especially for resource-demanding scientific algorithms that utilize parallel computing techniques.
For this we deploy benchmarking suites designed to test distributed computing on different platforms, in particular the MPI-based NAS Parallel Benchmarks developed by NASA. For virtualization we use the open-source Xen and KVM hypervisors, and as the operating system we use the likewise open-source Ubuntu Linux.
This thesis concludes that merely adding a virtualization layer has little effect on computing power; however, depending on the number of machines, the impact on disk operations can be severe, and network performance is also noticeably reduced for virtual machines. All in all, KVM performs better than Xen in almost all of the benchmarks.
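As a minimal sketch of how such an effect can be quantified (with hypothetical timings, not the thesis's measurements), virtualization overhead for one benchmark kernel can be expressed as the relative slowdown against a bare-metal run:

```python
# Illustrative sketch with made-up numbers, NOT results from the thesis:
# express virtualization overhead as the percentage slowdown of a benchmark
# run under a hypervisor relative to the same run on bare metal.

def overhead_pct(native_secs, virtualized_secs):
    """Relative slowdown in percent; 0 means no overhead."""
    return (virtualized_secs - native_secs) / native_secs * 100.0

# Hypothetical wall-clock times for one NPB kernel, in seconds
runs = {"bare metal": 100.0, "KVM": 104.0, "Xen": 112.0}

base = runs["bare metal"]
for platform, secs in runs.items():
    print(f"{platform}: {secs:.1f} s ({overhead_pct(base, secs):+.1f}%)")
```

The same calculation applied per benchmark kernel and per resource (CPU, disk, network) is what allows conclusions such as "KVM performs better than Xen in almost all of the benchmarks".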
Pre-Virtualization: Slashing the cost of virtualization
Despite its current popularity, para-virtualization has an enormous cost. Its diversion from the platform architecture abandons many of the benefits that come with pure virtualization (the faithful emulation of the platform API): stable and well-defined platform interfaces, single binaries for kernel and device drivers (and thus lower testing, maintenance, and support cost), and vendor independence. These limitations are accepted as inevitable in exchange for significantly better performance and the ability to provide virtualization-like behavior on non-virtualizable hardware, such as x86.
We argue that the above limitations are not inevitable, and present pre-virtualization, which preserves the benefits of full virtualization without sacrificing the performance benefits of para-virtualization. In a semi-automatic step, an OS is prepared for virtualization. The required modifications are orders of magnitude smaller than for para-virtualization. A virtualization module, collocated with the guest OS, transforms the standard platform API into the respective hypervisor API. The guest OS is still programmed against a common architecture, and the binary remains fully functional on bare hardware. Supporting a new hypervisor or an updated interface only requires the implementation of a single interface mapping. We validated our approach for a variety of hypervisors, on two very different hardware platforms (x86 and Itanium), with multiple generations of Linux as guests. We found that pre-virtualization achieves essentially the same performance as para-virtualization, at a fraction of the engineering cost.
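The "single interface mapping" idea can be sketched abstractly (an illustrative mock, not the authors' implementation): the guest is programmed against one common set of platform-style operations, and each target — bare hardware or a particular hypervisor — supplies its own mapping for those operations, so the same guest code runs unchanged everywhere:

```python
# Illustrative mock of pre-virtualization's "single interface mapping",
# NOT the paper's actual mechanism. The guest calls common platform-style
# operations; each backend (bare metal, or a given hypervisor) is one
# mapping of those operations. Supporting a new hypervisor means writing
# one new mapping, while the guest code stays identical.

class NativeOps:
    """Mapping 1: execute the privileged operations directly (stubbed)."""
    def disable_interrupts(self):
        return "native: cli"                      # direct instruction
    def write_cr3(self, pgdir):
        return f"native: mov cr3, {pgdir:#x}"     # direct register write

class HypervisorOps:
    """Mapping 2: forward the same operations to a (mock) hypervisor API."""
    def disable_interrupts(self):
        return "hv: mask-irqs hypercall"
    def write_cr3(self, pgdir):
        return f"hv: set-pgdir hypercall {pgdir:#x}"

def guest_context_switch(ops):
    """Guest code: identical regardless of which mapping is installed."""
    return [ops.disable_interrupts(), ops.write_cr3(0x1000)]

print(guest_context_switch(NativeOps()))      # same guest, bare hardware
print(guest_context_switch(HypervisorOps()))  # same guest, under a hypervisor
```

The point of the sketch is that only the mapping object differs between targets; in pre-virtualization the analogous translation is performed by the collocated virtualization module, so the guest binary itself stays common to bare hardware and every supported hypervisor.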