
    Evaluation of type-1 hypervisors on desktop-class virtualization hosts

    System virtualization has become a fundamental IT tool, whether type-2/hosted virtualization, mostly used by end-users on their personal computers, or type-1/bare metal, well established in IT departments and used throughout modern datacenters as the very foundation of cloud computing. Although bare-metal virtualization is meant to be deployed on server-grade hardware (for performance, stability, and reliability reasons), properly configured desktop-class systems or workstations are often used as virtualization servers due to their attractive performance/cost ratio. This paper presents the results of a study conducted on commodity virtualization servers, aiming to assess the performance of a representative set of the type-1 platforms most in use today: VMware ESXi, Citrix XenServer, Microsoft Hyper-V, oVirt and Proxmox. Hypervisor performance is measured indirectly, through synthetic benchmarks run on Windows 10 LTSB and Ubuntu Server 16.04 guests: PassMark for Windows, UnixBench for Linux, and the cross-platform Flexible I/O Tester and iPerf3 benchmarks. The evaluation results may be used to guide the choice of the best type-1 platform (performance-wise), depending on the predominant guest OS, the performance pattern of that OS (CPU-bound, I/O-bound, or balanced), its storage type (local/remote), and the required network-level performance.
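    As a rough illustration of how such guest-side measurements can be scripted, the following Python sketch drives the two cross-platform benchmarks named above (Flexible I/O Tester and iPerf3) through their JSON output modes. The server address, job sizes, and durations are assumptions for illustration, not the study's actual configuration.

```python
# Hypothetical guest-side measurement loop; assumes fio and iperf3 are
# installed in the guest and an iperf3 server is running on a peer host.
import json
import subprocess

IPERF_SERVER = "192.168.1.10"  # assumed address of the iperf3 server peer

def run_fio(jobname="randrw", bs="4k", size="256m", runtime=60):
    """Run a timed random read/write fio job and return its parsed JSON output."""
    out = subprocess.run(
        ["fio", f"--name={jobname}", "--rw=randrw", f"--bs={bs}",
         f"--size={size}", f"--runtime={runtime}", "--time_based",
         "--output-format=json"],
        capture_output=True, check=True, text=True,
    )
    return json.loads(out.stdout)

def run_iperf3(duration=30):
    """Run an iperf3 client against the peer and return its parsed JSON output."""
    out = subprocess.run(
        ["iperf3", "-c", IPERF_SERVER, "-t", str(duration), "-J"],
        capture_output=True, check=True, text=True,
    )
    return json.loads(out.stdout)

if __name__ == "__main__":
    fio_res = run_fio()
    print("fio read IOPS:", fio_res["jobs"][0]["read"]["iops"])
    net_res = run_iperf3()
    print("iperf3 bits/s:", net_res["end"]["sum_received"]["bits_per_second"])
```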

    Benchmarking the performance of hypervisors on different workloads

    Many organizations rely on a heterogeneous set of applications running in virtualized environments to deliver critical services to their customers. Different workloads utilize system resources at different levels, and depending on the resource utilization pattern, some workloads may be better suited than others for hosting on a virtual platform. This paper discusses a novel framework for benchmarking the performance of Oracle database workloads such as Online Analytical Processing (OLAP), Online Transaction Processing (OLTP), Web load, and Email on two different hypervisors. Further, Design of Experiments (DoE) is used to identify the significance of the input parameters and their overall effect on the two hypervisors, providing customers with a quantitative and qualitative comparative analysis for choosing, with a high degree of accuracy, the right hypervisor for their workload in datacenters.
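    To illustrate the DoE idea, here is a minimal Python sketch of a two-level full-factorial design with main-effect estimation. The factor names and the measure() stub are hypothetical stand-ins for the paper's actual input parameters and benchmark runs.

```python
# Two-level full-factorial DoE sketch: enumerate all low/high factor
# combinations, then rank factors by their main effects.
from itertools import product

FACTORS = ["vcpus", "memory", "disk_type"]  # assumed input parameters

def measure(config):
    """Placeholder: run the workload benchmark under this configuration
    and return a response value (e.g., transactions per second)."""
    raise NotImplementedError

def main_effects(results):
    """Estimate each factor's main effect: mean response at the high (+1)
    level minus mean response at the low (-1) level."""
    effects = {}
    for i, name in enumerate(FACTORS):
        hi = [r for cfg, r in results if cfg[i] == +1]
        lo = [r for cfg, r in results if cfg[i] == -1]
        effects[name] = sum(hi) / len(hi) - sum(lo) / len(lo)
    return effects

# Full factorial: every combination of low/high levels for every factor.
designs = list(product([-1, +1], repeat=len(FACTORS)))
print(f"{len(designs)} benchmark runs per hypervisor")
# results = [(cfg, measure(cfg)) for cfg in designs]  # run on each hypervisor
# print(main_effects(results))
```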

    On the Fly Orchestration of Unikernels: Tuning and Performance Evaluation of Virtual Infrastructure Managers

    Network operators face significant challenges in meeting the demand for more bandwidth, agile infrastructures, and innovative services while keeping costs low. Network Functions Virtualization (NFV) and Cloud Computing are emerging as key trends of 5G network architectures, providing flexibility, fast instantiation times, support for Commercial Off-The-Shelf (COTS) hardware, and significant cost savings. NFV leverages Cloud Computing principles to move data-plane network functions from expensive, closed, and proprietary hardware to so-called Virtual Network Functions (VNFs). In this paper we deal with the management of virtual computing resources (Unikernels) for the execution of VNFs. This functionality is performed by the Virtual Infrastructure Manager (VIM) in the NFV MANagement and Orchestration (MANO) reference architecture. We discuss the instantiation process of virtual resources and propose a generic reference model, starting from the analysis of three open source VIMs, namely OpenStack, Nomad and OpenVIM. We extend the aforementioned VIMs with support for special-purpose Unikernels, aiming to reduce the duration of the instantiation process. We evaluate several performance aspects of the VIMs, considering both stock and tuned versions. The VIM extensions and performance evaluation tools are available under a liberal open source licence.
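    As a rough sketch of the instantiation process being timed, the snippet below uses the openstacksdk client against one of the analysed VIMs (OpenStack). The cloud, image, flavor, and network names are assumptions, and the paper's Unikernel-specific VIM extensions are not reproduced here.

```python
# Minimal sketch, assuming openstacksdk and a configured clouds.yaml entry,
# of timing one VM instantiation on OpenStack; names are illustrative.
import time
import openstack

conn = openstack.connect(cloud="mycloud")  # assumed clouds.yaml entry

def timed_boot(name="uk-test", image="clickos", flavor="tiny", network="mgmt"):
    """Boot one instance and return the elapsed instantiation time in seconds."""
    img = conn.compute.find_image(image)
    flv = conn.compute.find_flavor(flavor)
    net = conn.network.find_network(network)
    t0 = time.monotonic()
    server = conn.compute.create_server(
        name=name, image_id=img.id, flavor_id=flv.id,
        networks=[{"uuid": net.id}],
    )
    conn.compute.wait_for_server(server)  # blocks until the server is ACTIVE
    return time.monotonic() - t0

print(f"instantiation took {timed_boot():.2f} s")
```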

    Benchmarking of bare metal virtualization platforms on commodity hardware

    In recent years, system virtualization has become a fundamental IT tool, whether type-2/hosted virtualization, mostly used by end-users on their personal computers, or type-1/bare metal, well established in IT departments and used throughout modern datacenters as the very foundation of cloud computing. Although bare-metal virtualization is meant to be deployed on server-grade hardware (for performance, stability, and reliability reasons), properly configured desktop-class systems are often used as virtualization “servers” due to their attractive performance/cost ratio. This paper presents the results of a study conducted on such systems, about the performance of Windows 10 and Ubuntu Server 16.04 guests when deployed on what we believe are the type-1 platforms most in use today: VMware ESXi, Citrix XenServer, Microsoft Hyper-V, and KVM-based (represented by oVirt and Proxmox). Performance is measured using three synthetic benchmarks: PassMark for Windows, UnixBench for Ubuntu Server, and the cross-platform Flexible I/O Tester. The benchmark results may be used to choose the most adequate type-1 platform (performance-wise), depending on the guest OS, its performance profile (CPU-bound, I/O-bound, or balanced), and the storage type (local/remote) used.
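    For readers unfamiliar with how such per-test scores are condensed into a single figure of merit, the sketch below computes a UnixBench-style index, i.e. a geometric mean of scores normalized against per-test baselines. All numbers shown are illustrative, not results from the study.

```python
# Composite benchmark index in the spirit of the UnixBench index:
# geometric mean of per-test scores normalized against a baseline system.
import math

def composite_index(scores, baseline):
    """Geometric mean of scores normalized against per-test baseline values."""
    ratios = [scores[t] / baseline[t] for t in scores]
    return math.exp(sum(math.log(r) for r in ratios) / len(ratios))

# Illustrative baseline and measured values (made-up sample numbers).
baseline = {"dhrystone": 116700.0, "whetstone": 55.0, "file_copy": 3960.0}
guest = {"dhrystone": 30e6, "whetstone": 4000.0, "file_copy": 500000.0}
print(f"index: {composite_index(guest, baseline):.1f}")
```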

    Maximizing hypervisor scalability using minimal virtual machines

    The smallest instance offered by Amazon EC2 comes with 615 MB of memory and a 7.9 GB disk image. While small by today's standards, embedded web servers with memory footprints well under 100 kB indicate that there is much to be saved. In this work we investigate how large a VM population the OpenStack hypervisor can be made to sustain, by tuning it for scalability and minimizing virtual machine images. Request-driven QEMU images of 512 bytes are written in assembly, and more than 110 000 such instances are successfully booted on a 48-core host before memory is exhausted. Other factors are shown to dramatically improve scalability, to the point where 10 000 virtual machines consume no more than 2.06% of the hypervisor CPU.
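    A hypothetical sketch of the population-scaling experiment is given below: it boots many minimal QEMU instances from a tiny raw image. The image path, instance count, and QEMU flags are illustrative assumptions, not the paper's actual setup.

```python
# Launch many minimal QEMU guests from a tiny raw boot-sector image;
# assumes qemu-system-x86_64 is installed and minimal.img was built elsewhere.
import subprocess

IMAGE = "minimal.img"   # assumed 512-byte boot-sector image
COUNT = 100             # scale this up toward the populations in the paper

procs = []
for i in range(COUNT):
    procs.append(subprocess.Popen(
        ["qemu-system-x86_64",
         "-m", "16",                        # small guest memory allocation
         "-display", "none",                # headless operation
         "-drive", f"format=raw,file={IMAGE}"],
        stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL,
    ))

print(f"launched {len(procs)} instances")
for p in procs:
    p.terminate()
```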