
    Hardware Virtualization Applied to Rootkit Defense

    This research effort examines the idea of applying virtualization hardware to enhance operating system security against rootkits. Rootkits are sets of tools used to hide code and/or functionality from the user and operating system. Rootkits accomplish this by using access to one part of the operating system to change another part that resides at the same privilege level. Hardware-assisted virtualization (HAV) provides an opportunity to defeat this tactic through the introduction of a new operating mode. Created to aid operating system virtualization, HAV provides hardware support for managing and saving multiple states of the processor. This hardware support overcomes a problem in pure software virtualization: the need to modify guest software to run at a less privileged level. Using HAV, guest software can operate at what was the most privileged level before HAV was introduced. This thesis presents a plan to protect the data structures targeted by rootkits through unconventional use of HAV technology to secure system resources such as memory. This method of protection provides true real-time security through OS attack prevention rather than reaction.
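    The abstract stops short of implementation detail; as a rough, hypothetical sketch of the underlying idea (not the thesis's actual mechanism), the toy C model below shows how a hypervisor could use HAV-style second-level page permissions to mark guest pages holding rootkit-targeted structures read-only, so tampering is blocked at write time rather than detected afterwards. The names spt_entry, protect_page, and guest_write are invented for illustration.

    /* Conceptual model only: a toy "second-level page table" in which a
     * hypervisor marks guest pages holding sensitive kernel structures
     * read-only, so any write attempt traps instead of silently succeeding.
     * Names (protect_page, guest_write) are hypothetical, not from the thesis. */
    #include <stdbool.h>
    #include <stdio.h>

    #define NUM_GUEST_PAGES 16

    typedef struct {
        bool writable;          /* second-level (host-enforced) write permission */
    } spt_entry;

    static spt_entry spt[NUM_GUEST_PAGES];
    static unsigned char guest_mem[NUM_GUEST_PAGES][4096];

    /* Hypervisor-side: revoke write permission on a guest physical page. */
    static void protect_page(int gpn) { spt[gpn].writable = false; }

    /* Guest-side write, mediated by the second-level permission check.
     * Returns false when the "hardware" would raise a permission violation. */
    static bool guest_write(int gpn, int offset, unsigned char value) {
        if (!spt[gpn].writable) {
            printf("violation: write to protected page %d blocked\n", gpn);
            return false;               /* hypervisor denies the modification */
        }
        guest_mem[gpn][offset] = value;
        return true;
    }

    int main(void) {
        for (int i = 0; i < NUM_GUEST_PAGES; i++) spt[i].writable = true;

        int syscall_table_page = 3;     /* pretend this page holds the syscall table */
        protect_page(syscall_table_page);

        guest_write(5, 0, 0xAB);                    /* ordinary page: allowed  */
        guest_write(syscall_table_page, 0, 0xCD);   /* protected page: blocked */
        return 0;
    }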

    High-performance and Scalable Software-based NVMe Virtualization Mechanism with I/O Queues Passthrough

    NVMe (Non-Volatile Memory Express) is an industry standard for solid-state drives (SSDs) that has been widely adopted in data centers. NVMe virtualization is crucial in cloud computing, as it allows virtualized NVMe devices to be used by virtual machines (VMs), thereby improving the utilization of storage resources. However, traditional software-based solutions offer flexibility but often at the cost of performance degradation or high CPU overhead. On the other hand, hardware-assisted solutions offer high performance and low CPU usage, but their adoption is often limited by the need for special hardware support or new hardware development. In this paper, we propose LightIOV, a novel software-based NVMe virtualization mechanism that achieves high performance and scalability without consuming valuable CPU resources and without requiring special hardware support. LightIOV can support thousands of VMs on each server. The key idea behind LightIOV is NVMe hardware I/O queue passthrough, which enables VMs to directly access the I/O queues of NVMe devices, eliminating virtualization overhead and providing near-native performance. Results from our experiments show that LightIOV provides performance comparable to VFIO, achieving 97.6%-100.2% of VFIO's IOPS. Furthermore, in high-density VM environments, LightIOV achieves 31.4% lower latency than SPDK-Vhost when running 200 VMs, and a 27.1% improvement in OPS for real-world applications.
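    The queue-passthrough idea can be illustrated with a small, self-contained model (hypothetical names such as hw_queue, assign_queue, and vm_submit; this is not LightIOV's actual interface): the control path hands each VM exclusive ownership of a hardware submission/completion queue pair, after which the VM's data-path submissions never pass through host software.

    /* Toy model of NVMe I/O queue passthrough: the host assigns each VM a
     * dedicated hardware queue pair, after which the VM "submits" commands to
     * its own queues without host involvement. All names are hypothetical. */
    #include <stdio.h>

    #define HW_QUEUES 8      /* hardware I/O queue pairs exposed by the device */
    #define MAX_VMS   4

    typedef struct {
        int owner_vm;        /* -1 when unassigned                       */
        int depth;           /* commands currently queued (toy counter)  */
    } hw_queue;

    static hw_queue queues[HW_QUEUES];

    /* Host/control path: give a VM exclusive ownership of one queue pair. */
    static int assign_queue(int vm_id) {
        for (int q = 0; q < HW_QUEUES; q++) {
            if (queues[q].owner_vm == -1) {
                queues[q].owner_vm = vm_id;
                return q;
            }
        }
        return -1;           /* no free hardware queue */
    }

    /* Data path: the VM enqueues directly to the queue it owns; the host is
     * not on this path, which is what removes the virtualization overhead. */
    static void vm_submit(int vm_id, int q) {
        if (q >= 0 && queues[q].owner_vm == vm_id)
            queues[q].depth++;
    }

    int main(void) {
        for (int q = 0; q < HW_QUEUES; q++) queues[q].owner_vm = -1;

        int vm_queue[MAX_VMS];
        for (int vm = 0; vm < MAX_VMS; vm++)
            vm_queue[vm] = assign_queue(vm);

        vm_submit(0, vm_queue[0]);
        vm_submit(0, vm_queue[0]);
        vm_submit(1, vm_queue[1]);

        for (int q = 0; q < HW_QUEUES; q++)
            if (queues[q].owner_vm != -1)
                printf("queue %d -> VM %d, %d command(s) queued\n",
                       q, queues[q].owner_vm, queues[q].depth);
        return 0;
    }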

    CVA6 RISC-V Virtualization: Architecture, Microarchitecture, and Design Space Exploration

    Virtualization is a key technology used in a wide range of applications, from cloud computing to embedded systems. Over the last few years, mainstream computer architectures were extended with hardware virtualization support, giving rise to a set of virtualization technologies (e.g., Intel VT, Arm VE) that are now proliferating in modern processors and SoCs. In this article, we describe our work on hardware virtualization support in the RISC-V CVA6 core. Our contribution is multifold and encompasses architecture, microarchitecture, and design space exploration. In particular, we highlight the design of a set of microarchitectural enhancements (i.e., a G-Stage Translation Lookaside Buffer (GTLB) and an L2 TLB) to alleviate the virtualization performance overhead. We also perform a Design Space Exploration (DSE) and accompanying post-layout simulations (based on 22nm FDX technology) to assess Performance, Power, and Area (PPA). Further, we map design variants onto an FPGA platform (Genesys 2) to assess the functional performance-area trade-off. Based on the DSE, we select an optimal design point for the CVA6 with hardware virtualization support. For this optimal hardware configuration, we collected functional performance results by running the MiBench benchmark on Linux atop the Bao hypervisor in a single-core configuration. We observed a performance speedup of up to 16% (approx. 12.5% on average) compared with a virtualization-aware but non-optimized design, at the minimal cost of 0.78% in area and 0.33% in power. Finally, all work described in this article is publicly available and open-sourced for the community to further evaluate additional design configurations and software stacks.
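    As a simplified illustration of what the GTLB targets (assumptions: single-level lookup tables stand in for real page-table walks, and the structure names are invented rather than taken from CVA6), the toy C model below performs two-stage translation, guest virtual to guest physical (VS-stage) and guest physical to host physical (G-stage), and caches G-stage results so repeated accesses to the same working set skip the second-stage walk.

    /* Toy model of RISC-V two-stage address translation with a small
     * direct-mapped "GTLB" caching G-stage results. Single-level lookup
     * tables replace real page-table walks; names are illustrative only. */
    #include <stdio.h>

    #define PAGES     64
    #define GTLB_SETS 8

    static int vs_stage[PAGES];   /* guest virtual page  -> guest physical page */
    static int g_stage[PAGES];    /* guest physical page -> host physical page  */

    typedef struct { int valid, gpn, hpn; } gtlb_entry;
    static gtlb_entry gtlb[GTLB_SETS];
    static int g_walks;           /* counts simulated G-stage page-table walks  */

    static int g_translate(int gpn) {
        gtlb_entry *e = &gtlb[gpn % GTLB_SETS];
        if (e->valid && e->gpn == gpn)
            return e->hpn;                 /* GTLB hit: no second-stage walk */
        g_walks++;                         /* miss: perform the G-stage walk */
        e->valid = 1; e->gpn = gpn; e->hpn = g_stage[gpn];
        return e->hpn;
    }

    static int translate(int gvpn) {
        int gpn = vs_stage[gvpn];          /* first stage (guest-managed)    */
        return g_translate(gpn);           /* second stage (hypervisor)      */
    }

    int main(void) {
        for (int p = 0; p < PAGES; p++) {
            vs_stage[p] = (p * 7) % PAGES; /* arbitrary toy mappings */
            g_stage[p]  = (p * 13) % PAGES;
        }
        for (int rep = 0; rep < 3; rep++)  /* revisit the same working set   */
            for (int gvpn = 0; gvpn < 8; gvpn++)
                translate(gvpn);
        printf("G-stage walks performed: %d (out of 24 translations)\n", g_walks);
        return 0;
    }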

    Resource Allocation Policy for Virtualized Network Interfaces

    Over the last decade, virtualization has gained widespread importance. Virtual Machines (VMs) can now share network access in hardware, in software, or in a hybrid fashion. Software-based Input/Output (IO) virtualization technologies rely on emulation, which requires a Virtualization Manager that introduces significant CPU overhead. In addition, each IO operation incurs further overhead, and the advanced capabilities of the physical hardware are not properly utilized. Direct-assignment IO virtualization technologies, in turn, suffer from scalability limitations. Support for Quality of Service (QoS) may be offered within the software layers, at the Virtualization Manager or Guest Operating System level, which interact with the shared IO device. Starting from a preliminary investigation of the functionality of the RiceNIC (an open platform for research and education in concurrent network interface design), a study of the various network interface technologies supporting IO device virtualization was carried out to precisely understand IO-virtualized network interfaces. The project describes a resource allocation policy for the on-device memory of a shared IO device, taking as an instance a complex IO device, a Network Interface Controller (NIC) with a reconfigurable virtualized network interface architecture in which multiple reconfigurable virtualized network interfaces work independently over a reconfigurable, partitioned memory. The policy enhances the scalability of the IO device.
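    A minimal sketch of the allocation idea, under the assumption of a simple equal-share policy (the names and sizes below are hypothetical and not the project's actual design): the NIC's fixed on-device memory is divided among the active virtual interfaces and repartitioned whenever an interface is added, which is what keeps the design scalable as the number of interfaces grows.

    /* Toy equal-share allocation of shared NIC on-device memory among
     * virtual interfaces, repartitioned as interfaces are added.
     * Names and sizes are hypothetical. */
    #include <stdio.h>

    #define NIC_MEMORY_KB   2048   /* total on-device memory (toy figure) */
    #define MAX_INTERFACES  8

    typedef struct { int base_kb, size_kb; } partition;

    static partition parts[MAX_INTERFACES];
    static int active;             /* number of virtual interfaces in use */

    /* Recompute an equal-share partitioning over the active interfaces. */
    static void repartition(void) {
        int share = active ? NIC_MEMORY_KB / active : 0;
        for (int i = 0; i < active; i++) {
            parts[i].base_kb = i * share;
            parts[i].size_kb = share;
        }
    }

    static int add_interface(void) {
        if (active == MAX_INTERFACES) return -1;
        active++;
        repartition();             /* shares shrink as interfaces are added */
        return active - 1;
    }

    int main(void) {
        add_interface();
        add_interface();
        add_interface();
        for (int i = 0; i < active; i++)
            printf("vNIC %d: %d KB at offset %d KB\n",
                   i, parts[i].size_kb, parts[i].base_kb);
        return 0;
    }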

    A Performance Comparison of Hypervisors for Cloud Computing

    The virtualization of IT infrastructure enables the consolidation and pooling of IT resources so that they can be shared across diverse applications to offset the limitation of shrinking resources and growing business needs. Virtualization provides a logical abstraction of physical computing resources and creates computing environments that are not restricted by physical configuration or implementation. Virtualization is very important for cloud computing because the delivery of services is simplified by providing a platform for optimizing complex IT resources in a scalable manner, which makes cloud computing more cost effective. The hypervisor plays an important role in the virtualization of hardware: it is a piece of software that provides a virtualized hardware environment to support running multiple operating systems concurrently on one physical server. Cloud computing has to support multiple operating environments, and the hypervisor is the ideal delivery mechanism. The intent of this thesis is to quantitatively and qualitatively compare the performance of the VMware ESXi 4.1, Citrix Systems Xen Server 5.6, and Ubuntu 11.04 Server KVM hypervisors using the standard benchmark SPECvirt_sc2010v1.01, formulated by the Standard Performance Evaluation Corporation (SPEC), under various workloads simulating real-life situations.