
    Understanding and Leveraging Virtualization Technology in Commodity Computing Systems

    Commodity computing platforms are imperfect, requiring various enhancements for performance and security purposes. In the past decade, virtualization technology has emerged as a promising trend for commodity computing platforms, ushering in many opportunities to optimize the allocation of hardware resources. However, many abstractions offered by virtualization not only make enhancements more challenging, but also complicate the proper understanding of virtualized systems. The current understanding and analysis of these abstractions are far from satisfactory. This dissertation tackles the problem from a holistic view by systematically studying system behaviors, focusing on the performance implications and security vulnerabilities of a virtualized system.

    We start with the first abstraction, intensive memory multiplexing for the I/O of Virtual Machines (VMs), and present a new technique, called Batmem, that effectively reduces the memory multiplexing overhead of VMs and emulated devices by optimizing the operations of conventional emulated Memory Mapped I/O in hypervisors. We then analyze another abstraction, the nested file system, and attempt to both quantify and understand the crucial aspects of its performance in a variety of settings. Our investigation demonstrates that the choice of file system at both the guest and hypervisor levels has a significant impact on I/O performance.

    Finally, leveraging utilities for managing VM disk images, we present a new patch management framework, called Shadow Patching, to achieve effective software updates. The framework allows system administrators to keep taking the offline patching approach while retaining most of the benefits of live patching, using commonly available virtualization techniques. To demonstrate the effectiveness of the approach, we conduct a series of experiments applying a wide variety of software patches. Our results show that the framework incurs only small overhead in running systems but can significantly reduce the maintenance window.
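
    The Shadow Patching idea, patching a copy of the VM's disk image offline and swapping the patched image in during a short maintenance window, can be illustrated with a minimal sketch. The helper names, the plain file copy standing in for a real image snapshot, and the placeholder patch step below are assumptions for illustration, not the dissertation's implementation.

```python
import shutil
from pathlib import Path

def clone_image(image: Path) -> Path:
    """Copy the VM disk image so the patch can be applied offline.
    (A plain file copy stands in for a real snapshot/clone mechanism.)"""
    clone = image.with_suffix(".patched" + image.suffix)
    shutil.copy2(image, clone)
    return clone

def apply_patch_offline(clone: Path, patch: Path) -> None:
    """Placeholder for the offline patch step. A real setup might mount the
    clone (e.g., with libguestfs tools) and run the distribution's updater."""
    print(f"applying {patch} to offline image {clone}")

def swap_image(live: Path, patched: Path) -> None:
    """Maintenance window: stop the VM, swap in the patched image, restart.
    (VM start/stop is omitted; the old image is kept for rollback.)"""
    live.rename(live.with_suffix(live.suffix + ".bak"))
    patched.rename(live)

if __name__ == "__main__":
    image, patch = Path("vm-disk.qcow2"), Path("security-fix.patch")  # assumed paths
    patched = clone_image(image)        # done while the VM keeps running
    apply_patch_offline(patched, patch)
    swap_image(image, patched)          # only this step needs downtime
```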

    Detecting Hardware-assisted Hypervisor Rootkits within Nested Virtualized Environments

    Virtual machine introspection (VMI) is intended to provide a secure and trusted platform from which forensic information can be gathered about the true behavior of malware within a guest. However, it is possible for malware to escape a guest into the host and for hypervisor rootkits, such as BluePill, to stealthily transition a native OS into a virtualized environment. This research examines the effectiveness of selected detection mechanisms against hardware-assisted virtualization rootkits (HAV-Rs) within a nested virtualized environment. It presents the design, implementation, analysis, and evaluation of a hypervisor rootkit detection system that exploits both processor-based and translation lookaside buffer (TLB)-based mechanisms to detect hypervisor rootkits within a variety of nested virtualized systems. It evaluates the effects of different types of virtualization on hypervisor rootkit detection and explores the effectiveness of in-guest HAV-R obfuscation efforts. The results provide convincing evidence that HAV-Rs are detectable in all SVMI scenarios examined, regardless of HAV-R or virtualization type; that the selected detection techniques are effective at detecting HAV-Rs within nested virtualized environments; and that the type of virtualization implemented in a VMI system has minimal to no effect on HAV-R detection. Finally, it is determined that in-guest obfuscation does not successfully hide the existence of an HAV-R.
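
    One classic processor-based check in this family times an instruction that forces a VM exit (e.g., CPUID measured with RDTSC) and compares the latency against a bare-metal baseline. The sketch below only illustrates that generic threshold test with made-up cycle counts; it is not the detection system designed in this research, which also relies on TLB-based mechanisms.

```python
import statistics

# Hypothetical cycle counts for an instruction that causes a VM exit under a
# hypervisor (e.g., CPUID timed with RDTSC); real values are hardware-specific.
BARE_METAL_BASELINE = [95, 100, 98, 97, 102, 99, 101, 96]

def looks_virtualized(samples: list[int], baseline: list[int],
                      factor: float = 3.0) -> bool:
    """Flag a likely hypervisor (or HAV rootkit) if the median latency of the
    trap-prone instruction is far above the bare-metal baseline."""
    return statistics.median(samples) > factor * statistics.median(baseline)

if __name__ == "__main__":
    observed = [410, 395, 402, 420, 398, 405, 399, 412]  # assumed measurements
    print("hypervisor suspected:", looks_virtualized(observed, BARE_METAL_BASELINE))
```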

    Virtual Machine Workloads: The Case for New NAS Benchmarks

    Network Attached Storage (NAS) and Virtual Machines (VMs) are widely used in data centers thanks to their manageability, scalability, and ability to consolidate resources. But the shift from physical to virtual clients drastically changes the I/O workloads seen on NAS servers, due to guest file system encapsulation in virtual disk images and the multiplexing of request streams from different VMs. Unfortunately, current NAS workload generators and benchmarks produce workloads typical of physical machines. This paper makes two contributions. First, we studied the extent to which virtualization is changing existing NAS workloads. We observed significant changes, including the disappearance of file system meta-data operations at the NAS layer, changed I/O sizes, and increased randomness. Second, we created a set of versatile NAS benchmarks to synthesize virtualized workloads. This allows us to generate accurate virtualized workloads without the effort and limitations associated with setting up a full virtualized environment. Our experiments demonstrate that the relative error of our virtualized benchmarks, evaluated across 11 parameters, averages less than 10%.
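
    A benchmark that synthesizes virtualized NAS workloads is, in essence, a request generator driven by parameters such as I/O size, read/write mix, and the fraction of random offsets. The toy generator below illustrates that idea under assumed parameter names and distributions; it is not the paper's benchmark suite.

```python
import random

def generate_requests(n: int, file_size: int, io_size: int = 64 * 1024,
                      random_fraction: float = 0.8, read_fraction: float = 0.7):
    """Yield (op, offset, size) tuples mimicking a virtualized NAS workload:
    fixed block-aligned I/O sizes, mostly random offsets, no metadata ops."""
    offset = 0
    for _ in range(n):
        op = "read" if random.random() < read_fraction else "write"
        if random.random() < random_fraction:
            offset = random.randrange(0, file_size, io_size)  # random, aligned
        else:
            offset = (offset + io_size) % file_size           # sequential run
        yield op, offset, io_size

if __name__ == "__main__":
    # Assumed 16 GiB virtual-disk file as the I/O target.
    for request in generate_requests(5, file_size=16 * 2**30):
        print(request)
```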

    Migration of Multi-Tier Applications to Infrastructure-As-A-Service Clouds: An Investigation Using Kernel-Based Virtual Machines

    To investigate challenges of multi-tier application migration to Infrastructure-as-a-Service (IaaS) clouds, we performed an experimental investigation by deploying a processor-bound and an input/output-bound variant of the RUSLE2 erosion model to an IaaS-based private cloud. Scaling the applications to achieve optimal system throughput is complex and involves much more than simply increasing the number of allotted virtual machines (VMs). While scaling the application variants, we encountered a series of bottlenecks unique to an application's processing, I/O, and memory requirements, herein referred to as an application's profile. To investigate the impact of provisioning variation for hosting multi-tier applications, we tested four schemes of VM deployment across the physical nodes of our cloud. Performance degradation was more pronounced when multiple I/O- or CPU-intensive application components were co-located on the same physical hardware. We investigated the virtualization overhead incurred using Kernel-based Virtual Machines (KVM) by deploying our application variants to both physical and virtual machines. Overhead varied based on the unique characteristics of each application's profile: we observed ~112% overhead for the input/output-bound application and just ~10% overhead for the processor-bound application. Understanding an application's profile was found to be important for optimal IaaS-based cloud migration and scaling.
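
    Overhead figures like the ~112% and ~10% above are commonly computed as the relative slowdown of the virtual deployment against the physical one; the snippet below uses that common definition with made-up timings, since the study's exact methodology and raw numbers are not reproduced here.

```python
def virtualization_overhead(t_physical: float, t_virtual: float) -> float:
    """Percent overhead of running on VMs versus physical hardware."""
    return (t_virtual - t_physical) / t_physical * 100.0

# Illustrative timings only (seconds); they are not taken from the study.
print(f"I/O-bound:  {virtualization_overhead(100.0, 212.0):.0f}% overhead")  # ~112%
print(f"CPU-bound:  {virtualization_overhead(100.0, 110.0):.0f}% overhead")  # ~10%
```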

    Victima: Drastically Increasing Address Translation Reach by Leveraging Underutilized Cache Resources

    Address translation is a performance bottleneck in data-intensive workloads due to large datasets and irregular access patterns that lead to frequent high-latency page table walks (PTWs). PTWs can be reduced by using (i) large hardware TLBs or (ii) large software-managed TLBs. Unfortunately, both solutions have significant drawbacks: increased access latency, power, and area (for hardware TLBs), and costly memory accesses, the need for large contiguous memory blocks, and complex OS modifications (for software-managed TLBs). We present Victima, a new software-transparent mechanism that drastically increases the translation reach of the processor by leveraging the underutilized resources of the cache hierarchy. The key idea of Victima is to repurpose L2 cache blocks to store clusters of TLB entries, thereby providing an additional low-latency and high-capacity component that backs up the last-level TLB and thus reduces PTWs. Victima has two main components. First, a PTW cost predictor (PTW-CP) identifies costly-to-translate addresses based on the frequency and cost of the PTWs they lead to. Second, a TLB-aware cache replacement policy prioritizes keeping TLB entries in the cache hierarchy by considering (i) the translation pressure (e.g., last-level TLB miss rate) and (ii) the reuse characteristics of the TLB entries. Our evaluation results show that in native (virtualized) execution environments, Victima improves average end-to-end application performance by 7.4% (28.7%) over the baseline four-level radix-tree-based page table design and by 6.2% (20.1%) over a state-of-the-art software-managed TLB, across 11 diverse data-intensive workloads. Victima (i) is effective in both native and virtualized environments, (ii) is completely transparent to application and system software, and (iii) incurs very small area and power overheads on a modern high-end CPU. (To appear in the 56th IEEE/ACM International Symposium on Microarchitecture (MICRO), 2023.)
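
    The second component lends itself to a small behavioral model: under high last-level TLB pressure, a cache block that holds a cluster of TLB entries is favored for retention over a cold data block. The sketch below is a simplified software model with assumed thresholds, scoring, and field names, not Victima's actual hardware replacement policy.

```python
from dataclasses import dataclass

@dataclass
class CacheBlock:
    holds_tlb_entries: bool   # repurposed L2 block storing a TLB-entry cluster
    reuse_counter: int        # recent hits, standing in for reuse prediction

def victim_priority(block: CacheBlock, l2_tlb_miss_rate: float,
                    pressure_threshold: float = 0.10) -> int:
    """Lower score = evicted first. Under high translation pressure, blocks
    that hold TLB entries are biased toward retention; cold data blocks go
    first. The scoring and thresholds here are illustrative assumptions."""
    score = block.reuse_counter
    if block.holds_tlb_entries and l2_tlb_miss_rate > pressure_threshold:
        score += 8  # keep translation state when PTWs are frequent and costly
    return score

def pick_victim(cache_set: list[CacheBlock], l2_tlb_miss_rate: float) -> int:
    """Index of the block to evict from one cache set."""
    return min(range(len(cache_set)),
               key=lambda i: victim_priority(cache_set[i], l2_tlb_miss_rate))

if __name__ == "__main__":
    ways = [CacheBlock(False, 1), CacheBlock(True, 2), CacheBlock(False, 5)]
    print("evict way", pick_victim(ways, l2_tlb_miss_rate=0.25))
```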

    Automated Experiments for Deriving Performance-relevant Properties of Software Execution Environments

    The execution environment can play a crucial role when analyzing the performance of a software system. However, detecting execution environment properties and integrating such properties into performance analyses is a manual, error-prone task. In this thesis, a novel approach for detecting performance-relevant properties of the software execution environment is presented. These properties are automatically detected using predefined experiments and integrated into performance prediction tools

    Hybrid Testbed for Security Research in Software-Defined Networks

    Tele-operations require secure end-to-end Network Slicing leveraging Software-Defined Networking to meet the diverse requirements of multi-modal data streams. Research on network slicing needs tools for quickly developing prototypes that work both in emulation and in practical deployments. However, state-of-the-art tools focus only on emulation and lack support for a mixed testbed that includes hardware devices. We decouple topology generation from the actual deployment on destination domains and apply a divide-and-conquer approach: the master coordinator generates an Intermediate Representation (IR) layer, a serialization of the topology, and, via a toolchain, the worker coordinators at the autonomous systems convert the IR into full or partial deployment scripts. The testbed introduces only marginal overhead by design, allowing for flexible deployment of complex topologies to study secure end-to-end Network Slicing.
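
    Conceptually, the IR is a hypervisor- and hardware-agnostic serialization of the topology that each worker coordinator filters down to its own autonomous system before emitting deployment steps. The JSON fields and the script-emitting stub below are assumptions chosen for illustration, not the testbed's actual format or toolchain.

```python
import json

# Assumed IR: nodes carry the autonomous system (domain) responsible for them
# and whether they are emulated or real hardware; links reference node names.
TOPOLOGY_IR = json.dumps({
    "nodes": [
        {"name": "slice-gw",  "domain": "as100", "kind": "emulated"},
        {"name": "robot-arm", "domain": "as200", "kind": "hardware"},
    ],
    "links": [{"a": "slice-gw", "b": "robot-arm", "bandwidth_mbps": 100}],
})

def partial_deployment(ir: str, domain: str) -> list[str]:
    """Worker-coordinator stub: keep only this domain's nodes and emit
    placeholder deployment commands (full vs. partial deployment)."""
    topology = json.loads(ir)
    commands = []
    for node in topology["nodes"]:
        if node["domain"] != domain:
            continue
        action = "start_emulated_node" if node["kind"] == "emulated" else "register_hw_node"
        commands.append(f"{action} {node['name']}")
    return commands

if __name__ == "__main__":
    print(partial_deployment(TOPOLOGY_IR, "as100"))
```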

    Dynamic load balancing based on live migration of virtual machines: Security threats and effects

    Live migration of virtual machines (VMs), the process of transitioning a VM from one virtual machine monitor (VMM) to another without halting the guest operating system, often between distinct physical machines, has opened new opportunities in computing. It allows a clean separation between hardware and software, and facilitates fault management, load balancing, and low-level system maintenance. Implemented by several existing virtualization products, live migration also aids in areas such as high-availability services, transparent mobility, and consolidated management. While virtualization and live migration enable important new functionality, the combination introduces novel security challenges. A virtual machine monitor that incorporates a vulnerable implementation of live migration functionality may expose both the guest and host operating systems to attack and result in a compromise of integrity. Given the large and increasing market for virtualization technology, a comprehensive understanding of virtual machine migration security is essential. The main idea behind this thesis is therefore to create a test environment suitable for experimenting with and analyzing the security implications of exploiting live migration of VMs. Using live VM migration for dynamic load balancing or scheduling, this study determines workload hotspots in the physical environment and, through an effective live migration process, carries out resource profiling. Effective profiling makes it possible to determine how much of each resource needs to be allocated to a VM. To understand why process migration would not work in such scenarios and to better understand live VM migration, this thesis provides insights into which model is most appropriate for automatic, resource-consumption-based load balancing of virtual machine infrastructure. Exploiting the migration process may lead to unexpected results, or to results that are not immediately noticeable; the scope of this thesis is to identify these results and their causes.
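
    A minimal sketch of the hotspot-detection step that precedes a migration decision in such load balancers is shown below. The utilization thresholds, the host and VM fields, and the "move the cheapest VM from the hottest host to the least-loaded host" heuristic are illustrative assumptions, not the model evaluated in this thesis.

```python
from dataclasses import dataclass, field

@dataclass
class VM:
    name: str
    cpu: float   # fraction of a host's CPU consumed by this VM
    mem: float   # fraction of a host's memory consumed by this VM

@dataclass
class Host:
    name: str
    vms: list[VM] = field(default_factory=list)

    def cpu_load(self) -> float:
        return sum(vm.cpu for vm in self.vms)

    def mem_load(self) -> float:
        return sum(vm.mem for vm in self.vms)

def pick_migration(hosts: list[Host], cpu_hot: float = 0.85, mem_hot: float = 0.85):
    """Return (vm, source, target) for one live migration, or None.
    A host is a hotspot if its CPU or memory load exceeds the threshold."""
    for source in hosts:
        if source.cpu_load() <= cpu_hot and source.mem_load() <= mem_hot:
            continue
        vm = min(source.vms, key=lambda v: v.cpu + v.mem)        # cheapest to move
        target = min((h for h in hosts if h is not source),
                     key=lambda h: h.cpu_load() + h.mem_load())  # least-loaded host
        return vm, source, target
    return None

if __name__ == "__main__":
    hosts = [Host("h1", [VM("web", 0.6, 0.4), VM("db", 0.4, 0.5)]),
             Host("h2", [VM("batch", 0.2, 0.3)])]
    decision = pick_migration(hosts)
    if decision:
        vm, source, target = decision
        print(f"migrate {vm.name} from {source.name} to {target.name}")
```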