
    Simulating Windows-Based Cyber Attacks Using Live Virtual Machine Introspection

    Static memory analysis has proven to be a valuable technique for digital forensics. However, the memory capture technique halts the system, causing the loss of important dynamic system data. As a result, live analysis techniques have emerged to complement static analysis. In this paper, a compiled memory analysis tool for virtualization (CMAT-V) is presented as a virtual machine introspection (VMI) utility to conduct live analysis during simulated cyber attacks. CMAT-V leverages static memory dump analysis techniques to provide live system state awareness. CMAT-V parses an arbitrary memory dump from a simulated guest operating system (OS) to extract user information, network usage, active process information and registry files. Unlike some VMI applications, CMAT-V bridges the semantic gap using derivation techniques. This provides increased operating system compatibility for current and future operating systems. This research demonstrates the usefulness of CMAT-V as a situational awareness tool during simulated cyber attacks and measures its overall performance.
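
    As a rough illustration of the derivation-style analysis described above, the sketch below walks a doubly linked process list in a raw guest memory dump and extracts process names. It is a minimal sketch, not CMAT-V itself: the dump path, structure offsets, and starting address are hypothetical placeholders, whereas a real tool would derive such values from the guest OS at run time.

```python
# Minimal sketch of list-walking process enumeration over a raw memory dump.
# All offsets and addresses below are HYPOTHETICAL placeholders; real values
# depend on the guest OS build and would be derived rather than hard-coded.
import struct

DUMP_PATH = "guest_memory.raw"     # assumed path to a raw guest memory capture
LIST_ENTRY_OFFSET = 0x188          # placeholder: offset of the process list entry
NAME_OFFSET = 0x2E0                # placeholder: offset of the process name field
FIRST_PROCESS_ADDR = 0x1A2B3C40    # placeholder: address of the first process block

def read_bytes(dump, addr, size):
    """Read `size` bytes at address `addr` (identity-mapped here for simplicity)."""
    dump.seek(addr)
    return dump.read(size)

def list_processes(path=DUMP_PATH, max_procs=64):
    procs = []
    with open(path, "rb") as dump:
        addr = FIRST_PROCESS_ADDR
        for _ in range(max_procs):
            raw_name = read_bytes(dump, addr + NAME_OFFSET, 15)
            procs.append(raw_name.split(b"\x00", 1)[0].decode(errors="replace"))
            # Follow the forward link of the doubly linked process list, then
            # subtract the list-entry offset to get back to the structure base.
            flink, = struct.unpack("<Q", read_bytes(dump, addr + LIST_ENTRY_OFFSET, 8))
            next_addr = flink - LIST_ENTRY_OFFSET
            if next_addr == FIRST_PROCESS_ADDR or next_addr <= 0:
                break
            addr = next_addr
    return procs
```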

    CloudScope: diagnosing and managing performance interference in multi-tenant clouds

    © 2015 IEEE. Virtual machine consolidation is attractive in cloud computing platforms for several reasons, including reduced infrastructure costs, lower energy consumption and ease of management. However, the interference between co-resident workloads caused by virtualization can violate the service level objectives (SLOs) that the cloud platform guarantees. Existing solutions to minimize interference between virtual machines (VMs) are mostly based on comprehensive micro-benchmarks or online training, which makes them computationally intensive. In this paper, we present CloudScope, a system for diagnosing interference in multi-tenant cloud systems in a lightweight way. CloudScope employs a discrete-time Markov chain model for the online prediction of performance interference of co-resident VMs. It uses the results to optimally (re)assign VMs to physical machines and to optimize the hypervisor configuration, e.g., the CPU share it can use, for different workloads. We have implemented CloudScope on top of the Xen hypervisor and conducted experiments using a set of CPU-, disk-, and network-intensive workloads and a real system (MapReduce). Our results show that CloudScope interference prediction achieves an average error of 9%. The interference-aware scheduler improves VM performance by up to 10% compared to the default scheduler. In addition, the hypervisor reconfiguration can improve network throughput by up to 30%.
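
    To make the prediction idea concrete, here is a minimal sketch of a discrete-time Markov chain estimated from a history of discretised interference levels and used to predict the distribution a few intervals ahead. The state set, history, and step count are illustrative assumptions rather than CloudScope's actual model.

```python
# Minimal sketch: estimate a discrete-time Markov chain over coarse
# interference levels and predict the state distribution a few steps ahead.
import numpy as np

STATES = ["low", "medium", "high"]   # assumed discretised interference levels

def transition_matrix(observed_states):
    """Estimate the transition matrix from a sequence of observed state indices."""
    P = np.zeros((len(STATES), len(STATES)))
    for a, b in zip(observed_states, observed_states[1:]):
        P[a, b] += 1
    row_sums = P.sum(axis=1, keepdims=True)
    # Rows with no observations fall back to a uniform distribution.
    return np.divide(P, row_sums,
                     out=np.full_like(P, 1.0 / len(STATES)),
                     where=row_sums > 0)

def predict_distribution(P, current_state, steps=1):
    """Probability distribution over interference states after `steps` intervals."""
    dist = np.zeros(len(STATES))
    dist[current_state] = 1.0
    return dist @ np.linalg.matrix_power(P, steps)

# Example: a history of mostly low interference with occasional spikes.
history = [0, 0, 1, 0, 0, 2, 1, 0, 0, 0]
P = transition_matrix(history)
print(predict_distribution(P, current_state=0, steps=3))
```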

    Benchmarking communication middleware for cloud computing virtualizers

    REACTION 2013. 2nd International Workshop on Real-Time and Distributed Computing in Emerging Applications. December 3rd, 2013, Vancouver, Canada.
    Virtualization technologies typically introduce additional overhead that is especially challenging for specific domains such as real-time systems. One of the sources of overhead is the additional software layers that provide parallel execution environments, which reduce the effective performance delivered by the infrastructure. This work identifies the factors to be analysed by a benchmark for performance evaluation of a virtualized middleware. It provides a set of benchmark tests that empirically evaluate the overhead and stability of a widely used communication middleware, DDS (Data Distribution Service for Real-Time Systems), which enables message transmission via publisher-subscriber (P/S) interactions. Two different implementations, RTI and OpenSplice, have been analysed over a general purpose virtual machine monitor to evaluate their behavior in a client-server application. The results provide initial insight into the performance that a virtualized communication middleware such as DDS can exhibit.
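
    A round-trip latency micro-benchmark of the general kind described above could look like the sketch below. To avoid inventing vendor-specific DDS calls, the middleware is abstracted behind send and receive callables that a harness would wire to the publisher and subscriber under test; this abstraction, the sample count, and the payload size are assumptions of the sketch, not the paper's benchmark.

```python
# Minimal round-trip latency/jitter benchmark sketch. The middleware under
# test is abstracted behind `send` and a blocking `receive` (an assumption of
# this sketch), so the same harness can be wired to different P/S stacks.
import statistics
import time

def benchmark_round_trip(send, receive, samples=1000, payload=b"x" * 256):
    """Measure round-trip latency over `samples` publish/receive cycles."""
    latencies = []
    for _ in range(samples):
        t0 = time.perf_counter()
        send(payload)
        receive()                      # block until the echoed sample arrives
        latencies.append(time.perf_counter() - t0)
    ordered = sorted(latencies)
    return {
        "mean_us": statistics.mean(latencies) * 1e6,
        "p99_us": ordered[int(0.99 * len(ordered)) - 1] * 1e6,
        "jitter_us": statistics.pstdev(latencies) * 1e6,
    }

# Example wiring with an in-process loopback instead of a real DDS topic.
if __name__ == "__main__":
    inbox = []
    print(benchmark_round_trip(send=inbox.append, receive=inbox.pop, samples=100))
```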

    CloudMon: a resource-efficient IaaS cloud monitoring system based on networked intrusion detection system virtual appliances

    The networked intrusion detection system virtual appliance (NIDS-VA), also known as a virtualized NIDS, plays an important role in the protection and safeguarding of IaaS cloud environments. However, it is nontrivial to guarantee both the performance of the NIDS-VA and the resource efficiency of cloud applications, because both share computing resources in the same cloud environment. To overcome this challenge and trade-off, we propose a novel system, named CloudMon, which enables dynamic resource provisioning and live placement of NIDS-VAs in IaaS cloud environments. CloudMon provides two techniques to maintain high resource efficiency of IaaS cloud environments without degrading the performance of NIDS-VAs and other virtual machines (VMs). The first technique is a virtual machine monitor based resource provisioning mechanism, which can minimize the resource usage of a NIDS-VA with a given performance guarantee. It uses a fuzzy model to characterize the complex relationship between performance and resource demands of a NIDS-VA and develops an online fuzzy controller to adaptively control the resource allocation for NIDS-VAs under varying network traffic. The second is a global resource scheduling approach for optimizing the resource efficiency of the entire cloud environment. It leverages VM migration to dynamically place NIDS-VAs and VMs. An online VM mapping algorithm is designed to maximize the resource utilization of the entire cloud environment. Our virtual machine monitor based resource provisioning mechanism has been evaluated through comprehensive experiments based on the Xen hypervisor and the Snort NIDS in a real cloud environment. The results show that the proposed mechanism can allocate resources for a NIDS-VA on demand while still satisfying its performance requirements. We also verify the effectiveness of our global resource scheduling approach by comparing it with two classic vector packing algorithms; the results show that our approach improved the resource utilization of cloud environments and reduced the number of in-use NIDS-VAs and physical hosts.

    The authors gratefully acknowledge the anonymous reviewers for their helpful suggestions and insightful comments, which improved the quality of the paper. The work reported in this paper has been partially supported by the National Natural Science Foundation of China (No. 61202424, 61272165, 91118008), the China 863 program (No. 2011AA01A202), the Natural Science Foundation of Jiangsu Province of China (BK20130528) and the China 973 Fundamental R&D Program (2011CB302600).
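
    For context on the vector packing baselines mentioned at the end of the abstract, the sketch below shows a simple first-fit-decreasing packing of VMs with multi-dimensional (CPU, memory, network) demands onto identical hosts. The demand and capacity figures are illustrative assumptions, not values from the paper.

```python
# Minimal sketch of multi-dimensional (vector) packing of VMs onto hosts using
# a first-fit-decreasing heuristic, the class of baseline mentioned above.
def first_fit_decreasing(vm_demands, host_capacity):
    """Pack VMs with (cpu, mem, net) demands onto as few identical hosts as possible."""
    hosts = []        # remaining (cpu, mem, net) capacity of each opened host
    placement = {}
    # Consider VMs in decreasing order of their largest normalised demand dimension.
    order = sorted(
        vm_demands,
        key=lambda v: max(d / c for d, c in zip(vm_demands[v], host_capacity)),
        reverse=True,
    )
    for vm in order:
        demand = vm_demands[vm]
        for i, free in enumerate(hosts):
            if all(d <= f for d, f in zip(demand, free)):
                hosts[i] = tuple(f - d for f, d in zip(free, demand))
                placement[vm] = i
                break
        else:
            # No existing host fits: open a new one.
            hosts.append(tuple(c - d for c, d in zip(host_capacity, demand)))
            placement[vm] = len(hosts) - 1
    return placement, len(hosts)

# Illustrative demands: (vCPUs, GB RAM, Gbit/s network).
demands = {"nids-va": (4, 8, 6), "web": (2, 4, 1), "db": (4, 16, 2), "batch": (6, 8, 1)}
print(first_fit_decreasing(demands, host_capacity=(8, 32, 10)))
```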

    Energy and Performance: Management of Virtual Machines: Provisioning, Placement, and Consolidation

    Cloud computing is a new computing paradigm that offers scalable storage and compute resources to users on demand through the Internet. Public cloud providers operate large-scale data centers around the world to handle a large number of user requests. However, data centers consume an immense amount of electrical energy, which can lead to high operating costs and carbon emissions. One of the most common and effective methods to reduce energy consumption is Dynamic Virtual Machine Consolidation (DVMC), enabled by virtualization technology. DVMC dynamically consolidates Virtual Machines (VMs) onto the minimum number of active servers and then switches the idle servers into a power-saving mode to save energy. However, maintaining the desired level of Quality of Service (QoS) between data centers and their users is critical for satisfying users' expectations concerning performance. Therefore, the main challenge is to minimize data center energy consumption while maintaining the required QoS. This thesis addresses this challenge by presenting novel DVMC approaches that reduce the energy consumption of data centers and improve resource utilization under workload-independent quality-of-service constraints. These approaches can be divided into three main categories: heuristic, meta-heuristic and machine learning. Our first contribution is a heuristic algorithm for solving the DVMC problem. The algorithm uses a linear regression-based prediction model to detect overloaded servers based on historical utilization data. It then migrates some VMs from the overloaded servers to avoid further performance degradation. Moreover, our algorithm consolidates VMs onto a smaller number of servers to save energy. The second and third contributions are two novel DVMC algorithms based on the Reinforcement Learning (RL) approach. RL is well suited to highly adaptive and autonomous management in dynamic environments. For this reason, we use RL to solve two main sub-problems in VM consolidation. The first sub-problem is server power mode detection (sleep or active). The second sub-problem is finding an effective solution for server status detection (overloaded or non-overloaded). The fourth contribution of this thesis is an online optimization meta-heuristic algorithm called Ant Colony System-based Placement Optimization (ACS-PO). ACS is a suitable approach for VM consolidation because it is easy to parallelize, produces solutions close to the optimum, and has polynomial worst-case time complexity. The simulation results show that ACS-PO provides substantial improvements over other heuristic algorithms in reducing energy consumption, the number of VM migrations, and performance degradation. Our fifth contribution is a Hierarchical VM management (HiVM) architecture based on a three-tier data center topology that is in very common use in data centers. HiVM is able to scale across many thousands of servers while remaining energy efficient. Our sixth contribution is a Utilization Prediction-aware Best Fit Decreasing (UP-BFD) algorithm. UP-BFD can avoid SLA violations and needless migrations by taking into consideration the current and predicted future resource requirements for the allocation, consolidation, and placement of VMs. Finally, the seventh and last contribution is a novel Self-Adaptive Resource Management System (SARMS) for data centers. To achieve scalability, SARMS uses a hierarchical architecture that is partially inspired by HiVM. Moreover, SARMS provides self-adaptive resource management by dynamically adjusting the utilization thresholds for each server in the data center.
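
    As a small illustration of the first contribution's idea, the sketch below fits a linear regression to a server's recent utilization history and flags the server as overloaded if the extrapolated next value crosses a threshold. The window, threshold, and prediction horizon are illustrative assumptions rather than the thesis's tuned parameters.

```python
# Minimal sketch of linear-regression-based overload detection: fit a trend to
# recent CPU utilization samples and flag the server if the predicted next
# value exceeds a threshold. Threshold and horizon are illustrative assumptions.
import numpy as np

def predicted_overloaded(utilization_history, threshold=0.8, horizon=1):
    """Return True if the regression line predicts utilization above `threshold`."""
    y = np.asarray(utilization_history, dtype=float)
    x = np.arange(len(y))
    slope, intercept = np.polyfit(x, y, deg=1)        # least-squares linear fit
    predicted = slope * (len(y) - 1 + horizon) + intercept
    return predicted > threshold

# Example: a server trending upward towards saturation triggers the flag.
print(predicted_overloaded([0.55, 0.60, 0.66, 0.71, 0.77]))
```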

    Effective Resource and Workload Management in Data Centers

    The increasing demand for storage, computation, and business continuity has driven the growth of data centers. Managing data centers efficiently is a difficult task because of the wide variety of data center applications, their ever-changing intensities, and the fact that application performance targets may differ widely. Server virtualization has been a game-changing technology for IT, providing the possibility to support multiple virtual machines (VMs) simultaneously. This dissertation focuses on how virtualization technologies can be utilized to develop new tools for maintaining high resource utilization, for achieving high application performance, and for reducing the cost of data center management.

    For multi-tiered applications, bursty workload traffic can significantly deteriorate performance. This dissertation proposes an admission control algorithm, AWAIT, for handling overload conditions in multi-tier web services. AWAIT places requests of accepted sessions on hold and refuses to admit new sessions when the system is in a sudden workload surge. To meet the service-level objective, AWAIT serves the requests in the blocking queue with high priority. The size of the queue is dynamically determined according to the workload burstiness.

    Many admission control policies are triggered by instantaneous measurements of system resource usage, e.g., CPU utilization. This dissertation first demonstrates that directly measuring virtual machine resource utilization with standard tools cannot always produce accurate estimates. A directed factor graph (DFG) model is defined to capture the dependencies among multiple types of resources across physical and virtual layers.

    Virtualized data centers enable sharing of resources among hosted applications to achieve high resource utilization. However, it is difficult to satisfy application SLOs on a shared infrastructure, as application workload patterns change over time. AppRM, an automated management system, not only allocates the right amount of resources to applications to meet their performance targets but also adjusts to dynamic workloads using an adaptive model.

    Server consolidation is one of the key applications of server virtualization. This dissertation proposes a VM consolidation mechanism, first by extending the fair load balancing scheme for multi-dimensional vector scheduling, and then by using a queueing network model to capture the service contentions for a particular virtual machine placement.
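
    The admission-control idea behind AWAIT can be sketched as follows, under assumed thresholds: during a surge, requests from already-admitted sessions are parked in a blocking queue to be served with priority, while brand-new sessions are refused. The fixed queue capacity here is a stand-in for the burstiness-driven sizing described above.

```python
# Minimal sketch of surge-time admission control: hold requests of admitted
# sessions in a blocking queue, refuse new sessions. Thresholds and the fixed
# queue capacity are illustrative assumptions, not the dissertation's policy.
from collections import deque

class AdmissionController:
    def __init__(self, surge_utilization=0.85, queue_capacity=100):
        self.surge_utilization = surge_utilization   # assumed overload threshold
        self.queue_capacity = queue_capacity         # stand-in for burstiness-based sizing
        self.blocked = deque()                       # held requests of admitted sessions
        self.admitted_sessions = set()

    def admit(self, session_id, request, utilization):
        if utilization < self.surge_utilization:
            self.admitted_sessions.add(session_id)
            return "serve"
        if session_id in self.admitted_sessions and len(self.blocked) < self.queue_capacity:
            self.blocked.append((session_id, request))
            return "queued"                          # served with priority once load drops
        return "rejected"                            # refuse new sessions during the surge

    def drain(self):
        """Yield held requests first when the surge subsides."""
        while self.blocked:
            yield self.blocked.popleft()
```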