
    Cyber Situational Awareness Using Live Hypervisor-Based Virtual Machine Introspection

    In this research, a compiled memory analysis tool for virtualization (CMAT-V) is developed as a virtual machine introspection (VMI) utility to conduct live analysis during cyber attacks. CMAT-V leverages static memory dump analysis techniques to provide live dynamic system state data. Unlike some VMI applications, CMAT-V bridges the semantic gap using derivation techniques. CMAT-V detects Windows-based operating systems and uses the Microsoft Symbol Server to provide this context to the user. This research demonstrates the usefulness of CMAT-V as a situational awareness tool during cyber attacks, tests the detectability of CMAT-V from the guest system level, and measures its impact on host performance. During experimental testing, live system state information was successfully extracted from two simultaneously executing virtual machines (VMs) under four rootkit-based malware attack scenarios. For each malware attack scenario, CMAT-V was able to provide evidence of the attack. Furthermore, detection testing from within the VM did not reveal the presence of CMAT-V's live memory analysis, supporting the conclusion that CMAT-V does not create uniquely identifiable interference in the VM. Finally, three different benchmark tests reveal an 8% to 12% decrease in host VM performance while CMAT-V is executing.
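    Bridging the "semantic gap" means recovering OS-level objects (processes, modules) from a raw view of guest memory. A minimal toy sketch of that idea, assuming nothing about CMAT-V's actual implementation (the signature heuristic below is purely illustrative):

    ```python
    import re

    # Toy illustration (NOT CMAT-V itself): scan a raw "memory dump" buffer for
    # ASCII process-name strings ending in ".exe" -- a crude stand-in for locating
    # kernel process structures via derived signatures.
    def find_process_names(memory: bytes) -> list[str]:
        # Printable runs followed by ".exe", a naive heuristic signature
        return [m.group().decode() for m in re.finditer(rb"[ -~]{1,11}\.exe", memory)]

    dump = b"\x00\x00svchost.exe\x00\xff\xfelsass.exe\x00garbage"
    print(find_process_names(dump))  # ['svchost.exe', 'lsass.exe']
    ```

    A real introspection tool would instead walk actual kernel data structures (resolved here via the Microsoft Symbol Server), but the principle is the same: derive high-level state from bytes the hypervisor exposes.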

    The Melbourne Shuffle: Improving Oblivious Storage in the Cloud

    We present a simple, efficient, and secure data-oblivious randomized shuffle algorithm. This is the first secure data-oblivious shuffle that is not based on sorting. Our method can be used to improve previous oblivious storage solutions for network-based outsourcing of data.
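    "Data-oblivious" means the sequence of memory locations accessed does not depend on the data being shuffled, so an observer of addresses learns nothing about the permutation. A toy sketch of that property, assuming nothing about the Melbourne Shuffle's actual construction (this quadratic pass is illustrative only, not their algorithm):

    ```python
    # Toy sketch (NOT the Melbourne Shuffle): apply a permutation with a fixed,
    # data-independent access pattern. Every (src, dst) pair is touched in the
    # same order regardless of buf's contents or the permutation's values.
    def oblivious_pass(buf: list, perm: list[int]) -> list:
        out = [None] * len(buf)
        for src in range(len(buf)):        # fixed scan over sources
            for dst in range(len(buf)):    # fixed scan over destinations
                # constant-shape select: a write is issued to out[dst] either way
                out[dst] = buf[src] if perm[src] == dst else out[dst]
        return out

    print(oblivious_pass(["a", "b", "c"], [2, 0, 1]))  # ['b', 'c', 'a']
    ```

    The point of an efficient oblivious shuffle is to achieve this address-independence without the quadratic cost of a naive pass like the one above.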

    TrIMS: Transparent and Isolated Model Sharing for Low Latency Deep Learning Inference in Function as a Service Environments

    Deep neural networks (DNNs) have become core computation components within low latency Function as a Service (FaaS) prediction pipelines: including image recognition, object detection, natural language processing, speech synthesis, and personalized recommendation pipelines. Cloud computing, as the de-facto backbone of modern computing infrastructure for both enterprise and consumer applications, has to be able to handle user-defined pipelines of diverse DNN inference workloads while maintaining isolation and latency guarantees, and minimizing resource waste. The current solution for guaranteeing isolation within FaaS is suboptimal -- suffering from "cold start" latency. A major cause of such inefficiency is the need to move large amounts of model data within and across servers. We propose TrIMS as a novel solution to address these issues. Our proposed solution consists of a persistent model store across the GPU, CPU, local storage, and cloud storage hierarchy, an efficient resource management layer that provides isolation, and a succinct set of application APIs and container technologies for easy and transparent integration with FaaS, Deep Learning (DL) frameworks, and user code. We demonstrate our solution by interfacing TrIMS with the Apache MXNet framework and demonstrate up to 24x speedup in latency for image classification models and up to 210x speedup for large models. We achieve up to 8x system throughput improvement.
    Comment: In Proceedings CLOUD 201
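    The core idea of a persistent model store is that repeated invocations of a function share one already-loaded copy of the model instead of paying the cold-start load each time. A minimal sketch of that caching idea, with hypothetical names (this is not TrIMS's actual API or architecture):

    ```python
    # Toy sketch of a process-wide model cache (names hypothetical, NOT TrIMS):
    # the first request for a model pays the load cost; later requests reuse
    # the shared in-memory copy, avoiding the "cold start".
    _MODEL_CACHE: dict[str, object] = {}

    def load_model(name: str, loader) -> object:
        if name not in _MODEL_CACHE:          # cold start: load exactly once
            _MODEL_CACHE[name] = loader(name)
        return _MODEL_CACHE[name]             # warm start: shared reuse
    ```

    TrIMS generalizes this picture across a GPU/CPU/local-storage/cloud-storage hierarchy and adds an isolation layer, which a single-process dict obviously does not capture.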