
    Doctor of Philosophy

    In the past few years, we have seen a tremendous increase in the amount of digital data being generated. By 2011, storage vendors had shipped 905 PB of purpose-built backup appliances. By 2013, the number of objects stored in Amazon S3 had reached 2 trillion. Facebook had stored 20 PB of photos by 2010. All of this requires efficient storage solutions. To improve space efficiency, compression and deduplication are widely used. Compression works by identifying repeated strings and replacing them with more compact encodings, while deduplication partitions data into fixed-size or variable-size chunks and removes duplicate chunks. While these two approaches have delivered great improvements in space efficiency, they still have limitations. First, traditional compressors are limited in their ability to detect redundancy across a large range, since they search for redundant data at a fine granularity (the string level). Second, metadata embedded in an input file changes more frequently than the data itself, which introduces unnecessary unique chunks and degrades deduplication. In addition, cloud storage systems suffer from unpredictable and inefficient performance because of interference among different types of workloads. This dissertation proposes techniques that improve the effectiveness of traditional compressors and deduplication at saving space, and a new IO scheduling algorithm that improves performance predictability and efficiency for cloud storage systems. The common idea is to exploit similarity. To improve the effectiveness of compression and deduplication, similarity in content is used to transform an input file into a compression- or deduplication-friendly format. We propose Migratory Compression, a generic data transformation that identifies similar data at a coarse granularity (the block level) and groups similar blocks together; it can be used as a preprocessing stage for any traditional compressor. We find that metadata has a large negative impact on the benefit of deduplication. To isolate this impact, we propose separating metadata from data, and we present three approaches for use cases with different constraints. For the commonly used tar format, we propose Migratory Tar: a data transformation and a new tar format that deduplicates better. We also present a case study in which deduplication reduces the storage consumed by disk images while achieving high performance in image deployment. Finally, we apply the same principle of exploiting similarity to IO scheduling to prevent interference between random and sequential workloads, yielding efficient, consistent, and predictable performance for sequential workloads together with high disk utilization.
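
    The block-level grouping idea is easy to see in miniature. The sketch below is a toy illustration, not the dissertation's implementation: real Migratory Compression detects similarity with resemblance features computed over block content, whereas here a crude checksum of each block's prefix stands in, and the block size and sample data are arbitrary.

        import random
        import zlib
        from collections import defaultdict

        BLOCK = 16 * 1024  # small enough that adjacent duplicates fall inside zlib's 32 KB window

        def feature(block: bytes) -> int:
            # Crude stand-in for a resemblance feature: blocks with the same
            # leading bytes land in the same bucket.
            return zlib.adler32(block[:512])

        def migratory_reorder(data: bytes) -> tuple[bytes, list[int]]:
            blocks = [data[i:i + BLOCK] for i in range(0, len(data), BLOCK)]
            buckets: defaultdict[int, list[int]] = defaultdict(list)
            for idx, blk in enumerate(blocks):
                buckets[feature(blk)].append(idx)
            # Emit similar blocks adjacently; keep the permutation so the
            # original block order can be restored after decompression.
            order = [i for key in buckets for i in buckets[key]]
            return b"".join(blocks[i] for i in order), order

        rng = random.Random(0)
        a, b, c, d = (rng.randbytes(BLOCK) for _ in range(4))
        data = (a + b + c + d) * 16  # duplicate blocks spaced beyond the window
        reordered, _ = migratory_reorder(data)
        print(len(zlib.compress(data)), len(zlib.compress(reordered)))

    With the duplicates interleaved, a conventional compressor never sees a repeat inside its match window; after reordering, identical blocks sit next to each other and compress away.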

    Tailoring the Cyber Security Framework: How to Overcome the Complexities of Secure Live Virtual Machine Migration in Cloud Computing

    This paper proposes a novel secure live virtual machine migration framework that uses a virtual trusted platform module instance to improve the integrity of the migration process from one virtual machine to another on the same platform. The proposed framework, called Kororā, is designed and developed for a public infrastructure-as-a-service cloud-computing environment and runs concurrently on the same hardware components (Input/Output, Central Processing Unit, Memory) and the same hypervisor (Xen); however, a combination of parameters needs to be evaluated before implementing Kororā. The implementation of Kororā is not practically feasible in traditional distributed computing environments: it requires fixed resources with high-performance capabilities, connected through a high-speed, reliable network. The following research objectives were defined to identify the integrity features of live virtual machine migration in the cloud system: to understand the security issues associated with cloud computing, virtual trusted platform modules, virtualization, live virtual machine migration, and hypervisors; to identify the requirements for the proposed framework, including those related to live VM migration among different hypervisors; to design and validate the model, processes, and architectural features of the proposed framework; and to propose and implement an end-to-end security architectural blueprint for cloud environments, providing an integrated view of protection mechanisms, and then to validate the proposed framework to improve the integrity of live VM migration. This is followed by a comprehensive review of the evaluation system architecture and the proposed framework's state machine. The overarching aim of this paper, therefore, is to present a detailed analysis of the cloud computing security problem from the perspective of cloud architectures and the cloud service delivery models. Based on this analysis, the study derives a detailed specification of the cloud live virtual machine migration integrity problem and the key features that should be covered by the proposed framework.
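
    Kororā's vTPM-based design cannot be reproduced in a few lines, but the underlying integrity check is simple to illustrate. The following is a toy sketch under stated assumptions: a verifier supplies a fresh nonce, both endpoints can hash the migrated VM state, and a real system would extend these digests into vTPM platform configuration registers rather than compare them directly. All names and values here are hypothetical.

        import hashlib

        def measure(vm_state: bytes, nonce: bytes) -> str:
            # Toy measurement: bind the captured VM state to a verifier
            # nonce so a replayed digest cannot pass for a fresh one.
            return hashlib.sha256(nonce + vm_state).hexdigest()

        nonce = b"fresh-verifier-nonce"            # hypothetical
        state = b"serialized memory + disk state"  # hypothetical
        before = measure(state, nonce)             # taken at the source
        after = measure(state, nonce)              # re-taken at the destination
        assert before == after                     # migration left the state intact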

    ON OPTIMIZATIONS OF VIRTUAL MACHINE LIVE STORAGE MIGRATION FOR THE CLOUD

    Virtual Machine (VM) live storage migration is widely performed in the data centers of the Cloud for the purposes of load balancing, reliability, availability, hardware maintenance, and system upgrades. It entails moving all the state information of the VM being migrated, including memory state, network state, and storage state, from one physical server to another within the same data center or across different data centers. To minimize its performance impact, this migration process is required to be transparent to applications running within the migrating VM, meaning that applications keep running inside the VM as if there were no migration operations at all. In this dissertation, a thorough literature review is conducted to provide a big picture of the VM live storage migration process, its problems, and existing solutions. After an in-depth examination, we observe that severe IO interference between the VM IO threads and the migration IO threads exists and causes both types of IO threads to suffer performance degradation. This interference stems from the fact that both types of IO threads share the same critical IO path, reading from and writing to the same shared storage system. Owing to IO resource contention and interference between the two different types of IO requests, not only does the IO request queue lengthen in the storage system, but time-consuming disk seek operations also become more frequent. Based on this fundamental observation, this dissertation research presents three related but orthogonal solutions that tackle the IO interference problem in order to improve VM live storage migration performance. First, we introduce the Workload-Aware IO Outsourcing scheme, called WAIO, to improve VM live storage migration efficiency. Second, we propose a novel scheme, called SnapMig, to improve VM live storage migration efficiency and eliminate its performance impact on user applications at the source server by effectively leveraging the existing VM snapshots in the backup servers. Third, we propose the IOFollow scheme to improve both VM performance and migration performance simultaneously. Finally, we outline directions for future research. Advisor: Hong Jiang
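
    The effect of sharing one IO path is easy to model. The simulation below is a toy queueing sketch, not WAIO, SnapMig, or IOFollow: it assumes a disk that serves one request per tick, a migration burst queued at time zero, and a VM that issues one request every five ticks, and it only shows why draining VM IO from a dedicated queue cuts VM latency.

        from collections import deque

        def mean_vm_latency(separate_queues: bool) -> float:
            # Disk model: one request served per tick. 50 migration requests
            # are queued at t=0; the VM issues one request every 5 ticks.
            vm_arrivals = list(range(0, 250, 5))
            mig_pending = 50
            vm_queue, latencies = deque(), []
            now = i = 0
            while i < len(vm_arrivals) or vm_queue or mig_pending:
                while i < len(vm_arrivals) and vm_arrivals[i] <= now:
                    vm_queue.append(vm_arrivals[i])
                    i += 1
                if vm_queue and (separate_queues or mig_pending == 0):
                    # A dedicated VM queue is drained first; otherwise FIFO
                    # forces VM IO to wait behind the migration burst.
                    latencies.append(now - vm_queue.popleft())
                elif mig_pending:
                    mig_pending -= 1
                now += 1
            return sum(latencies) / len(latencies)

        print(mean_vm_latency(False))  # shared IO path: VM IO waits
        print(mean_vm_latency(True))   # isolated path: near-zero wait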

    A comprehensive meta-analysis of cryptographic security mechanisms for cloud computing

    The concept of cloud computing offers measurable computational or information resources as a service over the Internet. The major motivation behind the cloud setup is economic benefit, because it promises reduced operational and infrastructural expenditure. To turn this into reality, several impediments and hurdles must be tackled, the most profound of which are security, privacy, and reliability issues. As user data is revealed to the cloud, it leaves the protection sphere of the data owner, which brings partly new security and privacy concerns. This work focuses on these issues across the various cloud service and deployment models by spotlighting their major challenges. While classical cryptography is an ancient discipline, modern cryptography, developed mostly over the last few decades, is the body of techniques that must be applied to ensure strong security and privacy in today's real-world scenarios. The technological solutions and the short- and long-term research goals of cloud security are described and addressed using classical as well as modern cryptographic mechanisms. This work explores new directions in cloud computing security while highlighting the correct selection of these fundamental technologies from a cryptographic point of view.
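
    As one concrete instance of the modern mechanisms such a survey weighs: authenticated encryption lets the data owner protect both confidentiality and integrity before data leaves their protection sphere. The sketch assumes the third-party cryptography package is installed; key management, the genuinely hard part in a real cloud deployment, is reduced to a local variable here.

        import os
        from cryptography.hazmat.primitives.ciphers.aead import AESGCM

        key = AESGCM.generate_key(bit_length=256)  # held by the data owner
        aead = AESGCM(key)
        nonce = os.urandom(12)                     # must never repeat per key
        aad = b"object-id=backup/2020-01-01"       # authenticated, not encrypted

        ciphertext = aead.encrypt(nonce, b"sensitive user data", aad)
        # The cloud stores (nonce, aad, ciphertext); tampering with any of
        # them makes decryption raise an error rather than return garbage.
        plaintext = aead.decrypt(nonce, ciphertext, aad)
        assert plaintext == b"sensitive user data"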

    ORC: Increasing cloud memory density via object reuse with capabilities

    Cloud environments host many tenants, and typically there is substantial overlap between the application binaries and libraries executed by tenants. Thus, memory de-duplication can increase memory density by allocating memory for shared binaries only once. Existing de-duplication approaches, however, either rely on a shared OS to de-duplicate binary objects, which provides unacceptably weak isolation, or exploit hypervisor-based de-duplication at the level of memory pages, which is blind to the semantics of the objects to be shared. We describe Object Reuse with Capabilities (ORC), which supports the fine-grained sharing of binary objects between tenants while isolating tenants strongly through a small trusted computing base (TCB). ORC uses hardware support for memory capabilities to isolate tenants, which permits shared objects to be accessible to multiple tenants safely. Since ORC shares binary objects within a single address space through capabilities, it uses a new relocation type to create per-tenant state when loading shared objects. ORC supports the loading of objects by an untrusted guest, outside of its TCB, only verifying the safety of the loaded data. Our experiments show that ORC achieves higher memory density with lower overhead than hypervisor-based de-duplication.
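
    ORC's isolation comes from hardware memory capabilities, which a portable sketch cannot reproduce, but the bookkeeping side of object reuse can be illustrated. Below is a minimal sketch under that caveat: identical binary objects map to one shared copy keyed by content hash, and each load materializes per-tenant writable state, loosely analogous to ORC's per-tenant relocation. All names are illustrative, not the paper's interfaces.

        import hashlib

        class ObjectCache:
            def __init__(self) -> None:
                self.shared: dict[str, bytes] = {}   # digest -> read-only object
                self.tenant_data: dict[tuple[str, str], bytearray] = {}

            def load(self, tenant: str, obj: bytes, data_size: int) -> str:
                digest = hashlib.sha256(obj).hexdigest()
                self.shared.setdefault(digest, obj)  # stored once, reused
                # Per-tenant writable state for the shared object, a loose
                # analogue of materializing tenant-private data on load.
                self.tenant_data[(tenant, digest)] = bytearray(data_size)
                return digest

        cache = ObjectCache()
        h1 = cache.load("tenant-a", b"\x7fELF...libc.so", 4096)
        h2 = cache.load("tenant-b", b"\x7fELF...libc.so", 4096)
        assert h1 == h2 and len(cache.shared) == 1   # one copy, two tenants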

    The Virtual Block Interface: A Flexible Alternative to the Conventional Virtual Memory Framework

    Computers continue to diversify with respect to system designs, emerging memory technologies, and application memory demands. Unfortunately, continually adapting the conventional virtual memory framework to each possible system configuration is challenging, and often results in performance loss or requires non-trivial workarounds. To address these challenges, we propose a new virtual memory framework, the Virtual Block Interface (VBI). We design VBI based on the key idea that delegating memory management duties to hardware can reduce the overheads and software complexity associated with virtual memory. VBI introduces a set of variable-sized virtual blocks (VBs) to applications. Each VB is a contiguous region of the globally-visible VBI address space, and an application can allocate each semantically meaningful unit of information (e.g., a data structure) in a separate VB. VBI decouples access protection from memory allocation and address translation. While the OS controls which programs have access to which VBs, dedicated hardware in the memory controller manages the physical memory allocation and address translation of the VBs. This approach enables several architectural optimizations that (1) efficiently and flexibly cater to different and increasingly diverse system configurations, and (2) eliminate key inefficiencies of conventional virtual memory. We demonstrate the benefits of VBI with two important use cases: (1) reducing the overheads of address translation (for both native execution and virtual machine environments), as VBI reduces the number of translation requests and associated memory accesses; and (2) two heterogeneous main memory architectures, where VBI increases the effectiveness of managing fast memory regions. For both cases, VBI significantly improves performance over conventional virtual memory.
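
    A toy model makes the division of labor concrete. In the sketch below (illustrative names, not the paper's interfaces), the OS only decides which client may touch a virtual block, while a memory-controller stand-in performs physical allocation and translation lazily on first access.

        from dataclasses import dataclass, field

        @dataclass
        class VirtualBlock:
            base: int                  # position in the global VBI space
            size: int                  # variable-sized, chosen per object
            allowed: set = field(default_factory=set)  # OS-managed protection
            phys_base: int | None = None               # controller-managed

        class MemController:
            def __init__(self) -> None:
                self.next_free = 0

            def translate(self, vb: VirtualBlock, client: str, off: int) -> int:
                assert client in vb.allowed    # the OS set this policy
                if vb.phys_base is None:       # allocate on first touch
                    vb.phys_base = self.next_free
                    self.next_free += vb.size
                return vb.phys_base + off

        mc = MemController()
        heap = VirtualBlock(base=0, size=1 << 20, allowed={"app"})
        print(mc.translate(heap, "app", 128))  # physical address 128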

    Memory Page Stability and its Application to Memory Deduplication

    In virtualized environments, typically cloud computing environments, multiple virtual machines run on the same physical host. These virtual machines usually run the same operating systems and applications, which results in many duplicate data blocks in memory. Memory deduplication is a memory optimization technique that removes this redundancy by storing a single copy of these duplicate blocks in machine memory, which in turn results in better utilization of the available memory capacity. In this dissertation, we characterize the nature of the memory pages that contribute to memory deduplication techniques. We show how such characterization can give useful insights towards better design and implementation of software and hardware-assisted memory deduplication systems. In addition, we quantify the performance impact of different memory deduplication techniques and show that even though memory deduplication improves cache hierarchy performance, there is a performance overhead associated with the copy-on-write exceptions triggered by diverging pages. We propose a generic prediction framework that is capable of predicting the stability of memory pages based on the page flags available through the Linux kernel. We evaluate the proposed prediction framework and then discuss various applications that can benefit from it, specifically memory deduplication and live migration.
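
    The mechanics the dissertation characterizes can be sketched in a few lines. The code below is a toy illustration, not the proposed predictor: it merges identical pages by content hash, as software scanners such as KSM do, and shows why stability matters, since a merged page that is soon written triggers a copy-on-write break that undoes the saving. The boolean "stable" flag standing in for kernel page flags is hypothetical.

        import hashlib
        from collections import defaultdict

        PAGE = 4096

        def mergeable(pages: list[tuple[bytes, bool]]) -> dict[str, list[int]]:
            # Bucket identical pages by content hash, but only consider pages
            # predicted stable: merging a soon-to-diverge page trades the
            # memory saving for a copy-on-write fault later.
            buckets: defaultdict[str, list[int]] = defaultdict(list)
            for i, (content, stable) in enumerate(pages):
                if stable:
                    buckets[hashlib.sha1(content).hexdigest()].append(i)
            return {h: idx for h, idx in buckets.items() if len(idx) > 1}

        zero = bytes(PAGE)
        pages = [(zero, True), (zero, True), (zero, False), (b"x" * PAGE, True)]
        print(mergeable(pages))  # pages 0 and 1 can share one frame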

    An Info-Leak Resistant Kernel Randomization

    Given the significance that the cloud paradigm has in modern society, it is extremely important to provide security to users at all levels, especially at the most fundamental ones, since these are the most sensitive and potentially harmful in the event of an attack. However, the cloud computing paradigm brings new challenges in which security mechanisms are weakened or deactivated to improve profitability and exploitation of the available resources. Kernel randomization is an important security mechanism currently present in all major operating systems. Function-granular kernel randomization is a further step that aims to be the future of kernel randomization, because it provides much more security than current kernel randomization approaches. Unfortunately, function-granular kernel randomization also significantly impacts the performance and potential benefits of memory deduplication. Both function-granular kernel randomization and memory deduplication are desirable and beneficial: the first for the strong protection it gives, and the second for the reduction in memory consumption. In this paper, we analyse the impact of function-granular kernel randomization on memory deduplication, revealing why it cannot offer maximum security and shareability of memory simultaneously. We also discuss the reasons why having fully position-independent kernel code counter-intuitively does not solve the problem, posing a challenge to kernel randomization designers. To solve these problems, we propose a function-granular kernel randomization modification for cloud systems that enables full function-granular kernel randomization while reducing memory deduplication cancellations to almost zero. The proposed approach forces guest kernels of the same tenant to have the same random memory layout for the memory regions with a high impact on deduplication, ensuring a high rate of deduplicated pages while kernel randomization remains fully enabled. Our approach enables cloud providers to have both high levels of security and efficient use of resources. Vañó-García, F.; Marco-Gisbert, H. (2020). An Info-Leak Resistant Kernel Randomization. IEEE Access, 8:161612-161629. https://doi.org/10.1109/ACCESS.2020.3019774
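
    The core of the proposed modification, deriving one randomized layout per tenant, can be sketched directly. The snippet below is a minimal illustration with hypothetical names, not the paper's mechanism: a per-tenant secret seeds the shuffle of (here, symbolic) kernel functions, so guests of the same tenant produce byte-identical kernel pages that still deduplicate, while different tenants get unrelated layouts.

        import hashlib
        import random

        KERNEL_FUNCS = ["sys_read", "sys_write", "schedule", "do_fork", "kmalloc"]

        def tenant_layout(tenant_secret: bytes) -> list[str]:
            # Same tenant -> same seed -> same function order, so the
            # resulting kernel pages deduplicate across that tenant's
            # guests; other tenants see a different permutation.
            seed = int.from_bytes(hashlib.sha256(tenant_secret).digest()[:8], "big")
            order = list(KERNEL_FUNCS)
            random.Random(seed).shuffle(order)
            return order

        assert tenant_layout(b"tenant-a") == tenant_layout(b"tenant-a")
        print(tenant_layout(b"tenant-a"))
        print(tenant_layout(b"tenant-b"))  # almost surely different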