
    Doctor of Philosophy

    A modern software system is a composition of parts that are themselves highly complex: operating systems, middleware, libraries, servers, and so on. In principle, compositionality of interfaces means that we can understand any given module independently of the internal workings of other parts. In practice, however, abstractions are leaky, and with every generation, modern software systems grow in complexity. Traditional ways of understanding failures, explaining anomalous executions, and analyzing performance are reaching their limits in the face of emergent behavior, unrepeatability, cross-component execution, software aging, and adversarial changes to the system at run time. Deterministic systems analysis has the potential to change the way we analyze and debug software systems. Recorded once, the execution of the system becomes an independent artifact that can be analyzed offline. The availability of the complete system state, the guaranteed behavior of re-execution, and the absence of limitations on the run-time complexity of analysis collectively enable deep, iterative, and automatic exploration of the dynamic properties of the system. This work creates a foundation for making deterministic replay a ubiquitous system analysis tool. It defines design and engineering principles for building fast and practical replay machines capable of capturing the complete execution of an entire operating system with an overhead of a few percent on realistic workloads and with minimal installation cost. To provide an intuitive interface for constructing replay analysis tools, this work implements a powerful virtual machine introspection layer that allows an analysis algorithm to be programmed against the state of the recorded system in the familiar terms of source-level variable and type names. To support performance analysis, the replay engine provides a faithful performance model of the original execution during replay.
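
    As a rough illustration of the kind of analysis tool such an introspection layer is meant to enable, the sketch below counts file-open calls in a recorded execution by referring to a kernel function and its argument by source-level name. The `replay_vmi` module, the `Recording` class, and the `on_function_entry`/`read_variable` calls are hypothetical placeholders, not the dissertation's actual interface.

```python
# Minimal sketch of a replay-analysis tool written against a hypothetical
# introspection API; the module, class, and method names below are
# illustrative stand-ins, not the dissertation's real interface.

from collections import Counter

from replay_vmi import Recording  # hypothetical binding to the replay engine

syscall_counts = Counter()

def on_open(cpu):
    # Resolve the guest kernel's open() argument by its source-level name,
    # exactly as a developer would refer to it in C.
    filename = cpu.read_variable("filename", as_type="char *")
    syscall_counts[filename] += 1

with Recording("/var/log/replays/boot-0042.rec") as rec:
    # Break on a kernel function by symbol name; the introspection layer
    # maps it into the recorded guest's address space.
    rec.on_function_entry("do_sys_open", on_open)
    rec.run()  # deterministically re-executes the captured run

for path, n in syscall_counts.most_common(10):
    print(f"{n:6d}  {path}")
```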

    Exploring New Paradigms for Mobile Edge Computing

    Edge computing has been growing rapidly in recent years to meet the surging demands of mobile apps and the Internet of Things (IoT). Like the cloud, edge computing provides computation, storage, data, and application services to end users. However, edge computing is usually deployed at the edge of the network, where it can provide low-latency and high-bandwidth services to end devices. So far, edge computing is still not widely adopted. One significant challenge is that the edge computing environment is usually heterogeneous, involving various operating systems and platforms, which complicates app development and maintenance. In this dissertation, we explore combining edge computing with virtualization techniques to provide a homogeneous environment in which edge nodes and end devices run exactly the same operating system. We develop three systems based on this homogeneous edge computing environment to improve the security and usability of end-device applications. First, we introduce vTrust, a new mobile Trusted Execution Environment (TEE), which offloads the general execution and storage of a mobile app to a nearby edge node and secures the I/O between the edge node and the mobile device with the aid of a trusted hypervisor on the mobile device. Specifically, vTrust establishes an encrypted I/O channel between the local hypervisor and the edge node, so that any sensitive data flowing through the hosted mobile OS is encrypted. Second, we present MobiPlay, a record-and-replay tool for mobile app testing. By pairing a mobile phone with an edge node, MobiPlay can effectively record and replay all types of input data on the mobile phone without modifying the mobile operating system. To do so, MobiPlay runs the application under test on the edge node, under exactly the same environment as the mobile device, and allows the tester to operate the application from a mobile device. Last, we propose vRent, a new mechanism that leverages smartphone resources as edge nodes based on Xen virtualization and MiniOS. vRent aims to mitigate the shortage of available edge nodes. It enforces isolation and security by running the users' Android OSes as guest OSes and renting their resources to a third party in the form of MiniOSes.
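
    The sketch below illustrates the basic record-and-replay idea behind MobiPlay: timestamped capture of input events, followed by re-injection with the original relative timing. The `capture_input_event` and `inject_input_event` callables are hypothetical stand-ins for whatever hooks the phone and edge node actually use; they are not part of the described system.

```python
# Sketch of timestamped input capture and replay in the spirit of MobiPlay.
# The two callables passed in are hypothetical placeholders for the real
# event-interception and event-injection hooks.

import json
import time

def record(capture_input_event, log_path, duration_s=60.0):
    """Log every input event with its offset from the start of recording."""
    start = time.monotonic()
    events = []
    while time.monotonic() - start < duration_s:
        ev = capture_input_event()  # e.g. {"type": "touch", "x": ..., "y": ...}
        if ev is not None:
            events.append({"t": time.monotonic() - start, "event": ev})
    with open(log_path, "w") as f:
        json.dump(events, f)

def replay(inject_input_event, log_path):
    """Re-inject the recorded events with their original relative timing."""
    with open(log_path) as f:
        events = json.load(f)
    start = time.monotonic()
    for entry in events:
        # Wait until the event's original offset is reached, then inject it.
        delay = entry["t"] - (time.monotonic() - start)
        if delay > 0:
            time.sleep(delay)
        inject_input_event(entry["event"])
```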

    Thin Hypervisor-Based Security Architectures for Embedded Platforms

    Virtualization has grown increasingly popular, thanks to its benefits of isolation, management, and utilization, supported by hardware advances. It is also receiving attention for its potential to support security, through hypervisor-based services and advanced protections supplied to guests. Today, virtualization is even making inroads in the embedded space, and embedded systems, with their security needs, have already started to benefit from virtualization's security potential. In this thesis, we investigate the possibilities for thin hypervisor-based security on embedded platforms. In addition to a significant background study, we present the implementation of a low-footprint thin hypervisor capable of providing security protections to a single FreeRTOS guest kernel on ARM. Backed by performance test results, our hypervisor provides security to a formerly unsecured kernel with minimal performance overhead, and represents a first step in a larger research effort into the security advantages and possibilities of embedded thin hypervisors. Our results show that thin hypervisors are both feasible and beneficial even on limited embedded systems, and they set the stage for more advanced investigations, implementations, and security applications in the future.

    Mobile Agent Based Cloud Computing

    Cloud Computing is becoming a revolutionary computing paradigm. It offers various types of services and applications that are delivered through the Internet cloud. These services aim to provide a reliable, fault-tolerant, and dynamic computing environment to the user and offer computing resources on demand. Skype, Dropbox, and Yahoo Mail are some of the cloud services that have had a major impact on our lives. Several measures are taken to maintain the quality of service in the cloud and to make IT infrastructure available at low cost. This paper presents various aspects of Cloud Computing, its implementation features and challenges, and explores the potential scope for research. The major part of this paper surveys studies related to the possibilities of integrating Mobile Agents into Cloud Computing, since these technologies appear to be promising and marketable. Thus, the paper focuses on resolving challenges and bolstering the services of Cloud Computing by utilizing Mobile Agent technology in various aspects of Cloud Computing.

    Distributed Shared Memory based Live VM Migration

    Cloud computing is the new trend in computing services and the IT industry; this computing paradigm has numerous benefits for utilizing IT infrastructure resources and reducing service costs. The key features of cloud computing depend on the mobility and scalability of computing resources, achieved by managing virtual machines. Virtualization decouples the software from the hardware and manages software and hardware resources easily and without interruption of services. Live virtual machine migration is an essential tool for dynamic resource management in current data centers. It is defined as the process of moving a running virtual machine or application between different physical machines without disconnecting the client or application. Many techniques have been developed to achieve this goal, and several metrics (total migration time, downtime, amount of data sent, and application performance) are used to measure the performance of live migration. These metrics capture the quality of the VM services that clients care about, because the main goal of clients is to keep application performance high with minimum service interruption. Pre-copy live VM migration proceeds in four phases: preparation, iterative migration, stop and copy, and resume and commitment. During the preparation phase, the source and destination physical servers are selected, the resources on the destination server are reserved, and the VM to be migrated is selected; the cloud manager is responsible for all of these decisions. During the iterative migration phase, VM state migration takes place and the memory state is transferred to the target node; meanwhile, the migrated VM continues to execute and dirties its memory. In the stop-and-copy phase, the VM's virtual CPU is stopped and the processor and network states are transferred to the destination host; service downtime results from stopping VM execution and moving the CPU and network states. Finally, in the resume-and-commitment phase, the migrated VM is resumed on the destination physical host, the remaining memory pages are pulled by the destination machine from the source machine, and the source machine's resources are released. In this thesis, pre-copy live VM migration using a Distributed Shared Memory (DSM) computing model is proposed. The setup is built using two identical computation nodes that host all of the proposed environment's services, namely the virtualization infrastructure (the XenServer 6.2 hypervisor), the shared storage server (a network file system), and the DSM and High Performance Computing (HPC) cluster. The custom DSM framework is based on Grappa, a low-latency memory-update system. The HPC cluster is used to parallelize the workload across the CPUs of the computation nodes; it employs OpenMPI and MPI libraries to support parallelization and auto-parallelization. The DSM allows the cluster CPUs to access the same memory pages, resulting in fewer memory data updates, which reduces the amount of data transferred through the network. The proposed model achieves a good improvement in the live VM migration metrics: downtime is reduced by 50% for an idle Windows VM workload and by 66.6% for an idle Ubuntu Linux workload. In general, the proposed model not only reduces the downtime and the total amount of data sent, but also does not degrade other metrics such as total migration time and application performance.
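
    The sketch below restates the four pre-copy phases described above as code. The `hv` object and its methods are hypothetical stand-ins for hypervisor-control operations; they are not an actual XenServer, Xen, or Grappa API.

```python
# Illustrative sketch of pre-copy live VM migration; `hv` is a hypothetical
# hypervisor-control interface whose methods stand in for real operations.

def precopy_migrate(hv, vm, src, dst, max_rounds=30, dirty_page_threshold=256):
    # Phase 1: preparation - reserve resources for the VM on the destination.
    hv.reserve_resources(dst, vm)

    # Phase 2: iterative migration - copy all memory pages, then keep
    # re-sending the pages the still-running VM dirties until the dirty set
    # is small enough.
    hv.send_pages(src, dst, hv.all_pages(vm))
    for _ in range(max_rounds):
        dirty = hv.get_dirty_pages(src, vm)
        if len(dirty) <= dirty_page_threshold:
            break
        hv.send_pages(src, dst, dirty)

    # Phase 3: stop and copy - pause the vCPUs and transfer the remaining
    # dirty pages plus the CPU and network state (this interval is the
    # downtime seen by clients).
    hv.pause_vcpus(src, vm)
    hv.send_pages(src, dst, hv.get_dirty_pages(src, vm))
    hv.send_cpu_and_network_state(src, dst, vm)

    # Phase 4: resume and commitment - restart the VM on the destination and
    # release the source machine's resources.
    hv.resume(dst, vm)
    hv.release_resources(src, vm)
```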

    Performance evaluation of VM-level record-and-replay techniques and applications

    Virtual machine level record and replay (RnR) can be used for complex system debugging and analysis, fault-tolerant replication, and forensic analysis. Previous work on the performance evaluation of RnR frameworks is incomplete due to its narrow focus: RnR-related projects either evaluate the performance of plain record-and-replay mechanisms or specifically target the effectiveness of the functionality RnR supports. In order to identify the performance bottlenecks in a complex RnR system and its various applications, this thesis conducts a thorough evaluation and analysis of three different modes of RnR: record, replay with checkpointing, and replay with VMI analysis. Both RnR system developers and users can benefit from our work. With our evaluation results, system developers can propose more efficient designs, and RnR users can configure the system properly to achieve the expected performance.

    Building Computing-As-A-Service Mobile Cloud System

    The last five years have witnessed the proliferation of smart mobile devices, the explosion of various mobile applications, and the rapid adoption of cloud computing in business, governmental, and educational IT deployments. There is also a growing trend of combining mobile computing and cloud computing into a new popular computing paradigm. This thesis envisions a future of mobile computing shaped primarily by three trends. First, cloud servers equipped with high-speed multi-core processors have become mainstream, while ARM-based servers have recently grown in popularity and virtualization on ARM systems is gaining wide attention. Second, high-speed Internet access has become pervasive and highly available, so mobile devices are able to connect to the cloud anytime and anywhere. Third, cloud computing is reshaping the way computing resources are used: the classic pay/scale-as-you-go model allows hardware resources to be optimally allocated and well managed. These three trends lend credence to a new mobile computing model that combines a resource-rich cloud with less powerful mobile devices. In this model, mobile devices run a core virtualization hypervisor with virtualized phone instances, allowing for pervasive access to more powerful, highly available virtual phone clones in the cloud. The centralized cloud, backed by rich computing and memory resources, hosts the virtual phone clones and repeatedly synchronizes data changes with the virtual phone instances running on mobile devices. Users can flexibly isolate different computing environments. In this dissertation, we explored the opportunity of leveraging cloud resources for mobile computing for the purposes of energy saving, performance augmentation, and secure isolation of computing environments. We proposed a framework that allows mobile users to seamlessly leverage the cloud to augment the computing capability of mobile devices and also makes it simpler for application developers to run their smartphone applications in the cloud without tedious application partitioning. This framework was built with virtualization on both the server side and the mobile devices, and it has three building blocks: agile virtual machine deployment, efficient virtual resource management, and seamless mobile augmentation. We presented the design, implementation, and evaluation of these three components and demonstrated the feasibility of the proposed mobile cloud model.
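
    The sketch below illustrates the clone-synchronization idea described above: a periodic loop that ships state changes from the local virtual phone instance to its clone in the cloud. The `collect_dirty_blocks` and `push_to_clone` callables are hypothetical placeholders, not part of the thesis's actual framework.

```python
# Sketch of periodic device-to-cloud synchronization of a virtual phone
# instance with its cloud clone; the two callables are hypothetical hooks.

import time

def sync_loop(collect_dirty_blocks, push_to_clone, interval_s=5.0, stop=lambda: False):
    """Periodically push the local virtual phone's state changes to its clone."""
    while not stop():
        dirty = collect_dirty_blocks()  # state modified since the last sync
        if dirty:
            push_to_clone(dirty)        # apply the delta to the cloud clone
        time.sleep(interval_s)
```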