    50 years of isolation

    The traditional means of isolating applications from one another is the "process" abstraction provided by the operating system. However, as applications now consist of multiple fine-grained components, the traditional process model is proving insufficient to isolate those components from each other. Statistics indicate that a high percentage of software failures occur due to the propagation of component failures. These observations are further supported by the efforts of modern Internet browser developers, for example, to adopt multi-process architectures in order to increase robustness. A fresh look at the available options for isolating program components is therefore necessary, and this paper provides an overview of previous and current research in this area.
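
    The browser example is worth making concrete. Below is a minimal sketch, not taken from the paper, of the multi-process idea: a component runs in its own OS process, so an uncaught failure in the component is reported back to the parent rather than propagating into it. The component name render_component is hypothetical.

        import multiprocessing as mp

        def render_component(doc):
            # Hypothetical component: a bug here only brings down this process.
            if doc == "bad":
                raise RuntimeError("component crashed")
            return "rendered:" + doc

        def run_isolated(component, arg, timeout=5.0):
            # Run the component in a separate process; the parent survives
            # an uncaught exception in the child.
            with mp.Pool(1) as pool:
                pending = pool.apply_async(component, (arg,))
                try:
                    return pending.get(timeout)
                except Exception as exc:
                    return "component failed: %s" % exc

        if __name__ == "__main__":
            print(run_isolated(render_component, "page1"))  # rendered:page1
            print(run_isolated(render_component, "bad"))    # component failed: ...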

    Mely: Efficient Workstealing for Multicore Event-Driven Systems

    Many high-performance communicating systems are designed using the event-driven paradigm. As multicore platforms are now pervasive, it becomes crucial for such systems to take advantage of the available hardware parallelism. Event-coloring is a promising approach in this regard. First, it allows programmers to simply and progressively add support for the safe, parallel execution of multiple event handlers through the use of annotations. Second, it relies on a workstealing algorithm to dynamically balance the execution of event handlers on the available cores. This paper studies the impact of the workstealing algorithm on overall system performance. We first show that the only existing workstealing algorithm designed for event-coloring runtimes is not always efficient: for instance, it causes a 33% performance degradation on a Web server. We then introduce several enhancements to improve the workstealing behavior. An evaluation using both microbenchmarks and real applications, a Web server and the Secure File Server (SFS), shows that our system consistently outperforms a state-of-the-art runtime (Libasync-smp), with or without workstealing. In particular, in the Web server case, our new workstealing algorithm improves performance by up to 25% compared to Libasync-smp without workstealing and by up to 73% compared to the Libasync-smp workstealing algorithm.
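
    As a rough illustration of the constraint any such workstealing algorithm must respect, here is a minimal sketch with invented names (Worker, steal); the paper's runtime is far more elaborate. Events of the same color must never run concurrently, so a thief steals the run of same-colored events at the head of a victim's queue as a single unit.

        import collections

        class Worker:
            def __init__(self, wid):
                self.wid = wid
                self.queue = collections.deque()  # local queue of (color, handler) pairs

        def steal(workers, thief):
            # Pick the most loaded victim and steal the run of events at the
            # head of its queue that share one color. Stealing whole colors
            # preserves the invariant that same-colored handlers, which may
            # touch shared state, never execute in parallel.
            candidates = [w for w in workers if w is not thief and w.queue]
            if not candidates:
                return
            victim = max(candidates, key=lambda w: len(w.queue))
            color = victim.queue[0][0]
            while victim.queue and victim.queue[0][0] == color:
                thief.queue.append(victim.queue.popleft())

    A real runtime would also synchronize access to the victim's queue and weigh stealing cost against cache locality; both are omitted here.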

    The importance of granularity in multiobjective optimization of mobile cloud hybrid applications

    Mobile devices can now support a wide range of applications, many of which demand high computational power. Backed by the virtually unbounded resources of cloud computing, today's mobile cloud (MC) computing can meet the demands of even the most computationally and resource-intensive applications. However, many existing MC hybrid applications are inefficient with respect to two objectives that form a trade-off: minimizing battery power consumption and minimizing network bandwidth usage. To counter this problem, we propose a data-driven technique that (1) instruments the MC hybrid application so that class-, method-, and hybrid-level configurations can be applied to it and (2) measures, at runtime, how well the application meets the two objectives, generating data that are used to optimize the efficiency trade-off. Our experimental evaluation considers two MC hybrid Android-based applications. We first modularized them according to their granularity and their computationally intensive modules. They are then executed on a simple mobile cloud application framework while power and bandwidth consumption are measured at runtime. The outcome is a set of configurations consisting of (1) statistically significant, nondominated configurations in collapsible sets and (2) noncollapsible configurations. The analysis of our results shows that, from the measured data, Pareto-efficient configurations of the apps at different levels of granularity can be obtained with respect to minimizing the two objectives. Using this technique, the two MC hybrid applications traded battery power for network bandwidth as follows: (1) 63.71% less power consumption in joules at the cost of 1.07 MB of network bandwidth and (2) 34.98% less power consumption in joules at the cost of 3.73 kB of network bandwidth.
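
    The notion of a nondominated (Pareto-efficient) configuration is easy to state in code. The sketch below, with invented example numbers, keeps a configuration only if no other configuration is at least as good on both minimized objectives and strictly better on one; it illustrates the concept, not the paper's tooling.

        def pareto_front(configs):
            # Each config is (name, power_joules, bandwidth_bytes); a config
            # is dominated if another is no worse on both objectives and
            # strictly better on at least one.
            front = []
            for name, p, b in configs:
                dominated = any(
                    (p2 <= p and b2 <= b) and (p2 < p or b2 < b)
                    for _, p2, b2 in configs
                )
                if not dominated:
                    front.append((name, p, b))
            return front

        # Hypothetical measurements: (configuration, joules, bytes sent)
        configs = [("local", 120.0, 0.0), ("method-level", 44.0, 1.07e6),
                   ("class-level", 60.0, 3.7e3), ("hybrid", 70.0, 2.0e6)]
        print(pareto_front(configs))  # "hybrid" is dominated by "method-level"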

    Secure Virtualization of Latency-Constrained Systems

    Virtualization is a mature technology in server and desktop environments, where multiple systems are consolidated onto a single physical hardware platform, increasing the utilization of today's multi-core systems and saving resources such as energy, space, and cost compared to multiple single systems. Embedded environments, by contrast, often contain many separate computing systems with real-time and isolation requirements; modern high-comfort cars, for example, use up to a hundred embedded computing systems. Consolidating such diverse configurations promises to save resources such as energy and weight. In my work I propose a secure software architecture that allows consolidating multiple embedded software systems with timing constraints. The base of the architecture is a microkernel-based operating system that supports a variety of virtualization approaches through a generic interface, covering hardware-assisted virtualization and paravirtualization as well as multiple architectures. Studying guest systems with latency constraints under virtualization showed that standard techniques such as high-frequency time-slicing are not a viable approach. Guest systems are generally a combination of best-effort and real-time work and thus form a mixed-criticality system. Further analysis showed that such systems need to export relevant internal scheduling information to the hypervisor in order to support multiple guests with latency constraints. I propose a mechanism to export those relevant events that is secure, flexible, performs well, and is easy to use. The thesis concludes with an evaluation covering the virtualization approach on the ARM and x86 architectures and two guest operating systems, Linux and FreeRTOS, as well as an evaluation of the export mechanism.
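
    The abstract does not specify the export interface, so the following is only a minimal sketch of the general idea, with invented names (VCpu, on_guest_event, pick_next): a guest exports a latency requirement as an event, and the hypervisor scheduler prefers the vCPU with the earliest resulting deadline over plain priority-based best-effort scheduling.

        import time

        class VCpu:
            def __init__(self, name, prio):
                self.name = name
                self.prio = prio      # static best-effort priority
                self.deadline = None  # absolute deadline from an exported event

        def on_guest_event(vcpu, latency_us):
            # Guest-exported hint: work inside vcpu must run within latency_us.
            vcpu.deadline = time.monotonic() + latency_us / 1e6

        def pick_next(vcpus):
            # Earliest exported deadline wins; otherwise fall back to priority.
            urgent = [v for v in vcpus if v.deadline is not None]
            if urgent:
                return min(urgent, key=lambda v: v.deadline)
            return max(vcpus, key=lambda v: v.prio)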

    Separating Information Protection from Resource Management

    Securing information in a computer system is becoming an intractable problem. Exacerbating the situation is the current paradigm of trusting an operating system for both security and resource management. One solution to this problem is to separate the role of protecting information from that of managing resources. This thesis studies the design and implementation of a system architecture called Software-Privacy Preserving Platform (SP3). SP3 creates a new layer that is more privileged than the operating system and is responsible for providing information secrecy to user applications. SP3 provides page-granular memory secrecy by augmenting the memory paging and interrupt mechanisms of a computer system in such a way that the physical memory pages of user applications appear encrypted to the operating system. The resulting SP3 system therefore protects the secrecy of the information contained in the memory of user applications. SP3 is implemented by modifying a hypervisor, which efficiently emulates the augmented paging and interrupt semantics introduced by SP3. The modified hypervisor employs two optimization techniques to reduce the number of costly page-wide block cipher operations. In the page-frame replication technique, the hypervisor internally keeps both encrypted and decrypted images of a page and relies on shadow page table redirection to map the correct page. In the lazy synchronization technique, the synchronization between the replicated images of a page is deferred as long as possible, so that it happens not when one image is modified but when the other image is actually accessed. The thesis further explores the challenges and solutions of the new programming environment introduced by SP3, and presents an SP3-based digital rights management solution that can protect both copy-protected multimedia content and a trusted multimedia player program without limiting end-users' freedom. In conclusion, this thesis demonstrates the feasibility of separating information protection from resource management in systems software. This separation greatly reduces the size and complexity of the part of the system trusted for information protection, resulting in a more resilient system that can tolerate a compromise of the operating system.
    Ph.D., Computer Science & Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/75886/1/jisooy_1.pd
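
    The two optimizations compose naturally, and a toy model makes the timing of the deferred work clear. The sketch below is illustrative only: xor_cipher stands in for a real page-wide block cipher, and the shadow-page-table redirection that selects which image a given mapping sees is reduced to explicit read and write methods.

        def xor_cipher(data, key=0x5A):
            # Stand-in for a page-wide block cipher; NOT real encryption.
            return bytes(b ^ key for b in data)

        class ReplicatedPage:
            # Page-frame replication: keep both images of the same page.
            def __init__(self, plaintext):
                self.plain = bytearray(plaintext)
                self.cipher = bytearray(xor_cipher(bytes(plaintext)))
                self.dirty = None  # which image holds the newest data

            def write_plain(self, off, data):
                # The application writes its (decrypted) view of the page.
                self.plain[off:off + len(data)] = data
                self.dirty = "plain"  # lazy: defer re-encryption

            def read_cipher(self):
                # The OS view: synchronize only when this image is accessed,
                # not on every modification of the other image.
                if self.dirty == "plain":
                    self.cipher = bytearray(xor_cipher(bytes(self.plain)))
                    self.dirty = None
                return bytes(self.cipher)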

    Autonomous migration of virtual machines for maximizing resource utilization

    Virtualization of computing resources enables multiple virtual machines to run on a single physical machine. When many virtual machines are deployed on a cluster of PCs, some physical machines inevitably become overloaded while others are under-utilized over time due to varying computational demands. This computational imbalance across the cluster undermines the very purpose of maximizing resource utilization through virtualization. To solve this imbalance problem, virtual machine migration has been introduced, where a virtual machine on a heavily loaded physical machine is selected and moved to a lightly loaded one. The selection of the source virtual machine and the destination physical machine is conventionally based on a single fixed threshold value. The key to such threshold-based VM migration is determining when to move which VM to which physical machine, since wrong or inadequate decisions can cause unnecessary migrations that adversely affect overall performance. A fixed threshold, however, does not necessarily work for different computing infrastructures, so finding the optimal threshold is critical. This research presents a virtual machine migration framework that autonomously finds and adjusts variable thresholds at runtime for different computing requirements in order to maximize the utilization of computing resources. Central to this approach is the history of past migrations and their effects, measured as the standard deviation of utilization before and after each migration. To broaden this research, a proactive learning methodology is introduced that not only accumulates the past history of computing patterns and the resulting migration decisions but, more importantly, searches all possibilities for the most suitable decisions. The proposed framework is set up on clusters of 8 and 16 PCs, each of which hosts multiple User-Mode Linux (UML)-based virtual machines, and an extensive set of benchmark programs is deployed to closely resemble a real-world computing environment. Experimental results indicate that the framework autonomously finds thresholds close to the optimal ones for different computing scenarios, that such varying thresholds yield an optimal number of VM migrations, and that the framework balances the load across the cluster through autonomous VM migration, improving the overall performance of the dynamically changing computing environment.
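
    To make the role of the threshold concrete, here is a minimal sketch, with invented numbers and names, of a single threshold-based migration decision: imbalance is measured as the standard deviation of per-host utilization, and a migration is triggered only when it exceeds the current threshold. The framework's learning of the threshold itself is not shown.

        import statistics

        def migration_decision(loads, threshold):
            # loads maps host -> list of per-VM utilizations; migrate the
            # heaviest VM from the most loaded host to the least loaded one,
            # but only if the cluster imbalance exceeds the threshold.
            totals = {h: sum(vms) for h, vms in loads.items()}
            if statistics.pstdev(totals.values()) <= threshold:
                return None  # balanced enough: avoid unnecessary migrations
            src = max(totals, key=totals.get)
            dst = min(totals, key=totals.get)
            vm = max(range(len(loads[src])), key=lambda i: loads[src][i])
            return (src, vm, dst)

        loads = {"pc1": [0.5, 0.4], "pc2": [0.1], "pc3": [0.2, 0.1]}
        print(migration_decision(loads, threshold=0.15))  # ('pc1', 0, 'pc2')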

    Private and censorship-resistant communication over public networks

    Society’s increasing reliance on digital communication networks is creating unprecedented opportunities for wholesale surveillance and censorship. This thesis investigates the use of public networks such as the Internet to build robust, private communication systems that can resist monitoring and attacks by powerful adversaries such as national governments. We sketch the design of a censorship-resistant communication system based on peer-to-peer Internet overlays in which the participants only communicate directly with people they know and trust. This ‘friend-to-friend’ approach protects the participants’ privacy, but it also presents two significant challenges. The first is that, as with any peer-to-peer overlay, the users of the system must collectively provide the resources necessary for its operation; some users might prefer to use the system without contributing resources equal to those they consume, and if many users do so, the system may not be able to survive. To address this challenge we present a new game-theoretic model of the problem of encouraging cooperation between selfish actors under conditions of scarcity, and develop a strategy for the game that provides rational incentives for cooperation under a wide range of conditions. The second challenge is that the structure of a friend-to-friend overlay may reveal the users’ social relationships to an adversary monitoring the underlying network. To conceal their sensitive relationships from the adversary, the users must be able to communicate indirectly across the overlay in a way that resists monitoring and attacks by other participants. We address this second challenge by developing two new routing protocols that robustly deliver messages across networks with unknown topologies, without revealing the identities of the communication endpoints to intermediate nodes or vice versa. The protocols make use of a novel unforgeable acknowledgement mechanism that proves that a message has been delivered without identifying the source or destination of the message or the path by which it was delivered. One of the routing protocols is shown to be robust to attacks by malicious participants, while the other provides rational incentives for selfish participants to cooperate in forwarding messages.
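
    The abstract does not describe the acknowledgement construction itself, so the following is only a guess at a minimal scheme with the stated properties, assuming the two endpoints share a secret key: the message carries the hash of a MAC that only the endpoints can compute, the destination returns the MAC as the acknowledgement, and any forwarder can verify it against the stored hash without learning who the endpoints are.

        import hashlib, hmac, os

        def send(key, payload):
            # Only the endpoints (who hold the key) can compute the ack.
            ack = hmac.new(key, payload, hashlib.sha256).digest()
            header = hashlib.sha256(ack).digest()  # travels with the message
            return {"header": header, "payload": payload}

        def deliver_and_ack(key, msg):
            # Destination recomputes the ack from the shared key and payload.
            return hmac.new(key, msg["payload"], hashlib.sha256).digest()

        def forwarder_verify(msg, ack):
            # An intermediate node checks delivery without knowing the
            # endpoints: the ack must hash to the stored header.
            return hashlib.sha256(ack).digest() == msg["header"]

        key = os.urandom(32)
        msg = send(key, b"hello")
        ack = deliver_and_ack(key, msg)
        print(forwarder_verify(msg, ack))  # True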