22 research outputs found

    A Study of Dynamic Optimization Techniques: Lessons and Directions in Kernel Design

    The Synthesis kernel [21,22,23,27,28] showed that dynamic code generation, software feedback, and fine-grain modular kernel organization are useful implementation techniques for improving the performance of operating system kernels. In addition, and perhaps more importantly, we discovered that there are strong interactions between the techniques. Hence, a careful and systematic combination of the techniques can be very powerful even though each one by itself may have serious limitations. By identifying these interactions we illustrate the problems of applying each technique in isolation to existing kernels. We also highlight the important common underpinnings of the Synthesis experience and present our ideas on future operating system design and implementation. Finally, we outline a more uniform approach to dynamic optimizations called incremental partial evaluation.
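
    As an illustration of the style of optimization described above, here is a minimal, hypothetical C sketch of "factoring invariants": properties known when a file is opened select a specialized read routine, stored as a function pointer, so per-call checks disappear from the hot path. The names and structures are invented for this example; the actual Synthesis kernel went further and emitted specialized machine code at run time.

        /* Illustrative sketch only (not Synthesis code): decide once, at open
         * time, which specialized read path applies, and store it as a function
         * pointer so per-call checks vanish from the hot path. */
        #include <stdio.h>
        #include <string.h>

        struct file;
        typedef long (*read_fn)(struct file *, char *, long);

        struct file {
            const char *data;      /* backing buffer (stands in for a device) */
            long        size;
            long        offset;
            int         buffered;  /* invariant known at open time            */
            read_fn     read;      /* specialized path chosen at open time    */
        };

        /* Specialized path: no mode checks, the invariants are baked in. */
        static long read_plain(struct file *f, char *dst, long n) {
            long left = f->size - f->offset;
            if (n > left) n = left;
            memcpy(dst, f->data + f->offset, (size_t)n);
            f->offset += n;
            return n;
        }

        /* A second specialization would handle the buffered case differently. */
        static long read_buffered(struct file *f, char *dst, long n) {
            return read_plain(f, dst, n);   /* identical here for brevity */
        }

        /* "open" performs the one-time specialization step. */
        static void file_open(struct file *f, const char *data, long size, int buffered) {
            f->data = data; f->size = size; f->offset = 0; f->buffered = buffered;
            f->read = buffered ? read_buffered : read_plain;   /* decide once */
        }

        int main(void) {
            struct file f;
            char buf[16] = {0};
            file_open(&f, "hello, kernel", 13, 0);
            long n = f.read(&f, buf, sizeof buf - 1);  /* hot path: direct call */
            printf("read %ld bytes: %s\n", n, buf);
            return 0;
        }

    Incremental partial evaluation generalizes this pattern: whenever part of a routine's input becomes constant, a cheaper residual routine can be derived for it.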

    Protection in commodity monolithic operating systems

    This dissertation suggests and partially demonstrates that it is feasible to retrofit real privilege separation within commodity operating systems by "nesting" a small memory-management protection domain inside a monolithic kernel's single address space, while allowing both domains to operate at the same hardware privilege level. This dissertation also demonstrates a microarchitectural return-integrity protection domain that efficiently asserts dynamic "return-to-sender" semantics for all operating system return control-flow operations. Employing these protection domains, we provide mitigations to large classes of kernel attacks such as code injection and return-oriented programming, and we deploy information protection policies that are not feasible with existing systems. Operating systems form the foundation of information protection in multiprogramming environments. Unfortunately, today's commodity operating systems employ a monolithic kernel design, where any single exploit in the vast code base undermines all information protection in the system because all kernel code operates with full supervisor privileges, meaning that even perfectly secure applications are vulnerable. This dissertation explores an approach that retrofits fundamental information protection design principles into commodity monolithic operating systems. The aim is a micro-evolution of commodity system design that incrementally decomposes monolithic operating systems from the ground up, thereby bringing microkernel-like security properties to billions of users worldwide. The key contribution is the creation of a new operating system organization, the Nested Kernel Architecture, which "nests" a new, efficient intra-kernel memory isolation mechanism into a traditional monolithic operating system design. Using the Nested Kernel Architecture, we introduce write-protection services that kernel developers can use to deploy security policies in ways not possible in current systems, while greatly reducing the trusted computing base, and we demonstrate the value of these services by deploying three special data protection policies. Overall, the Nested Kernel Architecture demonstrates practical in-place protections that require only minor code modifications and impose minimal run-time overheads.
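
    To make the write-mediation idea concrete, the following user-space C sketch uses page protections as an analogy for the nested protection domain: security-sensitive data is kept read-only, and the only way to change it is through a small gate that briefly re-enables writes. All function names here are hypothetical; the actual Nested Kernel enforces this inside ring 0 by controlling the MMU, not via mprotect.

        /* Hedged sketch: a user-space analogy of write mediation.  Protected
         * data lives in pages that are normally mapped read-only; the only
         * sanctioned way to modify it is the nk_write() gate below. */
        #include <stdio.h>
        #include <string.h>
        #include <sys/mman.h>
        #include <unistd.h>

        static unsigned char *protected_page;   /* stands in for kernel data such as page tables */
        static long page_size;

        static void nk_init(void) {
            page_size = sysconf(_SC_PAGESIZE);
            protected_page = mmap(NULL, page_size, PROT_READ | PROT_WRITE,
                                  MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
            strcpy((char *)protected_page, "initial policy");
            mprotect(protected_page, page_size, PROT_READ);   /* lock it down */
        }

        /* The mediated write path: unlock, write, lock again. */
        static void nk_write(size_t off, const void *src, size_t len) {
            mprotect(protected_page, page_size, PROT_READ | PROT_WRITE);
            memcpy(protected_page + off, src, len);
            mprotect(protected_page, page_size, PROT_READ);
        }

        int main(void) {
            nk_init();
            nk_write(0, "updated policy", 15);
            printf("%s\n", (char *)protected_page);        /* reads are always allowed  */
            /* protected_page[0] = 'X';   <- would fault: direct writes are blocked */
            return 0;
        }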

    Exploring Software Compartmentalisation with Hardware Capabilities


    Scalability of microkernel-based systems


    Communication in Microkernel-Based Operating Systems

    Communication in microkernel-based systems is much more frequent than the system calls known from monolithic kernels. This can be attributed to the placement of system services into their own protection domains. Communication has to be fast to avoid unnecessary overhead. Also, communication channels in microkernel-based systems are used for more than just remote procedure calls. In distributed systems, which also have a componentized design, it is state of the art to use tools to generate stubs for the communication between components; the communication interfaces of components are described in an interface definition language (IDL). In contrast to distributed systems, the components of a microkernel-based system run on the same architecture, and message delivery is guaranteed. In this thesis, I explore the different kinds of communication that can be used in microkernel-based systems, as well as their possible representation in IDL. Specifically, I introduce syntax to describe kernel objects in IDL. I discuss the complexity of IDL compilers and its relation to the complexity of the IDL. Furthermore, I evaluate the performance of the communication stubs generated by different IDL compilers and discuss techniques to minimize performance overhead in generated stubs. I validated these techniques by implementing the Drops IDL Compiler (Dice). Finally, this thesis presents a mechanism to measure the frequency and performance of invocations of generated communication code. I used this technique to conduct measurements in highly complex systems while introducing the least possible overhead.
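
    The following hedged C sketch shows roughly what an IDL compiler produces for the client side of an interface: the stub marshals the in-parameters into a message buffer, invokes the IPC primitive, and unmarshals the results. The interface, message layout, and ipc_call() placeholder are invented for illustration and do not reflect Dice's actual output or any particular kernel's message format.

        /* Hedged sketch of a generated client stub for a hypothetical interface:
         *
         *     interface fs {
         *         int read(in int fd, in unsigned len, out string data);
         *     };
         */
        #include <stdio.h>
        #include <string.h>

        struct message {            /* fixed-size message buffer */
            unsigned opcode;
            unsigned words[16];
            char     data[256];
        };

        /* Placeholder for the real IPC system call; the "server" is inlined here. */
        static int ipc_call(int dest, struct message *msg) {
            (void)dest;
            if (msg->opcode == 1) {                        /* fs::read */
                snprintf(msg->data, sizeof msg->data, "contents of fd %u", msg->words[0]);
                msg->words[0] = (unsigned)strlen(msg->data);   /* return value */
            }
            return 0;
        }

        /* Generated client stub: marshal, invoke, unmarshal. */
        static int fs_read(int server, int fd, unsigned len, char *data, unsigned data_cap) {
            struct message msg = { .opcode = 1 };
            msg.words[0] = (unsigned)fd;                   /* marshal in-parameters */
            msg.words[1] = len;
            if (ipc_call(server, &msg) != 0) return -1;    /* IPC error path        */
            strncpy(data, msg.data, data_cap - 1);         /* unmarshal out-string  */
            data[data_cap - 1] = '\0';
            return (int)msg.words[0];
        }

        int main(void) {
            char buf[64];
            int n = fs_read(/*server=*/7, /*fd=*/3, /*len=*/64, buf, sizeof buf);
            printf("fs_read returned %d: %s\n", n, buf);
            return 0;
        }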

    The exokernel operating system architecture

    Thesis (Ph.D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1999. Includes bibliographical references (p. 115-120). This electronic version was submitted by the student author; the certified thesis is available in the Institute Archives and Special Collections. By Dawson R. Engler, Ph.D.
    On traditional operating systems only trusted software such as privileged servers or the kernel can manage resources. This thesis proposes a new approach, the exokernel architecture, which makes resource management unprivileged but safe by separating management from protection: an exokernel protects resources, while untrusted application-level software manages them. As a result, in an exokernel system, untrusted software (e.g., library operating systems) can implement abstractions such as virtual memory, file systems, and networking. The main thrusts of this thesis are: (1) how to build an exokernel system; (2) whether it is possible to build a real one; and (3) whether doing so is a good idea. Our results, drawn from two exokernel systems [25, 48], show that the approach yields dramatic benefits. For example, Xok, an exokernel, runs a web server an order of magnitude faster than the closest equivalent on the same hardware, runs common unaltered Unix applications up to three times faster, and improves global system performance up to a factor of five. The thesis also discusses some of the new techniques we have used to remove the overhead of protection. The most unusual technique, untrusted deterministic functions, enables an exokernel to verify that applications correctly track the resources they own, eliminating the need for it to do so. Additionally, the thesis reflects on the subtle issues in using downloaded code for extensibility and the sometimes painful lessons learned in building three exokernel-based systems.
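
    To convey the flavor of untrusted deterministic functions, here is a small, hypothetical C sketch: the library operating system hands the exokernel a deterministic, side-effect-free function that interprets its own metadata, and the kernel uses it to check ownership before permitting a disk write. All structures and names are invented; the real mechanism additionally has to ensure that the function really is deterministic and bounded.

        /* Hedged sketch of the UDF idea; everything below is illustrative. */
        #include <stdio.h>

        #define META_BLOCKS 8

        struct udf_meta {                   /* application-defined metadata layout */
            unsigned nblocks;
            unsigned blocks[META_BLOCKS];   /* disk blocks this "inode" claims */
        };

        /* The UDF: deterministic and side-effect free -- it may only read *meta. */
        static int udf_owns_block(const struct udf_meta *meta, unsigned block) {
            for (unsigned i = 0; i < meta->nblocks; i++)
                if (meta->blocks[i] == block)
                    return 1;
            return 0;
        }

        /* Exokernel-side check before permitting a disk write. */
        static int exo_disk_write(const struct udf_meta *meta, unsigned block, const char *data) {
            if (!udf_owns_block(meta, block)) {
                printf("denied: block %u not owned\n", block);
                return -1;
            }
            printf("wrote block %u: %s\n", block, data);   /* would hit the disk */
            return 0;
        }

        int main(void) {
            struct udf_meta inode = { .nblocks = 2, .blocks = { 11, 12 } };
            exo_disk_write(&inode, 12, "allowed");
            exo_disk_write(&inode, 99, "rejected");
            return 0;
        }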

    Systems Support for Trusted Execution Environments

    Cloud computing has become a default choice for data processing for both large corporations and individuals due to its economy of scale and ease of system management. However, the question of trust and trustworthy computing inside cloud environments has long been neglected in practice, and it is further exacerbated by the proliferation of AI and its use for processing sensitive user data. Attempts to implement mechanisms for trustworthy computing in the cloud previously remained theoretical due to the lack of hardware primitives in commodity CPUs, while a combination of Secure Boot, TPMs, and virtualization has seen only limited adoption. The situation changed in 2016, when Intel introduced Software Guard Extensions (SGX) and its enclaves to x86 ISA CPUs: for the first time, it became possible to build trustworthy applications relying on a commonly available technology. However, Intel SGX posed challenges to practitioners, who discovered the limitations of this technology, from the limited support for legacy applications and the integration of SGX enclaves into existing systems, to performance bottlenecks in communication, startup, and memory utilization. In this thesis, our goal is to enable trustworthy computing in the cloud by relying on these imperfect SGX primitives. To this end, we develop and evaluate solutions to issues stemming from the limited systems support of Intel SGX: we investigate mechanisms for runtime support of POSIX applications with SCONE, an efficient SGX runtime library developed with the performance limitations of SGX in mind. We further develop this topic with FFQ, a concurrent queue for SCONE's asynchronous system call interface. ShieldBox is our study of the interplay of kernel bypass and trusted execution technologies for NFV, which also tackles the problem of low-latency clocks inside the enclave. The last two systems, Clemmys and T-Lease, are built on the more recent SGXv2 ISA extension. In Clemmys, SGXv2 allows us to significantly reduce the startup time of SGX-enabled functions inside a Function-as-a-Service platform. Finally, in T-Lease we solve the problem of trusted time by introducing a trusted lease primitive for distributed systems. We evaluate all of these systems and show that they can be practically used in existing systems with minimal overhead and can be combined with both legacy systems and other SGX-based solutions. Over the course of the thesis, we enable trusted computing for individual applications, high-performance network functions, and a distributed computing framework, making the vision of trusted cloud computing a reality.
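
    As a sketch of the asynchronous system call idea mentioned above, the following C code implements a plain single-producer/single-consumer ring buffer shared between an enclave thread (which enqueues syscall requests without exiting the enclave) and an untrusted host thread (which dequeues and executes them). This is only an illustration of the interface shape, not the FFQ algorithm or SCONE's actual implementation; all names are invented.

        /* Hedged sketch: shared-memory syscall queue between enclave and host. */
        #include <stdatomic.h>
        #include <stdio.h>

        #define QUEUE_SLOTS 64   /* power of two */

        struct syscall_req { int nr; long arg0; long arg1; };

        struct spsc_queue {
            struct syscall_req slots[QUEUE_SLOTS];
            _Atomic unsigned head;   /* consumer position */
            _Atomic unsigned tail;   /* producer position */
        };

        /* Called inside the enclave: enqueue without leaving trusted execution. */
        static int enqueue(struct spsc_queue *q, struct syscall_req r) {
            unsigned tail = atomic_load_explicit(&q->tail, memory_order_relaxed);
            unsigned head = atomic_load_explicit(&q->head, memory_order_acquire);
            if (tail - head == QUEUE_SLOTS) return 0;          /* full */
            q->slots[tail % QUEUE_SLOTS] = r;
            atomic_store_explicit(&q->tail, tail + 1, memory_order_release);
            return 1;
        }

        /* Called by the untrusted host thread: dequeue and issue the real syscall. */
        static int dequeue(struct spsc_queue *q, struct syscall_req *out) {
            unsigned head = atomic_load_explicit(&q->head, memory_order_relaxed);
            unsigned tail = atomic_load_explicit(&q->tail, memory_order_acquire);
            if (head == tail) return 0;                        /* empty */
            *out = q->slots[head % QUEUE_SLOTS];
            atomic_store_explicit(&q->head, head + 1, memory_order_release);
            return 1;
        }

        int main(void) {
            static struct spsc_queue q;
            enqueue(&q, (struct syscall_req){ .nr = 1 /* e.g. write */, .arg0 = 1, .arg1 = 0 });
            struct syscall_req r;
            while (dequeue(&q, &r))
                printf("host executes syscall %d\n", r.nr);
            return 0;
        }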

    Off, a new approach to building Distributed Operating Systems

    The reach achieved by computer networks has led to the emergence and spread of Distributed Operating Systems (SSOODD) [158] as a future alternative to centralized operating systems. The most common approach to building Distributed Operating Systems [158] consists of implementing distributed services on top of a centralized µkernel that runs on each node of the network (µkernel + distributed services). This strategy hinders the distribution of the system and leads to non-adaptable systems, because the µkernels employed provide abstractions far removed from the hardware. Moreover, Distributed Operating Systems have not yet reached levels of transparency, flexibility, adaptability, and resource utilization comparable to those of centralized systems [160]. It is therefore worth exploring other alternatives for building such systems that could alleviate this situation. This thesis proposes a radically different approach to building Distributed Operating Systems: distributing the system from the lowest level by using a distributed µkernel that supports abstractions close to the hardware. Our approach could be summarized by the phrase: build operating systems based on a distributed µkernel instead of building Distributed Operating Systems based on a µkernel. We claim that the proposed approach (adaptable distributed µkernel + services) could achieve important advantages [148] over the usual approach (µkernel + distributed services): more transparency, better resource utilization, more flexible systems, and higher degrees of adaptability, as well as more efficient, reliable, and scalable systems.