111 research outputs found

    LibrettOS: A Dynamically Adaptable Multiserver-Library OS

    We present LibrettOS, an OS design that fuses two paradigms to simultaneously address issues of isolation, performance, compatibility, failure recoverability, and run-time upgrades. LibrettOS acts as a microkernel OS that runs servers in an isolated manner. LibrettOS can also act as a library OS when, for better performance, selected applications are granted exclusive access to virtual hardware resources such as storage and networking. Furthermore, applications can switch between the two OS modes with no interruption at run-time. LibrettOS has a uniquely distinguishing advantage in that the two paradigms seamlessly coexist in the same OS, enabling users to simultaneously exploit their respective strengths (i.e., greater isolation, high performance). Systems code, such as device drivers, network stacks, and file systems, remains identical in the two modes, enabling dynamic mode switching and reducing development and maintenance costs. To illustrate these design principles, we implemented a prototype of LibrettOS using rump kernels, allowing us to reuse existing, hardened NetBSD device drivers and a large ecosystem of POSIX/BSD-compatible applications. We use hardware (VM) virtualization to strongly isolate different rump kernel instances from each other. Because the original rumprun unikernel targeted a much simpler model for uniprocessor systems, we redesigned it to support multicore systems. Unlike kernel-bypass libraries such as DPDK, applications need not be modified to benefit from direct hardware access. LibrettOS also supports indirect access through a network server that we have developed. Applications remain uninterrupted even when network components fail or need to be upgraded. Finally, to efficiently use hardware resources, applications can dynamically switch between the indirect and direct modes based on their I/O load at run-time. [The full abstract is in the paper.] Comment: 16th ACM SIGPLAN/SIGOPS International Conference on Virtual Execution Environments (VEE '20), March 17, 2020, Lausanne, Switzerland.
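
    As an illustration only, the following C sketch shows the kind of load-based policy an application could use to decide when to move between the indirect (network server) and direct (exclusive hardware access) modes that the abstract describes. The thresholds, names, and hysteresis logic are hypothetical and are not taken from the paper.

```c
/*
 * Minimal sketch (not from the paper) of a load-based policy for switching
 * between direct (library-OS) and indirect (network server) I/O modes.
 * All names and thresholds here are hypothetical.
 */
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

typedef enum { IO_MODE_INDIRECT, IO_MODE_DIRECT } io_mode_t;

/* Hypothetical thresholds: switch to direct access under heavy I/O,
 * fall back to the shared network server when load drops. */
#define DIRECT_THRESHOLD_OPS   50000   /* ops/sec */
#define INDIRECT_THRESHOLD_OPS 10000   /* ops/sec */

static io_mode_t choose_mode(io_mode_t current, uint64_t ops_per_sec)
{
    if (current == IO_MODE_INDIRECT && ops_per_sec > DIRECT_THRESHOLD_OPS)
        return IO_MODE_DIRECT;      /* claim exclusive virtual NIC/storage */
    if (current == IO_MODE_DIRECT && ops_per_sec < INDIRECT_THRESHOLD_OPS)
        return IO_MODE_INDIRECT;    /* release hardware, go through server */
    return current;                 /* otherwise stay in the current mode */
}

int main(void)
{
    io_mode_t mode = IO_MODE_INDIRECT;
    uint64_t samples[] = { 2000, 60000, 70000, 8000, 3000 };

    for (size_t i = 0; i < sizeof samples / sizeof samples[0]; i++) {
        mode = choose_mode(mode, samples[i]);
        printf("load=%llu ops/s -> %s mode\n",
               (unsigned long long)samples[i],
               mode == IO_MODE_DIRECT ? "direct" : "indirect");
    }
    return 0;
}
```

    The two thresholds deliberately differ so that a briefly fluctuating load does not make the application oscillate between modes.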

    Formal Verification of Demand Paging


    Formal specification and verification of a microkernel

    This thesis splits into two parts. The first part introduces the abstract model of the Vamos kernel. The Vamos kernel provides the infrastructure for process and memory management, priority-based round-robin scheduling, communication with external devices, as well as inter-process communication. In the second part, we formulate a simulation theorem between the abstract Vamos model and the concrete Vamos implementation. The crucial points of the theorem are, on the one hand, the abstraction relation connecting the data structures of the implementation with those of the model and, on the other hand, the implementation invariant formulating validity statements about the data structures. Besides the exact formal definitions of the abstraction relation and the implementation invariant, we prove substantial parts of the simulation theorem. This work is part of the Verisoft project, which aims at the pervasive formal verification of computer systems. For the modelling and verification of the Vamos kernel, this entails the integration of various computational models, for instance, Communicating Virtual Machines (Cvm), which encapsulates the hardware-specific low-level functionality, and devices. The models and proofs presented in this thesis are formalized in the uniform logical framework of the interactive theorem prover Isabelle/HOL; hence, it is rigorously checked that all verification results fit together.
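
    Schematically, a simulation theorem of this kind has the shape of a generic forward-simulation statement; the formula below is a sketch, not the thesis's exact Isabelle/HOL formulation. Here R denotes the abstraction relation, I the implementation invariant, and the arrows the concrete and abstract transition relations.

```latex
% Generic forward-simulation shape (illustrative, not the thesis's theorem):
% every concrete step from an invariant-satisfying, related state is matched
% by an abstract step that re-establishes the relation and the invariant.
\[
\forall\, c\, c'\, a.\;
  I(c) \land R(c, a) \land c \xrightarrow{\;\delta_{\mathrm{impl}}\;} c'
  \;\Longrightarrow\;
  \exists\, a'.\; a \xrightarrow{\;\delta_{\mathrm{abs}}\;} a'
  \land I(c') \land R(c', a')
\]
```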

    Vulnerability detection in device drivers

    Doctoral thesis in Informatics (Computer Science), Universidade de Lisboa, Faculdade de Ciências, 2017. The constant evolution of electronics means that new equipment/devices regularly become available on the market, which has led to a situation where common operating systems (OS) include many device drivers (DD) produced by very diverse manufacturers. Experience has shown that the development of DD is error prone, as a majority of OS crashes can be attributed to flaws in their implementation. This thesis addresses the challenge of designing methodologies and tools to facilitate the detection of flaws in DD, contributing to decreasing the errors in this kind of software, their impact on OS stability, and the security threats caused by them. This is especially relevant because it can help developers to improve the quality of drivers during their implementation or when they are integrated into a system. The thesis work started by assessing how DD flaws can impact the correct execution of the Windows OS. The approach used a statistical analysis to obtain the list of kernel functions most used by the DD, and then automatically generated synthetic drivers that introduce parameter errors when calling a kernel function, thus mimicking a faulty interaction. The experimental results showed that most targeted functions defended poorly against the incorrect parameters. A reasonable number of crashes and a small number of hangs were observed, suggesting a poor error-containment capability of these OS functions. Then, we produced an architecture and a tool that supports the automatic injection of network attacks in mobile equipment (e.g., phones), with the objective of finding security flaws (or vulnerabilities) in Wi-Fi drivers. These DD were selected because they are easily accessible to an external adversary, who simply needs to create malicious traffic to exploit them, and therefore flaws in their implementation could have an important impact. Experiments with the tool uncovered a previously unknown vulnerability that causes OS hangs when a specific value is assigned to the TIM element in the Beacon frame. The experiments also revealed a potential implementation problem in the TCP/IP stack, exposed through the use of disassociation frames while the target device was associated and authenticated with a Wi-Fi access point. Next, we developed a tool capable of registering and instrumenting the interactions between a DD and the OS. The solution used a wrapper DD around the binary of the driver under test, enabling full control over the function calls and parameters involved in the OS-DD interface. This tool supports very diverse testing operations, including logging system activity and reverse engineering driver behaviour. Some experiments were performed with the tool, allowing us to record the behaviour of the interactions between the DD and the OS, including parameter values and return values. Results also showed the ability to identify bugs in drivers by executing tests based on the knowledge obtained from the driver's dynamics. Our final contribution is a methodology and framework for the discovery of errors and vulnerabilities in Windows DD by executing the drivers in a fully emulated environment. This approach is capable of testing drivers without requiring access to the associated hardware or the DD source code, and has granular control over each machine instruction. Experiments performed with off-the-shelf DD confirmed a high dependency on the correctness of the parameters passed by the OS, identified the precise location and cause of memory leaks, and revealed the existence of dormant and vulnerable code.
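
    For illustration, the sketch below captures the parameter-corruption idea in plain C: a synthetic wrapper forwards a call to a stand-in OS function after corrupting one argument, mimicking a faulty driver/OS interaction. The function names and the chosen corruption (a NULL pointer) are hypothetical and are not the thesis's actual tool or the Windows kernel API.

```c
/*
 * Illustrative sketch of parameter-corruption fault injection: a wrapper
 * intercepts a call a driver would make to an OS function and corrupts one
 * parameter before forwarding it. All names here are invented stand-ins.
 */
#include <stddef.h>
#include <stdio.h>
#include <string.h>

/* Stand-in for an OS kernel function the driver would normally call. */
static int os_copy_buffer(void *dst, const void *src, size_t len)
{
    if (dst == NULL || src == NULL)
        return -1;              /* defensive check: reject bad pointers */
    memcpy(dst, src, len);
    return 0;
}

/* Wrapper used by the synthetic driver: injects a NULL destination. */
static int os_copy_buffer_faulty(void *dst, const void *src, size_t len)
{
    (void)dst;                              /* drop the valid pointer */
    return os_copy_buffer(NULL, src, len);  /* corrupted parameter */
}

int main(void)
{
    char in[16] = "payload", out[16];

    printf("normal call: %d\n", os_copy_buffer(out, in, sizeof in));
    printf("faulty call: %d\n", os_copy_buffer_faulty(out, in, sizeof in));
    return 0;
}
```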

    Small TCBs of policy-controlled operating systems

    IT systems with high security requirements increasingly use problem-specific security policies to describe, analyze, and implement their security properties; such policies are an essential part of a system's trusted computing base (TCB). For this reason, the correctness and tamper-proofness of a TCB's implementation are decisive for establishing, preserving, and guaranteeing the required security properties of a system. Many of today's operating systems show what a challenge the realization of security policies is; for more than 40 years they have supported discretionary, identity-based access control policies only in a rudimentary way. As a result, large parts of the security policies of application software are implemented by the applications themselves. Consequently, the TCBs of today's operating systems are large, heterogeneous, and distributed, so that precisely determining their functional perimeter is very laborious. In the end, the essential properties of TCBs (correctness, robustness, and tamper-proofness) are hard to achieve. This has led to the development of policy-controlled operating systems, which centralize all security policies of an operating system and its applications by offering kernel abstractions for security policies and policy runtime environments. Current policy-controlled operating systems are based on monolithic architectures, so the components that enforce their policies are scattered across the operating system kernel. Furthermore, they aim to support the broadest possible spectrum of security policies, with the consequence that their runtime components for policy decision and enforcement are universal. As a result, their TCB implementations are large and complex, the functional perimeter of a TCB can hardly be identified, and the essential TCB properties can be achieved only with considerable effort. This dissertation pursues an approach that systematically engineers the TCBs of policy-controlled operating systems. The idea is to tailor the runtime system for security policies so that it supports only those policies that are actually present in a TCB, where a TCB's functional perimeter is determined by causal dependencies between security policies and TCB functions. The result is causal TCBs, which contain only those functions that are necessary to enforce and protect the policies present. The precise identification of TCB functions allows the implementation of the TCB functions to be isolated from untrusted system components. Causal TCBs thereby lay the foundation for TCB implementations whose size and complexity make analysis and verification of their correctness and tamper-proofness feasible. Causal TCBs have a broad range of applications, from embedded systems and policy-controlled operating systems to database management systems in large information systems. Also available in print: Small TCBs of policy-controlled operating systems / Anja Pölck. Ilmenau: Univ.-Verl. Ilmenau, 2014. xiii, 249 pp. ISBN 978-3-86360-090-7. Price: 24.40.
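
    The causal-dependency idea can be pictured as a reachability computation over a dependency table from policies to TCB functions. The toy C sketch below is a hypothetical illustration of that closure step, not the dissertation's engineering method; the function names and dependency table are invented.

```c
/*
 * Toy sketch of the causal-TCB closure idea: start from the policies that
 * are actually present, follow dependencies to the TCB functions they need,
 * and keep only the functions reached. Names and dependencies are invented.
 */
#include <stdbool.h>
#include <stdio.h>

#define N_FUNCS 5

static const char *func_name[N_FUNCS] = {
    "policy_decision", "enforce_ipc", "enforce_fs", "crypto_hash", "audit_log"
};

/* deps[i][j] == true means TCB function i requires TCB function j. */
static const bool deps[N_FUNCS][N_FUNCS] = {
    /* policy_decision */ { false, false, false, true,  true  },
    /* enforce_ipc     */ { true,  false, false, false, false },
    /* enforce_fs      */ { true,  false, false, false, false },
    /* crypto_hash     */ { false, false, false, false, false },
    /* audit_log       */ { false, false, false, false, false },
};

/* Mark function `f` and everything it causally depends on as TCB members. */
static void include_func(int f, bool in_tcb[N_FUNCS])
{
    if (in_tcb[f])
        return;
    in_tcb[f] = true;
    for (int j = 0; j < N_FUNCS; j++)
        if (deps[f][j])
            include_func(j, in_tcb);
}

int main(void)
{
    bool in_tcb[N_FUNCS] = { false };

    /* Suppose the only policy present is an IPC policy: it causally requires
     * the IPC enforcement hook, which in turn requires the decision
     * component, hashing, and auditing; the file-system hook stays out. */
    include_func(1 /* enforce_ipc */, in_tcb);

    printf("Causal TCB contains:\n");
    for (int i = 0; i < N_FUNCS; i++)
        if (in_tcb[i])
            printf("  %s\n", func_name[i]);
    return 0;
}
```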

    Doctor of Philosophy

    Dissertation. A modern software system is a composition of parts that are themselves highly complex: operating systems, middleware, libraries, servers, and so on. In principle, compositionality of interfaces means that we can understand any given module independently of the internal workings of other parts. In practice, however, abstractions are leaky, and with every generation, modern software systems grow in complexity. Traditional ways of understanding failures, explaining anomalous executions, and analyzing performance are reaching their limits in the face of emergent behavior, unrepeatability, cross-component execution, software aging, and adversarial changes to the system at run time. Deterministic systems analysis has the potential to change the way we analyze and debug software systems. Recorded once, the execution of the system becomes an independent artifact, which can be analyzed offline. The availability of the complete system state, the guaranteed behavior of re-execution, and the absence of limitations on the run-time complexity of analysis collectively enable the deep, iterative, and automatic exploration of the dynamic properties of the system. This work creates a foundation for making deterministic replay a ubiquitous system analysis tool. It defines design and engineering principles for building fast and practical replay machines capable of capturing the complete execution of the entire operating system with an overhead of a few percent, on a realistic workload, and with minimal installation costs. To enable an intuitive interface for constructing replay analysis tools, this work implements a powerful virtual machine introspection layer that allows an analysis algorithm to be programmed against the state of the recorded system through familiar source-level variable and type names. To support performance analysis, the replay engine provides a faithful performance model of the original execution during replay.
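
    As a hypothetical illustration of what such an introspection layer is for, the C sketch below resolves a source-level variable name to an address through a tiny symbol table and reads its value from a recorded memory image. The names, the symbol table, and the snapshot layout are invented; the dissertation's actual interface is richer and driven by real debug information.

```c
/*
 * Hypothetical sketch of source-level introspection over a recorded system:
 * a symbol table maps a variable name to its address in a memory snapshot,
 * so an analysis can be written against names instead of raw addresses.
 */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

struct symbol {
    const char *name;     /* source-level variable name */
    uint64_t    addr;     /* offset of the variable in the snapshot */
};

/* Tiny stand-in for debug info extracted from the recorded system. */
static const struct symbol symtab[] = {
    { "jiffies",    0x1000 },
    { "nr_running", 0x1008 },
};

/* Stand-in for the recorded guest memory image. */
static uint8_t snapshot[0x2000];

static int read_u64_by_name(const char *name, uint64_t *out)
{
    for (size_t i = 0; i < sizeof symtab / sizeof symtab[0]; i++) {
        if (strcmp(symtab[i].name, name) == 0) {
            memcpy(out, &snapshot[symtab[i].addr], sizeof *out);
            return 0;
        }
    }
    return -1;  /* unknown symbol */
}

int main(void)
{
    uint64_t v = 42;
    memcpy(&snapshot[0x1008], &v, sizeof v);   /* pretend recorded state */

    uint64_t nr_running;
    if (read_u64_by_name("nr_running", &nr_running) == 0)
        printf("nr_running = %llu\n", (unsigned long long)nr_running);
    return 0;
}
```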

    5th SC@RUG 2008 proceedings: Student Colloquium 2007-2008
