Doctor of Philosophy dissertation
A modern software system is a composition of parts that are themselves highly complex: operating systems, middleware, libraries, servers, and so on. In principle, compositionality of interfaces means that we can understand any given module independently of the internal workings of other parts. In practice, however, abstractions are leaky, and with every generation, modern software systems grow in complexity. Traditional ways of understanding failures, explaining anomalous executions, and analyzing performance are reaching their limits in the face of emergent behavior, unrepeatability, cross-component execution, software aging, and adversarial changes to the system at run time. Deterministic systems analysis has the potential to change the way we analyze and debug software systems. Recorded once, the execution of the system becomes an independent artifact, which can be analyzed offline. The availability of the complete system state, the guaranteed behavior of re-execution, and the absence of limitations on the run-time complexity of analysis collectively enable the deep, iterative, and automatic exploration of the dynamic properties of the system. This work creates a foundation for making deterministic replay a ubiquitous system analysis tool. It defines design and engineering principles for building fast and practical replay machines capable of capturing the complete execution of an entire operating system with an overhead of a few percent, on a realistic workload, and with minimal installation costs. To provide an intuitive interface for constructing replay analysis tools, this work implements a powerful virtual machine introspection layer that allows an analysis algorithm to be programmed against the state of the recorded system in the familiar terms of source-level variable and type names. To support performance analysis, the replay engine provides a faithful performance model of the original execution during replay.
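The record-once, analyze-offline idea can be illustrated with a minimal sketch, assuming a toy workload (all names here are hypothetical; the dissertation's engine records a whole operating system, not a Python function): nondeterministic inputs are logged during the recording run, and the replay run reads the log instead of the real sources, so re-execution is exactly reproducible.

```python
import random

class RecordReplay:
    """Toy record/replay log for nondeterministic inputs (illustrative only)."""
    def __init__(self):
        self.log = []          # captured nondeterministic inputs
        self.mode = "record"

    def nondet(self, produce):
        """In record mode, query the real source and log the value;
        in replay mode, return the logged value instead."""
        if self.mode == "record":
            value = produce()
            self.log.append(value)
            return value
        return self.log.pop(0)

def workload(rr):
    # A toy 'system execution' whose result depends on nondeterministic
    # inputs (here: random numbers standing in for interrupts, I/O, ...).
    total = 0
    for _ in range(5):
        total += rr.nondet(lambda: random.randrange(100))
    return total

rr = RecordReplay()
first = workload(rr)       # record once ...
rr.mode = "replay"
second = workload(rr)      # ... then replay deterministically, offline
assert first == second     # replay reproduces the original exactly
```

Because replay is guaranteed to reproduce the recorded execution, arbitrarily heavy analysis can be attached to the replay run without perturbing the behavior under study.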
Secure and Trusted Execution Framework for Virtualized Workloads
In this dissertation, we have analyzed various security and trust solutions for modern computing systems and proposed a framework that provides holistic security and trust for the entire lifecycle of a virtualized workload. The framework consists of three novel techniques and a set of guidelines. The three techniques provide the necessary elements for a secure and trusted execution environment, while the guidelines ensure that the virtualized workload remains in a secure and trusted state throughout its lifecycle. We have successfully implemented the framework and demonstrated that it provides security and trust guarantees at the time of launch, at any time during execution, and during an update of the virtualized workload. Given the proliferation of virtualization from cloud servers to embedded systems, the techniques presented in this dissertation can be implemented on most computing systems.
Extensible Performance-Aware Runtime Integrity Measurement
Today's interconnected world consists of a broad set of online activities including banking, shopping, managing health records, and social media, while relying heavily on servers to manage extensive sets of data. However, stealthy rootkit attacks on this infrastructure have placed these servers at risk. Security researchers have proposed using an existing x86 CPU mode called System Management Mode (SMM) to search for rootkits from a hardware-protected, isolated, and privileged location. SMM has broad visibility into operating system resources, including memory regions and CPU registers. However, the use of SMM for runtime integrity measurement mechanisms (SMM-RIMMs) would significantly expand the amount of CPU time spent away from operating system and hypervisor (host software) control, resulting in potentially serious system impacts. To be a candidate for production use, SMM RIMMs would need to be resilient, performant, and extensible.
We developed the EPA-RIMM architecture guided by the principles of extensibility, performance awareness, and effectiveness. EPA-RIMM incorporates a security check description mechanism that allows dynamic changes to the set of resources to be monitored. It minimizes system performance impact by decomposing security checks into shorter tasks that can be independently scheduled over time. We present a performance methodology for SMM to quantify system impacts, as well as a simulator that allows for the evaluation of different methods of scheduling security inspections. Our SMM-based EPA-RIMM prototype leverages insights from the performance methodology to detect host software rootkits with reduced system impact. EPA-RIMM demonstrates that SMM-based rootkit detection can be made performance-efficient and effective, providing a new tool for defense.
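The decomposition idea can be sketched in a few lines, under stated assumptions (hypothetical names; EPA-RIMM measures live host memory from SMM, not Python byte strings): a large integrity check is split into short, independently schedulable tasks, and the scheduled pieces still produce the same measurement as one long check.

```python
import hashlib

def decompose(region, chunk_size):
    """Split one large integrity check (hash of a memory region)
    into short, independently schedulable tasks."""
    return [region[i:i + chunk_size] for i in range(0, len(region), chunk_size)]

def run_schedule(tasks):
    """Run the short tasks one by one (in the real system, each task
    would occupy its own brief SMM entry); hash incrementally."""
    h = hashlib.sha256()
    for chunk in tasks:
        h.update(chunk)
    return h.hexdigest()

region = bytes(range(256)) * 64                       # stand-in for monitored memory
baseline = hashlib.sha256(region).hexdigest()         # one long, disruptive check
measured = run_schedule(decompose(region, 1024))      # many short checks
assert measured == baseline                           # decomposition preserves the result
```

The point of the sketch is that correctness of the measurement is independent of how finely the check is sliced, which is what lets the scheduler bound the time spent in any single SMM entry.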
End-to-End Security Architecture and Self-Protection Mechanisms for Cloud Environments
For several years, the virtualization of infrastructures has been a major research challenge, promising lower energy consumption while delivering new services. However, many attacks hinder the global adoption of cloud computing. Self-protection has recently raised growing interest as a possible answer to the challenge of protecting cloud computing infrastructures. Yet previous solutions fall at the last hurdle, as they overlook key features of the cloud: flexible security policies, cross-layer defense, multiple control granularities, and open security architectures. This thesis presents VESPA, a self-protection architecture for cloud infrastructures. Flexible coordination between self-protection loops allows enforcing a rich spectrum of security strategies. A multi-plane extensible architecture also enables simple integration of commodity security components. Recently, some of the most powerful attacks against cloud computing infrastructures have targeted the Virtual Machine Monitor (VMM). In many cases, the main attack vector is a poorly confined device driver, and current architectures offer no protection against such attacks. This thesis proposes an altogether different approach with KungFuVisor, a framework derived from VESPA for building self-defending hypervisors. The result is a very flexible self-protection architecture that can dynamically enforce a rich spectrum of remediation actions over different parts of the VMM, while also facilitating the administration of defense strategies. We show its application to three different protection schemes: virus infection, mobile clouds, and hypervisor drivers. Indeed, VESPA can enhance cloud infrastructure security.
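A self-protection loop of the kind described above can be sketched as a minimal detect-decide-react cycle (all names hypothetical; VESPA coordinates such loops across cloud layers and security components, not over a Python dict):

```python
def self_protection_loop(events, policy, state):
    """Autonomic loop: for each detected event, look up the
    remediation dictated by the security policy and apply it."""
    for event in events:
        action = policy.get(event)
        if action:
            action(state)          # react without human intervention
    return state

# Hypothetical policy mapping detections to remediations.
policy = {
    "virus_detected": lambda s: s.update(quarantined=True),
    "driver_fault":   lambda s: s.update(driver_restarted=True),
}

state = {"quarantined": False, "driver_restarted": False}
self_protection_loop(["virus_detected", "driver_fault"], policy, state)
assert state == {"quarantined": True, "driver_restarted": True}
```

Keeping the policy separate from the loop is what makes the spectrum of strategies "rich": new detections and remediations can be added without touching the coordination logic.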
Secure and safe virtualization-based framework for embedded systems development
Doctoral Thesis, Doctoral Program in Electronic and Computer Engineering (PDEEC)
The Internet of Things (IoT) is here. Billions of smart, connected devices are proliferating at a rapid pace in our key infrastructures, generating, processing, and exchanging vast amounts of security-critical and privacy-sensitive data. This strong connectivity of IoT environments demands a holistic, end-to-end security approach that addresses security and privacy risks across different abstraction levels: device, communications, cloud, and lifecycle management.
Security at the device level is often misconstrued as the addition of features at a late stage of system development. Several software-based approaches, such as microkernels and virtualization, have been used, but on their own they have proven unable to provide the desired security level. As a step towards the correct operation of these devices, it is imperative to extend them with new security-oriented technologies that guarantee security from the outset.
This thesis aims to conceive and design a novel security and safety architecture for virtualized systems by 1) evaluating which technologies are key enablers for scalable and secure virtualization, 2) designing and implementing a fully featured virtualization environment providing hardware isolation, 3) investigating which "hard entities" can extend virtualization to guarantee the security requirements dictated by confidentiality, integrity, and availability, and 4) simplifying system configurability and integration through a design ecosystem supported by a domain-specific language.
The developed artefacts demonstrate: 1) why ARM TrustZone is nowadays a reference technology for security, 2) how TrustZone can be adequately exploited for virtualization in different use cases, 3) why the secure boot process, trusted execution environment, and other hardware trust anchors are essential to establish and guarantee a complete root and chain of trust, and 4) how a domain-specific language enables easy design, integration, and customization of a secure virtualized system assisted by the above-mentioned building blocks.
Self-secured devices: securing shared device access on TrustZone-based systems
Master's Dissertation in Industrial Electronics and Computers Engineering
With the advent of the Internet of Things (IoT), security has emerged as a significant requirement in embedded systems development. Attacks against embedded systems infrastructures have been increasing, because security is often misconstrued as the addition of features at a later stage of system development. A change in the way systems are developed is needed, so that security is guaranteed from the outset.
ARM TrustZone is a hardware technology that adds significant value to the security picture. TrustZone promotes hardware as the initial root of trust and has gained particular attention in the embedded space due to the massive presence of ARM processors in the market. TrustZone splits the hardware and software resources into two worlds: the secure world, dedicated to secure processing, and the non-secure world for everything else. A lot of research has been done around TrustZone, ranging from efficient and secure virtualization solutions to trusted execution environments (TEEs). Both cases, despite targeting different applications with different requirements, consolidate multiple virtual environments onto the same platform and necessarily need to share resources among them. Currently, hardware devices on TrustZone-enabled systems-on-chip (SoCs) can only be configured as secure or non-secure, which means the dual-world concept of TrustZone does not extend to the devices themselves. With this direct assignment method, both worlds are unable to use the same device unless it is entirely duplicated, significantly increasing overall hardware costs. Existing approaches to shared device access on TrustZone-based architectures have been shown to negatively impact the overall system in terms of security and performance, and often come with considerable engineering effort or substantial hardware costs.
This thesis proposes the concept of self-secured devices, a novel approach for shared device access in TrustZone-based architectures. Self-secured devices extend the TrustZone dual-world concept to the inner logic of the device by splitting the device's hardware logic into a secure and a non-secure interface. The implemented solution was deployed on LTZVisor, an open-source, in-house, lightweight TrustZone-assisted hypervisor, and the achieved results are encouraging, demonstrating that the security properties of the system can be increased at an acceptable cost in terms of hardware.
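The self-secured device concept lends itself to a small illustrative model (hypothetical names; the real partitioning is enforced in the device's hardware logic, not in software): a single register file exposes both a secure and a non-secure interface, and the non-secure view is restricted to a whitelisted subset of registers.

```python
class SelfSecuredDevice:
    """Toy model of a device whose inner logic enforces the
    TrustZone secure/non-secure split on its own registers."""
    def __init__(self):
        self.regs = {"DATA": 0, "CTRL": 0, "KEY": 0}
        self.nonsecure_ok = {"DATA"}       # partition fixed 'in hardware'

    def write(self, reg, value, secure):
        if not secure and reg not in self.nonsecure_ok:
            # The device itself rejects the access; no hypervisor trap needed.
            raise PermissionError(f"non-secure access to {reg} blocked")
        self.regs[reg] = value

dev = SelfSecuredDevice()
dev.write("KEY", 0xDEAD, secure=True)      # secure world: allowed
dev.write("DATA", 42, secure=False)        # non-secure world: allowed
try:
    dev.write("KEY", 0, secure=False)      # blocked by the device
    blocked = False
except PermissionError:
    blocked = True
assert blocked and dev.regs["KEY"] == 0xDEAD
```

Because the check lives in the device interface itself, both worlds can share one physical device without duplicating it and without routing every access through the hypervisor.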
The Inter-cloud meta-scheduling
Inter-cloud is a recently emerging approach that expands cloud elasticity. By facilitating an adaptable setting, it aims at realizing scalable resource provisioning that enables a diversity of cloud user requirements to be handled efficiently. This study's contribution is the inter-cloud performance optimization of job executions using meta-scheduling concepts. This includes the development of the inter-cloud meta-scheduling (ICMS) framework, the ICMS optimal schemes, and the SimIC toolkit. The ICMS model is an architectural strategy for managing and scheduling user services in virtualized, dynamically inter-linked clouds. This is achieved through a model that includes a set of algorithms, namely the Service-Request, Service-Distribution, Service-Availability, and Service-Allocation algorithms. These, along with the resource management optimal schemes, provide the novel functionality of the ICMS: message exchanging implements the job distribution method, VM deployment offers the VM management features, and the local resource management system details the management of the local cloud schedulers. The resulting system offers great flexibility by facilitating a lightweight resource management methodology while at the same time handling the heterogeneity of different clouds through advanced service level agreement coordination. Experimental results are promising, as the proposed ICMS model improves the performance of service distribution for a variety of criteria such as service execution times, makespan, turnaround times, utilization levels, and energy consumption rates for various inter-cloud entities, e.g. users, hosts, and VMs. For example, ICMS optimizes the performance of a non-meta-brokering inter-cloud by 3%, while ICMS with full optimal schemes achieves 9% optimization for the same configurations.
The whole experimental platform is implemented in the inter-cloud Simulation toolkit (SimIC) developed by the author, a discrete-event simulation framework.
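The flavor of meta-scheduling across inter-linked clouds can be conveyed with a toy greedy dispatcher (hypothetical names; the actual ICMS algorithms coordinate service requests, availability, and SLAs, not just finish times): each job goes to the cloud with the earliest availability, which tends to reduce makespan.

```python
import heapq

def meta_schedule(job_lengths, clouds):
    """Greedy, makespan-oriented dispatch: keep a min-heap of
    (next-free time, cloud) and send each job to the least-loaded cloud."""
    heap = [(0.0, name) for name in clouds]
    heapq.heapify(heap)
    placement = {}
    for job, length in enumerate(job_lengths):
        free_at, name = heapq.heappop(heap)   # earliest-available cloud
        placement[job] = name
        heapq.heappush(heap, (free_at + length, name))
    makespan = max(t for t, _ in heap)
    return placement, makespan

# Five jobs of varying length dispatched across two clouds.
placement, makespan = meta_schedule([4, 2, 3, 1, 2], ["cloudA", "cloudB"])
assert makespan == 7
```

This is the classic list-scheduling heuristic; ICMS layers richer criteria (turnaround, utilization, energy, SLAs) on top of such per-job dispatch decisions.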
SimuBoost: Scalable Parallelization of Functional System Simulation
In operating systems and security research, functional full-system simulation is frequently used to collect detailed run-time information such as memory access patterns. The simulator executes the workload under study in a virtual machine (VM) by interpreting instructions step by step, or translating them so that they operate on the state of the VM. This process permits extensive instrumentation and thereby yields information about run-time behavior that is not accessible on a physical machine.
Although functional full-system simulation is a powerful tool, the immense slowdown caused by interpretation or translation is a substantial limitation of the approach. Compared to native execution, we measure a 30x slowdown for QEMU, and recording memory accesses doubles that factor. With simulators that offer richer instrumentation capabilities than QEMU, the slowdown can be an order of magnitude higher. This makes functional simulation unattractive for long-running, networked, or interactive workloads. Moreover, the slowdown produces unrealistic timing behavior as soon as activities outside the VM (e.g., I/O) are involved.
In this work we present SimuBoost, a method for drastically accelerating functional full-system simulation. SimuBoost first runs the workload under study in a fast hardware-assisted virtual machine, which permits full interactivity with users and network devices. During execution, SimuBoost periodically takes checkpoints of the VM. These serve as starting points for a parallel simulation in which each interval is simulated and analyzed independently. Heterogeneous deterministic replay guarantees that this phase exactly reproduces the preceding hardware-assisted execution of each interval, including interactions and realistic timing behavior.
Our prototype is able to reduce the run time of a functional full-system simulation considerably. Whereas conventional approaches need over 5 hours to simulate the build of a modern Linux, SimuBoost completes the simulation in just 15 minutes, only 16% more time than the build takes in a fast hardware-assisted VM. SimuBoost maintains this speed even with full instrumentation for recording memory accesses.
This work is the first project to apply the concept of partitioning and parallelizing execution time to interactive system virtualization in a way that permits immediate parallel functional simulation. We complement the practical implementation with a mathematical model that formally describes the speedup properties. This allows the expected parallel simulation time to be predicted for a given scenario and provides guidance for choosing the optimal interval length. In contrast to previous work, SimuBoost places a strong focus on scalability beyond the boundaries of a single physical system. A key enabler is the use of modern checkpointing technologies. As part of this work, we present two novel methods for the efficient and effective compression of periodic system checkpoints.
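The partitioning scheme can be sketched as follows, under stated assumptions (hypothetical names; the real system checkpoints a hardware-assisted VM and replays each interval in an instrumented simulator): take a checkpoint every k steps of a fast run, then simulate each interval independently from its checkpoint; with deterministic replay, the concatenated interval results match a monolithic simulation.

```python
def step(state):
    """Toy deterministic 'instruction': any pure state transition works."""
    return (state * 1103515245 + 12345) % (2 ** 31)

def fast_run(state, steps, k):
    """Fast (uninstrumented) execution that only collects checkpoints."""
    checkpoints = []
    for i in range(steps):
        if i % k == 0:
            checkpoints.append(state)
        state = step(state)
    return checkpoints

def simulate_interval(state, length):
    """Slow, instrumented simulation of one interval (here: log all states)."""
    trace = []
    for _ in range(length):
        trace.append(state)
        state = step(state)
    return trace

steps, k = 12, 4
checkpoints = fast_run(1, steps, k)
# Each interval could run on a different worker; order is irrelevant,
# because every interval starts from its own checkpoint.
parallel_trace = sum((simulate_interval(c, k) for c in checkpoints), [])
sequential_trace = simulate_interval(1, steps)
assert parallel_trace == sequential_trace
```

The sketch also shows why interval length matters: shorter intervals expose more parallelism but cost more checkpoints, the trade-off SimuBoost's mathematical model quantifies.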
Enhancing Trust: A Unified Meta-Model for Software Security Vulnerability Analysis
Over the last decade, a globalization of the software industry has taken place, facilitating the sharing and reuse of code across existing project boundaries. At the same time, such global reuse introduces new challenges to the Software Engineering community: not only is code implementation shared across systems, but so are any vulnerabilities it is exposed to. Hence, vulnerabilities found in APIs no longer affect only individual projects but can spread across projects and even global software ecosystem borders. Tracing such vulnerabilities on a global scale becomes an inherently difficult task, with many of the resources required for the analysis not only growing at unprecedented rates but also being spread across heterogeneous sources. Software developers are struggling to identify and locate the data required to take full advantage of these resources. The Semantic Web and its supporting technology stack have been widely promoted to model, integrate, and support interoperability among heterogeneous data sources.
This dissertation introduces four major contributions to address these challenges: (1) It provides a literature review of the use of software vulnerabilities databases (SVDBs) in the Software Engineering community. (2) Based on findings from this literature review, we present SEVONT, a Semantic Web based modeling approach to support a formal and semi-automated approach for unifying vulnerability information resources. SEVONT introduces a multi-layer knowledge model which not only provides a unified knowledge representation, but also captures software vulnerability information at different abstract levels to allow for seamless integration, analysis, and reuse of the modeled knowledge. The modeling approach takes advantage of Formal Concept Analysis (FCA) to guide knowledge engineers in identifying reusable knowledge concepts and modeling them. (3) A Security Vulnerability Analysis Framework (SV-AF) is introduced, which is an instantiation of the SEVONT knowledge model to support evidence-based vulnerability detection. The framework integrates vulnerability ontologies (and data) with existing Software Engineering ontologies allowing for the use of Semantic Web reasoning services to trace and assess the impact of security vulnerabilities across project boundaries.
Several case studies are presented to illustrate the applicability and flexibility of our modeling approach, demonstrating that the presented knowledge modeling approach can not only unify heterogeneous vulnerability data sources but also enable new types of vulnerability analysis.
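The cross-project tracing that SV-AF enables can be illustrated with a minimal triple-based sketch (the data and predicate names below are hypothetical; the real framework uses Semantic Web ontologies and reasoners rather than Python sets): once vulnerability and dependency knowledge share one representation, impact is a reachability query over the combined graph.

```python
# A tiny unified knowledge base: (subject, predicate, object) triples.
triples = {
    ("CVE-2099-0001", "affects", "libparse-1.2"),
    ("projectA", "dependsOn", "libparse-1.2"),
    ("projectB", "dependsOn", "projectA"),
}

def depends_transitively(project, artifact):
    """Does `project` reach `artifact` through dependsOn edges?"""
    frontier, seen = {project}, set()
    while frontier:
        node = frontier.pop()
        if node == artifact:
            return True
        seen.add(node)
        frontier |= {o for s, p, o in triples
                     if s == node and p == "dependsOn" and o not in seen}
    return False

def impacted_projects(cve):
    """All projects reaching an artifact the CVE affects -- even
    indirectly, across project boundaries."""
    targets = {o for s, p, o in triples if s == cve and p == "affects"}
    projects = {s for s, p, _ in triples if p == "dependsOn"}
    return {proj for proj in projects
            if any(depends_transitively(proj, t) for t in targets)}

assert impacted_projects("CVE-2099-0001") == {"projectA", "projectB"}
```

Note that projectB never references libparse-1.2 directly; the transitive query is what surfaces it, which is the kind of inference a Semantic Web reasoner performs over the SEVONT ontologies.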