
    Modelling recovery in database systems

    The execution of modern database applications requires the co-ordination of a number of components, such as the application itself, the DBMS, the operating system, the network and the platform. The interaction of these components makes understanding the overall behaviour of an application a complex task, and as a result the effectiveness of optimisations is often difficult to predict. Three techniques commonly available to analyse system behaviour are empirical measurement, simulation-based analysis and analytical modelling. The ideal technique is one that provides accurate results at low cost. This thesis investigates the hypothesis that analytical modelling can be used to study the behaviour of DBMSs with sufficient accuracy. In particular, the work focuses on a new model for costing recovery mechanisms, called MaStA, and determines whether the model can be used effectively to guide the selection of mechanisms. To verify the effectiveness of the model, a validation framework is developed. Database workloads are executed on the flexible Flask architecture on different platforms. Flask is designed to minimise the dependencies between DBMS components and is used in the framework to allow the same workloads to be executed on various recovery mechanisms. Empirical analysis of the executed workloads is used to validate the assumptions about CPU, I/O and workload that underlie MaStA. Once validated, the utility of the model is illustrated by using it to select the mechanisms that provide optimum performance for given database applications. By showing that analytical modelling can be used in the selection of recovery mechanisms, the work presented makes a contribution towards a database architecture in which the implementation of all components may be selected to provide optimum performance.
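    To make the idea of analytical cost modelling concrete, the sketch below shows how a MaStA-style model might be evaluated to compare recovery mechanisms. The parameters, formula, and candidate figures are invented for illustration; the actual MaStA cost functions are defined in the thesis.

    ```python
    # Hypothetical sketch of an analytical recovery-cost model in the spirit
    # of MaStA: cost decomposes into CPU and I/O terms derived from workload
    # parameters. All names and numbers are illustrative, not from the thesis.

    def recovery_cost(n_reads, n_writes, seq_io, rand_io, cpu_per_op, log_per_update):
        io = (n_reads * rand_io                      # random data-page reads
              + n_writes * rand_io                   # random data-page writes
              + n_writes * log_per_update * seq_io)  # sequential log appends
        cpu = (n_reads + n_writes) * cpu_per_op
        return io + cpu

    # Selecting a mechanism then reduces to evaluating the model per
    # mechanism for the given workload and picking the cheapest.
    candidates = {
        "write-ahead logging": recovery_cost(10_000, 2_000, 0.1, 1.0, 0.01, 1),
        "shadow paging":       recovery_cost(10_000, 2_000, 0.1, 1.0, 0.01, 0),
    }
    print(min(candidates, key=candidates.get))
    ```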

    A Generic Storage API

    We present a generic API suitable for the provision of highly generic storage facilities that can be tailored to produce various individually customised storage infrastructures. The paper identifies a candidate set of minimal storage system building blocks, which are sufficiently simple to avoid encapsulating policy where it cannot be customised by applications, and composable enough to build highly flexible storage architectures. Four main generic components are defined: the store, the namer, the caster and the interpreter. It is hypothesised that these are sufficiently general that they could act as building blocks for any information storage and retrieval system. The essential characteristics of each are defined by an interface, which may be implemented by multiple implementing classes. Comment: Submitted to ACSC 200
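    As a rough illustration of the four components, the sketch below expresses each as an interface. The method names and signatures are assumptions for illustration, not the API defined in the paper.

    ```python
    # Minimal sketch of the four generic components named in the abstract
    # (store, namer, caster, interpreter) as Python abstract base classes.
    # Method names and signatures are hypothetical.
    from abc import ABC, abstractmethod

    class Store(ABC):
        @abstractmethod
        def put(self, data: bytes) -> bytes:
            """Store data, returning the key under which it is stored."""

        @abstractmethod
        def get(self, key: bytes) -> bytes:
            """Retrieve the data stored under key."""

    class Namer(ABC):
        @abstractmethod
        def resolve(self, name: str) -> bytes:
            """Map a human-readable name to a store key."""

    class Caster(ABC):
        @abstractmethod
        def cast(self, data: bytes, target_type: type) -> object:
            """Reinterpret raw stored bytes as a typed value."""

    class Interpreter(ABC):
        @abstractmethod
        def execute(self, data: bytes) -> object:
            """Treat stored data as executable content and evaluate it."""
    ```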

    Thin Hypervisor-Based Security Architectures for Embedded Platforms

    Virtualization has grown increasingly popular, thanks to its benefits of isolation, management, and utilization, supported by hardware advances. It is also receiving attention for its potential to support security, through hypervisor-based services and advanced protections supplied to guests. Today, virtualization is even making inroads in the embedded space, and embedded systems, with their security needs, have already started to benefit from virtualization's security potential. In this thesis, we investigate the possibilities for thin hypervisor-based security on embedded platforms. In addition to a significant background study, we present the implementation of a low-footprint, thin hypervisor capable of providing security protections to a single FreeRTOS guest kernel on ARM. Backed by performance test results, our hypervisor provides security to a formerly unsecured kernel with minimal performance overhead, and represents a first step in a greater research effort into the security advantages and possibilities of embedded thin hypervisors. Our results show that thin hypervisors are both possible and beneficial even on limited embedded systems, and they set the stage for more advanced investigations, implementations, and security applications in the future.

    Distributed Shared Memory based Live VM Migration

    Cloud computing is the new trend in computing services and the IT industry. This computing paradigm offers numerous benefits in utilizing IT infrastructure resources and reducing service costs. The key features of cloud computing are the mobility and scalability of computing resources, achieved by managing virtual machines. Virtualization decouples the software from the hardware and manages software and hardware resources easily, without interruption of services. Live virtual machine migration is an essential tool for dynamic resource management in current data centers; it is defined as the process of moving a running virtual machine or application between different physical machines without disconnecting the client or application. Many techniques have been developed to achieve this goal, and several metrics (total migration time, downtime, size of data sent, and application performance) are used to measure the performance of live migration. These metrics capture the quality of the VM services that clients care about, because the main goal of clients is to maintain application performance with minimum service interruption. Pre-copy live VM migration is done in four phases: preparation, iterative migration, stop and copy, and resume and commitment. During the preparation phase, the source and destination physical servers are selected, the resources on the destination physical server are reserved, and the VM to be migrated is selected; the cloud manager is responsible for all of these decisions. During the iterative migration phase, VM state migration takes place and the memory state is transferred to the target node; meanwhile, the migrated VM continues to execute and dirties its memory. In the stop-and-copy phase, the VM's virtual CPU is stopped and the processor and network states are transferred to the destination host; service downtime results from stopping VM execution and moving the VM CPU and network states. Finally, in the resume and commitment phase, the migrated VM resumes running on the destination physical host and the remaining memory pages are pulled by the destination machine from the source machine. The source machine's resources are then released. In this thesis, pre-copy live VM migration using a Distributed Shared Memory (DSM) computing model is proposed. The setup is built using two identical computation nodes that host all of the proposed environment's services, namely the virtualization infrastructure (XenServer 6.2 hypervisor), the shared storage server (the network file system), and the DSM and High Performance Computing (HPC) cluster. The custom DSM framework is based on Grappa, a low-latency memory-update system. Moreover, the HPC cluster is used to parallelize the workload across the CPUs of the computation nodes, employing the OpenMPI and MPI libraries to support parallelization and auto-parallelization. The DSM allows the cluster CPUs to access the same memory space pages, resulting in fewer memory data updates, which reduces the amount of data transferred through the network. The proposed model achieves a good enhancement of the live VM migration metrics: downtime is reduced by 50% for an idle Windows VM workload and by 66.6% for an idle Ubuntu Linux workload. In general, the proposed model not only reduces the downtime and the total amount of data sent, but also does not degrade other metrics such as the total migration time and application performance.
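    The four phases map naturally onto an iterative copy loop. The sketch below is a simplified rendering of generic pre-copy migration, with the VM and host interfaces mocked as hypothetical objects; real systems such as Xen implement this inside the hypervisor.

    ```python
    # Simplified sketch of generic pre-copy live migration. The vm, src and
    # dst objects and their methods are hypothetical stand-ins for
    # hypervisor functionality, used only to show the phase structure.

    def precopy_migrate(vm, src, dst, max_rounds=30, dirty_threshold=64):
        dst.reserve_resources(vm)                       # preparation phase
        dst.receive(vm.all_memory_pages())              # first full memory copy
        for _ in range(max_rounds):                     # iterative migration phase
            dirty = vm.dirty_pages_since_last_round()
            if len(dirty) <= dirty_threshold:           # residue small enough to stop
                break
            dst.receive(dirty)                          # resend pages dirtied meanwhile
        vm.pause()                                      # stop-and-copy phase begins
        dst.receive(vm.dirty_pages_since_last_round())  # last dirty pages
        dst.receive(vm.cpu_and_network_state())         # CPU and network state
        dst.resume(vm)                                  # resume and commitment phase
        src.release_resources(vm)
    ```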

    Dynamic Isolated Domains

    Operating System-level Virtualization is a virtualization technology based on running multiple isolated userspace instances, commonly referred to as containers, on top of a single operating system kernel. The fundamental difference compared to traditional virtualization is that the targets of virtualization in OS-level virtualization are kernel resources, not hardware. OS-level virtualization is used to implement Bring Your Own Device (BYOD) policies on contemporary mobile platforms. Current commercial BYOD solutions, however, do not allow applications to be containerized dynamically upon user request; the ability to do so would greatly improve the flexibility and usability of such schemes. In this work we study whether existing OS-level virtualization features in the Linux kernel can meet the needs of use cases reliant on such dynamic isolation. We present the design and implementation of a prototype which allows applications in dynamic isolated domains to be migrated from one device to another. Our design fits together with security features in the Linux kernel, allowing the security policy influenced by user decisions to be migrated along with the application. The deployability of the design is improved by basing the solution on functionality already available in the mainline Linux kernel. Our evaluation shows that the OS-level virtualization features in the Linux kernel indeed allow applications to be isolated in a dynamic fashion, although known gaps in the compartmentalization of kernel resources require trade-offs between security and interoperability in the design of such containers.
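    For a flavour of the underlying kernel primitives, the sketch below starts an application in fresh kernel namespaces using the standard util-linux unshare tool. This approximates a dynamically created isolated domain, though the prototype's policy handling and migration go far beyond it; the application path is hypothetical.

    ```python
    # Minimal sketch: launch an application inside new kernel namespaces
    # via util-linux `unshare` (requires root or suitable capabilities).
    # This shows only the isolation primitive, not the prototype's design.
    import subprocess

    def run_isolated(cmd):
        # New mount, UTS, IPC, network and PID namespaces give the process
        # its own view of these kernel resources.
        return subprocess.run(
            ["unshare", "--mount", "--uts", "--ipc", "--net",
             "--pid", "--fork", *cmd],
            check=True)

    run_isolated(["/usr/bin/some-app"])  # hypothetical application path
    ```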

    Management of Long-Running High-Performance Persistent Object Stores

    The popularity of object-oriented programming languages, such as Java and C++, for large application development has stirred an interest in improved technologies for high-performance, reliable, and scalable object storage. Such storage systems are typically referred to as Persistent Object Stores. This thesis describes the design and implementation of Sphere, a new persistent object store developed at the University of Glasgow, Scotland. The requirements for Sphere included high performance, support for transactional multi-threaded loads, scalability, extensibility, portability, reliability, referential integrity via the use of disk garbage collection, provision for flexible schema evolution, and minimised interaction with the mutator. The Sphere architecture is split into two parts: the core and the application-specific customisations. The core was designed to be modular, in order to encourage research and experimentation, and to be as light-weight as possible, in an attempt to achieve high performance through simplicity. The customisation part includes the code that deals with, and is optimised for, the specific load of the application that Sphere has to support: object formats, free-space management, etc. Even though specialising this part of the store is not trivial, it has the benefit that the interaction between the mutator and Sphere is direct and more efficient, as translation layers are not necessary. Major design decisions for Sphere included (i) splitting the store into partitions, to facilitate incremental disk garbage collection and schema evolution, (ii) using flexible two-level free-space management, (iii) introducing a three-dimensional method-dispatch matrix to invoke store operations, which contributes to Sphere's ease of extensibility, (iv) adopting a logical addressing scheme, to allow straightforward object and partition relocation, (v) requiring that Sphere can identify reference fields inside objects, so that it does not have to interact with the mutator in order to do so, and (vi) adopting the well-known ARIES recovery algorithm to ensure fault tolerance. The thesis contains a detailed overview of Sphere and the context in which it was developed. It then concentrates on two areas that were explored using Sphere as the implementation platform. First, bulk object-loading issues are discussed and the Ghosted Allocation promotion algorithm is described. This algorithm was designed to allocate large numbers of objects to a store efficiently, with minimal log traffic, and was evaluated using large-scale experiments. Second, the disk garbage collection framework of Sphere is overviewed and the implemented compacting, relocating garbage collector is described, along with its model of synchronisation with the mutator.
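    The three-dimensional method-dispatch matrix can be pictured as a lookup table keyed by three independent dimensions, so that new store behaviours are added by registering new cells rather than editing existing code. In the sketch below the dimension names (operation, object format, partition kind) are assumptions for illustration; the abstract does not spell them out.

    ```python
    # Toy sketch of a three-dimensional method-dispatch matrix. A store
    # operation is resolved by three independent keys; extending the store
    # means registering one more cell. Dimension names are hypothetical.

    dispatch = {}

    def register(op, obj_format, partition_kind, fn):
        dispatch[(op, obj_format, partition_kind)] = fn

    def invoke(op, obj_format, partition_kind, *args):
        return dispatch[(op, obj_format, partition_kind)](*args)

    # Adding behaviour for one combination does not disturb the others.
    register("read", "plain", "logged", lambda oid: f"read object {oid}")
    print(invoke("read", "plain", "logged", 42))
    ```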

    End-to-End Security Architecture and Self-Protection Mechanisms for Cloud Environments

    For several years, the virtualization of infrastructures has been one of the major research challenges, promising lower energy consumption while delivering new services. However, many attacks hinder the global adoption of cloud computing. Self-protection has recently raised growing interest as a possible element of an answer to the challenge of protecting cloud computing infrastructures. Yet previous solutions fall at the last hurdle, as they overlook key features of the cloud: flexible security policies, cross-layered defense, multiple control granularities, and open security architectures. This thesis presents VESPA, a self-protection architecture for cloud infrastructures. Flexible coordination between self-protection loops allows a rich spectrum of security strategies to be enforced. A multi-plane extensible architecture also enables simple integration of commodity security components. Recently, some of the most powerful attacks against cloud computing infrastructures have targeted the Virtual Machine Monitor (VMM). In many cases, the main attack vector is a poorly confined device driver, and current architectures offer no protection against such attacks. This thesis proposes an altogether different approach with KungFuVisor, a framework derived from VESPA for building self-defending hypervisors. The result is a very flexible self-protection architecture, enabling a rich spectrum of remediation actions to be enforced dynamically over different parts of the VMM and facilitating the administration of defense strategies. We show its application to three different protection schemes: virus infection, mobile clouds, and hypervisor drivers. Indeed, VESPA can enhance cloud infrastructure security.
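    A self-protection loop of the kind VESPA coordinates can be pictured as a detect-decide-react cycle spanning the architecture's planes. The component interfaces in the sketch below are invented for illustration.

    ```python
    # Toy sketch of one self-protection loop: detectors feed a policy-driven
    # decision step whose outcome triggers a reaction. All interfaces are
    # hypothetical; VESPA coordinates many such loops across several planes.
    import time

    def self_protection_loop(detectors, policy, reactors, period=1.0):
        while True:
            events = [e for d in detectors for e in d.poll()]   # detection
            for event in events:
                action = policy.decide(event)                   # decision
                if action is not None:
                    reactors[action.target].apply(action)       # reaction
            time.sleep(period)
    ```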

    Securing emerging IoT systems through systematic analysis and design

    The Internet of Things (IoT) is growing very rapidly. A variety of IoT systems have been developed and employed in many domains, such as the smart home, the smart city, and industrial control, providing great benefits to our everyday lives. However, as IoT becomes increasingly prevalent and complicated, it is also introducing new attack surfaces and security challenges, and we are seeing numerous IoT attacks exploiting the vulnerabilities in IoT systems every day. Security vulnerabilities may manifest at different layers of the IoT stack, and no single security solution can work for the whole ecosystem. In this dissertation, we explore the limitations of emerging IoT systems at different layers and develop techniques and systems to make them more secure. More specifically, we focus on three of the most important layers: the user rule layer, the application layer, and the device layer. First, on the user rule layer, we characterize the potential vulnerabilities introduced by the interaction of user-defined automation rules. We introduce iRuler, a static analysis system that uses model checking to detect inter-rule vulnerabilities that exist within trigger-action platforms, such as IFTTT, in an IoT deployment. Second, on the application layer, we design and build ProvThings, a system that instruments IoT apps to generate data provenance that provides a holistic explanation of system activities, including malicious behaviors. Lastly, on the device layer, we develop ProvDetector and SplitBrain to detect malicious processes using kernel-level provenance tracking and analysis. ProvDetector is a centralized approach that collects all the audit data from the clients and performs detection on the server. SplitBrain extends ProvDetector with collaborative learning, where the clients collaboratively build the detection model and perform detection on the client device.
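    To illustrate the simplest kind of inter-rule vulnerability iRuler looks for, consider rule chaining: one rule's action produces the event that triggers another rule. The toy check below makes this concrete; the rule representation and action-to-event mapping are assumptions for illustration, and iRuler itself uses model checking rather than this direct scan.

    ```python
    # Toy detection of rule chaining in a trigger-action rule set: rule a
    # chains into rule b when a's action can emit b's trigger event.
    # The rule format and the action_emits mapping are hypothetical.

    rules = [
        {"name": "r1", "trigger": "motion_detected", "action": "turn_on_lights"},
        {"name": "r2", "trigger": "lights_on",       "action": "open_blinds"},
    ]

    # Maps an action to the event it can generate (illustrative).
    action_emits = {"turn_on_lights": "lights_on"}

    def find_chains(rules):
        chains = []
        for a in rules:
            for b in rules:
                if action_emits.get(a["action"]) == b["trigger"]:
                    chains.append((a["name"], b["name"]))
        return chains

    print(find_chains(rules))  # [('r1', 'r2')]: r1 silently activates r2
    ```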

    Small TCBs of policy-controlled operating systems

    IT systems with high security requirements increasingly use problem-specific security policies to describe, analyse, and implement their security properties; such policies are an essential part of a system's trusted computing base (TCB). The correctness and tamper-proofness of a TCB's implementation are therefore decisive for establishing, maintaining, and guaranteeing a system's required security properties. Many of today's operating systems illustrate the challenge of realising security policies: for more than 40 years they have provided only rudimentary support for discretionary, identity-based access control policies, with the result that large parts of the security policies of application software are implemented by the applications themselves. As a consequence, the TCBs of today's operating systems are large, heterogeneous, and distributed, so that precisely determining their functional perimeter is very laborious, and the essential properties of TCBs (correctness, robustness, and tamper-proofness) are hard to achieve. This has led to the development of policy-controlled operating systems, which centralise all security policies of an operating system and its applications by providing kernel abstractions for security policies and policy runtime environments. Current policy-controlled operating systems are based on monolithic architectures, so that the components enforcing their policies are scattered throughout the operating system kernel. They also aim to support as broad a spectrum of security policies as possible, which makes their runtime components for policy decision and enforcement universal. As a result, their TCB implementations are large and complex, the functional perimeter of a TCB can hardly be identified, and the essential properties of TCBs can be achieved only with considerable effort.

    This dissertation pursues an approach that systematically engineers the TCBs of policy-controlled operating systems. The idea is to tailor the runtime system for security policies so that it supports only those policies that are actually present in a TCB, where a TCB's functional perimeter is determined by causal dependencies between security policies and TCB functions. The result is causal TCBs that contain only those functions necessary to enforce and protect the policies present. The precise identification of TCB functions allows the implementation of those functions to be isolated from untrusted system components. Causal TCBs thus lay the foundation for TCB implementations whose size and complexity make analysis and verification of their correctness and tamper-proofness feasible. Causal TCBs have a broad spectrum of applications, from embedded systems through policy-controlled operating systems to database management systems in large information systems.

    Also available in print: Small TCBs of policy-controlled operating systems / Anja Pölck. Ilmenau: Univ.-Verl. Ilmenau, 2014. xiii, 249 pp. ISBN 978-3-86360-090-7. Price: 24,40
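    The causal idea can be made concrete with a small computation: the TCB's functional perimeter is the union of the functions that the policies actually present causally depend on. The policy and function names below are invented for illustration.

    ```python
    # Minimal sketch of causal TCB tailoring: keep only those TCB functions
    # that some policy present in the system causally depends on.
    # Policy names and their dependency sets are hypothetical.

    depends_on = {
        "rbac_policy": {"subject_ids", "role_lookup", "decision_cache"},
        "mls_policy":  {"subject_ids", "label_compare"},
    }

    def causal_tcb(present_policies):
        """Return the tailored TCB function set for the given policies."""
        return set().union(*(depends_on[p] for p in present_policies))

    print(causal_tcb({"rbac_policy"}))  # no MLS machinery is included
    ```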