93 research outputs found

    Glider: A GPU Library Driver for Improved System Security

    Legacy device drivers implement both device resource management and isolation. This results in a large code base with a wide high-level interface, making the driver vulnerable to security attacks. This is particularly problematic for increasingly popular accelerators like GPUs that have large, complex drivers. We solve this problem with library drivers, a new driver architecture. A library driver implements resource management as an untrusted library in the application process address space, and implements isolation as a kernel module that is smaller and has a narrower, lower-level interface (i.e., closer to hardware) than a legacy driver. We articulate a set of device and platform hardware properties that are required to retrofit a legacy driver into a library driver. To demonstrate the feasibility and superiority of library drivers, we present Glider, a library driver implementation for two GPUs of popular brands, Radeon and Intel. Glider reduces the TCB size and attack surface by about 35% and 84% respectively for a Radeon HD 6450 GPU, and by about 38% and 90% respectively for an Intel Ivy Bridge GPU. Moreover, it incurs no performance cost. Indeed, Glider outperforms a legacy driver for applications requiring intensive interactions with the device driver, such as applications using the OpenGL immediate mode API.
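
    The architecture above splits a single legacy driver into an untrusted resource-management library and a small, trusted isolation module with a narrow, hardware-level interface. The C sketch below only illustrates that split; the two-operation interface and all names are hypothetical and are not Glider's actual kernel API.

    /* Hypothetical sketch of the library-driver split: a narrow, trusted kernel
       interface (isolation only) and an untrusted user-space library that performs
       all resource management.  Illustrative names, not Glider's actual API. */
    #include <stdint.h>
    #include <stdio.h>

    /* --- Narrow kernel-side interface: isolation, not management. --- */
    enum kern_op { KOP_MAP_CHANNEL, KOP_RING_DOORBELL };   /* tiny attack surface */

    struct kern_request {
        enum kern_op op;
        uint64_t     channel_id;   /* which isolated hardware context */
    };

    static int kernel_module_handle(struct kern_request *rq)
    {
        switch (rq->op) {
        case KOP_MAP_CHANNEL:      /* map a per-process command channel */
            printf("kernel: map channel %llu\n", (unsigned long long)rq->channel_id);
            return 0;
        case KOP_RING_DOORBELL:    /* notify the device that new commands are ready */
            printf("kernel: doorbell for channel %llu\n",
                   (unsigned long long)rq->channel_id);
            return 0;
        }
        return -1;
    }

    /* --- Untrusted user-space library: builds command buffers itself. --- */
    static void library_driver_submit(uint64_t channel, const char *cmds)
    {
        /* Resource management (buffer layout, scheduling policy, ...) happens
           here, inside the application's own address space. */
        printf("library: built command buffer \"%s\"\n", cmds);
        struct kern_request rq = { KOP_RING_DOORBELL, channel };
        kernel_module_handle(&rq);               /* the only crossing into the TCB */
    }

    int main(void)
    {
        struct kern_request map = { KOP_MAP_CHANNEL, 1 };
        kernel_module_handle(&map);
        library_driver_submit(1, "draw triangle");
        return 0;
    }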

    Hardware IPC for a TrustZone-assisted Hypervisor

    Master's dissertation in Industrial Electronics and Computers Engineering. In this modern era ruled by technology and the IoT (Internet of Things), embedded systems have a ubiquitous presence in our daily lives. Although they differ from each other in their functionalities and end purpose, they all share the same basic requirements: safety and security. Whether in a non-critical system such as a smartphone, or a critical one, like an electronic control unit of any modern vehicle, these requirements must always be fulfilled in order to accomplish a reliable and trustworthy system. One well-established technology to address this problem is virtualization. It provides isolation by encapsulating each subsystem in separate Virtual Machines (VMs), while also enabling the sharing of hardware resources. However, these isolated subsystems may still need to communicate with each other. Inter-Process Communication is present in most OSes' stacks, representing a crucial part of them, and allows, through a myriad of different mechanisms, communication between tasks. In a virtualized system, Inter-Partition Communication mechanisms implement the communication between the different subsystems referenced above. TrustZone technology has been at the forefront of hardware-assisted security and has been explored for virtualization purposes, since it natively provides separation between two execution worlds while enforcing, by design, different privilege levels on those execution worlds. LTZVisor, an open-source lightweight TrustZone-assisted hypervisor, emerged as a platform for exploring how TrustZone can be exploited to assist virtualization. Its IPC mechanism, TZ-VirtIO, constitutes a standard virtual I/O approach for achieving communication between the OSes, but the introduction of the mechanism causes some overhead. Hardware-based solutions have yet to be explored for this mechanism; they could bring performance and security benefits while reducing overhead. For the reasons mentioned above, hTZ-VirtIO was developed as a way to explore offloading LTZVisor's software-based communication mechanism to hardware-based mechanisms.
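
    As a rough illustration of the inter-partition communication that TZ-VirtIO standardizes, the C sketch below passes messages through a shared-memory ring from a normal-world producer to a secure-world consumer. On real TrustZone hardware the "kick" would be an SMC call or interrupt rather than a direct function call; all names are assumptions for illustration, not LTZVisor's or hTZ-VirtIO's actual code.

    /* Minimal VirtIO-style inter-partition communication sketch: one world writes
       into a shared ring, then notifies ("kicks") the other world.  Illustrative
       only; the names do not correspond to the TZ-VirtIO implementation. */
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    #define RING_SLOTS 4
    #define MSG_LEN    32

    struct shared_ring {                   /* memory visible to both worlds */
        volatile uint32_t head, tail;
        char msg[RING_SLOTS][MSG_LEN];
    };

    static struct shared_ring ring;        /* stands in for a shared-memory region */

    /* Secure-world side: drain pending messages (normally driven by an IRQ). */
    static void secure_world_kick(void)
    {
        while (ring.tail != ring.head) {
            printf("secure world received: %s\n", ring.msg[ring.tail % RING_SLOTS]);
            ring.tail++;
        }
    }

    /* Normal-world side: enqueue a message and notify the secure world. */
    static int normal_world_send(const char *text)
    {
        if (ring.head - ring.tail == RING_SLOTS)
            return -1;                               /* ring is full */
        strncpy(ring.msg[ring.head % RING_SLOTS], text, MSG_LEN - 1);
        ring.head++;
        secure_world_kick();    /* on real hardware: SMC call or doorbell interrupt */
        return 0;
    }

    int main(void)
    {
        normal_world_send("hello from the normal world");
        normal_world_send("sensor reading: 42");
        return 0;
    }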

    The design and implementation of a prototype exokernel operating system

    Thesis (M.S.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1996. Includes bibliographical references (p. 99-106). By Dawson R. Engler.

    Scalability of microkernel-based systems


    Tags: Augmenting Microkernel Messages with Lightweight Metadata

    In this work, we propose Tags, an efficient mechanism that augments microkernel interprocess messages with lightweight metadata to enable the development of new, systemwide functionality without requiring the modification of application source code. Therefore, the technology is well suited for systems with a large legacy code base and for third-party applications such as phone and tablet applications. As examples, we detail use cases in the areas of mandatory security and runtime verification of process interactions. In the area of mandatory security, we use tagging to assess the feasibility of implementing a mandatory integrity propagation model in the microkernel. The process interaction verification use case shows the utility of tagging to track and verify interaction history among system components. To demonstrate that tagging is technically feasible and practical, we implemented it in a commercial microkernel and executed multiple sets of standard benchmarks on two different computing architectures. The results clearly demonstrate that tagging has only negligible overhead and strong potential for many applications.
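
    As a rough sketch of what per-message metadata and integrity propagation could look like, the C fragment below attaches a small tag to every IPC message and applies a low-watermark style rule on delivery. The structures and the propagation rule are illustrative assumptions, not the actual tag layout or policy of the commercial microkernel used in the thesis.

    /* Sketch of lightweight message tagging with a low-watermark integrity rule:
       a receiver's integrity level drops to the minimum of its own level and the
       sender's.  Illustrative only; not the kernel's actual message format. */
    #include <stdint.h>
    #include <stdio.h>

    struct tag {                    /* small, fixed-size metadata per message */
        uint8_t  integrity;         /* e.g. 0 = untrusted ... 3 = system */
        uint32_t origin_pid;        /* where the data originally came from */
    };

    struct ipc_msg {
        struct tag meta;            /* attached by the kernel, opaque to applications */
        char       payload[64];
    };

    struct process {
        uint32_t pid;
        uint8_t  integrity;
    };

    static void deliver(struct process *dst, const struct ipc_msg *m)
    {
        if (m->meta.integrity < dst->integrity) {
            dst->integrity = m->meta.integrity;      /* propagate the lower label */
            printf("pid %u downgraded to integrity %u (data from pid %u)\n",
                   (unsigned)dst->pid, (unsigned)dst->integrity,
                   (unsigned)m->meta.origin_pid);
        }
        printf("pid %u received: %s\n", (unsigned)dst->pid, m->payload);
    }

    int main(void)
    {
        struct process browser = { .pid = 7,  .integrity = 1 };
        struct process editor  = { .pid = 12, .integrity = 3 };

        struct ipc_msg m = { .meta = { .integrity = browser.integrity,
                                       .origin_pid = browser.pid },
                             .payload = "downloaded file contents" };
        deliver(&editor, &m);       /* the editor inherits the lower integrity level */
        return 0;
    }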

    Error management in ATLAS TDAQ: an intelligent systems approach

    This thesis is concerned with the use of intelligent system techniques (IST) within a large distributed software system, specifically the ATLAS TDAQ system, which has been developed and is currently in use at the European Laboratory for Particle Physics (CERN). The overall aim is to investigate and evaluate a range of IST techniques in order to improve the error management system (EMS) currently used within the TDAQ system via error detection and classification. The thesis work will provide a reference for future research and development of such methods in the TDAQ system. The thesis begins by describing the TDAQ system and the existing EMS, with a focus on the underlying expert system approach, in order to identify areas where improvements can be made using IST techniques. It then discusses measures for evaluating error detection and classification techniques and the factors specific to the TDAQ system. Error conditions are then simulated in a controlled manner using an experimental setup, and datasets are gathered from two different sources. Analysis and processing of the datasets using statistical and IST techniques shows that clusters exist in the data corresponding to the different simulated errors. Different IST techniques are applied to the gathered datasets in order to realise an error detection model. These techniques include Artificial Neural Networks (ANNs), Support Vector Machines (SVMs) and Cartesian Genetic Programming (CGP), and a comparison of their respective advantages and disadvantages is made. The principal conclusions from this work are that IST can be successfully used to detect errors in the ATLAS TDAQ system and thus can provide a tool to improve the overall error management system. It is of particular importance that the IST can be used without a detailed knowledge of the system, as the ATLAS TDAQ system is too complex for a single person to understand completely. The results of this research will benefit researchers developing and evaluating IST techniques in similar large-scale distributed systems.
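
    To make the cluster-based detection idea concrete, the C sketch below labels a sample of monitoring metrics by its nearest error-cluster centroid. This is a deliberately simple stand-in, not the ANN, SVM, or CGP models the thesis evaluates, and the features and centroid values are invented for illustration.

    /* Nearest-centroid classification over invented monitoring metrics, standing in
       for the observation that simulated error conditions form clusters in the data.
       Not the ANN/SVM/CGP machinery evaluated in the thesis. */
    #include <stdio.h>

    #define NFEAT 3   /* e.g. CPU load, message rate, queue depth (all made up) */

    struct centroid { const char *label; double f[NFEAT]; };

    static const struct centroid classes[] = {
        { "normal",        { 0.40, 1000.0,  5.0 } },
        { "network-error", { 0.35,   50.0, 80.0 } },
        { "cpu-overload",  { 0.95,  400.0, 60.0 } },
    };

    /* Return the label of the closest centroid (squared Euclidean distance,
       features left unscaled for brevity). */
    static const char *classify(const double x[NFEAT])
    {
        const char *best = "unknown";
        double best_d = 1e300;
        for (unsigned c = 0; c < sizeof classes / sizeof classes[0]; c++) {
            double d = 0.0;
            for (int i = 0; i < NFEAT; i++) {
                double diff = x[i] - classes[c].f[i];
                d += diff * diff;
            }
            if (d < best_d) { best_d = d; best = classes[c].label; }
        }
        return best;
    }

    int main(void)
    {
        double sample[NFEAT] = { 0.37, 60.0, 75.0 };   /* resembles a network error */
        printf("sample classified as: %s\n", classify(sample));
        return 0;
    }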

    Modern data analytics in the cloud era

    Cloud computing has been the groundbreaking technology of the last decade. The ease of use of the managed environment, in combination with a nearly infinite amount of resources and a pay-per-use price model, enables fast and cost-efficient project realization for a broad range of users. Cloud computing also changes the way software is designed, deployed and used. This thesis focuses on database systems deployed in the cloud environment. We identify three major interaction points of the database engine with the environment that show changed requirements compared to traditional on-premise data warehouse solutions. First, software is deployed on elastic resources. Consequently, systems should support elasticity in order to match workload requirements and be cost-effective. We present an elastic scaling mechanism for distributed database engines, combined with a partition manager that provides load balancing while minimizing partition reassignments in the case of elastic scaling. Furthermore, we introduce a buffer pre-heating strategy that mitigates the cold start after scaling and allows the scaled resources to deliver an immediate performance benefit. Second, cloud-based systems are accessible and available from nearly everywhere. Consequently, data is frequently ingested from numerous endpoints, which differs from the bulk loads or ETL pipelines of a traditional data warehouse solution. Many users do not define database constraints in order to avoid transaction aborts due to conflicts or to speed up data ingestion. To mitigate this issue, we introduce the concept of PatchIndexes, which allow the definition of approximate constraints. PatchIndexes maintain exceptions to constraints, make them usable in query optimization and execution, and offer efficient update support. The concept can be applied to arbitrary constraints, and we provide examples of approximate uniqueness and approximate sorting constraints. Moreover, we show how PatchIndexes can be exploited to define advanced constraints like an approximate multi-key partitioning, which offers robust query performance over workloads with different partition key requirements. Third, data-centric workloads have changed over the last decade. Besides traditional SQL workloads for business intelligence, data science workloads are of significant importance nowadays. In these cases the database system often acts only as a data provider, while the computational effort takes place in dedicated data science or machine learning (ML) environments. As this workflow has several drawbacks, we pursue the goal of pushing advanced analytics towards the database engine and introduce the Grizzly framework as a DataFrame-to-SQL transpiler. Based on this, we identify user-defined functions (UDFs) and ML inference as important tasks that would benefit from a deeper engine integration, and we investigate and evaluate approaches for in-database execution of Python UDFs and in-database ML inference.
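
    A minimal sketch of the exception-list idea behind approximate constraints: most of a column satisfies a constraint (here, uniqueness on a sorted column) and a small PatchIndex-like structure records the violating rows, so query processing can exploit the constraint on the remaining rows and handle the exceptions separately. The data structures below are assumptions for illustration, not the thesis's actual PatchIndex implementation.

    /* Approximate-uniqueness sketch: an exception list records the rows that
       violate the constraint, leaving the rest of the column usable as if the
       constraint held.  Illustrative only, not the real PatchIndex structures. */
    #include <stdbool.h>
    #include <stdio.h>

    #define NROWS          8
    #define MAX_EXCEPTIONS 4

    static const int column[NROWS] = { 10, 20, 30, 30, 40, 50, 60, 60 };

    struct patch_index {
        int exceptions[MAX_EXCEPTIONS];    /* row ids violating the constraint */
        int count;
    };

    /* Build the exception list for "values are unique" on a sorted column. */
    static void build_patch_index(struct patch_index *pi)
    {
        pi->count = 0;
        for (int row = 1; row < NROWS; row++)
            if (column[row] == column[row - 1] && pi->count < MAX_EXCEPTIONS)
                pi->exceptions[pi->count++] = row;
    }

    static bool is_exception(const struct patch_index *pi, int row)
    {
        for (int i = 0; i < pi->count; i++)
            if (pi->exceptions[i] == row)
                return true;
        return false;
    }

    int main(void)
    {
        struct patch_index pi;
        build_patch_index(&pi);
        printf("%d exception row(s); the remaining %d rows can be treated as unique\n",
               pi.count, NROWS - pi.count);
        for (int row = 0; row < NROWS; row++)
            if (is_exception(&pi, row))
                printf("row %d (value %d) goes through the exception path\n",
                       row, column[row]);
        return 0;
    }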

    DEVELOPMENT OF DISTRIBUTED REAL-TIME ENVIRONMENTAL MEASURING SYSTEM

    This master's thesis analyses and highlights the most important characteristics of an optimal real-time environmental measuring system. The technical implementation of the distributed measuring system DEMS (Distributed Environmental Measuring System), with the X-DEG (Distributed Environmental Gateway) high-performance communication platform as its basic building block, is presented. The system is primarily built for continuous environmental monitoring and for providing and delivering high-quality environmental data in real time. The introduction presents the role of the Slovenian Environment Agency (SEA) as the institution responsible for providing and delivering high-quality environmental data in real time, together with the objectives, guidelines and requirements that were followed in designing and developing the optimal measuring system DEMS and the X-DEG platform. The separate segments of the SEA automatic measuring network (meteorological, hydrological and air quality), their basic features and the quantities measured continuously within each monitoring programme are briefly introduced. The following chapter presents the implemented DEMS network with integrated supporting virtual servers for managing the distributed X-DEG measuring systems and for traceable management of software, hardware and documentation (Redmine, SVN - Software Versioning And Revision Control System, Wiki, visualization server). The fourth chapter presents the design of the X-DEG high-performance communication platform and its distributed, modular hardware and software concept. The functionality is based on an embedded Linux operating system on which, from the user's point of view, independent sensor kernels called sensord (sensor daemon) modules are running. Each sensord can be associated with a physical UART or a TCP/IP communication channel, through which it manages the connected sensors and performs data acquisition. Sensord, as the constitutive element of the measuring system, enables the design of a hierarchical tree and node-modular structure. The Job Manager Daemon (JMD) module acts as a gateway for coordination and data dissemination and supervises the operation of the individual sensord modules; it also provides data archiving and periodic data transmission to the remote data centre. The characteristics of the optimal measuring system, which are largely determined by the built-in software services and functionalities implemented at the level of the X-DEG communication platform, are described. To allow remote management of measuring networks and data, the platform has to be designed with appropriate interfaces and supported by appropriate services. The implementation of a standardized user command-line interface, TCLI (Telnet Command Line Interpreter), is described. The XML configuration of the application software (sensord and JMD modules) is presented as the basis for the hierarchical tree and node-modular structure and functional operation of the X-DEG measuring system, followed by the way an environmental measuring system is built at the level of sensor and instrument connectivity and integration into the measuring network. Chapter 5 presents the data flow specification at the level of the sensord modules and the algorithms for interval data processing, with examples for continuous quantities, such as temperature and relative humidity, and for precipitation totals. The conclusion gives the results achieved with the implementation of DEMS and X-DEG as a high-performance measuring system with respect to the set objectives and requirements. The X-DEG measuring systems have been field-tested and are operational at 285 locations of the SEA national hydrological and meteorological measuring network. The system was also tested and installed at some ambient air quality measuring sites. Production of all installed DEMS / X-DEG measuring systems was carried out in the SEA laboratory.
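
    As an illustration of the interval processing described for the sensord modules, the C sketch below averages a continuous quantity (temperature) over a ten-minute interval and sums the precipitation amounts over the same interval; the function names and sample values are hypothetical, not the actual X-DEG code.

    /* Interval processing sketch: continuous quantities are averaged over the
       interval, precipitation amounts are summed.  Values are invented. */
    #include <stdio.h>

    static double interval_average(const double *samples, int n)
    {
        double sum = 0.0;
        for (int i = 0; i < n; i++)
            sum += samples[i];
        return n > 0 ? sum / n : 0.0;
    }

    static double interval_total(const double *samples, int n)
    {
        double sum = 0.0;
        for (int i = 0; i < n; i++)
            sum += samples[i];
        return sum;
    }

    int main(void)
    {
        /* one 10-minute interval of 1-minute samples */
        double temperature_c[10]    = { 12.1, 12.2, 12.4, 12.3, 12.5,
                                        12.6, 12.6, 12.7, 12.8, 12.9 };
        double precipitation_mm[10] = { 0.0, 0.2, 0.2, 0.0, 0.4,
                                        0.0, 0.0, 0.2, 0.0, 0.0 };

        printf("10-min mean temperature: %.2f degC\n",
               interval_average(temperature_c, 10));
        printf("10-min precipitation total: %.1f mm\n",
               interval_total(precipitation_mm, 10));
        return 0;
    }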
