56 research outputs found

    Systemunterstützung für moderne Speichertechnologien (System Support for Modern Memory Technologies)

    Trust and scalability are the two significant factors that impede the dissemination of clouds. The possibility of privileged access to customer data by a cloud provider limits the usage of clouds for processing security-sensitive data. Low-latency cloud services rely on in-memory computation and are thus limited by several characteristics of Dynamic RAM (DRAM), such as capacity, density and energy consumption. Two technological areas address these factors. Mainstream server platforms offer extensions for trusted execution in untrusted environments, such as Intel Software Guard eXtensions (SGX) and AMD Secure Encrypted Virtualisation (SEV). Various technologies of Non-Volatile RAM (NV-RAM) have better capacity and density than DRAM and can thus be considered future DRAM alternatives. However, these technologies and extensions require new programming approaches and system support, since they add features to the system architecture: new system components (Intel SGX) and data persistence (NV-RAM). This thesis is devoted to the programming and architectural aspects of persistent and trusted systems. For trusted systems, an in-depth analysis of the new architectural extensions was performed. A novel framework named EActors and a database engine named STANlite were developed to make effective use of the capabilities of trusted execution. For persistent systems, an in-depth analysis of prospective memory technologies, their features and their possible impact on system architecture was performed. A new persistence model, called the hypervisor-based model of persistence, was developed and evaluated with the NV-Hypervisor. It offers transparent persistence for legacy and proprietary software, and supports virtualisation of persistent memory.
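    The hypervisor-based model of persistence keeps ordinary software unaware of the persistent medium beneath it. As a rough intuition only (not the NV-Hypervisor implementation, and with a hypothetical backing-file name), the following sketch emulates a persistent main-memory region with a memory-mapped file that survives restarts:

```python
# Illustrative sketch: emulating a persistent main-memory region with a
# memory-mapped file. NV-Hypervisor provides persistence transparently at
# the virtualisation layer; here the application maps and flushes itself.
import mmap
import os

PATH = "counter.pmem"        # hypothetical file standing in for NV-RAM
SIZE = mmap.PAGESIZE

# Create and size the backing file on first use.
if not os.path.exists(PATH):
    with open(PATH, "wb") as f:
        f.write(b"\x00" * SIZE)

with open(PATH, "r+b") as f:
    mem = mmap.mmap(f.fileno(), SIZE)        # the "persistent" region
    count = int.from_bytes(mem[:8], "little")
    print(f"restarts observed so far: {count}")
    mem[:8] = (count + 1).to_bytes(8, "little")
    mem.flush()                              # force write-back to the medium
    mem.close()
```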

    Interacção máquina-a-máquina em computação ubíqua (Machine-to-Machine Interaction in Ubiquitous Computing)

    Master's in Computer and Telematics Engineering. Although the area of Machine-to-Machine communications and, consequently, the Internet of Things have seen great improvements in interoperability, there is still no de facto solution for achieving large-scale, even global, interoperability. As a first step, this work provides a theoretical analysis of proposals relevant to the area, focusing on how they meet essential requirements for the Internet of Things such as scalability, heterogeneity and management. It then focuses on ETSI's M2M standard, first giving a high-level description of its vision, approach and architecture, and finally, from a more practical point of view, presenting and testing a functional implementation of an ETSI M2M compliant gateway, which provides an empirical evaluation of the standard.
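    Since ETSI M2M exposes gateway functionality as a RESTful resource tree, exercising a compliant gateway amounts to HTTP requests against that tree. The sketch below is a minimal illustration under assumed names: the sclBase URL and the application and container identifiers are hypothetical, and the payload is simplified compared to the standard's content-instance representation:

```python
# Minimal sketch of a client exercising an ETSI M2M gateway's RESTful
# resource tree. URL and resource names are assumptions for illustration.
import requests

GSCL = "http://gateway.local:8080/gsc"   # hypothetical gateway sclBase
PATH = "/applications/tempSensor/containers/readings/contentInstances"

# Publish a sensor reading as a new content instance (simplified payload).
resp = requests.post(GSCL + PATH, json={"temperature": 21.5})
resp.raise_for_status()

# Retrieve the newest content instance back from the gateway.
latest = requests.get(GSCL + PATH + "/latest")
latest.raise_for_status()
print(latest.json())
```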

    Infrastructure for digital medicine and AI enhanced pathology

    Modern cancer research relies on a vast array of technologies and data sources. Many of them use next-generation sequencing (NGS) techniques, with diverse application domains such as genomics, transcriptomics, epigenomics and metagenomics. These processes create data at the molecular level and enable researchers to analyze genetic mutations, gene expression patterns, genome methylation, protein-DNA interactions and much more. With falling costs, research projects are including more and more NGS experiments, creating massive amounts of data that require secure storage, processing and analysis. The advent and impressive progress of powerful AI models in recent years has created another important source of data from imaging. Pathologists analyze histopathological images to identify cellular abnormalities, tissue structures and patterns indicative of certain cancer subtypes. The pathologist's diagnosis often decides the cancer therapy used, and getting it right is a matter of life and death. The volume and complexity of the data, together with regulatory requirements, make data management a central task of utmost importance for the success of research projects. This thesis focuses on two loosely connected fields of interest, NGS sequencing and AI pathology. We first describe the infrastructure created to support work in both fields, the main result being a high-availability Kubernetes cluster that gives us a common platform for containerized services and pipelines. For NGS sequencing, we created software that applies strong cryptography to all the sequencing data we receive. This is required for human genetic data, but we encrypt all sequencing data by default. Data is only stored encrypted with a symmetric-key algorithm, and the symmetric keys are in turn secured using public-key cryptography. This hybrid cryptosystem has very strong security benefits: access requires both the encrypted data and the proper keys, so even if the servers were compromised, the data would stay secure. Multiple potential risks were considered and mitigated, and we documented and implemented various recovery options, e.g. in case of key loss by users. The key innovation of this software is the use of the Web Crypto API in the user's browser for all cryptography. Users do not need to install software, and the cryptographic algorithms are built into the browser, where they are continuously audited and browser vendors ensure their security. While our design does not require users to actively deal with the cryptography, they must still handle the data properly and secure their keys. Keys are generated locally and can be saved as printed QR codes, which closely map to the mental model of physical keys. Alongside the NGS data, we centrally collect the relevant metadata in a web tool. Its interface resembles a simple spreadsheet, familiar to most users, but unlike a plain Excel spreadsheet, the central infrastructure keeps metadata in sync over the whole data life-cycle. We designed metadata templates based on existing metadata standards, e.g. from the European Genome-Phenome Archive (EGA). The standards are enforced in a user-friendly way through drop-downs and immediate feedback on data validation. Data is stored as a full timeline of events, so even accidentally overwritten data can be recovered.
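    A minimal sketch of the hybrid scheme described above, assuming AES-GCM for the data and RSA-OAEP for key wrapping; the actual tool performs equivalent operations in the browser through the Web Crypto API, and the data here is a placeholder:

```python
# Sketch of the hybrid scheme: data encrypted with a symmetric key
# (AES-GCM), the symmetric key itself secured with public-key
# cryptography (RSA-OAEP). Illustrative only; the described tool runs
# the equivalent algorithms in the browser via the Web Crypto API.
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

private_key = rsa.generate_private_key(public_exponent=65537, key_size=3072)
public_key = private_key.public_key()

data = b"ACGTACGT..."                # stand-in for a sequencing file
file_key = AESGCM.generate_key(bit_length=256)
nonce = os.urandom(12)
ciphertext = AESGCM(file_key).encrypt(nonce, data, None)

# Only the wrapped key is stored alongside the ciphertext.
oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)
wrapped_key = public_key.encrypt(file_key, oaep)

# Decryption requires both the ciphertext and the private key.
recovered_key = private_key.decrypt(wrapped_key, oaep)
assert AESGCM(recovered_key).decrypt(nonce, ciphertext, None) == data
```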
    With both the metadata and the sequencing data accessible through an API, pipelines can read and write the data automatically. For AI pathology, we first describe the shared infrastructure created for our AI projects. This includes data ingestion, converting pathology slides to a common format, viewing the slides, and enabling fast random access for deep learning. We then present our work on the AI diagnosis of lymphoma on a dataset of 628 whole-slide images from 157 patients. We use transfer learning to train deep neural networks on a large number of patches, small pieces of a slide. The trained networks are then used on test data to generate diagnosis maps: locally annotated slide images that show the AI diagnosis for different regions of a slide. These maps can be evaluated by pathologists, and the local diagnoses can be combined into a patient diagnosis. On the initial dataset of 157 patients we achieved strong performance, with 60% of patches classified correctly, and were able to correctly diagnose all patients. Unfortunately, this performance did not transfer to an independent dataset, and more work is required to create a model that generalizes well.
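    A minimal sketch of the transfer-learning setup described above, with an assumed backbone (ResNet-18) and a hypothetical number of diagnosis classes; the study's exact architecture and training recipe are not specified in this abstract:

```python
# Transfer learning on slide patches: a pretrained backbone is frozen
# and a new classification head is trained for lymphoma subtypes.
# Class count, batch size and input size are placeholders.
import torch
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 4                        # hypothetical number of diagnoses

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for p in model.parameters():           # freeze the pretrained backbone
    p.requires_grad = False
model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)  # new head

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# One training step on a batch of patches (224x224 RGB tiles).
patches = torch.randn(8, 3, 224, 224)  # stand-in for real slide patches
labels = torch.randint(0, NUM_CLASSES, (8,))
optimizer.zero_grad()
loss = loss_fn(model(patches), labels)
loss.backward()
optimizer.step()

# At test time, per-patch predictions are stitched into a diagnosis map.
probs = torch.softmax(model(patches), dim=1)   # one row per patch
```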

    An Agile Roadmap for Live, Virtual and Constructive-Integrating Training Architecture (LVC-ITA): A Case Study Using a Component-based Integrated Simulation Engine

    Conducting seamless Live Virtual Constructive (LVC) simulation remains the most challenging issue in Modeling and Simulation (M&S). There is a lack of interoperability, limited reuse and loose integration among the Live, Virtual and/or Constructive assets across multiple Standard Simulation Architectures (SSAs). Various theoretical research efforts have addressed these problems, but their solutions resulted in complex and inflexible integration, long usage times and high costs for LVC simulation. The goal of this research is to provide an Agile Roadmap for the Live Virtual Constructive-Integrating Training Architecture (LVC-ITA) that addresses the above problems and introduces interoperable LVC simulation. This research therefore describes how the newest M&S technologies can be utilized for LVC simulation interoperability and integration, and then examines the optimal procedure for developing an agile roadmap for the LVC-ITA. In addition, it illustrates a case study using the Adaptive distributed parallel Simulation environment for Interoperable and reusable Model (AddSIM), a component-based integrated simulation engine. The agile roadmap for the LVC-ITA, reflecting the lessons learned from the case study, will help guide M&S communities along an efficient path toward increased interaction of simulations across systems.

    Design of a reference architecture for an IoT sensor network


    Generic Data Acquisition and Instrument control System (GDAIS)

    GMV Prize in Space Technology for the best Final Degree Project in Telecommunication Engineering (2010-2011). Remote sensing instrument development usually includes a software interface to control the instrument and acquire data. Although these interfaces are very similar from one instrument to the next, the software is hardly ever reused, since it is not designed with reusability in mind. The goal of this project is to develop a multi-platform software system to control and acquire data from one or more instruments in a generic and adaptable way, so that future instruments can use it directly or with minor modifications. The main feature of this system, named Generic Data Acquisition and Instrument control System (GDAIS), is that it adapts to a wide variety of instruments with a simple text configuration file for each one. Other key points of the design are controlling multiple instruments in parallel and co-registering their acquired data, providing remote access to the data, and monitoring the system status. To satisfy these requirements, a modular architecture was developed: the system is divided into small parts, each responsible for a specific functionality. The main module, named GDAIS-core, communicates independently with each connected instrument and saves the received data in the Hierarchical Data Format (HDF5), a binary format designed especially for remote sensing scientific data and directly compatible with the commonly used network Common Data Form v4 (netCDF-4) format. The other main module, GDAIS-control, controls and monitors GDAIS-core; to make it accessible from anywhere, its user interface is implemented as a web page. Apart from these two main modules, two desktop applications help with the configuration of the system: the first creates an instrument text descriptor, which defines the instrument's interaction, connection and parser, and the second defines a text descriptor for a set of instruments that the system will control. Thanks to its modular design, the system is very flexible: the implementation of a subsystem can be changed significantly without modifying the other parts. It can be used in a wide range of applications, from controlling a single instrument to acquiring data from a network of several complex instruments and saving it all together. Furthermore, it can operate as a file converter, reading a raw capture or text file and parsing it into the more optimized and well-organized HDF5 format.
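    The abstract does not specify GDAIS's exact HDF5 schema, but the kind of append-as-you-acquire layout it describes might look like the following sketch, with hypothetical group and dataset names:

```python
# Sketch of an append-as-you-acquire HDF5 layout: one group per
# instrument, with resizable datasets for timestamps and values.
# Group and dataset names are illustrative, not GDAIS's actual schema.
import time

import h5py

with h5py.File("acquisition.h5", "w") as f:
    grp = f.create_group("instruments/radiometer")
    # Resizable (maxshape=None) datasets let the acquisition append forever.
    t = grp.create_dataset("timestamp", shape=(0,), maxshape=(None,), dtype="f8")
    v = grp.create_dataset("value", shape=(0,), maxshape=(None,), dtype="f4")

    for sample in (12.1, 12.3, 11.9):   # stand-in for parsed instrument packets
        n = t.shape[0]
        t.resize((n + 1,))
        v.resize((n + 1,))
        t[n] = time.time()
        v[n] = sample
```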

    Human-centred computer architecture: redesigning the mobile datastore and sharing interface

    This dissertation develops a material perspective on Information & Communication Technologies and combines it with a Research through Design approach to interrogate current, and develop new, mobile sharing interfaces and datastores. Through this approach I open up a line of inquiry that connects a material perspective of information with everyday sharing and communication practices, as well as with the mobile and cloud architectures that increasingly mediate such practices. With this perspective, I uncover a shifting emphasis in how data is stored on mobile devices and how this data is made available to apps through sharing interfaces that prevent apps from obtaining a proper handle on data to support fundamentally human acts of sharing such as giving. I take these insights to articulate a much wider research agenda that implicates, beyond the sharing interface, the app model and mobile datastore, data exchange protocols, and the Cloud. I formalise the approach I take to bring technically and socially complex, multi-dimensional and changing ideas into correspondence, and to openly document this process. I consider the history of the File abstraction, the fundamental grammars of action it supports (e.g. move, copy, and delete), and the mediating role this abstraction and its graphical representation play in binding together the concerns of system architects, programmers, and users. Finding inspiration in the 30-year history of the file, I look beyond the Desktop to contemporary realms of computing on the mobile and in the Cloud to develop implications for reinvigorated file abstractions, representations, and grammars of action. First and foremost, these need to take a social perspective on files. To develop and hone such a perspective, and to challenge the assumption that mobile phones are telephones (implying interaction at a distance), I give an interwoven account of the theoretical and practical work I undertook to derive and design a grammar of action, showing, tailored to co-present and co-located interactions. By documenting the process of developing prototypes that explore this design space, and returning to the material perspective developed earlier, I explore how the grammars of show and gift are incongruent with the specific ways in which information is passed through the mobile's sharing interface. This insight led me to prototype a mobile datastore, My Stuff, and to design new file abstractions that foreground the social nature of the stuff we store and share on our mobiles. I study how that stuff is handled and shared in the Cloud by developing, documenting, and interrogating a cloud service to facilitate sharing, and I implement grammars of action that support and better align with human communication and sharing acts. I conclude with an outlook on the powerful generative metaphor of casting mobile media files as digital possessions, to support and develop human-centred computer architecture that gives people better awareness and control over the stuff that matters to them.
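    To make the contrast between conventional and social grammars of action concrete, the toy sketch below models a datastore item whose show and give operations differ from copy; the names and semantics are illustrative only, not the My Stuff implementation:

```python
# Toy sketch of a file abstraction with social grammars of action:
# "show" grants co-present viewing without handing over a copy, and
# "give" transfers ownership outright, unlike a conventional copy.
from dataclasses import dataclass, field


@dataclass
class Stuff:
    name: str
    owner: str
    viewers: set[str] = field(default_factory=set)

    def show(self, person: str) -> None:
        """Let someone look without giving them a copy."""
        self.viewers.add(person)

    def give(self, person: str) -> None:
        """Transfer ownership; the giver no longer holds the item."""
        self.owner = person
        self.viewers.discard(person)


photo = Stuff("holiday.jpg", owner="alice")
photo.show("bob")     # co-present viewing
photo.give("bob")     # gifting rather than copying
print(photo.owner)    # -> bob
```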

    The EMU-SDMS


    Microservices-based architecture and mobile application to support crew and vessel inspections

    Master's thesis, Engenharia Informática, 2023, Universidade de Lisboa, Faculdade de Ciências. With the ever-increasing importance of maritime services around the world comes the need to control and monitor ports and vessels, improving productivity, reliability, safety and security in this field. With respect to safety and security, vessel monitoring is one of the most important elements, enabling the respective authorities to verify and validate vessels, their crews, and their missions through vessel inspections. Because these inspection missions can be carried out in various areas of the coastal zone, they are subject to limitations not encountered in normal situations, such as adverse weather conditions or lack of connectivity to the network and therefore to the servers that support these inspections and store the relevant information. Another limitation arising from this lack of connectivity is securely authenticating inspectors and maintaining their access to the information. The growing number of vessels may also cause scalability problems in the backend systems. To help solve these problems, a microservices-based backend architecture and a mobile application were developed to support inspectors by providing, in a secure way, all the information needed to perform inspections, whether or not the inspector is in an area with network access (online or offline). The developed architecture consists of several independent microservices, deployed on a Kubernetes cluster, that support the mobile application used by the inspectors, allowing them to store and access inspection information about vessels, crews, vessel licences and predictions of possible future inspection targets, for a limited period after the beginning of an inspection, thus improving security.
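    The abstract does not specify the offline-authentication mechanism; one plausible sketch is a signed, time-limited token issued while the device is online and verified locally afterwards, shown here with PyJWT and a hypothetical shared secret:

```python
# One plausible way to meet the offline-authentication requirement:
# while connected, the backend issues a signed, time-limited token that
# the mobile app can later verify locally without network access.
# HMAC via PyJWT is used for brevity; the real system's mechanism is
# not specified in the abstract.
import time

import jwt  # PyJWT

SECRET = "provisioned-at-sync-time"   # hypothetical shared secret

# Issued by the backend while the inspector still has connectivity.
token = jwt.encode(
    {"sub": "inspector-42", "exp": int(time.time()) + 8 * 3600},
    SECRET,
    algorithm="HS256",
)

# Verified on the device, offline, before unlocking inspection data;
# jwt.decode also rejects the token once "exp" has passed.
claims = jwt.decode(token, SECRET, algorithms=["HS256"])
print(claims["sub"])
```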