9 research outputs found

    Applying Hypervisor-Based Fault Tolerance Techniques to Safety-Critical Embedded Systems

    This document details the work conducted through the development of this thesis and is structured as follows:
    • Chapter 1, Introduction, briefly presents the motivation, objectives, and contributions of this thesis.
    • Chapter 2, Fundamentals, presents a series of concepts needed to correctly understand the rest of the thesis, such as virtualization, hypervisors, and software-based fault tolerance. In addition, this chapter includes an exhaustive review and comparison of the hypervisors used in scientific studies dealing with safety-critical systems, and a brief review of work that aims to improve fault tolerance in the hypervisor itself, a research area outside the scope of this work but one that complements the mechanism presented and could be established as a line of future work.
    • Chapter 3, Problem Statement and Related Work, explains the main reasons why the concept of Hypervisor-Based Fault Tolerance was born and reviews the main articles and research papers on the subject. This review covers both papers on safety-critical embedded systems (such as the research carried out in this thesis) and papers on cloud servers and cluster computing that, although not directly applicable to embedded systems, may raise useful concepts that make our solution more complete or allow us to establish future lines of work.
    • Chapter 4, Proposed Solution, begins with a brief comparison of the work presented in Chapter 3 to establish the requirements that our solution must meet in order to be as complete and innovative as possible. It then sets out the architecture of the proposed solution and explains in detail its two main elements: the Voter and the Health Monitoring partition.
    • Chapter 5, Prototype, explains in detail the prototyping of the proposed solution, including the choice of the hypervisor, the processing board, and the critical functionality to be made redundant. With respect to the Voter, it includes prototypes for both the software version (the Voter implemented in a virtual machine) and the hardware version (the Voter implemented as IP cores on the FPGA).
    • Chapter 6, Evaluation, covers the evaluation of the prototype developed in Chapter 5. As a preliminary step, and given that no prior evidence exists in this regard, an exercise is carried out to measure the overhead of using the XtratuM hypervisor versus not using it. Subsequently, qualitative tests check that Health Monitoring works as expected, and a fault injection campaign measures the error detection and correction rate of our solution. Finally, the performance of the hardware and software versions of the Voter is compared.
    • Chapter 7, Conclusions and Future Work, collects the conclusions obtained and the contributions made during the research (in the form of journal articles, conference papers, and contributions to projects and proposals in industry). In addition, it establishes some lines of future work that could complete and extend the research carried out during this doctoral thesis.
    Doctoral Programme in Computer Science and Technology, Universidad Carlos III de Madrid. Committee: Chair: Katzalin Olcoz Herrero; Secretary: Félix García Carballeira; Member: Santiago Rodríguez de la Fuent
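    The Voter described above compares the outputs of redundant replicas of the critical function and forwards the majority result. Purely as an illustration of that idea, and not of the thesis's actual Voter interface, here is a minimal 2-out-of-3 majority vote sketch in Python; the byte-string outputs and function names are assumptions.

```python
from collections import Counter
from typing import Optional, Sequence

def majority_vote(outputs: Sequence[bytes]) -> Optional[bytes]:
    """Return the value produced by a strict majority of replicas,
    or None if no majority exists (all replicas disagree)."""
    value, votes = Counter(outputs).most_common(1)[0]
    return value if votes > len(outputs) // 2 else None

# Hypothetical frame produced by three redundant partitions, one of them faulty.
replica_outputs = [b"\x01\x2a", b"\x01\x2a", b"\x01\x2b"]
result = majority_vote(replica_outputs)
if result is None:
    print("voter: no majority, raising a Health Monitoring event")
else:
    print("voter: forwarding", result.hex())
```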

    Embedded-systems-oriented virtualization framework with functionality farming

    Doctoral Thesis in Electronic and Computer Engineering. First, the use of a hypervisor as the separation kernel on integrated architectures has been considered, as it not only provides time and space partitioning but also compatibility with legacy software. Nowadays, however, most hypervisors either rely on paravirtualization or depend on high-end hardware, neither of which fulfills the requirements of safety-critical embedded systems. Paravirtualization does not provide complete legacy compatibility, as it requires legacy software to be modified to fit a hypervisor-specific interface. High-end hardware, on the other hand, even though it provides complete legacy compatibility, leads to large system size, weight, power consumption, cost, etc.
In this thesis, the feasibility of low-end hardware full virtualization to address the limitations of existing hypervisors is investigated. For that, a hypervisor based on low-end hardware full virtualization is described, and an evaluation of its performance and footprint is presented. Second, conventional development methods are unable to keep up with the requirements of current and future safety-critical embedded systems. In this thesis: (a) an existing model-driven engineering approach to address the limitations of conventional development methods is presented, more specifically a model-driven code generation approach; (b) the modifications applied to an existing model compiler in order for it to support new features are described; and (c) an evaluation of whether a model-driven code generation approach leads to lower engineering effort than a conventional approach is presented. Third, most operating systems today follow a monolithic architecture; this, however, leads to poor reliability, weak security, high certification effort, as well as poor predictability and scalability. The solutions proposed in the literature merely work around the source of the problem, i.e., the large size of the kernel in a monolithic architecture, and do not address it directly. In this thesis, functionality farming is proposed to tackle the source of the problem. Functionality farming alone, however, demands a significant engineering effort. To address this, the thesis also presents FF-AUTO, a tool which performs functionality farming semi-automatically. Finally, the thesis demonstrates how functionality farming is able to improve the design and the performance of an existing kernel, as well as how FF-AUTO enables a significant reduction of the required engineering effort.

    Integration of LXD System Containers with OpenStack, CHEF and its Application on a 3-Tier IoT Architecture

    The Internet of Things has moved from a 2-tier server-client architecture to a 3-tier server-gateway-client architecture. The gateway plays a vital role in this 3-tier architecture, with intelligence being built into it. With no proper standardization, and with multiple vendors shipping proprietary apps that share this multi-tenant gateway, sandboxing and isolation of apps at the gateway are required. My thesis explores lightweight LXD system containers and state-of-the-art configuration management tools such as Chef to build an architecture that leverages Infrastructure as Code, creating an app delivery pipeline that deploys apps in jailed environments at an IoT gateway while maintaining minimal overhead. The framework also provides ways to automate tests for deployment validation.
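    The sandboxing described above comes down to giving each vendor app its own LXD system container with resource limits. The following is a rough sketch, not the thesis's actual Chef-driven pipeline, using the standard LXD CLI from Python; the image alias, container name, and application path are assumptions.

```python
import subprocess

def run(cmd):
    """Run an LXD CLI command and fail loudly if it returns non-zero."""
    subprocess.run(cmd, check=True)

def deploy_vendor_app(container, app_binary):
    # Launch an isolated system container for this vendor's app
    # ("ubuntu:22.04" is an example image alias, not prescribed by the thesis).
    run(["lxc", "launch", "ubuntu:22.04", container])
    # Keep the footprint small on the resource-constrained gateway.
    run(["lxc", "config", "set", container, "limits.cpu", "1"])
    run(["lxc", "config", "set", container, "limits.memory", "256MB"])
    # Push the app into the container and start it in its jailed environment.
    run(["lxc", "file", "push", app_binary, f"{container}/opt/app"])
    run(["lxc", "exec", container, "--", "/opt/app"])

if __name__ == "__main__":
    deploy_vendor_app("vendor-a-app", "./sensor_agent")
```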

    Leveraging Kubernetes in Edge-Native Cable Access Convergence

    Public clouds provide infrastructure services and deployment frameworks for modern cloud-native applications. As the cloud-native paradigm has matured, containerization, orchestration, and Kubernetes have become its fundamental building blocks. As the next step for cloud-native, interest in extending it to edge computing is emerging. The primary reasons for this are low-latency use cases and the desire for uniformity across the cloud-edge continuum. Cable access networks, as a specialized type of edge network, are no exception. As the cable industry transitions to distributed architectures and plans the next steps to virtualize its on-premise network functions, there are opportunities to achieve synergy advantages from the convergence of access technologies and services. Distributed cable networks deploy resource-constrained devices such as RPDs and RMDs deep in the edge networks. These devices can be redesigned to support more than one access technology and to provide computing services for other edge tenants with MEC-like architectures. Both of these cases benefit from virtualization. It is here that cable access convergence and the cloud-native transition to edge-native intersect. However, adopting cloud-native at the edge presents a challenge, since cloud-native container runtimes and native Kubernetes are not optimal solutions in diverse edge environments. Therefore, this thesis takes as its goal to describe the current landscape of lightweight cloud-native runtimes and tools targeting the edge. While edge-native as a concept is taking its first steps, tools like KubeEdge, K3s, and Virtual Kubelet can be seen as the most mature reference projects for edge-compatible solution types. Furthermore, as container runtimes are not yet fully edge-ready, WebAssembly seems like a promising alternative runtime for lightweight, portable, and secure Kubernetes-compatible workloads.
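    To make the idea of uniform cloud-edge orchestration concrete: with Kubernetes, K3s, or KubeEdge, placing a lightweight workload on edge nodes usually reduces to node labels and small resource requests. The sketch below uses the official Kubernetes Python client and assumes, purely for illustration, that edge nodes carry a node-role.kubernetes.io/edge=true label; it is not tied to any specific project surveyed in the thesis.

```python
from kubernetes import client, config

def deploy_edge_workload() -> None:
    """Create a small Deployment pinned to edge nodes via a node selector."""
    config.load_kube_config()  # use load_incluster_config() when running in a pod
    container = client.V1Container(
        name="edge-demo",
        image="nginx:alpine",
        resources=client.V1ResourceRequirements(
            requests={"cpu": "50m", "memory": "32Mi"},
            limits={"cpu": "100m", "memory": "64Mi"},
        ),
    )
    pod_spec = client.V1PodSpec(
        # Assumed labeling convention for edge nodes; adjust to the cluster at hand.
        node_selector={"node-role.kubernetes.io/edge": "true"},
        containers=[container],
    )
    deployment = client.V1Deployment(
        api_version="apps/v1",
        kind="Deployment",
        metadata=client.V1ObjectMeta(name="edge-demo"),
        spec=client.V1DeploymentSpec(
            replicas=1,
            selector=client.V1LabelSelector(match_labels={"app": "edge-demo"}),
            template=client.V1PodTemplateSpec(
                metadata=client.V1ObjectMeta(labels={"app": "edge-demo"}),
                spec=pod_spec,
            ),
        ),
    )
    client.AppsV1Api().create_namespaced_deployment(namespace="default", body=deployment)

if __name__ == "__main__":
    deploy_edge_workload()
```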

    Secure and safe virtualization-based framework for embedded systems development

    Doctoral Thesis - Doctoral Programme in Electronic and Computer Engineering (PDEEC). The Internet of Things (IoT) is here. Billions of smart, connected devices are proliferating at a rapid pace in our key infrastructures, generating, processing, and exchanging vast amounts of security-critical and privacy-sensitive data. This strong connectivity of IoT environments demands a holistic, end-to-end security approach, addressing security and privacy risks across different abstraction levels: device, communications, cloud, and lifecycle management. Security at the device level is often misconstrued as the addition of features at a late stage of system development. Several software-based approaches, such as microkernels and virtualization, have been used, but on their own they have proven unable to provide the desired security level. As a step towards the correct operation of these devices, it is imperative to extend them with new security-oriented technologies which guarantee security from the outset. This thesis aims to conceive and design a novel security and safety architecture for virtualized systems by 1) evaluating which technologies are key enablers for scalable and secure virtualization, 2) designing and implementing a fully featured virtualization environment providing hardware isolation, 3) investigating which "hard entities" can extend virtualization to guarantee the security requirements dictated by confidentiality, integrity, and availability, and 4) simplifying system configurability and integration through a design ecosystem supported by a domain-specific language. The developed artefacts demonstrate: 1) why ARM TrustZone is nowadays a reference technology for security, 2) how TrustZone can be adequately exploited for virtualization in different use cases, 3) why the secure boot process, a trusted execution environment, and other hardware trust anchors are essential to establish and guarantee a complete root and chain of trust, and 4) how a domain-specific language enables easy design, integration, and customization of a secure virtualized system assisted by the above-mentioned building blocks.
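    The root and chain of trust mentioned above amounts to each boot stage measuring and verifying the next stage before handing over control. The sketch below illustrates only that principle in Python; real secure boot uses signed images and hardware-backed keys (e.g. inside the TrustZone secure world), and the expected digest and image path here are placeholders, not values from the thesis.

```python
import hashlib

# Placeholder for the expected measurement of the next boot stage (e.g. the
# hypervisor image), provisioned securely at manufacturing time.
EXPECTED_SHA256 = "<expected-digest-goes-here>"

def measure(image_path: str) -> str:
    """Hash the next-stage image, as a root-of-trust stage would before handoff."""
    digest = hashlib.sha256()
    with open(image_path, "rb") as image:
        for chunk in iter(lambda: image.read(4096), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_and_boot(image_path: str) -> None:
    """Extend the chain of trust only if the measurement matches the expectation."""
    if measure(image_path) != EXPECTED_SHA256:
        # Broken chain of trust: never hand control to an unverified stage.
        raise RuntimeError("boot image verification failed")
    print("measurement ok, transferring control to the next stage")
```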

    Building Computing-As-A-Service Mobile Cloud System

    The last five years have witnessed the proliferation of smart mobile devices, the explosion of various mobile applications, and the rapid adoption of cloud computing in business, governmental, and educational IT deployments. There is also a growing trend of combining mobile computing and cloud computing into a new, popular computing paradigm. This thesis envisions a future of mobile computing that is primarily shaped by three trends: First, servers in the cloud equipped with high-speed multi-core processors have become mainstream. Meanwhile, ARM-powered servers are increasingly popular, and virtualization on ARM systems is also attracting wide attention. Second, high-speed internet has become pervasive and highly available. Mobile devices are able to connect to the cloud anytime and anywhere. Third, cloud computing is reshaping the way computing resources are used. The classic pay/scale-as-you-go model allows hardware resources to be optimally allocated and well managed. These three trends lend credence to a new mobile computing model that combines the resource-rich cloud with less powerful mobile devices. In this model, mobile devices run a core virtualization hypervisor with virtualized phone instances, allowing for pervasive access to more powerful, highly available virtual phone clones in the cloud. The centralized cloud, powered by rich computing and memory resources, hosts virtual phone clones and repeatedly synchronizes data changes with the virtual phone instances running on mobile devices. Users can flexibly isolate different computing environments. In this dissertation, we explored the opportunity of leveraging cloud resources for mobile computing for the purposes of energy saving, performance augmentation, and secure computing environment isolation. We proposed a framework that allows mobile users to seamlessly leverage the cloud to augment the computing capability of mobile devices and also makes it simpler for application developers to run their smartphone applications in the cloud without tedious application partitioning. This framework was built with virtualization on both the server side and mobile devices. It has three building blocks: agile virtual machine deployment, efficient virtual resource management, and seamless mobile augmentation. We presented the design, implementation, and evaluation of these three components and demonstrated the feasibility of the proposed mobile cloud model.
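    The repeated synchronization between a device-side virtual phone instance and its cloud clone is, at its core, a matter of shipping deltas rather than full state. The toy key-value sketch below only illustrates that idea; the dissertation's framework synchronizes virtual phone images at the virtualization layer, and the state representation here is an assumption.

```python
import copy

def diff_state(previous, current):
    """Keys whose values changed on the device since the last synchronized snapshot."""
    return {k: v for k, v in current.items() if previous.get(k) != v}

def sync_round(device_state, clone_state, last_synced):
    """One device -> cloud-clone round: push only the delta, then record a snapshot."""
    changes = diff_state(last_synced, device_state)
    clone_state.update(copy.deepcopy(changes))       # apply the delta to the cloud clone
    last_synced.clear()
    last_synced.update(copy.deepcopy(device_state))  # deep copy so later edits are detected
    return changes

# Example: the first round pushes everything, the second only the changed key.
device = {"contacts": ["alice", "bob"], "settings": {"dark_mode": True}}
clone, snapshot = {}, {}
print(sync_round(device, clone, snapshot))   # pushes contacts and settings
device["contacts"].append("carol")
print(sync_round(device, clone, snapshot))   # pushes only the updated contacts
```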

    Intelligent business processes composition based on mas, semantic and cloud integration (IPCASCI)

    Component reuse is one of the techniques that most clearly contributes to the evolution of the software industry by providing efficient mechanisms to create quality software. Reuse increases software reliability, because it builds on previously tested software components, as well as development productivity, and leads to a clear reduction in cost. Web services have become a standard for application development in cloud computing environments and are essential in business process development. These services make software construction relatively fast and efficient, two aspects which can be improved further by defining suitable models of reuse. This research work is intended to define a model for constructing new services through service composition. To this end, the composition is based on tested Web services and the artificial intelligence tools at our disposal. It is believed that a multi-agent architecture based on virtual organizations is a suitable tool to facilitate the construction of cloud computing environments for business processes from other existing environments, with the help of ontological models as well as tools supporting the BPEL (Business Process Execution Language) standard. In the context of this proposal, we must generate a new business process from the services available on the platform, starting from the requirement specifications that the process should meet. These specifications consist of a semi-free description of requirements describing the new service. The virtual organizations based on a multi-agent system manage the tasks requiring intelligent behaviour. This system analyses the input (a textual description of the proposal) in order to deconstruct it into computable functionalities, which are subsequently processed. The Web services (or business processes) stored for reuse have been created from the perspective of SOA architectures and associated with an ontological component, which allows the multi-agent system (based on virtual organizations) to identify the services needed to complete the reuse process. The proposed model then builds the service composition by applying the BPEL standard once the services that will compose the solution business process have been identified. This standard allows us to compose Web services easily and provides the advantage of a direct mapping from Business Process Model and Notation (BPMN) diagrams.
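    BPEL orchestrates existing Web services into a new business process using constructs such as sequences and conditions. The thesis generates such compositions automatically from requirement descriptions; the sketch below merely shows, in Python rather than BPEL, what a trivial two-step sequential composition amounts to. The endpoint URLs and payload fields are invented for the example.

```python
import requests

# Hypothetical endpoints of two already-tested, reusable services.
QUOTE_SERVICE = "https://example.org/services/quote"
ORDER_SERVICE = "https://example.org/services/order"

def composed_process(item_id: str, quantity: int) -> dict:
    """A two-step sequence: obtain a quote, then place an order using that quote."""
    quote = requests.post(QUOTE_SERVICE,
                          json={"item": item_id, "quantity": quantity},
                          timeout=10).json()
    order = requests.post(ORDER_SERVICE,
                          json={"item": item_id, "price": quote["price"]},
                          timeout=10).json()
    return order
```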

    Autonomic Performance-Aware Resource Management in Dynamic IT Service Infrastructures

    Model-based techniques are a powerful approach to engineering autonomic and self-adaptive systems. This thesis presents a model-based approach for proactive and autonomic performance-aware resource management in dynamic IT infrastructures. The core of the approach is an architecture-level modeling language for describing performance- and resource-management-related aspects of such environments. With this approach, it is possible to autonomically find suitable system configurations at the model level.
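    Finding suitable configurations at the model level typically means answering what-if questions against a performance model before touching the running system. As a deliberately simple stand-in for the thesis's architecture-level models, the sketch below uses the textbook M/M/1 mean response time, R = 1/(mu - lambda), to decide how many servers are needed to meet a response-time target; the workload numbers are invented.

```python
def mm1_response_time(arrival_rate: float, service_rate: float) -> float:
    """Mean response time of an M/M/1 queue; requires utilization < 1."""
    if arrival_rate >= service_rate:
        return float("inf")  # the server is overloaded
    return 1.0 / (service_rate - arrival_rate)

def servers_needed(total_arrival_rate: float, service_rate: float,
                   target_response_time: float) -> int:
    """Smallest number of identical servers (load split evenly) meeting the target."""
    n = 1
    while mm1_response_time(total_arrival_rate / n, service_rate) > target_response_time:
        n += 1
    return n

# Example what-if query at the model level: 80 req/s in total, each server handles
# 30 req/s, target mean response time 0.1 s  ->  reconfigure to 4 servers.
print(servers_needed(80.0, 30.0, 0.1))
```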

    A Cognitive Routing framework for Self-Organised Knowledge Defined Networks

    This study investigates the applicability of machine learning methods to routing protocols for achieving rapid convergence in self-organized knowledge-defined networks. The research explores the constituents of the Self-Organized Networking (SON) paradigm for 5G and beyond, aiming to design a routing protocol that complies with the SON requirements. Further, it also exploits a contemporary discipline called Knowledge-Defined Networking (KDN) to extend the routing capability by calculating the "Most Reliable" path rather than the shortest one. The research identifies the key areas and candidate techniques to meet these objectives by surveying the state of the art in the relevant fields, such as QoS-aware routing, hybrid SDN architectures, intelligent routing models, and service migration techniques. The design phase focuses primarily on the mathematical modelling of the routing problem and approaches the solution by optimizing at the structural level. The work contributes the Stochastic Temporal Edge Normalization (STEN) technique, which fuses link and node utilization for cost calculation; MRoute, a hybrid routing algorithm for SDN that leverages STEN to provide constant-time convergence; and Most Reliable Route First (MRRF), which uses a Recurrent Neural Network (RNN) to approximate route reliability as its metric. Additionally, the research outcomes include a cross-platform SDN integration framework (SDN-SIM) and a secure migration technique for containerized services in a Multi-access Edge Computing environment using Distributed Ledger Technology. Future work targets the development of 6G standards and compliance with Industry 5.0, extending the present outcomes in the light of Deep Reinforcement Learning and Quantum Computing.
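    A "most reliable" path can be found with an ordinary shortest-path algorithm once per-link reliabilities (probabilities in (0, 1]) are turned into additive costs via -log, because maximizing a product of probabilities is equivalent to minimizing the sum of their negative logarithms. The sketch below illustrates that reduction generically; it is not MRRF itself, since in the thesis the per-link reliability is predicted by an RNN, whereas here it is simply given.

```python
import heapq
import math

def most_reliable_path(links, source, target):
    """links: dict mapping node -> list of (neighbor, reliability in (0, 1])."""
    # Maximizing the product of reliabilities == minimizing the sum of -log(reliability).
    dist, prev = {source: 0.0}, {}
    heap = [(0.0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == target:
            break
        if d > dist.get(u, math.inf):
            continue  # stale queue entry
        for v, reliability in links.get(u, []):
            nd = d - math.log(reliability)
            if nd < dist.get(v, math.inf):
                dist[v], prev[v] = nd, u
                heapq.heappush(heap, (nd, v))
    # Reconstruct the path and recover the end-to-end reliability.
    path, node = [target], target
    while node != source:
        node = prev[node]
        path.append(node)
    return path[::-1], math.exp(-dist[target])

# Toy topology: the direct link is flaky; the detour via "c" is more reliable.
topology = {"a": [("b", 0.80), ("c", 0.99)], "c": [("b", 0.98)], "b": []}
print(most_reliable_path(topology, "a", "b"))  # (['a', 'c', 'b'], ~0.970)
```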