
    Evaluating Latency in Multiprocessing Embedded Systems for the Smart Grid

    Smart grid endpoints need to use two environments within a processing system (PS): one with a Linux-type operating system (OS) running on the Arm Cortex-A53 cores for management tasks, and the other with standalone execution or a real-time OS running on the Arm Cortex-R5 cores. The Xen hypervisor and the OpenAMP framework make this possible, but they may introduce delay into the system, and some messages in the smart grid require a latency lower than 3 ms. In this paper, Linux thread latencies are characterized with the Cyclictest tool. It is shown that when the Xen hypervisor is used, this scenario is not suitable for the smart grid, as it does not meet the 3 ms timing constraint. Then, standalone execution as the real-time part is evaluated by measuring the delay to handle an interrupt generated in programmable logic (PL). The standalone application was run on the A53 and R5 cores, with the Xen hypervisor and the OpenAMP framework. All of these scenarios met the 3 ms constraint. The main contribution of the present work is the detailed characterization of each real-time execution, in order to facilitate selecting the most suitable one for each application.

    This work has been supported by the Ministerio de Economía y Competitividad of Spain within the project TEC2017-84011-R and FEDER funds, as well as by the Department of Education of the Basque Government within the fund for research groups of the Basque university system IT978-16. It has also been supported by the Basque Government within the project HAZITEK ZE-2020/00022, as well as by the Ministerio de Ciencia e Innovación of Spain through the Centro para el Desarrollo Tecnológico Industrial (CDTI) within the project IDI-20201264; in both cases, the projects have been financed through the Fondo Europeo de Desarrollo Regional 2014-2020 (FEDER funds). It has also been supported by the University of the Basque Country within the scholarship for training of research staff with code PIF20/135.
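    The 3 ms check described above can be reproduced in spirit with a short harness around Cyclictest. The sketch below is a minimal illustration, not the authors' measurement setup: the flags are real Cyclictest options, but the priority, interval, and loop count are assumed values.

    ```python
    # Minimal sketch (not the authors' harness): run Cyclictest and check the
    # worst-case wakeup latency against the 3 ms smart-grid bound.
    # Cyclictest typically needs root privileges.
    import re
    import subprocess

    DEADLINE_US = 3000  # 3 ms constraint, in microseconds

    result = subprocess.run(
        ["cyclictest",
         "-q",            # quiet: print only the final summary per thread
         "-p", "99",      # SCHED_FIFO priority 99
         "-i", "1000",    # 1000 us wakeup interval
         "-l", "100000"], # number of measurement loops
        capture_output=True, text=True, check=True)

    # Summary lines look like: "T: 0 (1234) P:99 I:1000 C:100000 Min:2 ... Max:14"
    worst = max(int(m) for m in re.findall(r"Max:\s*(\d+)", result.stdout))
    verdict = "meets" if worst < DEADLINE_US else "VIOLATES"
    print(f"worst-case latency {worst} us -> {verdict} the 3 ms constraint")
    ```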

    Keeping Authorities "Honest or Bust" with Decentralized Witness Cosigning

    The secret keys of critical network authorities - such as time, name, certificate, and software update services - represent high-value targets for hackers, criminals, and spy agencies wishing to use these keys secretly to compromise other hosts. To protect authorities and their clients proactively from undetected exploits and misuse, we introduce CoSi, a scalable witness cosigning protocol ensuring that every authoritative statement is validated and publicly logged by a diverse group of witnesses before any client will accept it. A statement S collectively signed by W witnesses assures clients that S has been seen, and not immediately found erroneous, by those W observers. Even if S is compromised in a fashion not readily detectable by the witnesses, CoSi still guarantees S's exposure to public scrutiny, forcing secrecy-minded attackers to risk that the compromise will soon be detected by one of the W witnesses. Because clients can verify collective signatures efficiently without communication, CoSi protects clients' privacy, and offers the first transparency mechanism effective against persistent man-in-the-middle attackers who control a victim's Internet access, the authority's secret key, and several witnesses' secret keys. CoSi builds on existing cryptographic multisignature methods, scaling them to support thousands of witnesses via signature aggregation over efficient communication trees. A working prototype demonstrates CoSi in the context of timestamping and logging authorities, enabling groups of over 8,000 distributed witnesses to cosign authoritative statements in under two seconds.
    Comment: 20 pages, 7 figures.
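    The commit-challenge-response flow behind collective signing can be made concrete with a toy Schnorr-style sketch. This illustrates only the aggregation idea, with tiny, insecure parameters chosen for readability; it omits CoSi's communication trees, rogue-key protections, and witness availability handling, and none of the names below come from the CoSi codebase.

    ```python
    # Toy sketch of Schnorr collective signing in the spirit of CoSi:
    # W witnesses jointly produce one aggregate signature that a client
    # verifies with a single check and no communication. INSECURE toy
    # parameters, for illustration only.
    import hashlib
    import secrets

    p, q, g = 1019, 509, 4     # tiny group: g generates a subgroup of prime order q

    def H(*parts):
        h = hashlib.sha256("|".join(map(str, parts)).encode()).digest()
        return int.from_bytes(h, "big") % q

    W = 5                                                    # number of witnesses
    xs = [secrets.randbelow(q - 1) + 1 for _ in range(W)]    # secret keys
    X_agg = 1
    for x in xs:
        X_agg = X_agg * pow(g, x, p) % p                     # aggregate public key

    msg = "authoritative statement S"

    # Commit phase: each witness contributes a random commitment g^v_i.
    vs = [secrets.randbelow(q - 1) + 1 for _ in range(W)]
    V = 1
    for v in vs:
        V = V * pow(g, v, p) % p

    # Challenge and response phases: one shared challenge, summed responses.
    c = H(V, X_agg, msg)
    r = sum(v - c * x for v, x in zip(vs, xs)) % q

    # Client-side verification: g^r * X^c must recreate the commitment V.
    V_check = pow(g, r, p) * pow(X_agg, c, p) % p
    assert c == H(V_check, X_agg, msg), "collective signature rejected"
    print("collective signature verified against", W, "witnesses")
    ```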

    Distributed trustworthy sensor data management architecture

    Abstract. Growth in the Internet of Things (IoT) market has led to larger data volumes generated by a massive number of smart sensors and devices. This data flow must be managed and stored by a data management service. Storing data in the cloud results in high latency and the need to transfer large amounts of data over the Internet. Edge computing operates physically closer to the user than the cloud, offering lower latency and reducing data transmission over the network. Going one step further and storing data locally on the IoT device yields even smaller latency than cloud and edge computing. Utilizing an isolation technique such as virtualization provides an easily deployable environment in which to set up the needed software functionality. Among virtualization techniques, containers stand out because IoT devices have relatively little memory and computing power, and container technology offers good performance with small overhead on lightweight hardware. Containers are used to manage the server-side services and to provide a clean, standardized environment for each test run. In this thesis, two data management platforms, Apache Kafka and the MySQL-based MariaDB, are tested on an IoT platform. The key performance parameters considered for these platforms are latency and data throughput, while system resource usage is also recorded. Varying numbers of users and payload sizes are tested, and the results are presented in graphs. Kafka performed similarly to the SQL-based solution, with only small differences observed.
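    As a concrete illustration of the latency measurement, a minimal produce-latency probe against Kafka might look like the sketch below. It uses the kafka-python client; the broker address, topic name, payload size, and sample count are assumptions, not the thesis's actual benchmark harness.

    ```python
    # Minimal sketch: time synchronous Kafka produce calls to estimate
    # per-message latency. All connection details below are assumptions.
    import time
    from kafka import KafkaProducer  # pip install kafka-python

    producer = KafkaProducer(bootstrap_servers="localhost:9092")
    payload = b"x" * 1024            # 1 KiB test payload
    latencies_ms = []

    for _ in range(1000):
        start = time.perf_counter()
        # .get() blocks until the broker acknowledges the write, so the
        # elapsed time approximates one produce round trip.
        producer.send("latency-test", payload).get(timeout=10)
        latencies_ms.append((time.perf_counter() - start) * 1000)

    latencies_ms.sort()
    print(f"median: {latencies_ms[len(latencies_ms) // 2]:.2f} ms, "
          f"p99: {latencies_ms[int(len(latencies_ms) * 0.99)]:.2f} ms")
    producer.close()
    ```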

    Operating System Support for Redundant Multithreading

    Failing hardware is a fact, and trends in microprocessor design indicate that the fraction of hardware suffering from permanent and transient faults will continue to increase in future chip generations. Researchers have proposed various solutions to this issue, each with different downsides: specialized hardware components make hardware more expensive in production and consume additional energy at runtime; fault-tolerant algorithms and libraries enforce specific programming models on the developer; compiler-based fault tolerance requires the source code of all applications to be available for recompilation. In this thesis I present ASTEROID, an operating system architecture that integrates applications with different reliability needs. ASTEROID is built on top of the L4/Fiasco.OC microkernel and extends the system with Romain, an operating system service that transparently replicates user applications. Romain supports single- and multi-threaded applications without requiring access to the application's source code. Romain replicates applications and their resources completely and thereby does not rely on hardware extensions such as ECC-protected memory. In my thesis I describe how to efficiently implement replication as a form of redundant multithreading in software. I develop mechanisms to manage replica resources and to make multi-threaded programs behave deterministically for replication. I furthermore present an approach to handling applications that use shared-memory channels with other programs. My evaluation shows that Romain provides 100% error detection and more than 99.6% error correction for single-bit flips in memory and general-purpose registers. At the same time, Romain's execution-time overhead is below 14% for single-threaded applications running in triple-modular redundant mode. The last part of my thesis acknowledges that software-implemented fault tolerance methods often rely on the correct functioning of a certain set of hardware and software components, the Reliable Computing Base (RCB). I introduce the concept of the RCB and discuss what constitutes the RCB of the ASTEROID system and of other fault tolerance mechanisms. Thereafter I present three case studies that evaluate approaches to protecting RCB components and thereby aim to achieve a software stack that is fully protected against hardware errors.
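    The detection and correction figures above rest on majority voting among replicas. The sketch below illustrates that voting principle generically, with a deterministically injected fault for demonstration; it is not Romain's OS-level process replication, and all names in it are made up for the example.

    ```python
    # Generic sketch of triple-modular redundant execution: run three replicas
    # and majority-vote their results. Illustrates the voting principle only;
    # Romain replicates whole processes at the OS level, which this does not.
    from collections import Counter

    def run_replicated(fn, data, replicas=3):
        outputs = [fn(data, i) for i in range(replicas)]
        winner, votes = Counter(outputs).most_common(1)[0]
        if votes > replicas // 2:
            if votes < replicas:  # a minority replica diverged: out-voted
                print("corrected divergent replica outputs:", outputs)
            return winner
        raise RuntimeError(f"no majority among replicas: {outputs}")

    def checksum(data, replica_id):
        s = sum(data) % 65521
        if replica_id == 1:   # deterministic fault injection, for the demo only
            s ^= 1            # simulate a single-bit flip in one replica
        return s

    print("voted result:", run_replicated(checksum, tuple(range(100))))
    ```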

    Migration of networks in multi-cloud environment

    Master's thesis, Engenharia Informática (Arquitetura, Sistemas e Redes de Computadores), Universidade de Lisboa, Faculdade de Ciências, 2018.

    The way datacenters and computing resources are managed has been changing, from bare-metal servers and complex deployment processes to on-demand cloud resources and applications. The main technology behind this evolution was virtualization. By abstracting the hardware, virtualization decoupled software from the hardware it runs on. Virtual machine (VM) migration further increased the flexibility of management and maintenance procedures, making tasks like maintenance, load balancing, and fault handling easier. Today, the migration of virtual machines is a fundamental tool in public and private clouds. However, as VMs rarely act alone, when VMs migrate, their virtual networks should migrate too. Solutions to this problem using traditional networks have several limitations: they are integrated with the devices and are hard to manage. For these reasons, the logical centralisation offered by Software-Defined Networking (SDN) architectures has recently been shown to be an enabler for transparent migration of networks. In an SDN, a controller remotely controls the network switches by installing flow rules that implement the policies defined by the network operator. Recent proposals are a good step forward but have problems; namely, they are limited to a single data center or provider. The user's dependency on a single cloud provider is a fundamental limitation. A large number of incidents involving accidental and malicious faults in cloud infrastructures show that relying on a single provider can lead to the creation of Internet-scale single points of failure for cloud-based services. Furthermore, giving clients the power to choose how to use their cloud resources and the flexibility to easily change cloud providers is of great value, enabling clients to lower costs, tolerate cloud-wide outages, and enhance security.

    The objective of this dissertation is therefore to design, implement, and evaluate solutions for network migration in an environment of multiple clouds. The main goal is to schedule the migration of a network in such a way that the migration process has the least possible impact on the SDN controller's ability to manage the network. This is achieved by creating a migration plan that aims to minimize the experienced control-plane latency (i.e., the latency between the controller and the switches). We have developed an optimal solution based on a linear program, as well as several heuristics. The linear program yields the smallest possible disruption of the connection to the controller, but its computational complexity leads to long execution times; the heuristics aim to solve the problem satisfactorily in practical time. Our results show that, across the various topologies tested, some heuristics achieve results close to the optimal solution within considerably shorter execution times.
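    One way to picture the scheduling problem is a toy greedy heuristic: move one element (a switch or the controller) per step and always pick the move that keeps the current worst controller-to-switch latency lowest. The model below is a deliberate simplification with made-up inter-cloud latencies; the dissertation's linear program and heuristics are considerably richer.

    ```python
    # Toy greedy migration planner: switches and a controller migrate from
    # clouds "A"/"B" to cloud "C", one element per step. The step cost is the
    # worst controller-to-switch latency right after the move (an assumed,
    # simplified model; all latency values are illustrative).
    LAT = {"A": {"A": 1, "B": 20, "C": 50},
           "B": {"A": 20, "B": 1, "C": 15},
           "C": {"A": 50, "B": 15, "C": 1}}   # inter-cloud latencies (ms)

    def step_cost(ctrl, switches):
        return max(LAT[ctrl][loc] for loc in switches.values())

    def greedy_plan(switches, ctrl="A", dest="C"):
        switches = dict(switches)
        plan, total = [], 0
        while ctrl != dest or any(loc != dest for loc in switches.values()):
            candidates = []
            if ctrl != dest:                  # option: move the controller now
                candidates.append(("controller", step_cost(dest, switches)))
            for s, loc in switches.items():   # option: move one switch now
                if loc != dest:
                    candidates.append((s, step_cost(ctrl, {**switches, s: dest})))
            move, cost = min(candidates, key=lambda c: c[1])
            if move == "controller":
                ctrl = dest
            else:
                switches[move] = dest
            plan.append((move, cost))
            total += cost
        return plan, total

    plan, total = greedy_plan({"s0": "A", "s1": "A", "s2": "B", "s3": "B"})
    for move, cost in plan:
        print(f"migrate {move}: worst control-plane latency now {cost} ms")
    print("total disruption:", total, "ms")
    ```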

    Towards Deterministic Communications in 6G Networks: State of the Art, Open Challenges and the Way Forward

    Over the last decade, society and industries have been undergoing a rapid digitization that is expected to lead to the evolution of the cyber-physical continuum. End-to-end deterministic communications infrastructure is the essential glue that will bridge the digital and physical worlds of the continuum. We describe the state of the art and open challenges with respect to contemporary deterministic communication and compute technologies: 3GPP 5G, IEEE Time-Sensitive Networking (TSN), IETF DetNet, and OPC UA, as well as edge computing. While these technologies represent significant advancements towards networking Cyber-Physical Systems (CPS), we argue in this paper that they represent only a first generation of systems that are still limited in several dimensions. In contrast, realizing future deterministic communication systems requires, firstly, seamless convergence between these technologies and, secondly, scalability to support heterogeneous, time-varying requirements arising from diverse CPS applications. In addition, future deterministic communication networks will have to provide such characteristics end to end, which for CPS refers to the entire communication and computation loop, from sensors to actuators. In this paper, we discuss the state of the art regarding the main challenges towards these goals: predictability, end-to-end technology integration, end-to-end security, and scalable vertical application interfacing. We then present our vision regarding viable approaches and technological enablers to overcome these four central challenges. Key approaches to leverage in that regard are 6G system evolutions, wireless-friendly integration of 6G into TSN and DetNet, novel end-to-end security approaches, efficient edge-cloud integration, data-driven approaches for stochastic characterization and prediction, and leveraging digital twins towards system awareness.
    Comment: 22 pages, 8 figures.