20 research outputs found

    A Survey of Asynchronous Programming Using Coroutines in the Internet of Things and Embedded Systems

    Many Internet of Things and embedded projects are event-driven, and therefore require asynchronous and concurrent programming. Current proposals for C++20 suggest that coroutines will have native language support. It is timely to survey the current use of coroutines in embedded systems development. This paper investigates existing research which uses or describes coroutines on resource-constrained platforms. The existing research is analysed with regard to: software platform, hardware platform and capacity; use cases and intended benefits; and the application programming interface design used for coroutines. A systematic mapping study was performed to select studies published between 2007 and 2018 which contained original research into the application of coroutines on resource-constrained platforms. An initial set of 566 candidate papers was reduced to only 35 after filters were applied, revealing the following taxonomy. The C & C++ programming languages were used by 22 of the 35 studies. As regards hardware, 16 studies used 8- or 16-bit processors while 13 used 32-bit processors. The four most common use cases were concurrency (17 papers), network communication (15), sensor readings (9) and data flow (7). The leading intended benefits were code style and simplicity (12 papers), scheduling (9) and efficiency (8). A wide variety of techniques have been used to implement coroutines, including native macros, additional tool chain steps, new language features and non-portable assembly language. We conclude that there is widespread demand for coroutines on resource-constrained devices.
Our findings suggest that there is significant demand for a formalised, stable, well-supported implementation of coroutines in C++, designed with consideration of the special needs of resource-constrained devices, and further that such an implementation would bring benefits specific to such devices. Comment: 22 pages, 8 figures, to be published in ACM Transactions on Embedded Computing Systems (TECS).
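Whatever the implementation technique the surveyed papers use (native macros, tool-chain steps, or language-level coroutines), the underlying pattern is the same: tasks that voluntarily yield control to a cooperative scheduler. A minimal sketch of that pattern, using Python generators in place of an embedded C/C++ coroutine library; the task names and the round-robin scheduler are illustrative, not taken from the paper:

```python
def blink(n):
    for i in range(n):
        # toggle an LED here, then yield control back to the scheduler
        yield f"blink {i}"

def sample(n):
    for i in range(n):
        # read a sensor here, then yield
        yield f"sample {i}"

def run(tasks):
    """Round-robin cooperative scheduler: resume each coroutine in turn."""
    log = []
    while tasks:
        task = tasks.pop(0)
        try:
            log.append(next(task))
            tasks.append(task)   # still runnable: requeue at the back
        except StopIteration:
            pass                 # finished: drop it
    return log

print(run([blink(2), sample(2)]))
# tasks interleave without preemption: blink 0, sample 0, blink 1, sample 1
```

Each `yield` marks a point where the task suspends itself, which is exactly the property that lets a single-stack microcontroller interleave concurrent activities without an RTOS.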

    Design of a WSN Platform for Long-Term Environmental Monitoring for IoT Applications

    The Internet of Things (IoT) provides a virtual view, via the Internet Protocol, to a huge variety of real-life objects, ranging from a car, to a teacup, to a building, to trees in a forest. Its appeal is the ubiquitous generalized access to the status and location of any "thing" we may be interested in. Wireless sensor networks (WSNs) are well suited for long-term environmental data acquisition for IoT representation. This paper presents the functional design and implementation of a complete WSN platform that can be used for a range of long-term environmental monitoring IoT applications. The application requirements for low cost, a high number of sensors, fast deployment, long lifetime, low maintenance, and high quality of service are considered in the specification and design of the platform and of all its components. Low-effort platform reuse is also considered, starting from the specifications and at all design levels, for a wide array of related monitoring applications.

    Flexible Composition of Robot Logic with Computer Vision Services

    Vision-based robotics is an ever-growing field within industrial automation. Demands for greater flexibility and higher quality motivate manufacturing companies to adopt these technologies for such tasks as material handling, assembly, and inspection. In addition to their direct use in the manufacturing setting, robots combined with vision systems serve as highly flexible means for realizing prototyping test-beds in the R&D context. Traditionally, the problem areas of robotics and computer vision are attacked separately. An exception is the study of vision-based servo control, the focus of which constitutes the control-theoretic aspects of vision-based robot guidance under the assumption that robot joints can be controlled directly. The missing part is a systematic approach to implementing robotic applications with vision sensing given industrial robots constrained by their programming interfaces. This thesis targets the development process of vision-based robotic systems in an event-driven environment. It focuses on the design and composition of three functional components: (1) the robot control function, (2) the image acquisition function, and (3) the image processing function. The thesis approaches its goal by a combination of laboratory results, a case study of an industrial company (Kongsberg Automotive AS), and the formalization of computational abstractions and architectural solutions. The image processing function is tackled with the application of reactive pipelines. The proposed system development method allows for a smooth transition from early-stage vision algorithm prototyping to the integration phase. The image acquisition function in this thesis is exposed in a service-oriented manner with the help of a flexible set of concurrent computational primitives. To realize control of industrial robots, a distributed architecture is devised, which supports composability of communication-heavy robot logic, as well as flexible coupling of the robot control node with vision services.
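The reactive-pipeline idea for the image processing function, stages that consume a stream of frames and emit a stream of results, can be sketched with composed generator stages. This is an illustrative analogue in Python, not the thesis's actual implementation; the stage names, the toy 1-D "images", and the threshold value are all hypothetical:

```python
def frames(source):
    """Stage 1: acquisition - emit raw frames as they arrive."""
    for raw in source:
        yield raw

def threshold(stream, limit):
    """Stage 2: binarize each frame against a fixed intensity limit."""
    for img in stream:
        yield [1 if px > limit else 0 for px in img]

def count_blobs(stream):
    """Stage 3: a stand-in for feature extraction - count set pixels."""
    for img in stream:
        yield sum(img)

# Frames flow through the pipeline lazily, one at a time.
pipeline = count_blobs(threshold(frames([[10, 200, 30], [250, 240, 5]]), 128))
print(list(pipeline))  # → [1, 2]
```

Because each stage pulls from the previous one on demand, stages can be swapped or reordered during prototyping without touching the rest of the pipeline, which is the "smooth transition from prototyping to integration" property the abstract describes.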

    The use of extended reality and machine learning to improve healthcare and promote greenhealth

    With the Fourth Industrial Revolution, the spread of the Internet of Things, the advances in Artificial Intelligence and Machine Learning, and the migration to Cloud Computing, the term "Intelligent Environments" has increasingly ceased to be an idealization and become reality. Likewise, Extended Reality technologies have increased their presence in the technological world after a "hibernation period", since the popularization of the Metaverse concept and the entry of large computing companies such as Apple and Google into a market where Virtual Reality, Augmented Reality and Mixed Reality had been dominated by companies with less experience in system development (e.g. Meta), less worldwide recognition (e.g. HTC Vive), or less financial support and market trust. This thesis focuses on the potential use of Extended Reality technologies to promote GreenHealth, as well as their use in Smart Hospitals, one of the variants of Smart Environments, incorporating Machine Learning and Computer Vision as tools to support and improve healthcare, from the point of view of both the health professional and the patient, through a literature review and an analysis of the current state of the art. The result is a conceptual model suggesting technologies, selected for their potential, that could be used to achieve this scenario, followed by the development of prototypes of parts of the conceptual model for Extended Reality headsets as concept validation.

    Fault tolerant software technology for distributed computing system

    Issued as Monthly reports [nos. 1-23], Interim technical report, Technical guide books [nos. 1-2], and Final report, Project no. G-36-64

    Actas da 10ª Conferência sobre Redes de Computadores (Proceedings of the 10th Conference on Computer Networks)

    Universidade do Minho; CCTC; Centro Algoritmi; Cisco Systems; IEEE Portugal Section

    Energy Measurements of High Performance Computing Systems: From Instrumentation to Analysis

    Energy efficiency is a major criterion for computing in general and High Performance Computing in particular. When optimizing for energy efficiency, it is essential to measure the underlying metric: energy consumption. To fully leverage energy measurements, their quality needs to be well-understood. To that end, this thesis provides a rigorous evaluation of various energy measurement techniques. I demonstrate how the deliberate selection of instrumentation points, sensors, and analog processing schemes can enhance the temporal and spatial resolution while preserving a well-known accuracy. Further, I evaluate a scalable energy measurement solution for production HPC systems and address its shortcomings. Such high-resolution and large-scale measurements present challenges regarding the management of large volumes of generated metric data. I address these challenges with a scalable infrastructure for collecting, storing, and analyzing metric data. With this infrastructure, I also introduce a novel persistent storage scheme for metric time series data, which allows efficient queries for aggregate timelines. To ensure that it satisfies the demanding requirements for scalable power measurements, I conduct an extensive performance evaluation and describe a production deployment of the infrastructure. Finally, I describe different approaches and practical examples of analyses based on energy measurement data. In particular, I focus on the combination of energy measurements and application performance traces. However, interweaving fine-grained power recordings and application events requires accurately synchronized timestamps on both sides. To overcome this obstacle, I develop a resilient and automated technique for time synchronization, which utilizes cross-correlation of a specifically influenced power measurement signal.
Ultimately, this careful combination of sophisticated energy measurements and application performance traces yields a detailed insight into application and system energy efficiency at full-scale HPC systems and down to millisecond-range regions. Table of contents: 1 Introduction; 2 Background and Related Work; 3 Evaluating and Improving Energy Measurements; 4 A Scalable Infrastructure for Processing Power Measurement Data; 5 Energy Efficiency Analysis; 6 Summary and Outlook.
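The time-synchronization technique described above, correlating a deliberately induced load pattern with the recorded power signal, can be sketched as a brute-force search for the lag that maximizes the correlation score. The signal values and the plain dot-product scoring are illustrative assumptions, not the thesis's actual tooling:

```python
def cross_correlate(sig, ref):
    """Slide ref over sig; return the lag with the highest dot-product score."""
    best_lag, best_score = 0, float("-inf")
    for lag in range(len(sig) - len(ref) + 1):
        score = sum(s * r for s, r in zip(sig[lag:lag + len(ref)], ref))
        if score > best_score:
            best_lag, best_score = lag, score
    return best_lag

ref = [0, 1, 0, 1, 1, 0, 1]                      # known on/off load pattern
sig = [0] * 5 + [x * 0.9 for x in ref] + [0] * 4  # measured power, offset and attenuated
print(cross_correlate(sig, ref))  # → 5
```

The recovered lag (here 5 samples) is the clock offset between the power-measurement channel and the application trace; production implementations would use FFT-based correlation for long recordings, but the principle is the same.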

    A Scalable and Secure System Architecture for Smart Buildings

    Recent years have seen profound changes in building technologies both in Europe and worldwide. With the emergence of the Smart Grid and Smart City concepts, the Smart Building has attracted considerable attention and seen rapid development. The introduction of novel information and communication technologies (ICT) enables optimized resource utilization while improving building performance and occupants' satisfaction over a broad spectrum of operations. However, literature and industry have drawn attention to certain barriers and challenges that inhibit its universal adoption. The Smart Building is a cyber-physical system, which as a whole is more than the sum of its parts. The heterogeneous combination of systems, processes, and practices requires multidisciplinary research. This work proposes and validates a systems engineering approach to the investigation of the identified challenges and the development of a viable architecture for the future Smart Building. Firstly, a data model for the building management system (BMS) enables a semantic abstraction of both the ICT and the building construction. A high-level application programming interface (API) facilitates the creation of generic management algorithms and external applications, independent of each Smart Building instance, promoting intelligence portability and lowering cost. Moreover, the proposed architecture ensures scalability regardless of the occupant activities and the complexity of the optimization algorithms. Secondly, a real-time message-oriented middleware, deployed as a distributed embedded architecture within the building, empowers the interoperability of the ICT devices and networks and their integration into the BMS. The middleware scales to any building construction regardless of the devices' performance and connectivity limitations, while a secure architecture ensures the integrity of data and operations.
    An extensive performance and energy efficiency study validates the proposed design. A "building-in-the-loop" emulation system, based on discrete-event simulation, virtualizes the Smart Building elements (e.g., loads, storage, generation, sensors, actuators, and users). The high integration with the message-oriented middleware keeps the BMS agnostic to the virtual nature of the emulated instances. Its cooperative multitasking and immense parallelism allow the concurrent emulation of hundreds of elements in real time. The virtualization facilitates the development of energy management strategies and financial viability studies on the exact building and occupant activities without a prior investment in the necessary infrastructure. This work concludes with a holistic system evaluation using a case study of a university building as a practical retrofitting estimation. It illustrates the system deployment and highlights how an energy management system currently under development utilizes the BMS and its data analytics for demand-side management applications.
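At the core of any "building-in-the-loop" emulation of this kind is a discrete-event simulation loop: virtual elements post timestamped events, and a scheduler processes them in time order up to a horizon. A minimal sketch of that loop; the element names, power values, and tuple layout are hypothetical, not taken from the thesis:

```python
import heapq

def simulate(events, horizon):
    """Minimal discrete-event loop: process (time, name, power) in time order."""
    queue = list(events)       # copy so the caller's list is untouched
    heapq.heapify(queue)       # min-heap ordered by timestamp
    log = []
    while queue and queue[0][0] <= horizon:
        time, name, power = heapq.heappop(queue)
        log.append((time, name, power))  # a real emulator would act here
    return log

# hypothetical virtual loads reporting power draw (kW) at given timestamps
events = [(3, "hvac", 4.0), (1, "lighting", 0.6), (2, "ev-charger", 7.2)]
print(simulate(events, horizon=10))
# events come back ordered by time: lighting, ev-charger, hvac
```

In a full emulator each processed event would also be published onto the message-oriented middleware and could schedule follow-up events, but the heap-ordered loop above is the structural skeleton.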

    Replication of non-deterministic objects

    This thesis discusses replication of non-deterministic objects in distributed systems to achieve fault tolerance against crash failures. The objects replicated are the virtual nodes of a distributed application. Replication is viewed as an issue that is to be dealt with only during the configuration of a distributed application and that should not affect the development of the application. Hence, replication of virtual nodes should be transparent to the application. Like all measures to achieve fault tolerance, replication introduces redundancy in the system. Not surprisingly, the main difficulty is guaranteeing the consistency of all replicas such that they behave in the same way as if the object was not replicated (replication transparency). This is further complicated if active objects (like virtual nodes) are replicated, and these objects themselves can be clients of still further objects in the distributed application. The problems of replication of active non-deterministic objects are analyzed in the context of distributed Ada 95 applications. The ISO standard for Ada 95 defines a model for distributed execution based on remote procedure calls (RPC). Virtual nodes in Ada 95 use this as their sole communication paradigm, but they may contain tasks to execute activities concurrently, thus making the execution potentially non-deterministic due to implicit timing dependencies. Such non-determinism cannot be avoided by choosing deterministic tasking policies. I present two different approaches to maintain replica consistency despite this non-determinism. In a first approach, I consider the run-time support of Ada 95 as a black box (except for the part handling remote communications). This corresponds to a non-deterministic computation model. I show that replication of non-deterministic virtual nodes requires that remote procedure calls are implemented as nested transactions. 
Unfortunately, effects of failures are not local to the replicas of a virtual node: when a failure occurs, nested remote calls made to other virtual nodes must be undone. Also, using transactional semantics for RPCs necessitates a compromise regarding transparency: the application must identify global state for it cannot be determined reliably in an automatic way. Further study reveals that this approach cannot be implemented in a transparent way at all because the consistency criterion of Ada 95 (linearizability) is much weaker than that of transactions (serializability). An execution of remote procedure calls as transactions may thus lead to incompatibilities with the semantics of the programming language. If remotely called subprograms on a replicated virtual node perform partial operations, i.e., entry calls on global protected objects, deadlocks that cannot be broken can occur in certain cases. Such deadlocks do not occur when the virtual node is not replicated. The transactional semantics of RPCs must therefore be exposed to the application. A second approach is based on a piecewise deterministic computation model, i.e., the execution of a virtual node is seen as a sequence of deterministic state intervals. Whenever a non-deterministic event occurs, a new state interval is started. I study replica organization under this computation model (semi-active replication). In this model, all non-deterministic decisions are made on one distinguished replica (the leader), while all other replicas (the followers) are forced to follow the same sequence of non-deterministic events. I show that it suffices to synchronize the followers with the leader upon each observable event, i.e., when the leader sends a message to some other virtual node. It is not necessary to synchronize upon each and every non-deterministic event — which would incur a prohibitively high overhead. 
Non-deterministic events occurring on the leader between observable events are logged and sent to the followers just before the leader executes an observable event. Consequently, it is guaranteed that the followers will reach the same state as the leader, and thus the effects of failures remain mostly local to the replicas. A prototype implementation called RAPIDS (Replicated Ada Partitions In Distributed Systems) serves as a proof of concept for this second approach, demonstrating its feasibility. RAPIDS is an Ada 95 implementation of a replication manager for semi-active replication for the GNAT development system for Ada 95. It is entirely contained within the run-time support and hence largely transparent to the application.
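The semi-active replication scheme described above can be sketched as follows: the leader logs each non-deterministic decision locally and ships the accumulated log to the followers only when an observable event (an outgoing message) occurs. The class names and event strings are illustrative, not from RAPIDS:

```python
class Follower:
    def __init__(self):
        self.log = []

    def replay(self, events):
        # A real follower would re-execute these decisions deterministically;
        # here we only record them to show the synchronization points.
        self.log.extend(events)

class Leader:
    def __init__(self, followers):
        self.followers = followers
        self.pending = []          # non-deterministic events not yet shipped

    def nondet(self, event):
        """Record a non-deterministic decision (e.g. a task-scheduling choice)."""
        self.pending.append(event)

    def observable(self, message):
        """Before any externally visible action, bring followers up to date."""
        for f in self.followers:
            f.replay(self.pending + [message])
        self.pending = []          # log flushed; followers are now consistent
        return message

f = Follower()
leader = Leader([f])
leader.nondet("task A scheduled before task B")
leader.nondet("timeout fired")
leader.observable("RPC to node X")
print(f.log)
# follower saw both logged decisions, then the observable message
```

Note how synchronization happens only at the observable event, matching the abstract's point that per-event synchronization would be prohibitively expensive.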

    Cooperative Data Backup for Mobile Devices

    Mobile devices such as laptops, PDAs and cell phones are increasingly relied on, but are used in contexts that put them at risk of physical damage, loss or theft. However, few mechanisms are available to reduce the risk of losing the data stored on these devices. In this dissertation, we address this concern by designing a cooperative backup service for mobile devices. The service leverages encounters and spontaneous interactions among participating devices, such that each device stores data on behalf of other devices. We first provide an analytical evaluation of the dependability gains of the proposed service. Distributed storage mechanisms are then explored and evaluated. Security concerns arising from the cooperation among mutually suspicious principals are identified, and core mechanisms are proposed to address them. Finally, we present our prototype implementation of the cooperative backup service.
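The store-on-encounter idea can be sketched as content-addressed chunks handed to whichever peer is met, up to that peer's storage budget. Everything here, the chunk size, the budget policy, and the class names, is a hypothetical illustration of the scheme, not the dissertation's actual protocol:

```python
import hashlib

def make_chunks(data, size):
    """Split a backup into fixed-size chunks, each named by its content hash."""
    chunks = [data[i:i + size] for i in range(0, len(data), size)]
    return {hashlib.sha256(c).hexdigest(): c for c in chunks}

class Device:
    def __init__(self, name):
        self.name = name
        self.store = {}            # chunks held on behalf of other devices

    def encounter(self, chunks, budget):
        """On a spontaneous encounter, accept up to `budget` chunks."""
        for h, c in list(chunks.items())[:budget]:
            self.store[h] = c

owner_data = b"sensor log 2024-01-01 ..."
chunks = make_chunks(owner_data, size=8)   # 25 bytes -> 4 chunks
peer = Device("peer-1")
peer.encounter(chunks, budget=2)
print(len(peer.store))  # → 2
```

Content hashes let the owner later verify that a possibly untrusted peer returned its chunks unmodified, which is one of the core mechanisms needed when cooperating with mutually suspicious principals.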