
    Computation offloading for fast and energy-efficient edge computing

    In recent years, the demand for computing power has increased considerably due to the popularity of applications that involve computationally intensive tasks such as machine learning or computer vision. At the same time, users increasingly run such applications on smartphones or wearables, which have limited computational power. To meet this demand, the research community has proposed computation offloading: resource-constrained devices offload workload to remote resource providers, which perform the computations and return the results via the network. Computation offloading has two major benefits. First, it accelerates the execution of computationally intensive tasks and therefore reduces waiting times. Second, it decreases the energy consumption of the offloading device, which is especially attractive for devices that run on battery. After years in which cloud servers were the primary resource providers, computation offloading in edge computing systems is currently gaining popularity. Edge-based systems leverage end-user devices such as smartphones, laptops, or desktop PCs instead of cloud servers as computational resource providers. Computation offloading in such environments leads to lower latencies, better utilization of end-user devices, and lower costs compared to traditional cloud computing. In this thesis, we present a computation offloading approach for fast and energy-efficient edge computing. We build upon the Tasklet system, a middleware-based computation offloading system that allows devices to offload heterogeneous tasks to heterogeneous providers. We address three challenges of computation offloading at the edge. First, many applications are data-intensive, which necessitates a time-consuming transfer of input data ahead of a remote execution. To overcome this challenge, we introduce DataVinci, an approach that proactively places input data on suitable devices to accelerate task execution. DataVinci additionally offers task placement strategies that exploit data locality. Second, modern applications are often user-facing and interactive and require sub-second execution of computationally intensive tasks to ensure a proper user experience. We design the decentralized scheduling approach DecArt for such applications. Third, deciding whether a local or a remote execution of an upcoming task will consume less energy is non-trivial. This decision is particularly challenging because task complexity and result data size vary across executions, even if the source code is similar. We introduce the energy-aware scheduling approach Voltaire, which uses machine learning and device-specific energy profiles to make precise offloading decisions. We integrate DataVinci, DecArt, and Voltaire into the Tasklet system and evaluate their benefits in extensive experiments.
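    The energy trade-off behind such an offloading decision can be illustrated with a small sketch. The following Python snippet is a hypothetical, simplified model (all names and the linear energy estimates are assumptions, not the Voltaire implementation): it compares an estimate of the energy for a local execution against the energy for uploading the input data, waiting idle for the remote result, and downloading it. In the actual approach, these per-task estimates would come from learned, device-specific energy profiles rather than fixed constants.

from dataclasses import dataclass


@dataclass
class Task:
    complexity_mflop: float   # estimated compute cost of the task
    input_bytes: int          # input data to upload before a remote execution
    result_bytes: int         # result data to download afterwards


@dataclass
class EnergyProfile:
    # Device-specific constants; the numbers used below are purely illustrative.
    joules_per_mflop_local: float
    joules_per_byte_tx: float
    joules_per_byte_rx: float
    joules_idle_per_second: float


def should_offload(task: Task, profile: EnergyProfile, remote_seconds: float) -> bool:
    """Return True if the remote execution is expected to consume less energy."""
    local_energy = task.complexity_mflop * profile.joules_per_mflop_local
    remote_energy = (task.input_bytes * profile.joules_per_byte_tx
                     + task.result_bytes * profile.joules_per_byte_rx
                     + remote_seconds * profile.joules_idle_per_second)
    return remote_energy < local_energy


# A compute-heavy task with small input and result data favors offloading.
profile = EnergyProfile(0.002, 1e-7, 5e-8, 0.05)
task = Task(complexity_mflop=5_000, input_bytes=200_000, result_bytes=10_000)
print(should_offload(task, profile, remote_seconds=1.5))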

    Dependable Embedded Systems

    This Open Access book introduces readers to many new techniques for enhancing and optimizing reliability in embedded systems, which have emerged particularly within the last five years. It presents the most prominent reliability concerns from today's point of view and roughly recapitulates the progress in the community so far. Unlike other books that focus on a single abstraction level, such as the circuit level or the system level alone, this book addresses reliability challenges across different levels, starting from the physical level all the way up to the system level (cross-layer approaches). The book aims to demonstrate how new hardware/software co-design solutions can be proposed to effectively mitigate reliability degradation such as transistor aging, process variation, temperature effects, soft errors, etc. It provides readers with the latest insights into novel, cross-layer methods and models with respect to the dependability of embedded systems; describes cross-layer approaches that can leverage reliability through techniques that are proactively designed with respect to techniques at other layers; and explains run-time adaptation and concepts/means of self-organization in order to achieve error resiliency in complex, future many-core systems.

    INTER-ENG 2020

    These proceedings contain the research papers accepted for presentation at the 14th International Conference Inter-Eng 2020, “Interdisciplinarity in Engineering”, held on 8–9 October 2020 in Târgu Mureș, Romania. The conference is a leading international professional and scientific forum for engineers and scientists to present research results, contributions, and recent developments, as well as current practices in engineering, and it continues a tradition of important scientific events held at the Faculty of Engineering and Information Technology of the George Emil Palade University of Medicine, Pharmacy, Science, and Technology of Târgu Mureș, Romania. The Inter-Eng conference started from the observation that in the 21st century, the era of high technology, we cannot speak of a harmonious society without new approaches in research. The theme of the conference, proposing a new approach related to Industry 4.0, was the development of a new generation of smart factories based on the digitalization of manufacturing and assembly processes, covering advanced manufacturing technology, lean manufacturing, sustainable manufacturing, additive manufacturing, and manufacturing tools and equipment. The conference slogan was “Europe’s future is digital: a broad vision of the Industry 4.0 concept beyond direct manufacturing in the company”.

    Preserving Individuals' Privacy through Personal Data Management Systems

    Riding the wave of smart disclosure initiatives and new privacy-protection regulations, the Personal Cloud paradigm is emerging through a myriad of solutions offered to users to let them gather and manage their whole digital life. On the bright side, this opens the way to novel value-added services when crossing multiple sources of data of a given person or crossing the data of multiple people. Yet this paradigm shift towards user empowerment raises fundamental questions with regard to the appropriateness of the functionalities and of the data management and protection techniques that existing solutions offer to lay users. Our work addresses these questions on three levels. First, we review, compare, and analyze personal cloud alternatives in terms of the functionalities they provide and the threat models they target. From this analysis, we derive a general set of functionality and security requirements that any Personal Data Management System (PDMS) should consider. We then identify the challenges of implementing such a PDMS and propose a preliminary design for an extensive and secure PDMS reference architecture satisfying the considered requirements. Second, we focus on personal computations for a specific hardware PDMS instance (i.e., a secure token with NAND Flash mass storage). In this context, we propose a scalable embedded full-text search engine to index large document collections and manage tag-based access control policies. Third, we address the problem of collective computations in a fully distributed architecture of PDMSs. We discuss the system and security requirements and propose protocols to enable distributed query processing with strong security guarantees against an attacker mastering many colluding corrupted nodes.
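    To make the tag-based access control idea more concrete, here is a minimal, hypothetical Python sketch (the data model and function names are assumptions, not the embedded search engine of the thesis): documents in a small inverted index carry required tags, and a query only returns documents whose required tags are all held by the caller.

from typing import Dict, List, Set

# Hypothetical inverted index: term -> ids of documents containing the term.
index: Dict[str, List[int]] = {
    "invoice": [1, 2, 5],
    "medical": [3, 5],
}

# Each document carries the set of tags required to read it.
doc_tags: Dict[int, Set[str]] = {
    1: {"finance"},
    2: {"finance"},
    3: {"health"},
    5: {"finance", "health"},
}


def search(term: str, granted_tags: Set[str]) -> List[int]:
    """Return matching documents whose required tags are all granted to the caller."""
    return [doc for doc in index.get(term, []) if doc_tags[doc] <= granted_tags]


# A caller holding only the "finance" tag does not see document 5,
# which additionally requires the "health" tag.
print(search("invoice", {"finance"}))  # [1, 2]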

    Uncertainty in Artificial Intelligence: Proceedings of the Thirty-Fourth Conference
