
    Energy efficient assignment and deployment of tasks in structurally variable infrastructures

    The importance of cyber-physical systems is growing rapidly as part of the Internet of Things vision. These devices generate data that could congest the network and cannot be handled by the cloud alone. New paradigms such as Mobile Cloud Computing and Mobile Edge Computing are gaining importance as solutions to this issue. The idea is to offload some tasks to devices situated closer to the user's device, reducing network congestion and improving application performance (e.g., in terms of latency and energy). However, the features of the target devices and the requirements of the processing tasks are highly variable, which makes it difficult to decide which device is the most adequate one to deploy and run each processing task. Once that decision is made, task offloading has typically been performed manually, so a method is needed to automate the task assignment and deployment process. In this thesis we propose to model the structural variability of the deployment infrastructure and of the applications using feature models, on the basis of an SPL engineering process. By combining the SPL methodology with Edge Computing, the deployment of an application is addressed as the derivation of a product. The data of the valid configurations is used by a task assignment framework, which determines the optimal offloading of tasks to the different network devices and the resources of each device that should be assigned to each task/user. Our solution provides the most energy- and latency-efficient deployment while meeting the QoS requirements of the application. Plan Propio de Investigación de la UMA. Universidad de Málaga. Campus de Excelencia Internacional Andalucía Tech.
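
    A minimal, hypothetical sketch of the kind of assignment step such a framework performs: for each task, choose the device (a valid configuration derived from the feature model) that minimizes a weighted energy/latency cost while respecting capacity and the task's latency QoS bound. The device and task attributes, the weights, and the greedy strategy below are illustrative assumptions, not the thesis framework itself.

```python
# Illustrative only: a greedy energy/latency-aware task-to-device assignment.
from dataclasses import dataclass

@dataclass
class Device:
    name: str
    cpu_free: float          # available CPU capacity (normalised units)
    energy_per_unit: float   # assumed Joules consumed per CPU unit
    latency_ms: float        # network + processing latency seen by the user

@dataclass
class Task:
    name: str
    cpu_need: float
    max_latency_ms: float    # QoS requirement

def assign(tasks, devices, w_energy=0.5, w_latency=0.5):
    """Greedily place each task on the feasible device with the lowest weighted cost."""
    plan = {}
    for t in sorted(tasks, key=lambda t: t.cpu_need, reverse=True):
        feasible = [d for d in devices
                    if d.cpu_free >= t.cpu_need and d.latency_ms <= t.max_latency_ms]
        if not feasible:
            plan[t.name] = None                      # no valid deployment found
            continue
        best = min(feasible, key=lambda d: w_energy * d.energy_per_unit * t.cpu_need
                                           + w_latency * d.latency_ms)
        best.cpu_free -= t.cpu_need                  # reserve the device's resources
        plan[t.name] = best.name
    return plan

devices = [Device("edge-node", 4, 0.8, 10), Device("cloud", 100, 0.3, 80)]
tasks = [Task("video-analytics", 3, 30), Task("batch-index", 10, 500)]
print(assign(tasks, devices))  # {'batch-index': 'cloud', 'video-analytics': 'edge-node'}
```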

    Verification of the Tree-Based Hierarchical Read-Copy Update in the Linux Kernel

    Read-Copy Update (RCU) is a scalable, high-performance Linux-kernel synchronization mechanism that runs low-overhead readers concurrently with updaters. Production-quality RCU implementations for multi-core systems are decidedly non-trivial. Given the ubiquity of Linux, a rare "million-year" bug can occur several times per day across the installed base. Stringent validation of RCU's complex behaviors is thus critically important. Exhaustive testing is infeasible due to the exponential number of possible executions, which suggests the use of formal verification. Previous verification efforts on RCU either focus on simple implementations or use modeling languages, the latter requiring error-prone manual translation that must be repeated frequently due to regular changes in the Linux kernel's RCU implementation. In this paper, we first describe the implementation of Tree RCU in the Linux kernel. We then discuss how to construct a model directly from Tree RCU's source code in C, and use the CBMC model checker to verify its safety and liveness properties. To the best of our knowledge, this is the first verification of a significant part of RCU's source code, and it is an important step towards integrating formal verification into the Linux kernel's regression test suite. Comment: This is a long version of a conference paper published in the 2018 Design, Automation and Test in Europe Conference (DATE).
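
    The paper's harness runs CBMC directly on Tree RCU's C code; as a rough intuition for what "checking all executions" of concurrent code means, the toy Python sketch below enumerates every interleaving of a drastically simplified reader/updater pair and asserts a use-after-free safety property. The step names and the single-reader model are assumptions made purely for illustration, not the paper's model.

```python
# Toy illustration only: exhaustive interleaving check of a simplified RCU model.
from itertools import product

READER = ["lock", "deref", "use", "unlock"]            # rcu_read_lock .. rcu_read_unlock
UPDATER = ["publish_new", "wait_readers", "free_old"]  # update, synchronize_rcu, free

def safe(schedule):
    """Run one interleaving; return False if the reader uses freed memory."""
    current, old_freed, readers, seen = "old", False, 0, None
    r_i = u_i = 0
    for who in schedule:
        if who == "R" and r_i < len(READER):
            step = READER[r_i]; r_i += 1
            if step == "lock":
                readers += 1
            elif step == "deref":
                seen = current                      # read the shared pointer
            elif step == "use" and seen == "old" and old_freed:
                return False                        # use-after-free
            elif step == "unlock":
                readers -= 1
        elif who == "U" and u_i < len(UPDATER):
            step = UPDATER[u_i]
            if step == "wait_readers" and readers > 0:
                continue                            # grace period not over yet
            u_i += 1
            if step == "publish_new":
                current = "new"
            elif step == "free_old":
                old_freed = True
    return True

n = len(READER) + len(UPDATER) + 4                  # slack for blocked steps
assert all(safe(s) for s in product("RU", repeat=n))
print(f"all {2 ** n} interleavings of the toy model are safe")
```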

    A Mobile Secure Bluetooth-Enabled Cryptographic Provider

    The use of digital X509v3 public key certificates, together with different standards for secure digital signatures, is commonly adopted to establish authentication proofs between principals, applications and services. One of the robustness characteristics commonly associated with such mechanisms is the use of hardware-sealed cryptographic devices, such as Hardware Security Modules (HSMs), smart cards or hardware-enabled tokens and dongles. These devices provide internal functions for the management and storage of cryptographic keys, allowing cryptographic operations to execute in isolation, with the keys and related sensitive parameters never exposed. The most widely used portable devices are USB tokens (or security dongles) and the internal chips of smart cards (as is also the case of citizen cards, banking cards or ticketing cards). More recently, a new generation of Bluetooth-enabled smart USB dongles has appeared, also suitable for protecting cryptographic operations and digital signatures in secure identity and payment applications. The common characteristic of such devices is that they offer the support required to act as secure cryptographic providers. Portability and ubiquitous use are among their advantages but, as a consequence, they are also frequently forgotten or even lost. USB-enabled devices require readers, which are not always available for generic smartphones or for users working on other computing devices, while wireless devices can be specialized or require a development effort to be used as standard cryptographic providers. An alternative that mitigates these problems is the adoption of conventional Bluetooth-enabled smartphones as ubiquitous cryptographic providers, used remotely by client-side applications running on users' devices, such as desktop or laptop computers. However, using smartphones for the safe storage and management of private keys and sensitive parameters requires a careful analysis of the adversary model assumptions, and designing a practical and secure smartphone-enabled cryptographic solution as a product requires making good use of the facilities provided by the available frameworks, programming environments and mobile operating system services. In this dissertation we address the design, development and experimental evaluation of a secure mobile cryptographic provider, designed as a mobile service running on a smartphone. The proposed solution targets Android-based smartphones and supports on-demand, Bluetooth-enabled cryptographic operations, including standard digital signatures. The provider can be used by applications running on Windows-enabled computing devices that request digital signatures. The solution relies on the secure storage of private keys, associated with X509v3 public certificates, in Android-based secure elements (SEs). With the implemented solution, an application running on a Windows computing device can request standard digital signatures of documents, which are transparently executed, remotely, by the smartphone acting as a standard cryptographic provider.
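
    As a purely illustrative sketch of the request/response flow such a provider implies, the snippet below length-prefixes a document digest, ships it to the phone, and returns the signature. The framing, function names and the plain socket (standing in for the Bluetooth RFCOMM channel) are assumptions rather than the dissertation's actual protocol, and `sign_with_secure_element` is a hypothetical callback for signing with a key that never leaves the Android secure element.

```python
# Illustrative framing only; not the dissertation's wire protocol.
import hashlib
import socket
import struct

def _recv_exact(sock: socket.socket, n: int) -> bytes:
    """Read exactly n bytes from the channel."""
    buf = b""
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("channel closed")
        buf += chunk
    return buf

def request_signature(channel: socket.socket, document: bytes) -> bytes:
    """Client side (e.g. the Windows application): send a digest, receive a signature."""
    digest = hashlib.sha256(document).digest()
    channel.sendall(struct.pack(">I", len(digest)) + digest)
    sig_len, = struct.unpack(">I", _recv_exact(channel, 4))
    return _recv_exact(channel, sig_len)

def serve_one_request(channel: socket.socket, sign_with_secure_element) -> None:
    """Provider side (the smartphone): sign with the sealed private key and reply."""
    digest_len, = struct.unpack(">I", _recv_exact(channel, 4))
    digest = _recv_exact(channel, digest_len)
    signature = sign_with_secure_element(digest)   # hypothetical: key stays in the SE
    channel.sendall(struct.pack(">I", len(signature)) + signature)
```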

    DeepX: A Software Accelerator for Low-Power Deep Learning Inference on Mobile Devices

    © 2016 IEEE. Breakthroughs from the field of deep learning are radically changing how sensor data are interpreted to extract the high-level information needed by mobile apps. It is critical that the gains in inference accuracy that deep models afford become embedded in future generations of mobile apps. In this work, we present the design and implementation of DeepX, a software accelerator for deep learning execution. DeepX significantly lowers the device resources (viz. memory, computation, energy) required by deep learning that currently act as a severe bottleneck to mobile adoption. The foundation of DeepX is a pair of resource control algorithms, designed for the inference stage of deep learning, that: (1) decompose monolithic deep model network architectures into unit-blocks of various types, which are then more efficiently executed by heterogeneous local device processors (e.g., GPUs, CPUs); and (2) perform principled resource scaling that adjusts the architecture of deep models to shape the overhead each unit-block introduces. Experiments show that DeepX allows even large-scale deep learning models to execute efficiently on modern mobile processors and to significantly outperform existing solutions, such as cloud-based offloading.
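
    DeepX's own decomposition and scaling algorithms are not reproduced in the abstract; as a hedged illustration of the general idea, the sketch below uses a truncated SVD to split one fully connected layer into two smaller "unit-blocks", with the retained rank acting as a resource-scaling knob. The layer sizes, the synthetic approximately-low-rank weights, and the choice of SVD are assumptions for illustration only.

```python
# Illustration: split a monolithic layer into two smaller blocks via truncated SVD.
import numpy as np

def decompose_fc(W: np.ndarray, k: int):
    """Factor an (out x in) weight matrix into (out x k) @ (k x in)."""
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    return U[:, :k] * s[:k], Vt[:k, :]

rng = np.random.default_rng(0)
# Trained layers are often close to low rank; emulate that with a synthetic matrix.
P, Q = rng.standard_normal((1024, 64)), rng.standard_normal((64, 4096))
W = P @ Q + 0.1 * rng.standard_normal((1024, 4096))

A, B = decompose_fc(W, k=128)                      # two unit-blocks replacing W
x = rng.standard_normal(4096)
err = np.linalg.norm(W @ x - A @ (B @ x)) / np.linalg.norm(W @ x)
saved = 1 - (A.size + B.size) / W.size
print(f"relative output error {err:.3f}, parameters saved {saved:.0%}")
```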

    Mobile cloud computing for computation offloading: Issues and challenges

    Despite the evolution and enhancements that mobile devices have experienced, they are still considered limited computing devices. Today, users are more demanding and expect to execute computationally intensive applications on their smartphones. Mobile Cloud Computing (MCC) therefore integrates mobile computing and Cloud Computing (CC) in order to extend the capabilities of mobile devices using offloading techniques. Computation offloading tackles limitations of Smart Mobile Devices (SMDs), such as limited battery lifetime, limited processing capabilities, and limited storage capacity, by offloading execution and workload to other, richer systems with better performance and resources. This paper presents the current offloading frameworks and computation offloading techniques, and analyzes them along with their main critical issues. In addition, it explores the important parameters on which these frameworks are built, such as the offloading method and the level of partitioning. Finally, it summarizes the issues in offloading frameworks in the MCC domain that require further research.
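
    As a worked, textbook-style instance of the basic decision these frameworks automate (not a model taken from this survey), the sketch below compares running a task locally against shipping its input to a remote server, in both time and energy; all device parameters are made-up illustrative values.

```python
# Simple illustrative cost model for the offload-or-not decision.
def offload_beneficial(cycles, input_bits,
                       f_local_hz, p_compute_w, p_idle_w, p_tx_w,
                       f_remote_hz, bandwidth_bps):
    # Local execution: compute on the device.
    t_local = cycles / f_local_hz
    e_local = p_compute_w * t_local
    # Offloaded execution: transmit the input, then idle while the server computes.
    t_offload = input_bits / bandwidth_bps + cycles / f_remote_hz
    e_offload = p_tx_w * (input_bits / bandwidth_bps) + p_idle_w * (cycles / f_remote_hz)
    return e_offload < e_local, (t_local, e_local, t_offload, e_offload)

better, (t_l, e_l, t_o, e_o) = offload_beneficial(
    cycles=2e9, input_bits=8e6, f_local_hz=1e9, p_compute_w=0.9,
    p_idle_w=0.3, p_tx_w=1.3, f_remote_hz=8e9, bandwidth_bps=20e6)
print(f"local: {t_l:.2f}s / {e_l:.2f}J   offload: {t_o:.2f}s / {e_o:.2f}J   offload? {better}")
# With these assumed numbers, offloading wins on both latency and energy.
```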

    Sparsification and Separation of Deep Learning Layers for Constrained Resource Inference on Wearables

    Deep learning has revolutionized the way sensor data are analyzed and interpreted. The accuracy gains these approaches offer make them attractive for the next generation of mobile, wearable and embedded sensory applications. However, state-of-the-art deep learning algorithms typically require a significant amount of device and processor resources, even just for the inference stages that are used to discriminate high-level classes from low-level data. The limited availability of memory, computation, and energy on mobile and embedded platforms thus poses a significant challenge to the adoption of these powerful learning techniques. In this paper, we propose SparseSep, a new approach that leverages the sparsification of fully connected layers and the separation of convolutional kernels to reduce the resource requirements of popular deep learning algorithms. As a result, SparseSep allows large-scale DNNs and CNNs to run efficiently on mobile and embedded hardware with only minimal impact on inference accuracy. We experiment with SparseSep across a variety of common processors, such as the Qualcomm Snapdragon 400, ARM Cortex M0 and M3, and Nvidia Tegra K1, and show that it allows inference for various deep models to execute more efficiently; for example, on average requiring 11.3 times less memory and running 13.3 times faster on these representative platforms.
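
    Neither of SparseSep's two techniques is spelled out in the abstract; the sketch below shows one standard realisation of each idea under stated assumptions: approximating a 2-D convolution kernel by a rank-1 pair of 1-D kernels (so each output pixel costs 2k multiply-adds instead of k*k), and magnitude-based sparsification of a fully connected weight matrix. scipy is assumed to be available for the convolution check; none of this is claimed to be SparseSep's actual algorithm.

```python
# Illustrative stand-ins for kernel separation and layer sparsification.
import numpy as np
from scipy.signal import convolve2d

# Separate a 5x5 kernel into a vertical and a horizontal 1-D pass.
k1 = np.array([1, 4, 6, 4, 1.0])
g = np.outer(k1, k1) / 256.0                     # Gaussian-like, exactly rank 1
U, s, Vt = np.linalg.svd(g)
col = U[:, 0] * np.sqrt(s[0])
row = Vt[0] * np.sqrt(s[0])

img = np.random.default_rng(0).random((64, 64))
full = convolve2d(img, g, mode="same")
separated = convolve2d(convolve2d(img, col[:, None], mode="same"), row[None, :], mode="same")
print("max separation error:", np.max(np.abs(full - separated)))   # ~1e-16

# Sparsify a fully connected layer by keeping only the largest-magnitude weights.
W = np.random.default_rng(1).standard_normal((256, 256))
threshold = np.quantile(np.abs(W), 0.90)         # drop the smallest 90% of weights
W_sparse = np.where(np.abs(W) >= threshold, W, 0.0)
print("nonzero fraction:", np.count_nonzero(W_sparse) / W.size)    # ~0.10
```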