268 research outputs found

    A Taxonomy for Management and Optimization of Multiple Resources in Edge Computing

    Edge computing is promoted to meet the increasing performance needs of data-driven services, using computational and storage resources close to the end devices at the edge of the current network. To achieve higher performance in this new paradigm, one has to consider how to combine the efficiency of resource usage at all three layers of the architecture: end devices, edge devices, and the cloud. While cloud capacity is elastically extendable, end devices and edge devices are resource-constrained to various degrees. Hence, efficient resource management is essential to make edge computing a reality. In this work, we first present terminology and architectures to characterize current work within the field of edge computing. Then, we review a wide range of recent articles and categorize relevant aspects along four perspectives: resource type, resource management objective, resource location, and resource use. This taxonomy and the ensuing analysis are used to identify gaps in the existing research. Among several research gaps, we find that research is less prevalent on data, storage, and energy as resources, and less extensive towards the estimation, discovery, and sharing objectives. As for resource types, the most well-studied resources are computation and communication. Our analysis shows that resource management at the edge requires a deeper understanding of how methods applied at different levels and geared towards different resource types interact. Specifically, the impact of mobility and of collaboration schemes requiring incentives is expected to differ in edge architectures compared to classic cloud solutions.
Finally, we find that fewer works are dedicated to the study of non-functional properties or to quantifying the footprint of resource management techniques, including edge-specific means of migrating data and services. Comment: Accepted in the Special Issue Mobile Edge Computing of the Wireless Communications and Mobile Computing journal.

    Attention-based machine perception for intelligent cyber-physical systems

    Cyber-physical systems (CPS) fundamentally change how information systems interact with the physical world. They integrate sensing, computing, and communication capabilities on heterogeneous platforms and infrastructures. Efficient and effective perception of the environment lays the foundation for the proper operation of other CPS components (e.g., planning and control). Recent advances in artificial intelligence (AI) have unprecedentedly changed how cyber systems extract knowledge from collected sensing data and understand their physical surroundings. This novel data-to-knowledge transformation capability pushes a wide spectrum of recognition tasks (e.g., visual object detection, speech recognition, and sensor-based human activity recognition) to a higher level, and opens a new era of intelligent cyber-physical systems. However, state-of-the-art neural perception models are typically computation-intensive and sensitive to data noise, which induces significant challenges when they are deployed on resource-limited embedded platforms. This dissertation works on optimizing both the efficiency and the efficacy of deep-neural-network (DNN)-based machine perception in intelligent cyber-physical systems. We extensively exploit and apply the design philosophy of attention, which originated in cognitive psychology, from multiple perspectives of machine perception. It generally means allocating different degrees of concentration to different perceived stimuli. Specifically, we address the following five research questions: First, can we run computation-intensive neural perception models in real time by only looking at (i.e., scheduling) the important parts of the perceived scenes, with cueing from an external sensor? Second, can we eliminate the dependency on external cueing and make the scheduling framework a self-cueing system?
Third, how should we distribute the workloads among cameras in a distributed (visual) perception system, where multiple cameras can observe the same parts of the environment? Fourth, how can we optimize the achieved perception quality when sensing data from heterogeneous locations and sensor types are collected and utilized? Fifth, how do we handle sensor failures in a distributed sensing system when the deployed neural perception models are sensitive to missing data? We formulate the above problems and introduce corresponding attention-based solutions for each, constructing the fundamental building blocks of an attention-based machine perception system in intelligent CPS with both efficiency and efficacy guarantees.
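The first research question above amounts to budgeted scheduling: given saliency cues, spend limited inference time on the most important stimuli. A minimal greedy sketch (all names and numbers hypothetical, not the dissertation's actual algorithm):

```python
def schedule_regions(regions, budget):
    """Select the most salient regions to process within a fixed budget.

    regions: list of (importance, cost, region_id) tuples, where importance
    is a saliency score from a cheap cueing step and cost is the estimated
    DNN inference time for that region. budget: total time for this frame.
    """
    # Greedily process the most important stimuli first until the
    # budget is exhausted -- a simple stand-in for attention scheduling.
    selected, spent = [], 0.0
    for importance, cost, rid in sorted(regions, reverse=True):
        if spent + cost <= budget:
            selected.append(rid)
            spent += cost
    return selected
```

With a 5 ms budget and regions `[(0.9, 3.0, "car"), (0.5, 4.0, "tree"), (0.8, 2.0, "person")]`, the scheduler picks the car and person regions and drops the tree.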

    A Survey of Fault-Tolerance Techniques for Embedded Systems from the Perspective of Power, Energy, and Thermal Issues

    The relentless technology scaling has provided a significant increase in processor performance but, on the other hand, has led to adverse impacts on system reliability. In particular, technology scaling increases the processor's susceptibility to radiation-induced transient faults. Moreover, technology scaling, with the discontinuation of Dennard scaling, increases power densities, and thereby temperatures, on the chip. High temperature, in turn, accelerates transistor aging mechanisms, which may ultimately lead to permanent faults on the chip. To assure reliable system operation despite these potential reliability concerns, fault-tolerance techniques have emerged. Specifically, fault-tolerance techniques employ some kind of redundancy to satisfy specific reliability requirements. However, the integration of fault-tolerance techniques into real-time embedded systems complicates preserving timing constraints. As a remedy, many task mapping/scheduling policies have been proposed that consider the integration of fault-tolerance techniques and enforce both timing and reliability guarantees for real-time embedded systems. More advanced techniques additionally aim at minimizing power and energy while satisfying timing and reliability constraints. Recently, some scheduling techniques have started to tackle a new challenge: the temperature increase induced by employing fault-tolerance techniques. These emerging techniques aim at satisfying temperature constraints besides timing and reliability constraints. This paper provides an in-depth survey of the emerging research efforts that exploit fault-tolerance techniques while considering timing, power/energy, and temperature from the real-time embedded systems' design perspective. In particular, the task mapping/scheduling policies for fault-tolerant real-time embedded systems are reviewed and classified according to their considered goals and constraints.
Moreover, the employed fault-tolerance techniques, application models, and hardware models are considered as additional dimensions of the presented classification. Lastly, this survey gives deep insights into the main achievements and shortcomings of the existing approaches and highlights the most promising ones.
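The redundancy/reliability trade-off the survey classifies can be illustrated with the standard independent-fault model: with per-replica success probability r, k replicas succeed with probability 1 - (1 - r)^k. A minimal sketch (function name and the independence assumption are ours, not from the survey):

```python
def replicas_needed(task_reliability, target):
    """Minimal number of task replicas so that at least one copy
    succeeds with probability >= target, assuming independent
    transient faults -- a common model in fault-tolerant scheduling."""
    k, achieved = 0, 0.0
    while achieved < target:
        k += 1
        achieved = 1.0 - (1.0 - task_reliability) ** k
    return k
```

For example, a task with 95% per-execution reliability needs three replicas to reach a 0.999 reliability target; the scheduler must then fit all three copies within the task's deadline.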

    Design and performance evaluation of advanced QoS-enabled service-oriented architectures for the Internet of Things

    The Internet of Things (IoT) is rapidly becoming reality; falling prices and advances in consumer electronics are the two main driving factors. For this reason, new application scenarios are designed every day, and with them come new challenges that must be addressed. In the future we will be surrounded by many smart devices, which will sense and act on the physical environment. This multitude of smart devices will be the building block for a plethora of new smart applications providing end users with new, enhanced services. In this context, Quality of Service (QoS) has been recognized as a key non-functional requirement for the success of the IoT. In fact, in the future IoT, different applications, each with different QoS requirements, will need to interact with a finite set of smart devices, each with its own QoS capabilities. This mapping between requested and offered QoS must be managed in order to satisfy end users. The work of this thesis focuses on how to provide QoS for the IoT in a cross-layer manner. In other words, our main goal is to provide QoS support that, on one hand, helps the back-end architecture manage a wide set of IoT applications, each with its own QoS requirements, and, on the other hand, enhances the access network by adding QoS capabilities on top of smart devices. We analyzed existing QoS frameworks and, based on the state of the art, derived a novel model specifically tailored for IoT systems. Then we defined the procedures needed to negotiate the desired QoS level and to enforce the negotiated QoS. In particular, we address the Thing-selection problem, which arises whenever more than one Thing can be exploited to obtain a certain service. Finally, we considered the access network, providing different solutions to handle QoS at different levels of granularity.
We proposed a fully transparent solution that exploits virtualization and proxying techniques to differentiate between different classes of clients and provide a class-based prioritization scheme. We then went further by designing a QoS framework directly on top of a standard IoT protocol, the Constrained Application Protocol (CoAP). We designed the QoS support to enhance the Observe paradigm, which is of paramount importance, especially for industrial applications that might benefit from a certain level of QoS assurance.
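A class-based prioritization scheme like the one described can be sketched as a priority queue keyed by client class, FIFO within a class. This is a minimal illustrative sketch (class and method names are ours), not the thesis's actual proxy implementation:

```python
import heapq
import itertools

class ClassBasedScheduler:
    """Serve queued requests by client class (lower number = higher
    priority), FIFO within a class -- a simple class-based
    prioritization scheme."""

    def __init__(self):
        self._heap = []
        self._seq = itertools.count()  # tie-breaker preserving FIFO order

    def submit(self, client_class, request):
        heapq.heappush(self._heap, (client_class, next(self._seq), request))

    def next_request(self):
        return heapq.heappop(self._heap)[2]
```

A proxy in front of a CoAP server could use such a queue so that, e.g., alarm traffic from class-0 clients is always served before bulk reads from best-effort clients.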

    A survey of cognitive radio handoff schemes, challenges and issues for industrial wireless sensor networks (CR-IWSN)

    Industrial wireless sensor network (IWSN) applications are mostly time-bound, mission-critical, and highly delay-sensitive; IWSNs therefore define strict, stringent, and unique QoS requirements such as timeliness, reliability, and availability. In IWSNs, unlike other sensor networks, the late arrival of packets, or delay or disruption to an ongoing communication, is considered a critical failure. Also, because IWSNs are deployed in the overcrowded industrial, scientific, and medical (ISM) band, it is difficult to meet these unique QoS requirements: stiff competition for bandwidth from other technologies operating in the ISM band results in scarcity of spectrum for reliable communication and/or disruption of ongoing communication. However, cognitive radio (CR) provides more spectral opportunities through opportunistic use of unused licensed spectrum while ensuring minimal interference to licensed users. Similarly, spectrum handoff, a new type of handoff in cognitive radio, has the potential to offer increased bandwidth and reliable, smooth, interference-free communication for IWSNs through opportunistic use of spectrum, minimal switching delays, efficient target channel selection strategies, and effective link recovery and maintenance. As a result, a new paradigm known as the cognitive radio industrial wireless sensor network (CR-IWSN) has become the focus of recent research efforts. In this paper, we highlight and discuss important QoS requirements of IWSNs as well as the efforts of existing IWSN standards to address these challenges. We discuss how cognitive radio and spectrum handoff can be useful in providing real-time, reliable, and smooth communication for IWSNs. The Council for Scientific and Industrial Research (CSIR), South Africa [ICT: Meraka].
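One of the target channel selection strategies mentioned above can be sketched as a simple ranking: prefer backup channels with high idle probability, breaking ties by lower switching delay. A minimal illustrative sketch (names and the two-metric model are our assumptions, not a specific scheme from the survey):

```python
def select_target_channel(channels):
    """Pick a backup channel for spectrum handoff.

    channels: dict mapping channel id to (idle_probability,
    expected_switching_delay_ms). Channels likely to be free of
    licensed-user activity are preferred; ties go to the channel
    that can be switched to fastest.
    """
    # max over (idle_probability, -delay): higher idle probability wins,
    # then lower switching delay.
    return max(channels, key=lambda c: (channels[c][0], -channels[c][1]))
```

Real schemes also weigh channel holding time, sensing accuracy, and handoff history; this sketch only captures the basic trade-off between availability and switching delay.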

    QoS-Aware and Status-Aware Adaptive Resource Allocation Framework in SDN-Based IoT Middleware

    "The Internet of Things (IoT) is a global infrastructure for the information society, enabling advanced services by interconnecting (physical and virtual) objects based on existing and evolving interoperable information and communication technologies." [1] The vision of the Internet of Things is to extend the Internet into our everyday lives in order to improve people's quality of life, so the number of connected devices and innovative applications is growing very quickly, bringing intelligence into different sectors such as cities, transportation, and healthcare. Studies estimate that by 2020 between 26 billion and 50 billion devices will be connected to the Internet. [2, 3] The QoS of an IoT application depends not only on the Internet network and communication infrastructure but also on the operation and performance of the IoT devices. Consequently, new QoS parameters such as data accuracy and device availability become important for IoT applications compared to Internet applications. The large number of IoT devices and applications connected to the Internet, and the spontaneous traffic flows among them, make QoS management complex across the Internet infrastructure. On the other hand, non-IP devices and their limited capabilities in terms of energy and transmission create a dynamic and constrained environment. Moreover, end-to-end interconnection between devices and applications is not possible. Also, applications are interested in the collected data, not in the specific source that produces it.
Software-Defined Networking (SDN) is a new paradigm for computer networks that emerged recently to hide the complexity of the traditional network architecture (e.g., of the Internet) and to break the closed coupling of control and data functions in networking systems. It allows network owners and administrators to control and manage network behavior programmatically by decoupling the control plane from the data plane. SDN has the potential to revolutionize existing computer networks, offering several advantages such as centralized management, network programmability, operating-cost efficiency, and innovation. In this thesis, we study resource management over the IoT infrastructure, including the transport/Internet and sensing networks. We take advantage of SDN technology, as the future of the Internet, to offer a flexible and adaptive QoS support system for IoT services. We present an SDN-based middleware that defines a QoS management framework to handle each application's specific needs across the IoT infrastructure. In addition, we propose a new QoS model that takes into account applications' QoS preferences and the status of network elements to allocate resources effectively over the SDN-based transport/Internet network while maximizing network performance.----------ABSTRACT: The Internet of Things (IoT) is an integration of various kinds of technologies, wherein heterogeneous objects with capabilities of sensing, actuation, communication, computation, networking, and storage are rapidly developed to collect data for users and applications. The IoT vision is to extend the Internet into our everyday lives, so the number of connected devices and innovative applications is growing very fast to bring intelligence into as many domains as possible.
The QoS of an IoT application depends not only on the Internet network and communication infrastructure; it is also impacted by the operation and performance of the IoT sensing infrastructure. Therefore, new QoS parameters such as data accuracy, sampling rate, and device availability become important for IoT applications compared to Internet applications. The huge number of Internet-connected IoT devices and applications, and the spontaneous traffic flow among them, make the management of quality of service complex across the Internet infrastructure. On the other hand, non-IP devices and their limited capabilities in terms of energy and transmission create a dynamic environment and hinder direct interaction between devices and applications. Quality of service is becoming one of the critical non-functional IoT elements that needs research and study. A flexible and scalable QoS management mechanism must be implemented in IoT systems to keep up with the growth rate of Internet-connected IoT devices and applications as well as their heterogeneity and diversity. The solution should address the IoT application requirements and user satisfaction while considering the system's dynamism, limitations, and characteristics. Software-Defined Networking (SDN) is an emerging paradigm in computer networking which separates the control plane and the data plane of the network elements. It makes the network elements programmable via the centralized control plane. This approach enables more agile management and control over the network behavior. In this thesis, we take advantage of SDN technology as the future of the Internet to offer a flexible and adaptive QoS support scheme for IoT services. We present an SDN-based middleware to define a QoS management framework that manages application-specific QoS needs across the IoT infrastructure, including the transport and sensing networks.
Also, we propose a new QoS model that takes into account the applications' QoS preferences and the network elements' status to effectively allocate resources for the applications across the SDN network while maximizing network performance.
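A QoS model that weighs application preferences against network element status can be sketched as a utility that ranks candidate paths. This is a deliberately simple linear utility under assumed metric names (`bandwidth`, `delay`, `load`), not the thesis's actual model:

```python
def rank_paths(paths, weights):
    """Rank candidate network paths for a flow, combining the
    application's QoS preferences with current element status.

    paths: dict of path id -> metrics, e.g. {"bandwidth": Mbps,
    "delay": ms, "load": utilization in [0, 1]}.
    weights: application preference weights per metric; bandwidth is
    a benefit, delay and load are penalties (load scaled to percent).
    """
    def score(m):
        return (weights.get("bandwidth", 0) * m["bandwidth"]
                - weights.get("delay", 0) * m["delay"]
                - weights.get("load", 0) * m["load"] * 100)
    return sorted(paths, key=lambda p: score(paths[p]), reverse=True)
```

An SDN controller could recompute such a ranking whenever switch statistics change, steering a delay-sensitive flow away from a nominally faster but heavily loaded path.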

    Bounding the Data-Delivery Latency of DDS Messages in Real-Time Applications


    Real-Time Reliable Middleware for Industrial Internet-of-Things

    This dissertation contributes to the area of adaptive real-time and fault-tolerant systems research, applied to Industrial Internet-of-Things (IIoT) systems. Heterogeneous timing and reliability requirements arising from IIoT applications have posed challenges for IIoT services to efficiently differentiate and meet such requirements. Specifically, IIoT services must both differentiate processing according to applications' timing requirements (including latency, event freshness, and relative consistency of each other) and enforce the needed levels of assurance for data delivery (even as far as ensuring zero data loss). It is nontrivial for an IIoT service to efficiently differentiate such heterogeneous IIoT timing/reliability requirements to fit each application, especially when facing increasingly large data traffic and when common fault-tolerant mechanisms tend to introduce latency and latency jitters. This dissertation presents a new adaptive real-time fault-tolerant framework for IIoT systems, along with efficient and adaptive strategies to meet each IIoT application's timing/reliability requirements. The contributions of the framework are demonstrated by three new IIoT middleware services: (1) Cyber-Physical Event Processing (CPEP), which both differentiates application-specific latency requirements and enforces cyber-physical timing constraints, by prioritizing, sharing, and shedding event processing. (2) Fault-Tolerant Real-Time Messaging (FRAME), which integrates real-time capabilities with a primary-backup replication system, to fit each application's unique timing and loss-tolerance requirements.
(3) Adaptive Real-Time Reliable Edge Computing (ARREC), which leverages heterogeneous loss-tolerance requirements and their different temporal laxities to perform selective and lazy (yet timely) data replication, thus allowing the system to meet the needed levels of loss-tolerance while reducing both the latency and bandwidth penalties that are typical of fault-tolerant sub-systems.
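The lazy-yet-timely replication idea in (3) can be sketched as a laxity check: defer replication of loss-intolerant events until waiting any longer would risk missing their deadlines. A minimal illustrative sketch (names and the flat-tuple event model are ours, not ARREC's actual design):

```python
def due_for_replication(pending, now, replication_delay):
    """Return ids of events whose replication can no longer be deferred.

    pending: list of (event_id, deadline, loss_tolerant) tuples.
    Loss-tolerant events are never replicated; the rest are replicated
    lazily -- only once the remaining laxity (deadline - now) shrinks
    to the time replication itself takes -- which batches work and
    trims the latency/bandwidth overhead of eager replication.
    """
    return [eid for eid, deadline, loss_tolerant in pending
            if not loss_tolerant and deadline - now <= replication_delay]
```

Running this check periodically lets a backup node receive each critical event just in time, while events that tolerate loss, or still have ample laxity, generate no replication traffic yet.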