9 research outputs found

    Zero-touch realization of Pervasive Artificial Intelligence-as-a-service in 6G networks

    Full text link
    The vision of the upcoming 6G technologies, characterized by ultra-dense networks, low latency, and high data rates, is to support Pervasive AI (PAI) using zero-touch solutions enabling self-X (e.g., self-configuration, self-monitoring, and self-healing) services. However, the research on 6G is still in its infancy, and only the first steps have been taken to conceptualize its design, investigate its implementation, and plan for use cases. Toward this end, academia and industry communities have gradually shifted from theoretical studies of AI distribution to real-world deployment and standardization. Still, designing an end-to-end framework that systematizes AI distribution by allowing easier access to the service through a third-party application, assisted by zero-touch service provisioning, has not been well explored. In this context, we introduce a novel platform architecture to deploy a zero-touch PAI-as-a-Service (PAIaaS) in 6G networks supported by a blockchain-based smart system. This platform aims to standardize pervasive AI at all levels of the architecture and unify the interfaces in order to facilitate service deployment across application and infrastructure domains, relieve users' worries about cost, security, and resource allocation, and, at the same time, respect the stringent performance requirements of 6G. As a proof of concept, we present a Federated Learning-as-a-service use case in which we evaluate the ability of our proposed system to self-optimize and self-adapt to the dynamics of 6G networks, in addition to minimizing the users' perceived costs. Comment: IEEE Communications Magazine
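    As a concrete illustration of the Federated Learning-as-a-service use case, the sketch below shows a single federated averaging round for a toy scalar model; the model, learning rate, and client data are hypothetical assumptions for illustration and are not taken from the paper.

```python
import random

def local_update(w, data, lr=0.05):
    """One gradient step on a client's local (x, y) pairs for a toy scalar
    model y ~ w * x (illustrative only, not the paper's model)."""
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    return w - lr * grad

def federated_round(global_w, client_datasets):
    """One FL round: every client trains locally on its own data, then the
    server averages the returned models weighted by local dataset size."""
    updates = [local_update(global_w, d) for d in client_datasets]
    sizes = [len(d) for d in client_datasets]
    return sum(w * n for w, n in zip(updates, sizes)) / sum(sizes)

# Toy usage: three clients holding noisy samples of y = 2x.
random.seed(0)
clients = [[(x, 2 * x + random.gauss(0, 0.1)) for x in range(1, 6)] for _ in range(3)]
w = 0.0
for _ in range(30):
    w = federated_round(w, clients)
print(round(w, 2))  # settles near 2.0
```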

    Multi-Agent Reinforcement Learning for Network Selection and Resource Allocation in Heterogeneous Multi-RAT Networks

    Get PDF
    The rapid proliferation of mobile devices, along with the boom in wireless applications, continues to grow daily. This motivates the exploitation of the wireless spectrum using multiple Radio Access Technologies (multi-RAT) and the development of innovative network selection techniques to cope with such intensive demand while improving Quality of Service (QoS). Thus, we propose a distributed framework for dynamic network selection at the edge level and resource allocation at the Radio Access Network (RAN) level, while taking into consideration diverse applications' characteristics. In particular, our framework employs a deep Multi-Agent Reinforcement Learning (DMARL) algorithm that aims to maximize the edge nodes' quality of experience while extending the battery lifetime of the nodes and leveraging adaptive compression schemes. Indeed, our framework enables data transfer from the network's edge nodes, with multi-RAT capabilities, to the cloud in a cost- and energy-efficient manner, while maintaining the QoS requirements of the different supported applications. Our results show that our solution outperforms state-of-the-art network selection techniques in terms of energy consumption, latency, and cost.
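    For intuition, a minimal sketch of what one of the distributed agents could look like: an independent tabular Q-learning agent per edge node that picks a RAT and learns from a reward trading off latency, energy, and cost. The state definition, reward weights, and hyperparameters are assumptions for illustration, not the paper's DMARL design.

```python
import random

class RatSelectionAgent:
    """Per-node agent: state = discretized battery level, action = RAT index."""
    def __init__(self, n_rats, n_states, eps=0.1, alpha=0.3, gamma=0.9):
        self.q = [[0.0] * n_rats for _ in range(n_states)]
        self.eps, self.alpha, self.gamma = eps, alpha, gamma

    def act(self, state):
        if random.random() < self.eps:              # occasional exploration
            return random.randrange(len(self.q[state]))
        return max(range(len(self.q[state])), key=lambda a: self.q[state][a])

    def learn(self, s, a, reward, s_next):
        # standard one-step Q-learning update
        target = reward + self.gamma * max(self.q[s_next])
        self.q[s][a] += self.alpha * (target - self.q[s][a])

def reward(latency_ms, energy_mj, cost):
    """Illustrative reward: penalize latency, energy drain, and monetary cost."""
    return -(0.01 * latency_ms + 0.05 * energy_mj + cost)
```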

    DroneRF dataset: A dataset of drones for RF-based detection, classification and identification

    Get PDF
    Modern technology has pushed us into the information age, making it easier to generate and record vast quantities of new data. Datasets can help in analyzing a situation to give a better understanding and, more importantly, to support decision making. Consequently, datasets, and the uses to which they can be put, have become increasingly valuable commodities. This article describes the DroneRF dataset: a radio frequency (RF) based dataset of drones functioning in different modes, including off, on and connected, hovering, flying, and video recording. The dataset contains recordings of RF activities, composed of 227 recorded segments collected from 3 different drones, as well as recordings of background RF activities with no drones. The data has been collected by RF receivers that intercept the drones' communications with the flight control module. The receivers are connected via PCIe cables to two laptops that run a program responsible for fetching, processing and storing the sensed RF data in a database. An example of how this dataset can be interpreted and used can be found in the related research article "RF-based drone detection and identification using deep learning approaches: an initiative towards a large open source drone database" (Al-Sa'd et al., 2019).
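    A hypothetical loading sketch only: the directory layout, file format, and naming convention below are assumptions made for illustration and may not match how DroneRF is actually packaged; consult the dataset's own documentation for the real structure.

```python
import csv
from pathlib import Path

def load_segments(root):
    """Yield ((drone, mode), samples) for every recorded RF segment under
    `root`, assuming one CSV file per segment with the drone type and flight
    mode encoded in the file name (hypothetical convention, e.g.
    "parrot_hovering_001.csv")."""
    for path in sorted(Path(root).glob("*.csv")):
        drone, mode = path.stem.split("_")[:2]
        with path.open() as f:
            samples = [float(row[0]) for row in csv.reader(f) if row]
        yield (drone, mode), samples
```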

    Incentive-based Resource Allocation for Mobile Edge Learning

    No full text
    Mobile Edge Learning (MEL) is a learning paradigm that facilitates the training of Machine Learning (ML) models over resource-constrained edge devices. MEL consists of an orchestrator, which represents the model owner of the learning task, and learners, which own the data locally. Enabling the learning process requires the model owner to motivate learners to train the ML model on their local data and to allocate sufficient resources. The time limitations and the possible existence of multiple orchestrators give rise to a resource allocation problem. As such, we model the incentive mechanism and resource allocation as a multi-round Stackelberg game and propose a Payment-based Time Allocation (PBTA) algorithm to solve the game. In PBTA, orchestrators first determine the pricing; then the learners allocate each orchestrator a timeslot and determine the amount of data and resources for each orchestrator. Finally, we evaluate the performance of PBTA and compare it against a recent state-of-the-art approach. This research is supported by a grant from the Natural Sciences and Engineering Research Council of Canada (NSERC) under grant number ALLRP 549919-20, and partially supported by NPRP grant # NPRP13S-0205-200265.
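    To make the leader-follower structure concrete, here is a simplified single Stackelberg round, not the PBTA algorithm itself: orchestrators (leaders) announce payment rates, and each learner (follower) splits its available training time across the offers that beat its own energy cost. The proportional-split rule and all numbers are illustrative assumptions.

```python
def learner_response(prices, budget_time, unit_energy_cost):
    """Follower step: split the learner's training time across orchestrators
    in proportion to the payment rate, keeping only profitable offers."""
    profitable = {k: p for k, p in prices.items() if p > unit_energy_cost}
    total = sum(profitable.values())
    if total == 0:
        return {k: 0.0 for k in prices}
    return {k: budget_time * profitable.get(k, 0.0) / total for k in prices}

def stackelberg_round(orchestrators, learners):
    """Leaders announce prices, followers respond with time allocations."""
    prices = {name: o["price"] for name, o in orchestrators.items()}
    return {l["name"]: learner_response(prices, l["time"], l["energy_cost"])
            for l in learners}

# Toy usage with two orchestrators and two learners.
orchs = {"O1": {"price": 3.0}, "O2": {"price": 1.5}}
lrnrs = [{"name": "L1", "time": 10.0, "energy_cost": 1.0},
         {"name": "L2", "time": 6.0, "energy_cost": 2.0}]
print(stackelberg_round(orchs, lrnrs))
```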

    On the Modeling of Reliability in Extreme Edge Computing Systems

    No full text
    Extreme edge computing (EEC) refers to the end-most part of edge computing, wherein computational tasks and edge services are deployed only on extreme edge devices (EEDs). EEDs are consumer- or user-owned devices that offer computational resources, which may include wearable devices, personal mobile devices, drones, etc. Such devices are opportunistically or naturally present within the proximity of other user devices. Hence, utilizing EEDs to deploy edge services or perform computational tasks fulfills the promise of edge computing of bringing services and computation as close as possible to the end-users. However, the lack of knowledge of and control over the EEDs' computational resources raises concerns, since successful execution of the computational tasks is no longer guaranteed. To this end, we aim to study the randomness of EEDs from the computational perspective and how reliable an EED is in terms of executing tasks on time. Specifically, we provide a reliability model for EEDs that takes into account the probabilistic nature of the availability of the EEDs' computational resources. Moreover, we study the reliability of executing different types of computational tasks in EEC systems that are distributed across the EEDs. Lastly, we carry out experiments to analyze the reliability behavior of EEDs and EEC systems in different scenarios. This work was made possible by NPRP grant # NPRP13S-0205-200265 from the Qatar National Research Fund (a member of Qatar Foundation). This work was also supported by a grant from the Natural Sciences and Engineering Research Council of Canada (NSERC) under grant number ALLRP 549919-20.
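    A minimal sketch of the kind of quantity such a reliability model captures: the probability that a single EED, whose spare CPU share is random, finishes a task before its deadline. The uniform availability model and all parameters below are assumptions for illustration, not the paper's model.

```python
import random

def task_reliability(workload_cycles, cpu_hz, deadline_s,
                     avail_mean=0.6, trials=10_000, seed=1):
    """Monte Carlo estimate of P(task finishes on time) on one EED, assuming
    the fraction of CPU the device can spare is uniformly distributed around
    `avail_mean` (illustrative assumption)."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        share = min(1.0, max(0.0, rng.uniform(avail_mean - 0.3, avail_mean + 0.3)))
        if share > 0 and workload_cycles / (share * cpu_hz) <= deadline_s:
            hits += 1
    return hits / trials

# A 2-Gcycle task on a 1.5 GHz device with a 2 s deadline.
print(task_reliability(2e9, 1.5e9, 2.0))
```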

    Energy-Efficient Device Assignment and Task Allocation in Multi-Orchestrator Mobile Edge Learning

    No full text
    Mobile Edge Learning (MEL) is a decentralized learning paradigm that enables resource-constrained IoT devices either to learn a shared model without sharing the data, or to distribute the learning task along with the data to other IoT devices and utilize their available resources. In the former case, IoT devices (a.k.a. learners) need to be assigned an orchestrator to facilitate the learning and the aggregation of models from different learners. In the latter case, IoT devices act as orchestrators and look for learners with available resources to which the learning task can be distributed. However, the coexistence of multiple learning problems in an environment with limited resources poses the learner-orchestrator assignment problem. To this end, we aim to develop an energy-efficient learner assignment and task allocation scheme, in which each orchestrator is assigned a group of learners based on their communication channel qualities and computational resources. We formulate and solve a multi-objective optimization problem to minimize the total energy consumption and maximize the learning accuracy. To reduce the solution complexity, we also propose a lightweight heuristic algorithm that can achieve near-optimal performance. The conducted simulations show that our proposed approaches can execute multiple learning tasks efficiently and significantly reduce energy consumption compared to current state-of-the-art methods. This work was made possible by NPRP grant # NPRP12S-0305-190231 from the Qatar National Research Fund (a member of Qatar Foundation). We also acknowledge the support of the Natural Sciences and Engineering Research Council of Canada (NSERC), RGPIN-2020-06919. The findings achieved herein are solely the responsibility of the authors.
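    A sketch of a lightweight greedy heuristic in this spirit (not the paper's algorithm): visit learner-orchestrator pairs from best to worst channel quality and assign greedily while the orchestrator still has capacity; energy terms are omitted for brevity.

```python
def greedy_assignment(channel_gain, capacity):
    """channel_gain[l][o] -> link quality of learner l to orchestrator o;
    capacity[o] -> maximum number of learners orchestrator o can take."""
    pairs = sorted(((g, l, o)
                    for l, row in channel_gain.items()
                    for o, g in row.items()), reverse=True)
    assignment = {}
    load = {o: 0 for row in channel_gain.values() for o in row}
    for gain, learner, orch in pairs:
        if learner not in assignment and load[orch] < capacity[orch]:
            assignment[learner] = orch
            load[orch] += 1
    return assignment

# Toy usage: three learners, two orchestrators with limited capacity.
gains = {"l1": {"o1": 0.9, "o2": 0.4},
         "l2": {"o1": 0.8, "o2": 0.7},
         "l3": {"o1": 0.3, "o2": 0.6}}
print(greedy_assignment(gains, {"o1": 1, "o2": 2}))
```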

    On Designing Smart Agents for Service Provisioning in Blockchain-Powered Systems

    No full text
    Service provisioning systems assign users to service providers according to allocation criteria that strike an optimal trade-off between users' Quality of Experience (QoE) and the operation cost endured by providers. These systems have been leveraging Smart Contracts (SCs) to add trust and transparency to their criteria. However, deploying fixed allocation criteria in SCs does not necessarily lead to the best performance over time, since blockchain participants join and leave flexibly and their load varies with time, making the original allocation sub-optimal. Furthermore, updating the criteria manually at every variation in the blockchain jeopardizes the autonomous and independent execution promised by SCs. Thus, we propose a set of lightweight agents for SCs that are capable of optimizing performance. We also propose online learning SCs, empowered by a Deep Reinforcement Learning (DRL) agent, that leverage the chained data to continuously self-tune their allocation criteria. We show that the proposed learning-assisted method achieves superior performance on the combinatorial multi-stage allocation problem while still being executable in real time. We also compare the proposed approach with standard heuristics as well as planning methods. Results show a significant performance advantage over heuristics and better adaptability to the dynamic nature of blockchain networks.
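    A minimal sketch of an online-learning allocation agent, bandit-style rather than the paper's DRL design: for each request it picks a provider, observes reward = QoE minus operation cost, and updates a running value estimate so the allocation criterion keeps adapting as providers join, leave, or get loaded.

```python
import random

class AllocationAgent:
    """Epsilon-greedy provider selection with incremental value estimates
    (illustrative stand-in for a learning-assisted smart contract)."""
    def __init__(self, providers, eps=0.1):
        self.value = {p: 0.0 for p in providers}
        self.count = {p: 0 for p in providers}
        self.eps = eps

    def select(self):
        if random.random() < self.eps:
            return random.choice(list(self.value))
        return max(self.value, key=self.value.get)

    def update(self, provider, qoe, cost):
        self.count[provider] += 1
        reward = qoe - cost
        # incremental mean keeps the stored state small
        self.value[provider] += (reward - self.value[provider]) / self.count[provider]

# Toy usage for one request.
agent = AllocationAgent(["srv-A", "srv-B"])
chosen = agent.select()
agent.update(chosen, qoe=0.8, cost=0.3)
```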

    RL-Assisted Energy-Aware User-Edge Association for IoT-based Hierarchical Federated Learning

    No full text
    The extremely heavy global reliance on IoT devices is causing enormous amounts of data to be gathered and shared in IoT networks. Such data need to be used efficiently in training and deploying powerful artificial intelligence models for better event detection and decision making. However, IoT devices suffer from many limitations regarding their energy budget, computational power, and storage space. Therefore, efficient solutions have to be studied and proposed to address these limitations. In this paper, we propose an energy-efficient Hierarchical Federated Learning (HFL) framework with optimized client-edge association and resource allocation. We do this by formulating and solving a communication energy minimization problem that takes into consideration the data distribution of the clients and the communication latency between the clients and the edges. We also implement an alternative, less complex solution leveraging Reinforcement Learning (RL) that provides a fast user-edge association and resource allocation response in highly dynamic HFL networks. The two proposed solutions are compared with several state-of-the-art client-edge association techniques, leveraging the MNIST dataset. Moreover, we study the trade-off between minimizing the per-round energy consumption and the Kullback-Leibler Divergence (KLD) of the data distribution, and its effect on the total energy consumption. This work was made possible by NPRP grant # NPRP13S-0205-200265 from the Qatar National Research Fund (a member of Qatar Foundation).
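    The KLD term can be computed per client from its label histogram, as in the sketch below; the association score that combines it with uplink energy is a hypothetical illustration of the trade-off, not the paper's objective.

```python
import math

def kld_from_uniform(label_counts, n_classes):
    """KL divergence between a client's empirical label distribution and the
    uniform distribution; lower means the client's data is closer to IID."""
    total = sum(label_counts.values())
    kld = 0.0
    for c in range(n_classes):
        p = label_counts.get(c, 0) / total
        if p > 0:
            kld += p * math.log(p * n_classes)
    return kld

def association_score(tx_energy_j, kld, beta=0.5):
    """Toy per-round score trading off uplink energy against data skew;
    a client would be attached to the edge server with the lowest score."""
    return beta * tx_energy_j + (1 - beta) * kld

# A mildly skewed client over 3 classes.
print(kld_from_uniform({0: 50, 1: 30, 2: 20}, n_classes=3))
```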

    Patient-Driven Network Selection in multi-RAT Health Systems Using Deep Reinforcement Learning

    No full text
    The recent pandemic, along with the rapid increase in the number of patients that require continuous remote monitoring, imposes several challenges in supporting a high quality of service (QoS) in remote-health applications. Remote-health (r-health) systems typically demand intense data collection from different locations within a strict time constraint to support sustainable health services. At the same time, end-users with mobile devices have limited batteries that need to run for a long time while continuously acquiring and transmitting health-related information. Thus, this paper proposes an adaptive deep reinforcement learning (DRL) framework for network selection over heterogeneous r-health systems to enable continuous remote monitoring of patients with chronic diseases. The proposed framework allows for selecting the optimal network(s) that maximizes the accumulative reward of the patients while considering the patients' state. Moreover, it adopts an adaptive compression scheme at the patient level to further optimize the energy consumption, cost, and latency. Our results show that the proposed framework outperforms state-of-the-art techniques in terms of battery lifetime and reward maximization. This work was made possible by NPRP grant # NPRP12S-0305-190231 from the Qatar National Research Fund (a member of Qatar Foundation). The findings achieved herein are solely the responsibility of the authors.
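    A toy illustration of how compression, RAT choice, and the reward interact; the throughput, energy, and cost figures and the reward weights are invented for the example and are not taken from the paper.

```python
def transmission_metrics(data_mb, rat, compression_ratio):
    """Per-slot metrics for sending a patient's vitals over one RAT after
    applying a compression ratio (all numbers are illustrative)."""
    rats = {  # (throughput Mb/s, energy J per MB, monetary cost per MB)
        "wifi": (20.0, 0.05, 0.0),
        "lte":  (10.0, 0.20, 0.02),
    }
    rate, e_per_mb, c_per_mb = rats[rat]
    size = data_mb / compression_ratio
    latency = size * 8 / rate            # seconds: megabytes -> megabits / Mb/s
    return latency, size * e_per_mb, size * c_per_mb

def reward(latency, energy, cost, w=(1.0, 2.0, 5.0)):
    """Negative weighted sum the DRL agent would maximize (a sketch, not the
    paper's exact reward)."""
    return -(w[0] * latency + w[1] * energy + w[2] * cost)

# Toy usage: 5 MB of vitals over LTE with 2x compression.
print(reward(*transmission_metrics(5.0, "lte", compression_ratio=2.0)))
```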