    A survey of multi-access edge computing in 5G and beyond: fundamentals, technology integration, and state-of-the-art

    Driven by the emergence of new compute-intensive applications and the vision of the Internet of Things (IoT), it is foreseen that the emerging fifth-generation (5G) network will face an unprecedented increase in traffic volume and computation demands. However, end users mostly have limited storage capacity and finite processing capability, so how to run compute-intensive applications on resource-constrained devices has become a pressing concern. Mobile edge computing (MEC), a key technology in the emerging 5G network, can optimize mobile resources by hosting compute-intensive applications, process large volumes of data before sending them to the cloud, provide cloud-computing capabilities within the radio access network (RAN) in close proximity to mobile users, and offer context-aware services with the help of RAN information. MEC thereby enables a wide variety of applications in which real-time response is strictly required, e.g., driverless vehicles, augmented reality, robotics, and immersive media. Indeed, the paradigm shift from 4G to 5G could become a reality with the advent of new technological concepts. The successful realization of MEC in the 5G network is still in its infancy and demands constant effort from both the academic and industrial communities. In this survey, we first provide a holistic overview of MEC technology and its potential use cases and applications. Then, we outline up-to-date research on the integration of MEC with the new technologies that will be deployed in 5G and beyond. We also summarize testbeds, experimental evaluations, and open-source activities for edge computing. We further summarize lessons learned from state-of-the-art research works and discuss challenges and potential future directions for MEC research.
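
    To make the offloading trade-off concrete, the following minimal sketch (not taken from the survey; all names and parameters are illustrative assumptions) compares local execution time against upload-plus-edge execution time, the basic calculation behind MEC computation offloading.

```python
# A minimal sketch of the classic binary offloading decision: run a task
# locally, or offload it to a MEC server, whichever finishes sooner.
# All parameters below are illustrative assumptions, not the survey's model.

from dataclasses import dataclass

@dataclass
class Task:
    cycles: float        # CPU cycles required to process the task
    input_bits: float    # size of the data uploaded when offloading

def local_latency(task: Task, f_local: float) -> float:
    """Execution time on the device's own CPU (cycles / frequency)."""
    return task.cycles / f_local

def offload_latency(task: Task, uplink_bps: float, f_edge: float) -> float:
    """Upload time plus execution time on the edge server.
    Result download is omitted, since results are typically small."""
    return task.input_bits / uplink_bps + task.cycles / f_edge

def should_offload(task: Task, f_local: float, uplink_bps: float, f_edge: float) -> bool:
    return offload_latency(task, uplink_bps, f_edge) < local_latency(task, f_local)

# Example: a 1-Gcycle task with a 2-Mbit input, a 1 GHz device CPU,
# a 20 Mbit/s uplink, and a 10 GHz edge CPU.
task = Task(cycles=1e9, input_bits=2e6)
print(should_offload(task, f_local=1e9, uplink_bps=20e6, f_edge=10e9))
# True: 0.2 s at the edge (0.1 s upload + 0.1 s compute) vs. 1.0 s locally
```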

    Resource Allocation Framework in Fog Computing for the Internet of Things Environments

    Fog computing plays a pivotal role in the Internet of Things (IoT) ecosystem because of its ability to support delay-sensitive tasks, bringing resources from cloud servers closer to the “ground” and supporting resource-constrained IoT devices. Although fog computing offers benefits such as quick response to requests, geo-distributed data processing, and data processing in the proximity of the IoT devices, the exponential increase in IoT devices and the large volumes of data they generate have led to a new set of challenges. One such problem is the allocation of resources to IoT tasks to match their computational needs and quality of service (QoS) requirements while meeting both task deadlines and user expectations. Most solutions proposed in existing works suggest task offloading mechanisms in which IoT devices offload their tasks randomly to the fog layer or the cloud layer. This helps minimize the communication delay; however, most tasks end up missing their deadlines because many delays are experienced during offloading. This study introduces a Resource Allocation Scheduler (RAS) at the IoT-Fog gateway, whose goal is to decide where and when a task is to be offloaded, either to the fog layer or the cloud layer, based on its priority, computational needs, and QoS requirements. This aim places the work within the communication networks domain, in the transport layer of the Open Systems Interconnection (OSI) model. As such, this study follows the four phases of the top-down approach because of its reusability characteristics. To validate and test the efficiency and effectiveness of the RAS, the fog framework was implemented and evaluated in a simulated smart home setup. The essential metrics used to check whether round-trip time was minimized were queuing time, offloading time, and throughput for QoS. The results showed that the RAS helps to reduce round-trip time, increases throughput, and improves QoS. Furthermore, the approach addressed the starvation problem, a phenomenon that tends to affect low-priority tasks. Most importantly, the results provide evidence that if resource allocation and assignment are done appropriately, round-trip time can be reduced and QoS improved in fog computing. The significant contribution of this research is the novel framework, which minimizes round-trip time, addresses the starvation problem, and improves QoS. Moreover, a literature review paper, regarded by reviewers as the first on QoS in fog computing, was produced.
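
    The abstract gives no code, so the sketch below is a hedged illustration of a priority-aware IoT-Fog gateway scheduler in the spirit of the RAS: tasks carry a priority and a deadline, and the gateway dispatches deadline-critical tasks to the nearby fog layer and delay-tolerant ones to the cloud. Class names, fields, and the threshold are assumptions, not the thesis's design.

```python
# A hedged sketch of a priority-aware gateway scheduler. Tasks are queued
# by priority and dispatched to the fog layer when deadline-critical,
# otherwise to the cloud. The thesis also addresses starvation of
# low-priority tasks; a production scheduler would age priorities,
# which is omitted here for brevity.

import heapq, itertools, time

class GatewayScheduler:
    def __init__(self, deadline_threshold_s: float = 0.5):
        self._queue = []                    # min-heap ordered by priority
        self._counter = itertools.count()   # FIFO tie-breaker for equal priorities
        self.deadline_threshold_s = deadline_threshold_s

    def submit(self, task_id: str, priority: int, deadline_s: float):
        # Lower number = higher priority; deadline_s is seconds from now.
        heapq.heappush(self._queue, (priority, next(self._counter),
                                     time.monotonic() + deadline_s, task_id))

    def dispatch(self):
        """Pop the highest-priority task and choose an execution layer."""
        if not self._queue:
            return None
        priority, _, deadline, task_id = heapq.heappop(self._queue)
        slack = deadline - time.monotonic()
        # Deadline-critical tasks go to the nearby fog layer; the rest can
        # tolerate the longer round trip to the cloud.
        layer = "fog" if slack < self.deadline_threshold_s else "cloud"
        return task_id, layer

sched = GatewayScheduler()
sched.submit("sensor-42", priority=1, deadline_s=0.2)
sched.submit("backup-7", priority=5, deadline_s=30.0)
print(sched.dispatch())  # ('sensor-42', 'fog')
```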

    Computation Offloading and Task Scheduling on Network Edge

    The fifth-generation (5G) networks facilitate the evolution of communication systems and accelerate a revolution in the Information Technology (IT) field. In the 5G era, wireless networks are anticipated to provide connectivity for billions of Mobile User Devices (MUDs) around the world and to support a variety of innovative use cases, such as autonomous driving, ubiquitous Internet of Things (IoT), and the Internet of Vehicles (IoV). The novel use cases, however, usually incorporate compute-intensive applications, which generate enormous computing service demands with diverse and stringent service requirements. In particular, autonomous driving calls for prompt data processing for safety-related applications, IoT nodes deployed in remote areas need energy-efficient computing given limited on-board energy, and vehicles require low-latency computing for IoV applications in a highly dynamic network. To support the emerging computing service demands, Mobile Edge Computing (MEC), as a cutting-edge technology in 5G, utilizes computing resources on the network edge to provide computing services for MUDs within a radio access network (RAN). The primary benefits of MEC can be elaborated from two perspectives. From the perspective of MUDs, MEC enables low-latency and energy-efficient computing by allowing MUDs to offload their computation tasks to proximal edge servers, which are installed in access points such as cellular base stations, Road-Side Units (RSUs), and Unmanned Aerial Vehicles (UAVs). From the perspective of network operators, MEC allows a large amount of computing data to be processed at the network edge, thereby alleviating backhaul congestion. MEC is thus a promising technology to support computing demands from the novel 5G applications within the RAN. The key issue is to maximize the computation capability of the network edge to meet the diverse service requirements arising from these applications in dynamic network environments. The main technical challenges are: 1) how an edge server schedules its limited computing resources to optimize the Quality of Experience (QoE) in autonomous driving; 2) how computation loads are balanced between the edge server and IoT nodes to enable energy-efficient computing service provisioning; and 3) how multiple edge servers coordinate their computing resources to enable seamless and reliable computing services for high-mobility vehicles in IoV. In this thesis, we develop efficient computing resource management strategies for MEC, including computation offloading and task scheduling, to address the above three technical challenges. First, we study computation task scheduling to support real-time applications, such as localization and obstacle avoidance, for autonomous driving. In our considered scenario, autonomous vehicles periodically sense the environment, offload sensor data to an edge server for processing, and receive computing results from the edge server. Due to mobility and computing latency, a vehicle travels a certain distance between the instant of offloading its sensor data and the instant of receiving the computing result. Our objective is to design a scheduling scheme for the edge server that minimizes this traveled distance. The idea is to determine the processing order according to individual vehicle mobility and the computation capability of the edge server. We formulate a Restless Multi-Armed Bandit (RMAB) problem, design a Whittle index-based stochastic scheduling scheme, and determine the index using a Deep Reinforcement Learning (DRL) method. The proposed scheduling scheme avoids the time-consuming policy exploration common in DRL scheduling approaches and makes effective decisions with low complexity. Extensive simulation results demonstrate that, with the proposed index-based scheme, the edge server can deliver computing results to the vehicles promptly while adapting to time-variant vehicle mobility. Second, we study energy-efficient computation offloading and task scheduling for an edge server provisioning computing services for IoT nodes in remote areas. In the considered scenario, a UAV is equipped with computing resources and plays the role of an aerial edge server to collect and process the computation tasks offloaded by ground MUDs. Given the service requirements of MUDs, we aim to maximize UAV energy efficiency by jointly optimizing the UAV trajectory, the user transmit power, and computation task scheduling. The resulting optimization problem corresponds to nonconvex fractional programming, and the Dinkelbach algorithm and the Successive Convex Approximation (SCA) technique are adopted to solve it. Furthermore, we decompose the problem into multiple subproblems for distributed and parallel problem solving. To cope with the case in which knowledge of user mobility is limited, we apply a spatial distribution estimation technique to predict the locations of ground users so that the proposed approach remains valid. Simulation results demonstrate the effectiveness of the proposed approach in maximizing the energy efficiency of the UAV. Third, we study collaboration among multiple edge servers in computation offloading and task scheduling to support computing services in IoV. In the considered scenario, vehicles traverse the coverage of edge servers and offload their tasks to their proximal edge servers. We develop a collaborative edge computing framework to reduce computing service latency and alleviate computing service interruption due to the high mobility of vehicles: 1) a Task Partition and Scheduling Algorithm (TPSA) is proposed to schedule the execution order of the tasks offloaded to the edge servers given a computation offloading strategy; and 2) an artificial intelligence-based collaborative computing approach is developed to determine the task offloading, computing, and result delivery policy for vehicles. Specifically, the offloading and computing problem is formulated as a Markov decision process. A DRL technique, i.e., deep deterministic policy gradient, is adopted to find the optimal solution in a complex urban transportation network. With the developed framework, the service cost, which includes computing service latency and a service failure penalty, can be minimized via optimal computation task scheduling and edge server selection. Simulation results show that the proposed AI-based collaborative computing approach can adapt to a highly dynamic environment with outstanding performance. In summary, we investigate computing resource management to optimize the QoE of MUDs in the coverage of an edge server, to improve energy efficiency for an aerial edge server provisioning computing services, and to coordinate computing resources among edge servers to support MUDs with high mobility. The proposed approaches and theoretical results contribute to computing resource management for MEC in 5G and beyond.
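
    As a hedged illustration of the index-based scheduling in the first contribution, the sketch below serves, in each slot, the waiting vehicle with the largest index. In the thesis the index is a Whittle index learned with DRL; here a hand-made proxy (speed times waiting time, i.e., distance traveled since offloading) stands in purely to show the control loop.

```python
# A simplified, illustrative index-based scheduling loop: each waiting
# vehicle gets a scalar index, and the edge server serves the vehicle with
# the largest index each slot. The proxy index below is NOT the thesis's
# learned Whittle index; it only mimics "serve the vehicle that has
# traveled farthest since offloading its sensor data".

from dataclasses import dataclass

@dataclass
class VehicleJob:
    vehicle_id: str
    speed_mps: float        # current vehicle speed
    waiting_s: float = 0.0  # time since the sensor data was offloaded

def index(job: VehicleJob) -> float:
    # Proxy index: speed * waiting time ~ distance traveled since offload,
    # so fast vehicles that have waited long are served first.
    return job.speed_mps * job.waiting_s

def schedule_slot(jobs: list, slot_s: float = 0.05):
    """Serve the highest-index job, then age the remaining jobs."""
    if not jobs:
        return None
    jobs.sort(key=index, reverse=True)
    served = jobs.pop(0)
    for j in jobs:
        j.waiting_s += slot_s
    return served.vehicle_id

jobs = [VehicleJob("v1", speed_mps=25.0, waiting_s=0.10),
        VehicleJob("v2", speed_mps=10.0, waiting_s=0.40)]
print(schedule_slot(jobs))  # 'v2' (index 10*0.40 = 4.0 beats 25*0.10 = 2.5)
```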

    Communication-Efficient Artificial Intelligence at the Mobile Network Edge

    Artificial intelligence (AI) and edge computing (EC) have enabled various applications ranging from smart homes to intelligent manufacturing and smart cities. This progress has been fueled mainly by the availability of more data, the abundance of computing power, and the progress of several compression techniques. However, the main advances relate to deploying cloud-trained machine learning (ML) models on edge devices. This premise requires that all data generated by end devices be sent to a centralized server, raising several privacy concerns and creating significant communication overhead. Accordingly, paving the last mile of AI on EC requires pushing the training of ML models to the edge of the network. Federated learning (FL) has emerged as a promising technique for the collaborative training of ML models on edge devices. The devices train a globally shared model on their locally stored data and share only the resulting parameters with a centralized entity. However, to enable FL in wireless edge networks, several challenges inherited from both AI and EC need to be addressed. In particular, challenges related to the statistical heterogeneity of the data across devices, alongside the scarcity and heterogeneity of resources, require particular attention. The goal of this thesis is to propose ways to address these challenges and to evaluate the potential of FL in future smart-city applications. In the first part of this thesis, the focus is on incorporating data properties into the management of device participation and resource allocation in FL. We start by identifying data diversity measures that allow us to evaluate the richness of local datasets in different applications. Then, we design a diversity indicator that gives more priority to clients with more informative data. An iterative algorithm is then proposed to jointly select clients and allocate communication resources. This algorithm accelerates training and reduces the overall time and energy needed. Furthermore, the proposed diversity indicator is reinforced with a reputation system to avoid malicious clients, thus enhancing its robustness against data poisoning attacks. In the second part of this thesis, we explore ways to tackle other challenges related to client mobility and concept shift in data distributions. Such challenges require new measures to be handled. Accordingly, we design a cluster-based process for FL for the particular case of vehicular networks. The proposed process is based on careful cluster formation to bypass the communication bottleneck and is able to handle different models in parallel. In the last part of this thesis, we demonstrate the potential of FL in a real use case involving short-term forecasting of electrical power in a smart grid. We propose an FL-empowered architecture to encourage collaboration among community members and show, through numerical results, its importance for both model training and the judicious use of communication resources.
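
    As a hedged sketch of the diversity-driven client selection described in the first part, the code below scores each client's local dataset by label entropy (one plausible diversity proxy; the thesis's actual indicator, reputation system, and joint resource-allocation step are not reproduced) and averages the selected clients' updates FedAvg-style.

```python
# Illustrative diversity-aware client selection plus FedAvg aggregation.
# Label entropy is a stand-in diversity score, not the thesis's indicator.

import math
from collections import Counter

def label_entropy(labels) -> float:
    """Shannon entropy of a client's label distribution (diversity proxy)."""
    counts = Counter(labels)
    total = len(labels)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def select_clients(client_labels: dict, k: int) -> list:
    """Pick the k clients whose local datasets are most diverse."""
    return sorted(client_labels,
                  key=lambda c: label_entropy(client_labels[c]),
                  reverse=True)[:k]

def fedavg(updates: dict, sizes: dict) -> list:
    """Average parameter vectors, weighted by local dataset size."""
    total = sum(sizes[c] for c in updates)
    dim = len(next(iter(updates.values())))
    return [sum(updates[c][i] * sizes[c] / total for c in updates)
            for i in range(dim)]

clients = {"a": [0, 0, 0, 1], "b": [0, 1, 2, 3], "c": [2, 2, 2, 2]}
chosen = select_clients(clients, k=2)          # ['b', 'a']: highest entropy
updates = {c: [1.0, 2.0] for c in chosen}      # stand-in local model updates
print(chosen, fedavg(updates, sizes={c: len(clients[c]) for c in chosen}))
```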

    Optimization and Communication in UAV Networks

    UAVs are becoming a reality and are attracting increasing attention. They can be remotely controlled or completely autonomous, used alone or as a fleet, and deployed in a large set of applications. They are constrained by hardware, since they cannot be too heavy and must rely on batteries. Their use still raises a large set of exciting new challenges in terms of trajectory optimization and positioning when they are used alone or in cooperation, and communication when they operate in swarms, to name but a few examples. This book presents new original contributions regarding UAV and UAV-swarm optimization and communication aspects.

    Edge Computing for Internet of Things

    The Internet of Things is becoming an established technology, with devices being deployed in homes, workplaces, and public areas at an increasingly rapid rate. IoT devices are the core technology of smart homes, smart cities, and intelligent transport systems, and they promise to optimise travel, reduce energy usage, and improve quality of life. With the prevalence of IoT, the problem of how to manage the vast volumes and wide variety of data generated, along with their erratic generation patterns, is becoming increasingly clear and challenging. This Special Issue focuses on solving this problem through the use of edge computing. Edge computing offers a solution to managing IoT data by processing it close to the location where it is generated. Edge computing allows computation to be performed locally, thus reducing the volume of data that needs to be transmitted to remote data centres and Cloud storage. It also allows decisions to be made locally, without having to wait for Cloud servers to respond.
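
    A minimal sketch of the data-reduction pattern this Special Issue targets: an edge node processes a window of raw sensor readings locally, forwards only a compact summary and any anomalies to the cloud, and acts locally without waiting for a cloud round trip. All names and thresholds are illustrative assumptions.

```python
# Illustrative edge-side preprocessing: reduce a window of raw readings to
# a summary plus flagged anomalies, and decide on a local action without
# a cloud round trip. Names and thresholds are assumptions.

import statistics

def process_window(readings, anomaly_threshold: float):
    """Reduce raw readings to a summary, anomalies, and a local decision."""
    summary = {
        "count": len(readings),
        "mean": statistics.fmean(readings),
        "max": max(readings),
    }
    anomalies = [r for r in readings if r > anomaly_threshold]
    local_action = "trigger_alarm" if anomalies else None  # decided at the edge
    return summary, anomalies, local_action

window = [21.0, 21.4, 22.1, 35.9, 21.2]  # e.g., temperature samples
summary, anomalies, action = process_window(window, anomaly_threshold=30.0)
print(summary, anomalies, action)
# Five raw readings shrink to one summary dict plus one anomaly to transmit,
# and the alarm fires locally without waiting for the cloud.
```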

    Edge Learning for 6G-enabled Internet of Things: A Comprehensive Survey of Vulnerabilities, Datasets, and Defenses

    The ongoing deployment of fifth-generation (5G) wireless networks constantly reveals limitations concerning its original concept as a key driver of Internet of Everything (IoE) applications. These 5G challenges are behind worldwide efforts to enable future networks, such as sixth-generation (6G) networks, to efficiently support sophisticated applications ranging from autonomous driving capabilities to the Metaverse. Edge learning is a new and powerful approach to training models across distributed clients while protecting the privacy of their data. This approach is expected to be embedded within future network infrastructures, including 6G, to solve challenging problems such as resource management and behavior prediction. This survey article provides a holistic review of the most recent research focused on edge learning vulnerabilities and defenses for 6G-enabled IoT. We summarize the existing surveys on machine learning for 6G IoT security and machine learning-associated threats in three different learning modes: centralized, federated, and distributed. Then, we provide an overview of emerging enabling technologies for 6G IoT intelligence. Moreover, we provide a holistic survey of existing research on attacks against machine learning and classify threat models into eight categories: backdoor attacks, adversarial examples, combined attacks, poisoning attacks, Sybil attacks, Byzantine attacks, inference attacks, and dropping attacks. In addition, we provide a comprehensive and detailed taxonomy and a side-by-side comparison of state-of-the-art defense methods against edge learning vulnerabilities. Finally, as new attacks and defense technologies are realized, new research directions and overall future prospects for 6G-enabled IoT are discussed.
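
    As a hedged illustration of one well-known defense family that such surveys cover, the sketch below replaces plain averaging of client updates, which a single Byzantine client can skew arbitrarily, with a coordinate-wise median. This is a generic example, not a method proposed by this survey.

```python
# Byzantine-robust aggregation via the coordinate-wise median: a generic
# defense against poisoned client updates in federated/edge learning.

import statistics

def coordinate_median(updates: list) -> list:
    """Coordinate-wise median of client parameter updates."""
    dim = len(updates[0])
    return [statistics.median(u[i] for u in updates) for i in range(dim)]

honest = [[0.9, 1.1], [1.0, 1.0], [1.1, 0.9]]
poisoned = honest + [[100.0, -100.0]]        # one Byzantine client
print(sum(u[0] for u in poisoned) / 4)       # plain mean dragged to 25.75
print(coordinate_median(poisoned))           # median stays near [1.0, 1.0]
```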

    Intelligent Circuits and Systems

    ICICS-2020 is the third conference initiated by the School of Electronics and Electrical Engineering at Lovely Professional University to explore recent innovations of researchers working on the development of smart and green technologies in the fields of Energy, Electronics, Communications, Computers, and Control. ICICS enables innovators to identify new opportunities for the social and economic benefit of society. The conference bridges the gap between academia, R&D institutions, social visionaries, and experts from all strata of society, inviting them to present their ongoing research activities and fostering research relations among them. It provides opportunities for exchanging new ideas, applications, and experiences in the field of smart technologies and for finding global partners for future collaboration. ICICS-2020 was conducted in two broad categories: Intelligent Circuits & Intelligent Systems and Emerging Technologies in Electrical Engineering.