21 research outputs found

    An Adaptive Task Scheduling in Fog Computing

    Get PDF
    Internet applications generate massive amounts of data, which is typically transmitted to the cloud for processing. Time-sensitive applications, however, require faster access, and the cloud's main limitation is its connectivity to end devices. Fog computing, introduced by Cisco, overcomes this limitation: the fog has better connectivity to end devices, albeit with constraints of its own, and acts as an intermediate layer between the end devices and the cloud. Scheduling plays an important role in providing quality of service to end users, and scheduling tasks according to end users' requirements is a demanding problem. In this paper, we propose a cloud-fog task scheduling model that provides quality of service to end devices together with appropriate security.
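The abstract above leaves the scheduling policy unspecified; a minimal Python sketch of one way a cloud-fog dispatcher could route tasks by their latency requirement is shown below. The Task fields, the 50 ms fog threshold, and the fog capacity are illustrative assumptions, not details from the paper.

```python
from dataclasses import dataclass

@dataclass
class Task:
    task_id: int
    deadline_ms: float      # end-user latency requirement
    input_size_kb: float    # payload to transmit

# Illustrative thresholds; the paper does not specify concrete values.
FOG_LATENCY_THRESHOLD_MS = 50.0
FOG_CAPACITY = 10  # max concurrent tasks the fog tier accepts in this toy model

def dispatch(tasks, fog_load=0):
    """Assign each task to the fog or cloud tier by its deadline,
    falling back to the cloud when the fog tier is saturated."""
    placement = {}
    for task in sorted(tasks, key=lambda t: t.deadline_ms):  # tightest deadlines first
        if task.deadline_ms <= FOG_LATENCY_THRESHOLD_MS and fog_load < FOG_CAPACITY:
            placement[task.task_id] = "fog"
            fog_load += 1
        else:
            placement[task.task_id] = "cloud"
    return placement

if __name__ == "__main__":
    demo = [Task(1, 20, 64), Task(2, 500, 2048), Task(3, 35, 128)]
    print(dispatch(demo))   # e.g. {1: 'fog', 3: 'fog', 2: 'cloud'}
```

In this toy model, deadline-sensitive tasks are served at the fog tier until it saturates, and everything else falls back to the cloud.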

    Computation Offloading and Task Scheduling on Network Edge

    Get PDF
    The Fifth-Generation (5G) networks facilitate the evolution of communication systems and accelerate a revolution in the Information Technology (IT) field. In the 5G era, wireless networks are anticipated to provide connectivity for billions of Mobile User Devices (MUDs) around the world and to support a variety of innovative use cases, such as autonomous driving, ubiquitous Internet of Things (IoT), and the Internet of Vehicles (IoV). These novel use cases, however, usually incorporate compute-intensive applications, which generate enormous computing service demands with diverse and stringent service requirements. In particular, autonomous driving calls for prompt data processing for safety-related applications, IoT nodes deployed in remote areas need energy-efficient computing given limited on-board energy, and vehicles require low-latency computing for IoV applications in a highly dynamic network. To support these emerging computing service demands, Mobile Edge Computing (MEC), as a cutting-edge technology in 5G, utilizes computing resources on the network edge to provide computing services for MUDs within a radio access network. The primary benefits of MEC can be described from two perspectives. From the perspective of MUDs, MEC enables low-latency and energy-efficient computing by allowing MUDs to offload their computation tasks to proximal edge servers, which are installed in access points such as cellular base stations, Road-Side Units (RSUs), and Unmanned Aerial Vehicles (UAVs). From the perspective of network operators, MEC allows a large amount of computing data to be processed on the network edge, thereby alleviating backhaul congestion.
MEC is thus a promising technology to support the computing demands of novel 5G applications within the radio access network, and the key issue is to maximize the computation capability of the network edge to meet the diverse service requirements arising from these applications in dynamic network environments. The main technical challenges are: 1) how an edge server schedules its limited computing resources to optimize the Quality-of-Experience (QoE) in autonomous driving; 2) how computation loads are balanced between the edge server and IoT nodes to enable energy-efficient computing service provisioning; and 3) how multiple edge servers coordinate their computing resources to enable seamless and reliable computing services for high-mobility vehicles in IoV. In this thesis, we develop efficient computing resource management strategies for MEC, including computation offloading and task scheduling, to address these three technical challenges.
First, we study computation task scheduling to support real-time applications, such as localization and obstacle avoidance, for autonomous driving. In the considered scenario, autonomous vehicles periodically sense the environment, offload sensor data to an edge server for processing, and receive computing results from the edge server. Due to mobility and computing latency, a vehicle travels a certain distance between the instant of offloading its sensor data and the instant of receiving the computing result. Our objective is to design a scheduling scheme for the edge server that minimizes this traveled distance. The idea is to determine the processing order according to individual vehicle mobility and the computation capability of the edge server. We formulate a Restless Multi-Armed Bandit (RMAB) problem, design a Whittle index-based stochastic scheduling scheme, and determine the index using a Deep Reinforcement Learning (DRL) method. The proposed scheduling scheme avoids the time-consuming policy exploration common in DRL scheduling approaches and makes effective decisions with low complexity. Extensive simulation results demonstrate that, with the proposed index-based scheme, the edge server can deliver computing results to the vehicles promptly while adapting to time-variant vehicle mobility.
Second, we study energy-efficient computation offloading and task scheduling for an edge server that provisions computing services for IoT nodes in remote areas. In the considered scenario, a UAV is equipped with computing resources and acts as an aerial edge server that collects and processes the computation tasks offloaded by ground MUDs. Given the service requirements of MUDs, we aim to maximize UAV energy efficiency by jointly optimizing the UAV trajectory, the user transmit power, and computation task scheduling. The resulting optimization problem is a nonconvex fractional program, which we solve using the Dinkelbach algorithm and the Successive Convex Approximation (SCA) technique. Furthermore, we decompose the problem into multiple subproblems for distributed and parallel solving. To cope with the case when knowledge of user mobility is limited, we apply a spatial distribution estimation technique to predict the locations of ground users so that the proposed approach remains valid. Simulation results demonstrate the effectiveness of the proposed approach in maximizing the energy efficiency of the UAV.
Third, we study collaboration among multiple edge servers in computation offloading and task scheduling to support computing services in IoV. In the considered scenario, vehicles traverse the coverage of edge servers and offload their tasks to their proximal edge servers. We develop a collaborative edge computing framework to reduce computing service latency and alleviate computing service interruption due to the high mobility of vehicles: 1) a Task Partition and Scheduling Algorithm (TPSA) is proposed to schedule the execution order of the tasks offloaded to the edge servers given a computation offloading strategy; and 2) an artificial intelligence-based collaborative computing approach is developed to determine the task offloading, computing, and result delivery policy for vehicles. Specifically, the offloading and computing problem is formulated as a Markov decision process, and a DRL technique, namely deep deterministic policy gradient, is adopted to find the optimal solution in a complex urban transportation network. With the developed framework, the service cost, which includes computing service latency and a service failure penalty, is minimized via optimal computation task scheduling and edge server selection. Simulation results show that the proposed AI-based collaborative computing approach adapts to a highly dynamic environment with outstanding performance.
In summary, we investigate computing resource management to optimize the QoE of MUDs in the coverage of an edge server, to improve energy efficiency for an aerial edge server while provisioning computing services, and to coordinate computing resources among edge servers to support MUDs with high mobility. The proposed approaches and theoretical results contribute to computing resource management for MEC in 5G and beyond.
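The second contribution relies on the Dinkelbach algorithm for fractional programming. Below is a minimal sketch of the generic Dinkelbach iteration on a toy energy-efficiency ratio (rate over power for a single transmit-power variable); the channel model, the brute-force inner solver, and all constants are illustrative assumptions and not the thesis's joint trajectory/power/scheduling problem.

```python
import numpy as np

# Toy fractional objective: energy efficiency = rate / power (illustrative, not the thesis model).
H = 2.0          # channel gain (assumption)
P_CIRCUIT = 0.1  # static circuit power in watts (assumption)
P_MAX = 1.0      # transmit power budget in watts (assumption)

def rate(p):            # numerator f(p): achievable rate, bits/s/Hz
    return np.log2(1.0 + H * p)

def power(p):           # denominator g(p) > 0: total consumed power
    return p + P_CIRCUIT

def dinkelbach(tol=1e-6, max_iter=50):
    """Generic Dinkelbach loop: repeatedly maximize f(p) - lam * g(p),
    then update lam = f(p*) / g(p*) until the surrogate optimum is ~0."""
    grid = np.linspace(0.0, P_MAX, 10_001)   # inner problem solved by brute force here
    lam = 0.0
    for _ in range(max_iter):
        surrogate = rate(grid) - lam * power(grid)
        p_star = grid[np.argmax(surrogate)]
        if surrogate.max() < tol:             # F(lam) ~ 0  =>  lam is the optimal ratio
            break
        lam = rate(p_star) / power(p_star)
    return p_star, lam

if __name__ == "__main__":
    p_opt, ee_opt = dinkelbach()
    print(f"optimal power ~ {p_opt:.3f} W, energy efficiency ~ {ee_opt:.3f} bits/Hz/J")
```

In the thesis, the inner maximization is a nonconvex joint problem handled with SCA rather than a one-dimensional grid search; only the outer ratio-update loop is sketched here.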

    Federated Learning in Intelligent Transportation Systems: Recent Applications and Open Problems

    Full text link
    Intelligent transportation systems (ITSs) have been fueled by the rapid development of communication technologies, sensor technologies, and the Internet of Things (IoT). Nonetheless, due to the dynamic characteristics of vehicle networks, it is challenging to make timely and accurate decisions about vehicle behavior. Moreover, in the presence of mobile wireless communications, the privacy and security of vehicle information are at constant risk. In this context, a new paradigm is urgently needed for various applications in dynamic vehicle environments. As a distributed machine learning technology, federated learning (FL) has received extensive attention due to its strong privacy protection properties and easy scalability. We conduct a comprehensive survey of the latest developments in FL for ITS. Specifically, we first examine the prevalent challenges in ITS and elucidate the motivations for applying FL from various perspectives. Subsequently, we review existing deployments of FL in ITS across various scenarios, and discuss specific potential issues in object recognition, traffic management, and service provision scenarios. Furthermore, we analyze the new challenges introduced by FL deployment and the inherent limitations that FL alone cannot fully address, including uneven data distribution, limited storage and computing power, and potential privacy and security concerns. We then examine existing collaborative technologies that can help mitigate these challenges. Lastly, we discuss the open challenges that remain in applying FL to ITS and propose several future research directions.
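FL as surveyed here is most commonly instantiated by Federated Averaging: clients train locally and a server aggregates the models weighted by data size, so raw vehicle data never leaves the device. A minimal NumPy sketch follows; the linear model, synthetic client data, and hyperparameters are illustrative assumptions rather than anything prescribed by the survey.

```python
import numpy as np

rng = np.random.default_rng(0)

def local_update(w, X, y, lr=0.1, epochs=5):
    """One client's local training: a few gradient steps on a linear model (illustrative)."""
    w = w.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

def fedavg(clients, rounds=20, dim=3):
    """Server loop: broadcast the global model, collect local updates,
    and average them weighted by each client's sample count."""
    w_global = np.zeros(dim)
    total = sum(len(y) for _, y in clients)
    for _ in range(rounds):
        updates = [local_update(w_global, X, y) for X, y in clients]
        w_global = sum(len(y) / total * w for w, (_, y) in zip(updates, clients))
    return w_global

if __name__ == "__main__":
    w_true = np.array([1.0, -2.0, 0.5])
    clients = []
    for _ in range(4):                      # four simulated vehicles / roadside units
        X = rng.normal(size=(50, 3))
        y = X @ w_true + 0.01 * rng.normal(size=50)
        clients.append((X, y))
    print(fedavg(clients))                  # should approach w_true
```

Only model parameters cross the wireless link in this loop, which is the privacy property the survey highlights; the uneven-data and limited-compute issues it discusses arise when the clients' datasets and epoch budgets differ.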

    Deep Neural Networks meet computation offloading in mobile edge networks: Applications, taxonomy, and open issues

    Get PDF
    Mobile Edge Computing (MEC) is a modern paradigm that moves computing and storage resources closer to the network edge, reducing latency and enabling innovative, delay-sensitive applications. Within MEC, computation offloading refers to the process of transferring computationally intensive tasks from mobile devices to edge servers, optimizing the performance of mobile applications. Traditional numerical optimization methods for computation offloading often require numerous iterations to attain optimal solutions. In this paper, we provide a tutorial on how Deep Neural Networks (DNNs) resolve the challenges of computation offloading. The article explores various applications of DNNs in computation offloading, encompassing channel estimation, caching, Augmented Reality (AR) and Virtual Reality (VR) applications, resource allocation, mode selection, Unmanned Aerial Vehicles (UAVs), and vehicle management. We present a comprehensive taxonomy that categorizes these applications and offer an overview of existing schemes, comparing their effectiveness. Additionally, we outline the open research issues that can be addressed through the application of DNNs in MEC offloading, and we highlight specific challenges related to DNN utilization in computation offloading. In conclusion, we affirm that DNNs are widely acknowledged as invaluable tools for optimizing computation offloading in MEC.
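The core idea the tutorial builds on is replacing an iterative optimizer with a learned mapping from task and channel features to an offloading decision. A minimal PyTorch sketch of such a decision network is below; the feature set, the heuristic labels that stand in for an optimizer's output, and the architecture are illustrative assumptions.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Illustrative features per task: [input size, required CPU cycles, channel gain, local CPU speed]
def synthetic_batch(n=256):
    x = torch.rand(n, 4)
    # Toy "ground truth" rule standing in for the optimizer a DNN would imitate:
    # offload when the task is heavy and the channel is good.
    y = ((x[:, 1] > 0.5) & (x[:, 2] > 0.4)).float().unsqueeze(1)
    return x, y

model = nn.Sequential(          # small MLP mapping task/channel state to offload probability
    nn.Linear(4, 32), nn.ReLU(),
    nn.Linear(32, 32), nn.ReLU(),
    nn.Linear(32, 1), nn.Sigmoid(),
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.BCELoss()

for step in range(500):        # supervised imitation of the (here: toy) offloading policy
    x, y = synthetic_batch()
    loss = loss_fn(model(x), y)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

x_new, _ = synthetic_batch(4)
print(model(x_new).detach().round())   # 1 = offload to the edge server, 0 = execute locally
```

Once trained, a forward pass replaces the many iterations of a numerical solver at decision time, which is the speedup the paper attributes to DNN-based offloading.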

    Resource sharing in vehicular cloud

    Get PDF
    In recent years, we have observed a growing interest in information accessibility and, in particular, in innovative approaches that make remote services accessible from mobile devices across the world. In tandem with this growth of interest, vehicular communication, also known as vehicular ad hoc networks (VANET), was introduced, leveraging onboard sensors and wireless communication devices to enhance road safety and the driving experience. Vehicles' wireless access to the Internet has triggered the emergence of service packages that can be made available to or from vehicles.
Recently, the vehicular networks paradigm has been extended to a new level. The vehicular cloud (VC) is the ultimate convergence between the cloud computing concept and vehicular networks for the purpose of service provisioning and management. Vehicles can connect to the cloud, where a multitude of services are available to them; they can also offer services themselves, acting as service providers rather than service consumers. This is possible because of the variety of resources available in vehicles: computing, bandwidth, storage, and sensors. In this thesis, we propose novel and efficient methods to enable vehicle service delivery in the VC. Several schemes, including cluster/cloud formation, transmission scheduling, interference cancellation, and frequency assignment using software-defined networking (SDN), have been developed and their performance analyzed. The proposed cluster formation schemes are DHCV (a distributed D-hop clustering algorithm for VANET) and DCEV (a distributed cluster formation for VANET based on end-to-end relative mobility). These clustering schemes are used to dynamically form vehicle clouds. The schemes group vehicles into non-overlapping clouds whose sizes adapt to vehicle mobility. VCs are created in such a way that each vehicle is at most D hops away from a cloud coordinator. The proposed transmission scheduling implements a contention-free medium access control in which the physical conditions of the channel are fully analyzed. The interference cancellation scheme makes it possible to remove the strongest interferers, which improves scheduling performance and resource sharing inside the constructed clouds. Finally, we propose an SDN-based vehicular cloud solution in which different frequency bands are assigned to different transmission links to improve network performance.
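The D-hop constraint behind the DHCV/DCEV schemes (every vehicle at most D hops from its cloud coordinator) can be illustrated with a small greedy clustering sketch. The coordinator selection rule (lowest remaining vehicle id) and the toy topology below are simplifying assumptions; the actual schemes rank coordinator candidates by connectivity and relative mobility.

```python
from collections import deque

def d_hop_clusters(adjacency, d):
    """Greedy D-hop clustering: repeatedly pick an unassigned vehicle as a
    cloud coordinator and absorb every unassigned vehicle within d hops (BFS).
    Real DHCV/DCEV rank coordinator candidates by connectivity/relative mobility;
    here we simply take the lowest remaining vehicle id."""
    assignment = {}
    for head in sorted(adjacency):
        if head in assignment:
            continue
        assignment[head] = head
        frontier, hops = deque([(head, 0)]), {head: 0}
        while frontier:
            node, dist = frontier.popleft()
            if dist == d:
                continue
            for neigh in adjacency[node]:
                if neigh not in hops and neigh not in assignment:
                    hops[neigh] = dist + 1
                    assignment[neigh] = head
                    frontier.append((neigh, dist + 1))
    return assignment

if __name__ == "__main__":
    # Toy 6-vehicle chain topology (vehicle id -> neighbours within radio range).
    topo = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3, 5], 5: [4]}
    print(d_hop_clusters(topo, d=2))   # e.g. {0: 0, 1: 0, 2: 0, 3: 3, 4: 3, 5: 3}
```

The resulting non-overlapping groups correspond to the vehicle clouds described above, with each dictionary value naming the cloud coordinator a vehicle is attached to.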

    Enabling AI in Future Wireless Networks: A Data Life Cycle Perspective

    Full text link
    Recent years have seen rapid deployment of mobile computing and Internet of Things (IoT) networks, which can be largely attributed to the increasing communication and sensing capabilities of wireless systems. Big data analysis, pervasive computing, and eventually artificial intelligence (AI) are envisaged to be deployed on top of the IoT and to create a new world characterized by data-driven AI. In this context, a novel paradigm merging AI and wireless communications, called Wireless AI, which pushes AI frontiers to the network edge, is widely regarded as a key enabler for future intelligent network evolution. To this end, we present a comprehensive survey of the latest studies in wireless AI from the data-driven perspective. Specifically, we first propose a novel Wireless AI architecture that covers five key data-driven AI themes in wireless networks: Sensing AI, Network Device AI, Access AI, User Device AI, and Data-provenance AI. Then, for each data-driven AI theme, we present an overview of the use of AI approaches to solve the emerging data-related problems and show how AI can empower wireless network functionalities. In particular, compared to other related survey papers, we provide an in-depth discussion of Wireless AI applications in various data-driven domains wherein AI proves extremely useful for wireless network design and optimization. Finally, research challenges and future visions are also discussed to spur further research in this promising area. (Accepted at IEEE Communications Surveys & Tutorials; 42 pages.)