37 research outputs found

    A survey of multi-access edge computing in 5G and beyond : fundamentals, technology integration, and state-of-the-art

    Driven by the emergence of new compute-intensive applications and the vision of the Internet of Things (IoT), it is foreseen that the emerging 5G network will face an unprecedented increase in traffic volume and computation demands. However, end users mostly have limited storage capacity and finite processing capability, so how to run compute-intensive applications on resource-constrained devices has become a pressing concern. Mobile edge computing (MEC), a key technology in the emerging fifth-generation (5G) network, can optimize mobile resources by hosting compute-intensive applications, processing large volumes of data before sending them to the cloud, providing cloud-computing capabilities within the radio access network (RAN) in close proximity to mobile users, and offering context-aware services with the help of RAN information. MEC thus enables a wide variety of applications in which real-time response is strictly required, e.g., driverless vehicles, augmented reality, robotics, and immersive media. Indeed, the paradigm shift from 4G to 5G could become a reality with the advent of new technological concepts. The successful realization of MEC in the 5G network is still in its infancy and demands constant effort from both the academic and industrial communities. In this survey, we first provide a holistic overview of MEC technology and its potential use cases and applications. Then, we outline up-to-date research on the integration of MEC with the new technologies that will be deployed in 5G and beyond. We also summarize testbeds, experimental evaluations, and open-source activities for edge computing. We further summarize lessons learned from state-of-the-art research as well as discuss challenges and potential future directions for MEC research.

    A computational offloading optimization scheme based on deep reinforcement learning in perceptual network

    Currently, the deep integration of the Internet of Things (IoT) and edge computing has improved the computing capability of the IoT perception layer. However, existing offloading techniques for edge computing suffer from rigid, fixed offloading policies. To address this, and drawing on the characteristics of deep reinforcement learning, this paper investigates a computation offloading optimization scheme for the perception layer. The algorithm can adaptively adjust the computational task offloading policy of IoT terminals according to network changes in the perception layer. Experiments show that the algorithm effectively improves the operational efficiency of the IoT perception layer and reduces the average task delay compared with other offloading algorithms.
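The adaptive policy the abstract describes can be illustrated with a tabular Q-learning toy (the paper itself uses deep reinforcement learning; the states, delay constants, and two-action model below are invented for illustration, not the paper's system model):

```python
import random

random.seed(0)

# Toy model (illustrative assumptions, not the paper's system model):
# states: perceived channel quality of the perception-layer link (0=poor, 1=good)
# actions: 0 = compute locally, 1 = offload to the edge server
LOCAL_DELAY = 1.0                  # task delay when computed on the IoT terminal
OFFLOAD_DELAY = {0: 1.5, 1: 0.4}   # transmission + edge delay per channel state

Q = {(s, a): 0.0 for s in (0, 1) for a in (0, 1)}
alpha, gamma, eps = 0.1, 0.9, 0.1

def step(state, action):
    """Reward is negative task delay, so the agent learns to minimise delay."""
    delay = LOCAL_DELAY if action == 0 else OFFLOAD_DELAY[state]
    next_state = random.choice((0, 1))  # channel quality varies over time
    return -delay, next_state

state = 1
for _ in range(5000):
    # epsilon-greedy action selection
    action = random.choice((0, 1)) if random.random() < eps \
        else max((0, 1), key=lambda a: Q[(state, a)])
    reward, nxt = step(state, action)
    best_next = max(Q[(nxt, 0)], Q[(nxt, 1)])
    Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
    state = nxt

# Learned policy: offload when the channel is good, compute locally otherwise
policy = {s: max((0, 1), key=lambda a: Q[(s, a)]) for s in (0, 1)}
print(policy)
```

The point of the sketch is the adaptivity: the offloading decision is a function of the observed network state rather than a solidified rule, which is the shortcoming the paper attributes to prior schemes.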

    Energy-efficient non-orthogonal multiple access for wireless communication system

    Non-orthogonal multiple access (NOMA) has been recognized as a potential solution for enhancing the throughput of next-generation wireless communications. NOMA is a promising option for 5G networks due to its superior spectrum efficiency (SE) compared to orthogonal multiple access (OMA). From the perspective of green communication, energy efficiency (EE) has become a new performance indicator. A systematic literature review is conducted to investigate the energy-efficient approaches researchers have employed in NOMA. We identified 19 subcategories related to EE in NOMA across 108 publications, 92 of which are from IEEE. To aid the reader, each category is summarized and elaborated in detail. The literature review shows that NOMA can enhance the EE of wireless communication systems. At the end of this survey, future research directions, particularly machine learning algorithms such as reinforcement learning (RL) and deep reinforcement learning (DRL) for NOMA, are also discussed.
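The SE/EE comparison underlying this survey can be made concrete with the textbook two-user power-domain downlink model (the channel gains, power budget, and power split below are illustrative numbers, not figures from any surveyed paper):

```python
from math import log2

# Illustrative two-user downlink comparison (standard power-domain NOMA
# model, not a result from the survey). Noise power is normalised to 1.
P_TOTAL = 10.0             # total transmit power (linear units)
h_near, h_far = 2.0, 0.2   # channel gains: near (strong) and far (weak) user

def noma_sum_rate(p_far_frac=0.8):
    """Far user gets the larger power share; the near user removes the far
    user's signal via successive interference cancellation (SIC)."""
    p_far, p_near = p_far_frac * P_TOTAL, (1 - p_far_frac) * P_TOTAL
    r_far = log2(1 + p_far * h_far / (p_near * h_far + 1))
    r_near = log2(1 + p_near * h_near)  # after SIC, no intra-cell interference
    return r_far + r_near

def oma_sum_rate():
    """OMA baseline: each user gets half the time slot with full power."""
    return 0.5 * log2(1 + P_TOTAL * h_near) + 0.5 * log2(1 + P_TOTAL * h_far)

# Energy efficiency here: sum spectral efficiency per unit transmit power
ee_noma = noma_sum_rate() / P_TOTAL
ee_oma = oma_sum_rate() / P_TOTAL
print(f"NOMA EE = {ee_noma:.3f}, OMA EE = {ee_oma:.3f}")
```

With the same power budget, NOMA's higher sum rate translates directly into higher EE under this simple bits-per-joule-style metric, which is the effect the surveyed works exploit.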

    Backscatter-assisted data offloading in OFDMA-based wireless powered mobile edge computing for IoT networks

    Mobile edge computing (MEC) has emerged as a prominent technology for meeting sudden demands from computation-intensive Internet of Things (IoT) applications on devices with finite processing capabilities. Nevertheless, limited energy resources also seriously hinder IoT devices from offloading tasks that consume high power in active RF communications. Despite the development of energy harvesting (EH) techniques, the energy harvested from the surrounding environment can be inadequate for power-hungry tasks. Fortunately, backscatter communication (Backcom) is an intriguing technology for narrowing the gap between the power needed for communication and the harvested power. Motivated by these considerations, this paper investigates backscatter-assisted data offloading in OFDMA-based wireless-powered (WP) MEC for IoT systems. Specifically, we aim to maximize the sum computation rate by jointly optimizing the transmit power at the gateway (GW), the backscatter coefficient, the time-splitting (TS) ratio, and the binary decision-making matrices. This problem is challenging to solve due to its non-convexity. To find solutions, we first simplify the problem by determining the optimal values of the GW transmit power and the backscatter coefficient. Then, the original problem is decomposed into two sub-problems, namely, TS ratio optimization with given offloading decision matrices and offloading decision optimization with a given TS ratio. Notably, a closed-form expression for the TS ratio is obtained, which greatly reduces the CPU execution time. Based on the solutions of the two sub-problems, an efficient algorithm, termed the fast-efficient algorithm (FEA), is proposed by leveraging the block coordinate descent method. It is compared with exhaustive search (ES), a bisection-based algorithm (BA), edge computing (EC), and local computing (LC) as reference methods. The FEA attains a near-globally-optimal solution at much lower complexity than the benchmark schemes. For instance, the CPU execution time of the FEA is about 0.029 seconds in a 50-user network, which is suitable for ultra-low-latency IoT applications.
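The alternating structure described above (optimize the TS ratio for fixed offloading decisions, then the decisions for a fixed TS ratio) can be sketched on a toy wireless-powered model; the rate expressions, gains `g`, local rates `c`, and the grid search standing in for the paper's closed-form TS ratio are all illustrative assumptions, not the paper's formulation:

```python
from math import log2

# Toy WP-MEC model: devices harvest energy for a fraction tau of the slot,
# then offload in the remainder; g[i] lumps channel gain and energy-conversion
# efficiency, c[i] is a fixed local-computing rate. All numbers are invented.
g = [8.0, 0.5, 4.0, 0.2]
c = [0.6, 0.6, 0.6, 0.6]

def user_rate(i, offload, tau):
    if not offload:              # compute locally
        return c[i]
    if tau in (0.0, 1.0):        # no harvest time or no offload time
        return 0.0
    return (1 - tau) * log2(1 + g[i] * tau / (1 - tau))

def sum_rate(tau, decisions):
    return sum(user_rate(i, d, tau) for i, d in enumerate(decisions))

def bcd(max_iter=20):
    """Block coordinate descent over (tau, decisions)."""
    tau, decisions = 0.5, [1, 1, 1, 1]
    for _ in range(max_iter):
        # Block 1: TS ratio for fixed decisions (grid search stands in for
        # the paper's closed-form expression).
        tau = max((t / 100 for t in range(101)),
                  key=lambda t: sum_rate(t, decisions))
        # Block 2: per-user decision for fixed tau; in this toy model the
        # users decouple, so a greedy per-user choice is optimal.
        new = [1 if user_rate(i, 1, tau) > user_rate(i, 0, tau) else 0
               for i in range(len(g))]
        if new == decisions:     # converged: both blocks stationary
            break
        decisions = new
    return tau, decisions, sum_rate(tau, decisions)

tau, decisions, rate = bcd()
print(tau, decisions, round(rate, 3))
```

Users with strong effective gains end up offloading while weak ones compute locally, and each alternation can only improve the objective, which is why this style of algorithm reaches a near-optimal point far faster than exhaustive search over all decision vectors.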

    Link Scheduling in UAV-Aided Networks

    Unmanned Aerial Vehicles (UAVs), or drones, are low-altitude aerial mobile vehicles. They can be integrated into existing networks, e.g., cellular, Internet of Things (IoT), and satellite networks. Moreover, they can leverage existing cellular or Wi-Fi infrastructure to communicate with one another. A popular application of UAVs is to deploy them as mobile base stations and/or relays to assist terrestrial wireless communications. Another application is data collection, whereby they act as mobile sinks for wireless sensor networks or sensor devices operating in IoT networks. Advantageously, UAVs are cost-effective and able to establish line-of-sight links, which help improve data rates. A key concern, however, is that uplink communication to a UAV may be limited, as the UAV may only be able to receive from one device at a time. Further, ground devices, such as those in IoT networks, may have limited energy, which limits their transmit power. Three promising approaches address these concerns: (i) trajectory optimization, (ii) link scheduling, and (iii) equipping UAVs with a Successive Interference Cancellation (SIC) radio. Hence, this thesis considers data collection in UAV-aided, TDMA-based, SIC-equipped wireless networks. Its main aim is to develop novel link schedulers for uplink communications to a SIC-capable UAV. In particular, it considers two types of networks: (i) one-tier UAV communications networks, where a SIC-enabled rotary-wing UAV collects data from multiple ground devices, and (ii) Space-Air-Ground Integrated Networks (SAGINs), where a SIC-enabled rotary-wing UAV offloads collected data from ground devices to a swarm of CubeSats, and a CubeSat then downloads its data to a terrestrial gateway. Compared to one-tier UAV communications networks, SAGINs can provide wide coverage and seamless connectivity to ground devices in remote and/or sparsely populated areas.
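Why a SIC radio relaxes the one-device-at-a-time constraint can be seen from the standard uplink successive-decoding computation (the received powers below are illustrative values with noise normalised to 1, not data from the thesis):

```python
from math import log2

# Sketch of uplink SIC at the UAV: several ground devices transmit in the
# same slot; the UAV decodes in descending received power and subtracts each
# decoded signal, so weaker devices see progressively less interference.
received_power = {"dev_a": 8.0, "dev_b": 3.0, "dev_c": 0.5}

def sic_rates(powers, noise=1.0):
    """Per-device achievable rates (bits/s/Hz) under successive decoding."""
    order = sorted(powers, key=powers.get, reverse=True)
    remaining = sum(powers.values())
    rates = {}
    for dev in order:
        p = powers[dev]
        interference = remaining - p          # still-undecoded weaker signals
        rates[dev] = log2(1 + p / (interference + noise))
        remaining -= p                        # cancel the decoded signal
    return rates

rates = sic_rates(received_power)
for dev, r in rates.items():
    print(dev, round(r, 3))
```

All three devices obtain a positive rate in one slot, and the last-decoded (weakest) device enjoys an interference-free channel; a link scheduler for a SIC-capable UAV essentially chooses which devices to group into each slot so these simultaneous rates are maximised.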

    Communication, sensing, computing and energy harvesting in smart cities

    A smart city provides diverse services based on real-time data obtained from devices deployed across urban areas. These devices are largely battery-powered and widely distributed. Therefore, providing continuous energy to these devices and ensuring their efficient sensing and communications are critical for the wide deployment of smart cities. To achieve frequent and effective data exchange, advanced enabling information and communication technology (ICT) infrastructure is in urgent demand. An ideal network in future smart cities should be capable of sensing the physical environment and intelligently mapping it into the digital world. Therefore, in this paper, we propose design guidelines on how to integrate communications with sensing, computing, and/or energy harvesting in the context of smart cities, aiming to offer research insights on developing integrated communications, sensing, computing and energy harvesting (ICSCE) and promoting the development of ICT infrastructure in smart cities. To bring these four pillars of smart cities together and to take advantage of ever-improving artificial intelligence (AI) technologies, the authors propose a promising AI-enabled ICSCE architecture that leverages the digital twin network. The proposed architecture models the physical deep-neural-network-aided ICSCE system in a virtual space, where offline training is performed using real-time data collected from the environment and physical devices.

    CoPace: Edge Computation Offloading and Caching for Self-Driving with Deep Reinforcement Learning

    Currently, self-driving, emerging as a key automotive application, has brought huge potential for in-vehicle services (e.g., automatic path planning) that mitigate urban traffic congestion and enhance travel safety. To provide high-quality vehicular services with stringent delay constraints, edge computing (EC) enables resource-hungry self-driving vehicles (SDVs) to offload computation-intensive tasks to edge servers (ESs). In addition, caching highly reusable content reduces redundant transmission time and improves the quality of service (QoS) of SDVs, and is envisioned as a complement to computation offloading. However, the high mobility and time-varying requests of SDVs make it challenging to provide reliable offloading decisions while guaranteeing the resource utilization of content caching. To this end, in this paper we propose a collaborative computation offloading and content caching method, named CoPace, by leveraging deep reinforcement learning (DRL) in EC for self-driving systems. Specifically, we resort to a deep learning model to predict future time-varying content popularity, taking into account the temporal-spatial attributes of requests. Moreover, a DRL-based algorithm is developed to jointly optimize the offloading and caching decisions, as well as the resource allocation (i.e., computing and communication resources) strategies. Extensive experiments with real-world datasets from Shanghai, China, demonstrate that CoPace is both effective and efficient.
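The predict-then-cache pipeline described above can be sketched in two stages; a weighted moving average stands in for the paper's deep prediction model, a greedy top-k rule stands in for the DRL agent, and the content names and request counts are invented illustrative data:

```python
# Stage 1: predict content popularity from recent request history.
# Stage 2: use the prediction to decide what an edge server caches.
history = {                     # per-content request counts over 3 epochs
    "hd_map_tile_17": [40, 55, 70],
    "traffic_feed_a": [90, 60, 30],
    "fw_update_v2":   [10, 10, 12],
}
CACHE_SLOTS = 2                 # edge server can cache two items

def predict_popularity(counts, window=3):
    """Weighted moving average: more recent epochs count more heavily
    (a crude stand-in for the paper's temporal-spatial deep model)."""
    recent = counts[-window:]
    weights = range(1, len(recent) + 1)
    return sum(w * c for w, c in zip(weights, recent)) / sum(weights)

predicted = {name: predict_popularity(h) for name, h in history.items()}
# Greedy caching decision: keep the k items with highest predicted popularity
cached = sorted(predicted, key=predicted.get, reverse=True)[:CACHE_SLOTS]
print(cached)
```

Note how the rising-demand item outranks the one with a larger raw total but a falling trend; capturing exactly this kind of time-varying popularity is the motivation for the prediction stage, while the real system replaces the greedy rule with a DRL agent that also allocates computing and communication resources.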