
    A survey of multi-access edge computing in 5G and beyond: fundamentals, technology integration, and state-of-the-art

    Driven by the emergence of new compute-intensive applications and the vision of the Internet of Things (IoT), it is foreseen that the emerging 5G network will face an unprecedented increase in traffic volume and computation demands. However, end users mostly have limited storage capacities and finite processing capabilities, so how to run compute-intensive applications on resource-constrained devices has recently become a natural concern. Mobile edge computing (MEC), a key technology in the emerging fifth-generation (5G) network, can optimize mobile resources by hosting compute-intensive applications, process large data before sending them to the cloud, provide cloud-computing capabilities within the radio access network (RAN) in close proximity to mobile users, and offer context-aware services with the help of RAN information. MEC therefore enables a wide variety of applications in which real-time response is strictly required, e.g., driverless vehicles, augmented reality, robotics, and immersive media. Indeed, the paradigm shift from 4G to 5G could become a reality with the advent of new technological concepts. The successful realization of MEC in the 5G network is still in its infancy and demands constant effort from both the academic and industry communities. In this survey, we first provide a holistic overview of MEC technology and its potential use cases and applications. Then, we outline up-to-date research on the integration of MEC with the new technologies that will be deployed in 5G and beyond. We also summarize testbeds, experimental evaluations, and open-source activities for edge computing. We further summarize lessons learned from state-of-the-art research works and discuss challenges and potential future directions for MEC research.
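
    The latency argument for MEC sketched in this abstract reduces to a simple comparison: local computation time versus uplink transfer plus edge computation time. Below is a minimal sketch of that trade-off; all task sizes, CPU frequencies, and data rates are illustrative assumptions, not figures from the survey.

    ```python
    # Minimal sketch of the classic MEC offloading trade-off: run a task
    # locally or ship it to an edge server over the RAN. All parameter
    # values below are illustrative assumptions.

    def local_latency(cycles: float, f_local: float) -> float:
        """Seconds to compute `cycles` CPU cycles at `f_local` Hz on the device."""
        return cycles / f_local

    def edge_latency(data_bits: float, rate_bps: float,
                     cycles: float, f_edge: float) -> float:
        """Uplink transmission time plus computation time at the edge server."""
        return data_bits / rate_bps + cycles / f_edge

    task_cycles = 2e9        # 2 Gcycles of work (assumed)
    task_data = 8e6          # 1 MB of input data (assumed)
    t_local = local_latency(task_cycles, f_local=1e9)          # 1 GHz handset
    t_edge = edge_latency(task_data, rate_bps=100e6,           # 100 Mb/s uplink
                          cycles=task_cycles, f_edge=20e9)     # 20 GHz edge CPU

    print(f"local: {t_local:.2f} s, edge: {t_edge:.2f} s")
    # With these numbers the edge finishes in ~0.18 s vs 2 s locally, which
    # is why latency-critical applications (AR, driverless vehicles) favor MEC.
    ```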

    Efficient and Secure Resource Allocation in Mobile Edge Computing Enabled Wireless Networks

    To support emerging applications such as autonomous vehicles and smart homes, and to build an intelligent society, the next-generation Internet of Things (IoT) is calling for up to 50 billion devices connected worldwide. Massive device connections, explosive data circulation, and colossal data-processing demands are driving both industry and academia to explore new solutions. Uploading this vast amount of data to the cloud center for processing will significantly increase the load on backbone networks and cause relatively long latency for time-sensitive applications. A practical solution is to deploy computing resources closer to end users to process the distributed data. Hence, Mobile Edge Computing (MEC) emerged as a promising solution for providing high-speed data-processing services with low latency. However, the implementation of MEC networks is handicapped by various challenges. For one thing, to serve massive IoT devices, dense deployment of edge servers will consume much more energy. For another, uploading sensitive user data through a wireless link introduces potential risks, especially for size-limited IoT devices that cannot implement complicated encryption techniques. This dissertation investigates problems related to Energy Efficiency (EE) and Physical Layer Security (PLS) in MEC-enabled IoT networks and how Non-Orthogonal Multiple Access (NOMA), prediction-based server coordination, and Intelligent Reflecting Surfaces (IRS) can be used to mitigate them. Employing a new spectrum-access method can help achieve greater speed with less power consumption, thereby increasing system EE. We first investigated NOMA-assisted MEC networks and verified that EE performance can be significantly improved. Idle servers can consume unnecessary power, and proactive server coordination can help relieve the tension of increased energy consumption in MEC systems. Our next step was to employ advanced machine-learning algorithms to predict the data workload at the server end and adaptively adjust the system configuration over time, thus reducing the accumulated system cost. We then introduced PLS to our system and investigated the long-term secure EE performance of the MEC-enabled IoT network with NOMA assistance, showing that NOMA can improve both EE and PLS for the network. Finally, we switched from the single-antenna scenario to a multiple-input single-output (MISO) system to exploit space diversity and beamforming techniques in mmWave communication, where an IRS can simultaneously help relieve path loss and reconfigure multipath links. In this final part, we first investigated the secure EE performance of IRS-assisted MISO networks and introduced a friendly jammer to block eavesdroppers and improve the PLS rate. We then combined the IRS with NOMA in the MEC network and showed that the IRS can further enhance system EE.
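
    The EE gain from NOMA claimed here follows from superposition coding with successive interference cancellation (SIC): two users share one channel, and the achieved sum rate is divided by the total power to obtain bits per joule. Below is a minimal two-user sketch of that computation; the bandwidth, channel gains, power split, and circuit power are assumed values for illustration, not results from the dissertation.

    ```python
    import math

    # Two-user downlink NOMA sum rate with SIC, and the resulting energy
    # efficiency (EE). All parameter values are illustrative assumptions.

    def noma_sum_rate(bw_hz, p_near, p_far, g_near, g_far, noise_w):
        # Far (weak) user decodes its own signal, treating the near user's
        # superimposed signal as interference.
        r_far = bw_hz * math.log2(1 + p_far * g_far / (p_near * g_far + noise_w))
        # Near (strong) user first decodes and cancels the far user's signal
        # via SIC, then decodes its own signal interference-free.
        r_near = bw_hz * math.log2(1 + p_near * g_near / noise_w)
        return r_near + r_far

    bw = 1e6                      # 1 MHz channel (assumed)
    noise = 1e-13                 # noise power in watts (assumed)
    g_near, g_far = 1e-6, 1e-8    # channel gains (assumed)
    p_near, p_far = 0.1, 0.9      # watts; more power to the weak user, as in NOMA

    rate = noma_sum_rate(bw, p_near, p_far, g_near, g_far, noise)
    p_circuit = 0.5               # static circuit power in watts (assumed)
    ee = rate / (p_near + p_far + p_circuit)   # bits per joule
    print(f"sum rate: {rate/1e6:.2f} Mb/s, EE: {ee/1e6:.2f} Mb/J")
    ```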

    Task-Oriented Delay-Aware Multi-Tier Computing in Cell-free Massive MIMO Systems

    Get PDF
    Multi-tier computing can enhance task computation by using multi-tier computing nodes. In this paper, we propose a cell-free massive multiple-input multiple-output (MIMO) aided computing system that deploys multi-tier computing nodes to improve computation performance. First, we investigate the computational latency and the total energy consumption for task computation, regarded together as the total cost. Then, we formulate a total-cost minimization problem to design the bandwidth allocation and task allocation while considering realistic heterogeneous delay requirements of the computational tasks. Due to the binary task-allocation variable, the formulated optimization problem is non-convex. Therefore, we solve it by decoupling the original problem into bandwidth-allocation and task-allocation subproblems. As the bandwidth-allocation subproblem is convex, we first determine the bandwidth allocation for a given task-allocation strategy, applying traditional convex optimization to obtain the bandwidth-allocation solution. Based on the asymptotic property of the received signal-to-interference-plus-noise ratio (SINR) in the cell-free massive MIMO setting and on the bandwidth-allocation solution, we formulate a dual problem to solve the task-allocation subproblem by relaxing the binary constraint with Lagrange partial relaxation for heterogeneous task delay requirements. Finally, simulation results demonstrate that our proposed task-offloading scheme outperforms the benchmark schemes, and that the minimum-cost optimal offloading strategy for heterogeneous delay requirements may be governed by the asymptotic property of the received SINR in the proposed cell-free massive MIMO-aided multi-tier computing system. This work was supported by the National Key Project under Grant 2020YFB1807700.
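
    The convex bandwidth-allocation subproblem mentioned above admits a simple closed form in the common case of minimizing the sum of uplink transmission delays: splitting total bandwidth B to minimize sum_k d_k / (b_k * log2(1 + SINR_k)) subject to sum_k b_k = B yields, by the KKT conditions, b_k proportional to sqrt(d_k / log2(1 + SINR_k)). The sketch below illustrates this for assumed task sizes and SINRs; it is a simplified stand-in for the paper's actual formulation, not its implementation.

    ```python
    import math

    # Closed-form bandwidth split minimizing the sum of uplink delays for a
    # fixed task allocation. Task sizes and SINRs are assumed values.

    def allocate_bandwidth(total_bw_hz, data_bits, sinrs):
        spectral_eff = [math.log2(1 + s) for s in sinrs]       # bits/s/Hz
        # KKT conditions give b_k proportional to sqrt(d_k / spectral_eff_k).
        weights = [math.sqrt(d / se) for d, se in zip(data_bits, spectral_eff)]
        scale = total_bw_hz / sum(weights)
        bw = [w * scale for w in weights]
        delays = [d / (b * se) for d, b, se in zip(data_bits, bw, spectral_eff)]
        return bw, delays

    data = [4e6, 1e6, 8e6]          # offloaded task sizes in bits (assumed)
    sinrs = [10.0, 100.0, 5.0]      # per-task received SINRs (assumed)
    bw, delays = allocate_bandwidth(20e6, data, sinrs)
    for k, (b, t) in enumerate(zip(bw, delays)):
        print(f"task {k}: {b/1e6:.2f} MHz, delay {t*1e3:.1f} ms")
    ```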

    Dynamic NOMA-Based Computation Offloading in Vehicular Platoons

    Both the mobile edge computing (MEC) based and the fog computing (FC) aided Internet of Vehicles (IoV) constitute promising paradigms for meeting the demands of low-latency pervasive computing. To this end, we construct a dynamic NOMA-based computation-offloading scheme for vehicular platoons on highways, where vehicles can offload their computing tasks to other platoon members. To cope with the rapidly fluctuating channel quality, we divide the timeline into successive time slots according to the channel's coherence time. Robust computing and offloading decisions are made for each time slot after taking the channel-estimation errors into account. Considering a given time slot, we first analytically characterize both the locally computed and the offloaded source data, as well as the energy consumption of every vehicle in the platoons. We then formulate the problem of minimizing the long-term energy consumption by optimizing the allocation of both communication and computing resources. To solve the formulated problem, we design an online algorithm based on the classic Lyapunov optimization method and the block successive upper-bound minimization (BSUM) method. Finally, numerical simulation results characterize the performance of our algorithm and demonstrate its advantages over both the local computing scheme and the orthogonal multiple access (OMA) based offloading scheme.
    Comment: 11 pages, 9 figures
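
    The Lyapunov-based online algorithm referenced above rests on the drift-plus-penalty principle: each slot, choose the control action that minimizes V times the instantaneous cost plus the queue backlog times the net arrival, which bounds long-term cost without future knowledge. Below is a minimal sketch of that slot-by-slot rule with a toy quadratic energy model; the arrival process, capacity, energy coefficients, and weight V are assumptions for illustration, not the paper's system model.

    ```python
    import random

    # Drift-plus-penalty sketch: each slot, trade instantaneous energy cost
    # against task-queue backlog. All model parameters are assumed.

    V = 2.0            # energy-vs-backlog trade-off weight (assumed)
    CAPACITY = 2.0     # max Mb served per slot (assumed)
    queue = 0.0        # task backlog in Mb
    random.seed(1)

    def energy_cost(served_mb: float) -> float:
        """Toy convex energy model: joules grow quadratically with served Mb."""
        return 0.5 * served_mb ** 2

    for slot in range(5):
        arrival = random.uniform(0.0, 1.0)   # newly offloaded Mb this slot
        # Choose service b minimizing V*energy(b) - queue*b. With the
        # quadratic model the unconstrained minimizer is b* = queue/(2*V*0.5),
        # clipped to the link capacity.
        served = min(queue / (2 * V * 0.5), CAPACITY)
        queue = max(queue - served, 0.0) + arrival
        print(f"slot {slot}: served {served:.2f} Mb, "
              f"energy {energy_cost(served):.3f} J, backlog {queue:.2f} Mb")
    ```

    A larger V makes the controller stingier with energy at the price of a longer backlog, which is exactly the long-term energy/delay trade-off the Lyapunov framework quantifies.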