5 research outputs found

    Energy Minimization for Mobile Edge Computing Networks with Time-Sensitive Constraints

    Mobile edge computing (MEC) provides users with a high quality of experience (QoE) by placing servers with rich services close to the end users. Compared with local computing, MEC can contribute to energy saving, but it increases communication latency. In this paper, we jointly optimize task offloading and resource allocation to minimize energy consumption in an orthogonal frequency division multiple access (OFDMA)-based MEC network, where time-sensitive tasks can be processed at both the local users and the MEC server via partial offloading. Since the optimization variables of the problem are strongly coupled, we first decompose the original problem into two subproblems, offloading selection (PO) and subcarrier and computing resource allocation (PS), and then propose an iterative algorithm to solve them in sequence. Specifically, we derive a closed-form solution for PO and, owing to its NP-hardness, handle PS in an alternating manner in the dual domain. Simulation results demonstrate
    Comment: IEEE GLOBECOM 2020. arXiv admin note: substantial text overlap with arXiv:2003.1271
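The partial-offloading trade-off this abstract describes can be illustrated with a minimal sketch: a task of L bits is split so a fraction is computed locally and the rest is offloaded, and the split is chosen to minimize user energy under a deadline. This is a toy grid search with hypothetical parameters, not the paper's closed-form solution or its OFDMA subcarrier allocation.

```python
# Toy partial-offloading energy model (all parameters hypothetical):
# a fraction lam of an L-bit task runs locally, the rest is offloaded.
def min_energy_split(L=1e6, c=500, f_loc=1e9, f_mec=5e9, rate=2e7,
                     p_tx=0.5, kappa=1e-27, T=0.5, steps=1000):
    best = None
    for i in range(steps + 1):
        lam = i / steps                          # fraction processed locally
        t_loc = lam * L * c / f_loc              # local computing time
        t_off = (1 - lam) * L / rate             # uplink transmission time
        t_mec = (1 - lam) * L * c / f_mec        # MEC processing time
        if max(t_loc, t_off + t_mec) > T:        # deadline (time-sensitive task)
            continue
        e_loc = kappa * f_loc**2 * lam * L * c   # CPU energy: kappa*f^2 per cycle
        e_tx = p_tx * t_off                      # transmit energy
        e = e_loc + e_tx
        if best is None or e < best[0]:
            best = (e, lam)
    return best                                  # (energy, best split) or None
```

With these example numbers full offloading wins, since the radio energy per bit is far below the local CPU energy per bit; tightening `rate` or `T` shifts the optimum back toward local computing.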

    A Parallel Optimal Task Allocation Mechanism for Large-Scale Mobile Edge Computing

    We consider the problem of an intelligent and efficient task allocation mechanism in large-scale mobile edge computing (MEC), which can reduce delay and energy consumption through parallel and distributed optimization. In this paper, we study a joint optimization model with a cooperative task management mechanism among mobile terminals (MTs), the macro cell base station (MBS), and multiple small cell base stations (SBSs) for large-scale MEC applications. We propose a parallel multi-block Alternating Direction Method of Multipliers (ADMM)-based method to model both the low-delay and low-energy requirements of the MEC system, formulating task allocation under these requirements as a nonlinear 0-1 integer programming problem. To solve the optimization problem, we develop an efficient algorithm combining conjugate gradient, Newton, and line search techniques with Logarithmic Smoothing (for global variable updates) and Cyclic Block coordinate Gradient Projection (CBGP, for local variable updates), which guarantees convergence and reduces computational complexity with good scalability. Numerical results demonstrate the effectiveness of the proposed mechanism, which can effectively reduce delay and energy consumption in a large-scale MEC system.
    Comment: 15 pages, 4 figures, resource management for large-scale MEC. arXiv admin note: text overlap with arXiv:2003.1284
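The ADMM machinery behind this abstract alternates between block updates and a dual (multiplier) update. A deliberately tiny sketch of scaled-form consensus ADMM on two quadratic blocks (far simpler than the paper's multi-block 0-1 integer program, all values hypothetical) shows the update pattern:

```python
# Scaled-form consensus ADMM sketch: minimize (x-a)^2 + (z-b)^2 s.t. x = z.
# The optimum is x = z = (a+b)/2.
def admm_consensus(a=1.0, b=3.0, rho=1.0, iters=200):
    x = z = u = 0.0                            # primal blocks and scaled dual
    for _ in range(iters):
        x = (2*a + rho*(z - u)) / (2 + rho)    # x-update (one block, in parallel in general)
        z = (2*b + rho*(x + u)) / (2 + rho)    # z-update (consensus block)
        u += x - z                             # scaled dual (multiplier) update
    return x, z
```

In the paper's setting each block update would be one base station's local subproblem, with the consensus variable enforcing agreement across MTs, MBS, and SBSs.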

    Deep Reinforcement Learning for Stochastic Computation Offloading in Digital Twin Networks

    The rapid development of the Industrial Internet of Things (IIoT) pushes industrial production towards digitalization to improve network efficiency. Digital Twin is a promising technology to empower the digital transformation of IIoT by creating virtual models of physical objects. However, providing network efficiency in IIoT is very challenging due to resource-constrained devices, stochastic tasks, and resource heterogeneity. Distributed resources in IIoT networks can be efficiently exploited through computation offloading to reduce energy consumption while enhancing data processing efficiency. In this paper, we first propose a new paradigm, Digital Twin Networks (DTN), to build the network topology and the stochastic task arrival model in IIoT systems. Then, we formulate the stochastic computation offloading and resource allocation problem to optimize the long-term energy efficiency. As the formulated problem is a stochastic programming problem, we leverage the Lyapunov optimization technique to transform the original problem into a deterministic per-time-slot problem. Finally, we present an Asynchronous Actor-Critic (AAC) algorithm to find the optimal stochastic computation offloading policy. Illustrative results demonstrate that our proposed scheme significantly outperforms the benchmarks.
    Comment: 10 pages
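The Lyapunov step mentioned in this abstract turns a long-term stochastic objective into a per-slot "drift-plus-penalty" rule: each slot, pick the action minimizing V*(energy cost) + Q(t)*(arrivals - service), where Q(t) is a virtual backlog queue. A hedged stand-in with made-up arrival and action parameters (not the paper's AAC policy):

```python
import random

# Drift-plus-penalty sketch: trade energy (weighted by V) against queue backlog.
def drift_plus_penalty(slots=1000, V=10.0, seed=0):
    rng = random.Random(seed)
    actions = [              # (bits served per slot, energy cost) -- hypothetical
        (2.0, 1.0),          # e.g. local execution: low service rate, low energy
        (5.0, 4.0),          # e.g. full offloading: high service rate, high energy
    ]
    Q, energy = 0.0, 0.0
    for _ in range(slots):
        arrivals = rng.uniform(0, 4)                       # stochastic task arrivals
        serve, e = min(actions,
                       key=lambda se: V * se[1] - Q * se[0])  # per-slot greedy rule
        Q = max(Q + arrivals - serve, 0.0)                 # virtual queue update
        energy += e
    return Q, energy / slots                               # final backlog, avg energy
```

With these numbers the rule runs locally while Q < 10 and switches to offloading once backlog builds, keeping the queue bounded; larger V trades more backlog for less energy.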

    Edge Intelligence for Energy-efficient Computation Offloading and Resource Allocation in 5G Beyond

    5G beyond is an end-edge-cloud orchestrated network that can exploit the heterogeneous capabilities of end devices, edge servers, and the cloud, and thus has the potential to enable computation-intensive and delay-sensitive applications via computation offloading. However, in multi-user wireless networks, diverse application requirements and the variety of possible radio access modes for communication among devices make it challenging to design an optimal computation offloading scheme. In addition, obtaining complete network information, including variables such as wireless channel state and available bandwidth and computation resources, is a major issue. Deep Reinforcement Learning (DRL) is an emerging technique for addressing such issues with limited and less accurate network information. In this paper, we utilize DRL to design an optimal computation offloading and resource allocation strategy that minimizes system energy consumption. We first present a multi-user end-edge-cloud orchestrated network where all devices and base stations have computation capabilities. Then, we formulate the joint computation offloading and resource allocation problem as a Markov Decision Process (MDP) and propose a new DRL algorithm to minimize system energy consumption. Numerical results based on a real-world dataset demonstrate that the proposed DRL-based algorithm significantly outperforms the benchmark policies in terms of system energy consumption. Extensive simulations show that the learning rate, discount factor, and number of devices have a considerable influence on the performance of the proposed algorithm.
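The MDP formulation in this abstract can be made concrete with a deliberately tiny stand-in: tabular Q-learning (not the paper's DRL algorithm) on a two-state offloading MDP where the state is channel quality, the action is local-vs-offload, and the reward is negative energy. States, energies, and hyperparameters here are all invented for illustration:

```python
import random

# Tabular Q-learning on a toy offloading MDP.
# State: channel quality (0 = bad, 1 = good). Action: 0 = local, 1 = offload.
def q_learning_offload(steps=5000, alpha=0.1, gamma=0.9, eps=0.1, seed=1):
    rng = random.Random(seed)
    energy = {(0, 0): 4.0, (0, 1): 6.0,   # bad channel: offloading costs more
              (1, 0): 4.0, (1, 1): 2.0}   # good channel: offloading is cheap
    Q = {(s, a): 0.0 for s in (0, 1) for a in (0, 1)}
    s = rng.randint(0, 1)
    for _ in range(steps):
        # epsilon-greedy action selection
        if rng.random() < eps:
            a = rng.randint(0, 1)
        else:
            a = max((0, 1), key=lambda x: Q[(s, x)])
        r = -energy[(s, a)]                # reward = negative energy consumption
        s2 = rng.randint(0, 1)             # i.i.d. channel for simplicity
        Q[(s, a)] += alpha * (r + gamma * max(Q[(s2, 0)], Q[(s2, 1)]) - Q[(s, a)])
        s = s2
    # greedy policy per state
    return {st: max((0, 1), key=lambda a: Q[(st, a)]) for st in (0, 1)}
```

The learned policy offloads only when the channel is good, which is the kind of channel-aware behavior a DRL agent scales up to continuous, high-dimensional state spaces.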

    CL-ADMM: A Cooperative Learning Based Optimization Framework for Resource Management in MEC

    We consider the problem of an intelligent and efficient resource management framework for mobile edge computing (MEC) that can reduce delay and energy consumption, featuring distributed optimization and an efficient congestion avoidance mechanism. In this paper, we present a Cooperative Learning framework for resource management in MEC from an Alternating Direction Method of Multipliers (ADMM) perspective, called the CL-ADMM framework. First, in order to cache tasks efficiently within a group, a novel task popularity estimation scheme based on a semi-Markov process model is proposed, and a greedy cooperative task caching mechanism is established, which can effectively reduce delay and energy consumption. Second, to address group congestion, a dynamic task migration scheme based on cooperative improved Q-learning is proposed, which can effectively reduce delay and alleviate congestion. Third, to minimize delay and energy consumption for resource allocation within a group, we formulate an optimization problem with a large number of variables and exploit a novel ADMM-based scheme to address it; the scheme reduces the complexity of the problem through a new set of auxiliary variables, and the resulting subproblems are all convex and can be solved by a primal-dual approach with guaranteed convergence. We then prove convergence using Lyapunov theory. Numerical results demonstrate the effectiveness of CL-ADMM, which can effectively reduce delay and energy consumption in MEC.
    Comment: 17 pages, 11 figures, submitted to journal
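The greedy caching step in this abstract can be sketched in a few lines: estimate task popularity, then fill the cache by descending popularity per unit size. This sketch replaces the paper's semi-Markov popularity estimator with plain request frequency, and all inputs below are hypothetical:

```python
from collections import Counter

# Greedy cooperative-caching sketch: popularity is estimated from an observed
# request trace (the paper uses a semi-Markov model instead), and tasks are
# cached by descending popularity density until capacity runs out.
def greedy_cache(requests, sizes, capacity):
    pop = Counter(requests)                  # empirical task popularity
    cached, used = [], 0
    for task, count in sorted(pop.items(),
                              key=lambda kv: -kv[1] / sizes[kv[0]]):
        if used + sizes[task] <= capacity:   # knapsack-style greedy fill
            cached.append(task)
            used += sizes[task]
    return cached
```

For example, with trace `["a","b","a","c","a","b"]`, sizes `{"a": 2, "b": 2, "c": 3}`, and capacity 4, the greedy fill caches "a" and "b" and leaves the rarely requested "c" out.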