Coded Computing and Cooperative Transmission for Wireless Distributed Matrix Multiplication
Consider a multi-cell mobile edge computing network, in which each user
wishes to compute the product of a user-generated data matrix with a
network-stored matrix. This is done through task offloading by means of input
uploading, distributed computing at edge nodes (ENs), and output downloading.
Task offloading may suffer from long delays, since servers at some ENs may
straggle due to random computation times, and wireless channels may experience
severe fading and interference. This paper aims to investigate the interplay
among upload, computation, and download latencies during the offloading process
in the high signal-to-noise ratio regime from an information-theoretic
perspective. A policy based on cascaded coded computing and on coordinated and
cooperative interference management in uplink and downlink is proposed and
proved to be approximately optimal for a sufficiently large upload time. By
investing more time in uplink transmission, the policy creates data redundancy
at the ENs, which can reduce the computation time, by enabling the use of coded
computing, as well as the download time via transmitter cooperation. Moreover,
the policy allows computation time to be traded for download time. Numerical
examples demonstrate that the proposed policy can improve over existing schemes
by significantly reducing the end-to-end execution time.
Comment: To appear in IEEE Transactions on Communications
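The coded-computing ingredient of the policy can be illustrated with a minimal polynomial-code sketch: the input matrix is split into k row blocks, encoded into n > k coded blocks (one per EN), and the product is recovered from any k worker results, tolerating n - k stragglers. This is a generic sketch of MDS-coded matrix multiplication, not the paper's exact cascaded construction or wireless model; block counts and evaluation points are illustrative.

```python
import numpy as np

def encode_blocks(A, k, xs):
    """Split A row-wise into k blocks and encode one coded block per
    evaluation point x: A_tilde(x) = sum_j A_j * x**j (polynomial code)."""
    blocks = np.split(A, k)  # requires A's row count divisible by k
    return [sum(blocks[j] * (x ** j) for j in range(k)) for x in xs]

def decode(results, xs_used, k):
    """Recover A @ B from any k worker outputs A_tilde(x_i) @ B by
    polynomial interpolation (a Vandermonde solve)."""
    V = np.vander(np.array(xs_used, dtype=float), k, increasing=True)
    coeffs = np.linalg.solve(V, np.stack([r.ravel() for r in results]))
    shape = results[0].shape
    return np.vstack([c.reshape(shape) for c in coeffs])

# Toy run: 5 ENs, any 3 results suffice (2 stragglers tolerated).
rng = np.random.default_rng(0)
A = rng.standard_normal((6, 4))
B = rng.standard_normal((4, 3))
xs = [1.0, 2.0, 3.0, 4.0, 5.0]
outs = [blk @ B for blk in encode_blocks(A, 3, xs)]       # one per EN
recovered = decode([outs[0], outs[2], outs[4]],           # ENs 1 and 3 straggle
                   [xs[0], xs[2], xs[4]], 3)
```

Uploading the redundant coded blocks is exactly the "invest more upload time to cut computation time" trade-off the abstract describes: the decoder never waits for the slowest n - k servers.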
Multi-Task and Multi-Step Computation Offloading in Ultra-dense IoT Networks
With the rapid development of the Internet of Things (IoT), various IoT mobile devices (IMDs) need to process increasingly computation-intensive and delay-sensitive tasks, which poses new challenges for mobile edge networks. To address these challenges, MEC-equipped ultra-dense IoT networks have emerged. In such networks, IMDs can save their computation resources and reduce their energy consumption by offloading computation-intensive tasks to edge computing servers for processing. However, offloading incurs additional transmission time and higher delay. In view of this, an optimization problem is formulated to find the trade-off between energy consumption and delay, jointly considering user (IMD) association, computation offloading, and resource allocation for ultra-dense MEC-enabled IoT. To further balance the network load and fully utilize the computation resources, the optimization problem is ultimately modeled as a multi-step computation offloading problem. Finally, an intelligent algorithm, adaptive particle swarm optimization (PSO), is utilized to solve the proposed problem. Compared with traditional PSO, the total cost of adaptive PSO is reduced by 20%–65%.
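The adaptive-PSO idea can be sketched as a standard particle swarm whose inertia weight adapts over the run (here a simple linear decay, one common adaptation rule; the paper's exact adaptation law and energy-delay cost model are not reproduced, so the cost function below is a placeholder supplied by the caller):

```python
import numpy as np

def adaptive_pso(cost, dim, n_particles=30, iters=100, seed=0):
    """Minimize cost(x) over x in [0, 1]^dim (e.g. per-user offloading
    fractions). Inertia weight decays 0.9 -> 0.4: exploration first,
    exploitation later."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(0.0, 1.0, (n_particles, dim))
    v = np.zeros((n_particles, dim))
    pbest, pcost = x.copy(), np.array([cost(p) for p in x])
    g = pbest[pcost.argmin()].copy()
    for t in range(iters):
        w = 0.9 - 0.5 * t / iters                    # adaptive inertia weight
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + 2.0 * r1 * (pbest - x) + 2.0 * r2 * (g - x)
        x = np.clip(x + v, 0.0, 1.0)                 # keep decisions feasible
        c = np.array([cost(p) for p in x])
        better = c < pcost
        pbest[better], pcost[better] = x[better], c[better]
        g = pbest[pcost.argmin()].copy()
    return g, float(pcost.min())
```

For instance, with a toy separable cost whose minimum sits at 0.3 per user, the swarm should land near that point:

```python
best, best_cost = adaptive_pso(lambda p: float(((p - 0.3) ** 2).sum()), dim=4)
```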
A Bilevel Optimization Approach for Joint Offloading Decision and Resource Allocation in Cooperative Mobile Edge Computing
This paper studies a multi-user cooperative mobile edge computing offloading (CoMECO) system in a multi-user interference environment, in which delay-sensitive tasks may be executed on local devices, cooperative devices, or the primary MEC server. In this system, we jointly optimize the offloading decision and computation resource allocation to minimize the total energy consumption of all mobile users under the delay constraint. If this problem is solved directly, the offloading decision and computation resource allocation are generally generated simultaneously but separately. Note, however, that they are closely coupled; under this condition their dependency is not well considered, leading to poor performance. We therefore transform this problem into a bilevel optimization problem, in which the offloading decision is generated in the upper level, and the optimal allocation of computation resources is then obtained in the lower level based on the given offloading decision. In this way, the dependency between the offloading decision and computation resource allocation can be fully taken into account. Subsequently, a bilevel optimization approach, called BiJOR, is proposed. In BiJOR, candidate modes are first pruned to reduce the number of infeasible offloading decisions. Afterward, the upper level optimization problem is solved by ant colony system (ACS). Furthermore, a sorting strategy is incorporated into ACS to construct feasible offloading decisions with a higher probability, and a local search operator is designed in ACS to accelerate convergence. The lower level optimization problem is solved by the monotonic optimization method. In addition, BiJOR is extended to deal with a complex scenario with channel selection. Extensive experiments are carried out to investigate the performance of BiJOR on two sets of instances with up to 400 mobile users. The experimental results demonstrate the effectiveness of BiJOR and the superiority of the CoMECO system.
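The bilevel decomposition can be sketched in miniature: the upper level picks a binary offloading decision (brute-force enumeration below stands in for the paper's ACS, which matters only at scale), and the lower level allocates server CPU to the offloaded tasks given that decision. The energy model (kappa * f^2 * cycles) and all constants are illustrative assumptions, not the paper's formulation.

```python
from itertools import product

def lower_level(decision, cycles, deadline, F):
    """Given a 0/1 offloading decision, allocate edge CPU (cycles/s) so each
    offloaded task i meets its deadline: f_i = cycles[i] / deadline[i].
    Returns None if the server capacity F is exceeded (infeasible)."""
    need = [cycles[i] / deadline[i] for i, off in enumerate(decision) if off]
    return None if sum(need) > F else need

def bilevel_bruteforce(cycles, deadline, F, kappa=1e-27, f_local=1e9):
    """Upper level: enumerate offloading decisions (the paper uses ACS here).
    Objective: total local energy, kappa * f_local^2 * cycles (toy model)."""
    best = (float("inf"), None)
    for decision in product([0, 1], repeat=len(cycles)):
        if lower_level(decision, cycles, deadline, F) is None:
            continue  # server cannot meet the offloaded deadlines
        if any(not off and cycles[i] / f_local > deadline[i]
               for i, off in enumerate(decision)):
            continue  # a local task would miss its deadline
        energy = sum(kappa * f_local ** 2 * cycles[i]
                     for i, off in enumerate(decision) if not off)
        best = min(best, (energy, decision))
    return best
```

With three tasks of 1, 2, and 3 Gcycles, 1 s deadlines, a 1 GHz local CPU, and a 5 Gcycles/s server, tasks 2 and 3 cannot finish locally in time, so the only feasible choice keeps task 1 local and offloads the rest; solving the levels jointly but separately would miss exactly this coupling.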
Decentralized computation offloading for multi-user mobile edge computing: a deep reinforcement learning approach
Mobile edge computing (MEC) has recently emerged as a promising solution to relieve resource-limited mobile devices of computation-intensive tasks, enabling devices to offload workloads to nearby MEC servers and improve the quality of the computation experience. In this paper, an MEC-enabled multi-user multi-input multi-output (MIMO) system with stochastic wireless channels and task arrivals is considered. In order to minimize the long-term average computation cost in terms of power consumption and buffering delay at each user, a deep reinforcement learning (DRL)-based dynamic computation offloading strategy is investigated to build a scalable system with limited feedback. Specifically, a continuous action space-based DRL approach named deep deterministic policy gradient (DDPG) is adopted to learn decentralized computation offloading policies at all users respectively, where local execution and task offloading powers are adaptively allocated according to each user's local observation. Numerical results demonstrate that the proposed DDPG-based strategy can help each user learn an efficient dynamic offloading policy, and also verify the superiority of its continuous power allocation capability over policies learned by conventional discrete action space-based reinforcement learning approaches such as deep Q-network (DQN), as well as some other greedy strategies, with reduced computation cost. In addition, the power-delay tradeoff for computation offloading is analyzed for both the DDPG-based and DQN-based strategies.
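The per-slot environment such a DRL agent interacts with can be sketched as a task buffer drained by two continuous actions, local execution power and transmit power, with the cost combining power and a queue-length (delay) proxy. All constants below (DVFS coefficient, channel model, delay weight) are illustrative assumptions standing in for the paper's system model, not its actual parameters.

```python
import numpy as np

def step_queue(q, arrival, p_local, p_tx, h, slot=1e-3,
               kappa=1e-27, cycles_per_bit=1000.0, bw=1e6, n0=1e-9):
    """One time slot of the buffer dynamics: bits served by the local CPU
    (power kappa * f^3, so f = (p_local/kappa)^(1/3)) plus bits offloaded
    at the Shannon rate of a channel with gain h. Returns (next queue
    backlog in bits, per-slot cost = total power + weighted backlog)."""
    f = (p_local / kappa) ** (1.0 / 3.0)                 # CPU frequency, Hz
    served_local = f * slot / cycles_per_bit             # bits computed locally
    rate = bw * np.log2(1.0 + p_tx * h / (n0 * bw))      # offloading rate, b/s
    served_tx = rate * slot                              # bits offloaded
    q_next = max(q - served_local - served_tx, 0.0) + arrival
    cost = (p_local + p_tx) + 0.1 * q_next               # power + delay proxy
    return q_next, cost
```

The continuous (p_local, p_tx) pair is exactly what DDPG's actor outputs per slot, whereas a DQN agent would have to quantize both powers into a discrete action grid; the 0.1 weight in the cost is the knob behind the power-delay tradeoff the abstract analyzes.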