Task Graph offloading via Deep Reinforcement Learning in Mobile Edge Computing
Various mobile applications composed of dependent tasks are gaining
widespread popularity and becoming increasingly complex. These applications
often have low-latency requirements, driving a significant surge in demand
for computing resources. With the emergence of mobile edge computing (MEC),
offloading application tasks onto small-scale devices deployed at the edge of
the mobile network has become a central issue in delivering a high-quality
user experience. However, because the MEC environment is dynamic, most
existing work on task graph offloading relies heavily on expert knowledge or
accurate analytical models and fails to fully adapt to environmental changes,
degrading the user experience. This paper investigates task graph offloading
in MEC, considering the time-varying computation capabilities of edge
computing devices. To adapt to environmental changes, we model task graph
scheduling for computation offloading as a Markov Decision Process (MDP). We
then design a deep reinforcement learning algorithm (SATA-DRL) that learns
the task scheduling strategy through interaction with the environment to
improve the user experience. Extensive simulations validate that SATA-DRL is
superior to existing strategies in reducing average makespan and deadline
violations.
Comment: 13 figures
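As a hedged illustration of the MDP formulation described in the abstract (not the SATA-DRL algorithm itself), the sketch below models task graph scheduling on a toy DAG: the state captures the ready tasks and device availability times, an action assigns the next ready task to a device, and the reward is the negative increase in makespan. The DAG, execution times, and device speeds are invented for illustration; the paper's environment additionally varies device capabilities over time.

```python
import numpy as np

# Toy DAG: task -> list of predecessor tasks (values are illustrative).
dag = {0: [], 1: [0], 2: [0], 3: [1, 2]}
exec_time = {0: 2.0, 1: 3.0, 2: 1.0, 3: 2.0}   # work units per task

class TaskGraphEnv:
    """Minimal MDP sketch: schedule one ready task per step."""
    def __init__(self, speeds=(1.0, 2.0)):
        self.speeds = speeds        # time-varying in the paper; fixed here
        self.reset()

    def reset(self):
        self.finish = {}                          # task -> finish time
        self.dev_free = [0.0] * len(self.speeds)  # device availability times
        return self._state()

    def _ready(self):
        return [t for t in dag
                if t not in self.finish
                and all(p in self.finish for p in dag[t])]

    def _state(self):
        return (tuple(sorted(self._ready())), tuple(self.dev_free))

    def step(self, device):
        task = self._ready()[0]                   # schedule first ready task
        start = max(self.dev_free[device],
                    max((self.finish[p] for p in dag[task]), default=0.0))
        old_makespan = max(self.finish.values(), default=0.0)
        end = start + exec_time[task] / self.speeds[device]
        self.finish[task] = end
        self.dev_free[device] = end
        # Reward: negative growth of the makespan caused by this assignment.
        reward = -(max(end, old_makespan) - old_makespan)
        done = len(self.finish) == len(dag)
        return self._state(), reward, done

env = TaskGraphEnv()
state = env.reset()
total, done = 0.0, False
while not done:
    # Greedy baseline policy: always pick the earliest-free device.
    state, r, done = env.step(int(np.argmin(env.dev_free)))
    total += r
```

A DRL agent would replace the greedy policy with a learned mapping from states to device choices, trained to maximize the cumulative reward (i.e., minimize the final makespan).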
Stacked Auto Encoder Based Deep Reinforcement Learning for Online Resource Scheduling in Large-Scale MEC Networks
An online resource scheduling framework is proposed for minimizing the sum of weighted task latency for all the Internet-of-Things (IoT) users, by optimizing offloading decision, transmission power, and resource allocation in the large-scale mobile-edge computing (MEC) system. Toward this end, a deep reinforcement learning (DRL)-based solution is proposed, which includes the following components. First, a related and regularized stacked autoencoder (2r-SAE) with unsupervised learning is applied to perform data compression and representation for high-dimensional channel quality information (CQI) data, which reduces the state space for DRL. Second, we present an adaptive simulated annealing approach (ASA) as the action search method of DRL, in which an adaptive h-mutation is used to guide the search direction and an adaptive iteration is proposed to enhance the search efficiency during the DRL process. Third, a preserved and prioritized experience replay (2p-ER) is introduced to assist the DRL in training the policy network and finding the optimal offloading policy. Numerical results demonstrate that the proposed algorithm achieves near-optimal performance while significantly decreasing the computational time compared with existing benchmarks.
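The paper's ASA variant (adaptive h-mutation and adaptive iteration) is not detailed in the abstract; as a rough sketch of the underlying idea, the snippet below runs plain simulated annealing over binary offloading decisions against an illustrative separable latency model. The cost values, cooling schedule, and step count are assumptions, not the paper's.

```python
import math
import random

random.seed(1)
n_users = 12
# Illustrative per-user costs (not the paper's latency model):
local = [random.uniform(2.0, 5.0) for _ in range(n_users)]  # run locally
edge = [random.uniform(0.5, 3.0) for _ in range(n_users)]   # offload to MEC

def latency(x):
    """Weighted sum of task latencies for offloading decision vector x."""
    return sum(e if xi else l for xi, l, e in zip(x, local, edge))

def anneal(n_steps=2000, t0=1.0, cooling=0.995):
    x = [random.randint(0, 1) for _ in range(n_users)]
    best, best_cost, t = x[:], latency(x), t0
    for _ in range(n_steps):
        cand = x[:]
        cand[random.randrange(n_users)] ^= 1    # flip one offloading bit
        delta = latency(cand) - latency(x)
        # Accept improvements always; accept worse moves with probability
        # exp(-delta / t), which shrinks as the temperature cools.
        if delta < 0 or random.random() < math.exp(-delta / t):
            x = cand
            if latency(x) < best_cost:
                best, best_cost = x[:], latency(x)
        t *= cooling
    return best, best_cost

best, best_cost = anneal()
```

In the DRL setting, a search like this serves as the action-selection step: rather than enumerating the exponentially many offloading vectors, the agent anneals toward a good action for the current state.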
A Novel Cross Entropy Approach for Offloading Learning in Mobile Edge Computing
In this letter, we propose a novel offloading learning approach to balance energy consumption and latency in a multi-tier network with mobile edge computing. To solve this integer programming problem, instead of using conventional optimization tools, we apply a cross entropy approach with iterative learning of the probability of elite solution samples. Compared to existing methods, the proposed approach permits a parallel computing architecture and is verified to be computationally very efficient. Specifically, it achieves performance close to the optimum and is robust to different choices of the hyperparameter values in the proposed learning approach.
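The cross-entropy idea of iteratively learning a sampling distribution from elite samples can be sketched as follows. The Bernoulli parameterization over binary offloading decisions and the toy cost model are assumptions for illustration, not the letter's actual formulation; note that the per-iteration sample evaluations are independent, which is what permits the parallel architecture mentioned above.

```python
import numpy as np

rng = np.random.default_rng(0)
n_tasks = 10
# Illustrative costs: execute locally vs. offload to the edge.
local_cost = rng.uniform(2.0, 5.0, n_tasks)
edge_cost = rng.uniform(0.5, 3.0, n_tasks)

def cost(x):
    """Total cost of binary offloading vector x (1 = offload)."""
    return np.where(x == 1, edge_cost, local_cost).sum()

def cross_entropy_offloading(n_iter=30, n_samples=200, elite_frac=0.1):
    p = np.full(n_tasks, 0.5)        # Bernoulli sampling probabilities
    n_elite = max(1, int(elite_frac * n_samples))
    for _ in range(n_iter):
        # Sample candidate offloading vectors (parallelizable step).
        samples = (rng.random((n_samples, n_tasks)) < p).astype(int)
        costs = np.array([cost(s) for s in samples])
        elite = samples[np.argsort(costs)[:n_elite]]   # lowest-cost samples
        # Move the distribution toward the elites, with smoothing.
        p = 0.7 * elite.mean(axis=0) + 0.3 * p
    return (p > 0.5).astype(int)

best = cross_entropy_offloading()
```

The smoothing factor keeps the probabilities from collapsing prematurely; the elite fraction and iteration count are the hyperparameters the letter reports robustness to.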
Hyperprofile-based Computation Offloading for Mobile Edge Networks
In recent studies, researchers have developed various computation offloading
frameworks for bringing cloud services closer to the user via edge networks.
Specifically, an edge device needs to offload computationally intensive tasks
because of energy and processing constraints. These constraints present the
challenge of identifying which edge nodes should receive tasks to reduce
overall resource consumption. We propose a unique solution to this problem
which incorporates elements from Knowledge-Defined Networking (KDN) to make
intelligent predictions about offloading costs based on historical data. Each
server instance can be represented in a multidimensional feature space where
each dimension corresponds to a predicted metric. We compute features for a
"hyperprofile" and position nodes based on the predicted costs of offloading a
particular task. We then perform a k-Nearest Neighbor (kNN) query within the
hyperprofile to select nodes for offloading computation. This paper formalizes
our hyperprofile-based solution and explores the viability of using machine
learning (ML) techniques to predict metrics useful for computation offloading.
We also investigate the effects of using different distance metrics for the
queries. Our results show that various network metrics can be modeled
accurately with regression, and that there are circumstances where kNN
queries using Euclidean distance, as opposed to rectilinear distance, are
more favorable.
Comment: 5 pages, NSF REU Site publication
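A minimal sketch of a hyperprofile kNN query, assuming each edge node is a point whose coordinates are predicted offloading metrics (the metric names and values below are invented), supporting both distance metrics compared in the paper:

```python
import numpy as np

# Each row is an edge node in the "hyperprofile"; columns are predicted
# metrics, e.g. CPU time, transfer latency, energy (names are illustrative).
nodes = np.array([
    [0.2, 0.5, 0.3],
    [0.9, 0.1, 0.4],
    [0.4, 0.4, 0.2],
    [0.8, 0.9, 0.7],
])

def knn_query(task_point, k=2, metric="euclidean"):
    """Return indices of the k nodes closest to the task's predicted costs."""
    diff = nodes - task_point
    if metric == "euclidean":
        dist = np.sqrt((diff ** 2).sum(axis=1))
    else:  # rectilinear (Manhattan / L1) distance
        dist = np.abs(diff).sum(axis=1)
    return np.argsort(dist)[:k]

# Query point: the predicted cost profile of a task to be offloaded.
selected = knn_query(np.array([0.3, 0.4, 0.25]))
```

The offloading decision then dispatches the task to the returned nodes; swapping `metric` reproduces the Euclidean-versus-rectilinear comparison the results discuss.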