Computation Rate Maximization in UAV-Enabled Wireless Powered Mobile-Edge Computing Systems
Mobile edge computing (MEC) and wireless power transfer (WPT) are two
promising techniques to enhance the computation capability and to prolong the
operational time of low-power wireless devices that are ubiquitous in the
Internet of Things. However, the computation performance and the harvested energy are
significantly impacted by the severe propagation loss. In order to address this
issue, an unmanned aerial vehicle (UAV)-enabled MEC wireless powered system is
studied in this paper. The computation rate maximization problems in a
UAV-enabled MEC wireless powered system are investigated under both partial and
binary computation offloading modes, subject to the energy harvesting causal
constraint and the UAV's speed constraint. These problems are non-convex and
challenging to solve. A two-stage algorithm and a three-stage alternative
algorithm are respectively proposed for solving the formulated problems. The
closed-form expressions for the optimal central processing unit frequencies,
user offloading time, and user transmit power are derived. The optimal
selection scheme on whether users choose to locally compute or offload
computation tasks is proposed for the binary computation offloading mode.
Simulation results show that our proposed resource allocation schemes
outperform other benchmark schemes. The results also demonstrate that the
proposed schemes converge quickly and have low computational complexity.
Comment: This paper has been accepted by IEEE JSA
Computation Rate Maximization for Wireless Powered Mobile-Edge Computing with Binary Computation Offloading
In this paper, we consider a multi-user mobile edge computing (MEC) network
powered by wireless power transfer (WPT), where each energy-harvesting wireless
device (WD) follows a binary computation offloading policy, i.e., the data set of a task has to
be executed as a whole either locally or remotely at the MEC server via task
offloading. In particular, we are interested in maximizing the (weighted) sum
computation rate of all the WDs in the network by jointly optimizing the
individual computing mode selection (i.e., local computing or offloading) and
the system transmission time allocation (on WPT and task offloading). The major
difficulty lies in the combinatorial nature of multi-user computing mode
selection and its strong coupling with transmission time allocation. To tackle
this problem, we first consider a decoupled optimization, where we assume that
the mode selection is given and propose a simple bisection search algorithm to
obtain the conditional optimal time allocation. On top of that, a coordinate
descent method is devised to optimize the mode selection. The method is simple
in implementation but may suffer from high computational complexity in a
large-size network. To address this problem, we further propose a joint
optimization method based on the ADMM (alternating direction method of
multipliers) decomposition technique, whose computational complexity grows
much more slowly as the network size increases. Extensive simulations
show that both proposed methods can efficiently achieve near-optimal
performance under various network setups, and significantly outperform the
other representative benchmark methods considered.
Comment: This paper has been accepted for publication in IEEE Transactions on Wireless Communications
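The decoupled step above can be sketched numerically. With the computing-mode selection fixed, the sum computation rate is a concave function of the wireless-power-transfer time fraction, so a bisection search on its derivative recovers the conditional optimum. The rate model and channel gains below are illustrative assumptions, not the paper's exact formulation.

```python
# Minimal sketch: bisection over the WPT time fraction a in (0, 1),
# assuming the sum rate R(a) is concave (as in standard WPCN models).
import math

def sum_rate(a, gains=(1.0, 2.0, 0.5)):
    """Illustrative concave sum rate: users harvest during a, offload during 1-a.
    Assumed per-user rate: (1-a)/n * log2(1 + g_i * a / (1-a))."""
    n = len(gains)
    if a <= 0 or a >= 1:
        return 0.0
    return sum((1 - a) / n * math.log2(1 + g * a / (1 - a)) for g in gains)

def bisection_time_allocation(lo=1e-6, hi=1 - 1e-6, tol=1e-9):
    """Bisection on the numerical derivative of the concave rate R(a)."""
    h = 1e-7
    while hi - lo > tol:
        mid = (lo + hi) / 2
        slope = (sum_rate(mid + h) - sum_rate(mid - h)) / (2 * h)
        if slope > 0:
            lo = mid  # optimum lies to the right
        else:
            hi = mid  # optimum lies to the left
    return (lo + hi) / 2

a_star = bisection_time_allocation()
```

Because the interval halves on each step, the conditional optimum is found to within the tolerance in logarithmically many rate evaluations, which is what makes this inner step cheap enough to nest inside a mode-selection search.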
Optimal Resource Allocation for Wireless Powered Mobile Edge Computing with Dynamic Task Arrivals
This paper considers a wireless powered multiuser mobile edge computing (MEC)
system, where a multi-antenna access point (AP) employs the radio-frequency
(RF) signal based wireless power transfer (WPT) to charge a number of
distributed users, and each user utilizes the harvested energy to execute
computation tasks via local computing and task offloading. We consider the
frequency division multiple access (FDMA) protocol to support simultaneous task
offloading from multiple users to the AP. Different from previous works that
considered one-shot optimization with static task models, we study the joint
computation and wireless resource allocation optimization with dynamic task
arrivals over a finite time horizon consisting of multiple slots. Under this
setup, our objective is to minimize the system energy consumption including the
AP's transmission energy and the MEC server's computing energy over the whole
horizon, by jointly optimizing the transmit energy beamforming at the AP, and
the local computing and task offloading strategies at the users over different
time slots. To characterize the fundamental performance limit of such systems,
we focus on the offline optimization by assuming the task and channel
information are known a priori at the AP. In this case, the energy minimization
problem is a convex optimization problem. Leveraging the Lagrange
duality method, we obtain the optimal solution in a well-structured
form. It is shown that in order to maximize the system energy efficiency,
the optimal number of task input-bits at each user and the AP are monotonically
increasing over time, and the offloading strategies at different users depend
on both the wireless channel conditions and the task load at the AP. Numerical
results demonstrate the benefit of the proposed joint-WPT-MEC design over
alternative benchmark schemes without such joint design.
Comment: 7 pages, 3 figures, accepted by IEEE ICC 2019, Shanghai, China
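The convex structure exploited above can be checked with a small numerical example. Under an assumed normalized rate model, the transmit energy needed to offload b bits in a unit slot is E(b) = n0/h * (2**b - 1), which is convex in b; splitting a workload evenly across slots, as the monotone optimal schedule tends to do once task arrivals permit, never costs more energy than a lopsided split.

```python
# Convexity check for the assumed offloading-energy model E(b) = n0/h * (2^b - 1).
def tx_energy(bits, h=1.0, n0=1.0):
    """Energy to offload `bits` in one unit-duration slot (illustrative model)."""
    return n0 / h * (2.0**bits - 1.0)

uneven = tx_energy(6.0) + tx_energy(2.0)  # 63 + 3 = 66
even = tx_energy(4.0) + tx_energy(4.0)    # 15 + 15 = 30
assert even <= uneven  # even smoothing of bits saves transmit energy
```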
Computation Rate Maximization for Wireless Powered Mobile Edge Computing
Integrating mobile edge computing (MEC) and wireless power transfer (WPT) has
been regarded as a promising technique to improve computation capabilities for
self-sustainable Internet of Things (IoT) devices. This paper investigates a
wireless powered multiuser MEC system, where a multi-antenna access point (AP)
(integrated with an MEC server) broadcasts wireless power to charge multiple
users for mobile computing. We consider a time-division multiple access (TDMA)
protocol for multiuser computation offloading. Under this setup, we aim to
maximize the weighted sum of the computation rates (in terms of the number of
computation bits) across all the users, by jointly optimizing the energy
transmit beamformer at the AP, the task partition for the users (for local
computing and offloading, respectively), and the time allocation among the
users. We derive the optimal solution in a semi-closed form via convex
optimization techniques. Numerical results show the merit of the proposed
design over alternative benchmark schemes.
Comment: 6 pages and 2 figures
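The flavor of the semi-closed-form solution can be illustrated with the standard local-computing model (an assumption here, not quoted from the paper): with computing energy k*f^3*t at CPU frequency f over duration t, and f*t/C bits computed at C cycles per bit, the energy-limited optimal frequency has the closed form f* = (E/(k*t))**(1/3).

```python
# Toy numerical check of the closed-form energy-optimal CPU frequency
# under a harvested-energy budget E (all constants are assumptions).
k = 1e-28   # effective switched capacitance (assumed)
C = 1000.0  # CPU cycles per bit (assumed)
E = 1e-3    # harvested energy budget in joules (assumed)
t = 1.0     # computing duration in seconds

f_star = (E / (k * t)) ** (1.0 / 3.0)  # closed-form optimal frequency
bits = f_star * t / C                  # computation bits achieved

# sanity check: f_star spends exactly the energy budget
assert abs(k * f_star**3 * t - E) < 1e-9
```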
Optimized Computation Offloading Performance in Virtual Edge Computing Systems via Deep Reinforcement Learning
To improve the quality of computation experience for mobile devices,
mobile-edge computing (MEC) is a promising paradigm by providing computing
capabilities in close proximity within a sliced radio access network (RAN),
which supports both traditional communication and MEC services. Nevertheless,
the design of computation offloading policies for a virtual MEC system remains
challenging. Specifically, whether to execute a computation task at the mobile
device or to offload it for MEC server execution should adapt to the
time-varying network dynamics. In this paper, we consider MEC for a
representative mobile user (MU) in an ultra-dense sliced RAN, where multiple base
stations (BSs) are available for computation offloading. The
problem of finding an optimal computation offloading policy is modelled as a
Markov decision process, in which our objective is to maximize the long-term
utility performance and an offloading decision is made based on the task
queue state, the energy queue state, and the channel qualities between the MU
and the BSs. To break the curse of high dimensionality in the state space, we first
propose a double deep Q-network (DQN) based strategic computation offloading
algorithm to learn the optimal policy without knowing a priori knowledge of
network dynamics. Then, motivated by the additive structure of the utility
function, a Q-function decomposition technique is combined with the double DQN,
leading to a novel learning algorithm for solving the stochastic
computation offloading problem. Numerical experiments show that our proposed learning
algorithms achieve a significant improvement in computation offloading
performance compared with the baseline policies.
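A minimal sketch of the double-DQN target underlying such algorithms (the Q-values below are illustrative stand-ins, with no claim about the paper's network architecture): the online network selects the next action while the target network evaluates it, which mitigates the overestimation bias of vanilla DQN.

```python
# Double-DQN target: y = r + gamma * Q_target(s', argmax_a Q_online(s', a)).
def double_dqn_target(reward, q_online_next, q_target_next, gamma=0.99, done=False):
    """Compute the bootstrapped target for one transition (illustrative)."""
    if done:
        return reward  # terminal transition: no bootstrap term
    # online network picks the greedy next action ...
    a_star = max(range(len(q_online_next)), key=lambda a: q_online_next[a])
    # ... and the target network evaluates that action's value
    return reward + gamma * q_target_next[a_star]

# online argmax selects action 1; target network values it at 0.4
y = double_dqn_target(1.0, [0.2, 0.8, 0.5], [0.3, 0.4, 0.9])
```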
Resource Allocation in Full-Duplex Mobile-Edge Computing Systems with NOMA and Energy Harvesting
This paper considers a full-duplex (FD) mobile-edge computing (MEC) system
with non-orthogonal multiple access (NOMA) and energy harvesting (EH), where
one group of users simultaneously offloads task data to the base station (BS)
via NOMA while the BS, operating in FD mode, simultaneously receives the data
and broadcasts energy to another group of users. We aim at minimizing the total
energy consumption of the system via power control, time scheduling, and
computation capacity allocation. To solve this nonconvex problem, we first
transform it into an equivalent problem with fewer variables. The equivalent
problem is shown to be
convex in each vector with the other two vectors fixed, which allows us to
design an iterative algorithm with low complexity. Simulation results show that
the proposed algorithm achieves better performance than the conventional
methods.
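The structure exploited by the iterative algorithm, convexity in each variable block with the other blocks fixed, is the classic block coordinate descent pattern: alternately minimize one block in closed form while holding the others. The toy objective below is assumed purely for illustration, not the paper's energy model.

```python
# Block coordinate descent on f(x, y, z) = x^2 + (x-y)^2 + (y-z)^2 + (z-1)^2,
# which is strictly convex in each variable with the other two fixed.
def bcd(iters=100):
    x = y = z = 0.0
    for _ in range(iters):
        x = y / 2.0        # argmin_x of x^2 + (x-y)^2
        y = (x + z) / 2.0  # argmin_y of (x-y)^2 + (y-z)^2
        z = (y + 1.0) / 2.0  # argmin_z of (y-z)^2 + (z-1)^2
    return x, y, z

x, y, z = bcd()
# converges to the joint minimizer (0.25, 0.5, 0.75)
```

Each block update is a cheap closed-form step, which is why such alternating schemes have low per-iteration complexity even when the joint problem is nonconvex.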
Multiuser Computation Offloading and Downloading for Edge Computing with Virtualization
Mobile-edge computing (MEC) is an emerging technology for enhancing the
computational capabilities of mobile devices and reducing their energy
consumption via offloading complex computation tasks to the nearby servers.
Multiuser MEC at servers is widely realized via parallel computing based on
virtualization. Due to finite shared I/O resources, interference between
virtual machines (VMs), called I/O interference, degrades the computation
performance. In this paper, we study the problem of joint radio-and-computation
resource allocation (RCRA) in multiuser MEC systems in the presence of I/O
interference. Specifically, offloading scheduling algorithms are designed
targeting two system performance metrics: sum offloading throughput
maximization and sum mobile energy consumption minimization. Their designs are
formulated as non-convex mixed-integer programming problems, which account for
latency due to offloading, result downloading and parallel computing. A set of
low-complexity algorithms are designed based on a decomposition approach and
leveraging classic techniques from combinatorial optimization. The resultant
algorithms jointly schedule offloading users, control their offloading sizes,
and divide time for communication (offloading and downloading) and computation.
They are either optimal or achieve close-to-optimal performance, as shown by
simulation. Comprehensive simulation results demonstrate that accounting for I/O
interference endows the offloading controller with robustness against this
performance-degrading factor.
Dynamic Computation Offloading for Mobile-Edge Computing with Energy Harvesting Devices
Mobile-edge computing (MEC) is an emerging paradigm to meet the
ever-increasing computation demands from mobile applications. By offloading the
computationally intensive workloads to the MEC server, the quality of
computation experience, e.g., the execution latency, could be greatly improved.
Nevertheless, as the on-device battery capacities are limited, computation
would be interrupted when the battery energy runs out. To provide satisfactory
computation performance as well as achieving green computing, it is of
significant importance to seek renewable energy sources to power mobile devices
via energy harvesting (EH) technologies. In this paper, we will investigate a
green MEC system with EH devices and develop an effective computation
offloading strategy. The execution cost, which addresses both the execution
latency and task failure, is adopted as the performance metric. A
low-complexity online algorithm, namely, the Lyapunov optimization-based
dynamic computation offloading (LODCO) algorithm is proposed, which jointly
decides the offloading decision, the CPU-cycle frequencies for mobile
execution, and the transmit power for computation offloading. A unique
advantage of this algorithm is that the decisions depend only on the
instantaneous side information, without requiring distribution information of
the computation task requests, the wireless channel, or the EH process. The
implementation of the algorithm only requires solving a deterministic problem
in each time slot, for which the optimal solution can be obtained either in
closed form or by bisection search. Moreover, the proposed algorithm is shown
to be asymptotically optimal via rigorous analysis. Simulation results
are presented to verify the theoretical analysis and validate the
effectiveness of the proposed algorithm.
Comment: 33 pages, 11 figures, submitted to IEEE Journal on Selected Areas in Communications
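A highly simplified per-slot decision in the spirit of LODCO (the cost model and all constants below are assumptions, not the paper's drift-plus-penalty terms): using only instantaneous side information, the battery level and channel gain, each slot picks the cheapest feasible option among local execution, offloading, and dropping the task.

```python
# Per-slot deterministic decision (illustrative cost model and constants).
import math

def per_slot_decision(battery, channel_gain, task_cycles=1e9, task_bits=1e6,
                      kappa=1e-27, f_max=1.5e9, p_tx=0.5, bw=1e6, n0=1e-13,
                      drop_penalty=5.0):
    costs = {"drop": drop_penalty}  # task failure incurs a fixed penalty
    # local execution: highest CPU frequency the battery allows
    # (energy model: kappa * f^2 per cycle, so E = kappa * task_cycles * f^2)
    f = min(f_max, (battery / (kappa * task_cycles)) ** 0.5)
    if f > 0:
        costs["local"] = task_cycles / f  # execution latency
    # offloading: transmit task_bits at rate bw * log2(1 + p*h/n0)
    rate = bw * math.log2(1 + p_tx * channel_gain / n0)
    t_tx = task_bits / rate
    if p_tx * t_tx <= battery:       # energy-causality check
        costs["offload"] = t_tx      # transmission latency
    return min(costs, key=costs.get)
```

With a strong channel and enough energy the rule offloads; with a weak channel it computes locally; with a depleted battery it drops the task and pays the penalty, mirroring the offloading/local/failure trade-off the metric above captures.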
Decentralized Computation Offloading for Multi-User Mobile Edge Computing: A Deep Reinforcement Learning Approach
Mobile edge computing (MEC) has recently emerged as a promising solution to
relieve resource-limited mobile devices of computation-intensive tasks, as it
enables devices to offload workloads to nearby MEC servers and improve the
quality of computation experience. Nevertheless, for an MEC system
consisting of multiple mobile users with stochastic task arrivals and wireless
channels, as considered in this paper, it is challenging to design computation
offloading policies that minimize the long-term average computation cost in
terms of power consumption and buffering delay. A deep reinforcement learning (DRL)
based decentralized dynamic computation offloading strategy is investigated to
build a scalable MEC system with limited feedback. Specifically, a continuous
action space-based DRL approach named deep deterministic policy gradient (DDPG)
is adopted to learn efficient computation offloading policies independently at
each mobile user. Thus, powers of both local execution and task offloading can
be adaptively allocated by the learned policies from each user's local
observation of the MEC system. Numerical results demonstrate
that efficient policies can be learned at each user, and that the
proposed DDPG-based decentralized strategy outperforms the conventional deep
Q-network (DQN) based discrete power control strategy and some other greedy
strategies while reducing the computation cost. In addition, the power-delay
tradeoff is analyzed for both the DDPG-based and DQN-based strategies.
Finite Horizon Throughput Maximization and Sensing Optimization in Wireless Powered Devices over Fading Channels
Wireless power transfer (WPT) is a promising technology that provides a
network with a way to replenish the batteries of remote devices by utilizing RF
transmissions. We study a class of harvest-first-transmit-later WPT
policies, in which an access point (AP) first employs RF power transfer to
recharge a wireless powered device (WPD) for a period subject to optimization,
after which the harvested energy is used by the WPD to transmit its
data bits back to the AP over a finite horizon. A significant challenge
regarding the studied WPT scenario is the time-varying nature of the wireless
channel linking the WPD to the AP. We first investigate as a benchmark the
offline case where the channel realizations are known non-causally prior to the
starting of the horizon. For the offline case, by finding the optimal WPT
duration and power allocations in the data transmission period, we derive an
upper bound on the throughput of the WPD. We then focus on the online
counterpart of the problem where the channel realizations are known causally.
We prove that the optimal WPT duration obeys a time-dependent threshold form
depending on the energy state of the WPD. In the subsequent data transmission
stage, the optimal transmit power allocation for the WPD is shown to be of a
fractional structure where at each time slot a fraction of energy depending on
the current channel and a measure of future channel state expectations is
allocated for data transmission. We numerically show that the online policy
performs almost identically to the upper bound. We then consider a data sensing
application, where the WPD adjusts its sensing resolution to balance
the quality of the sensed data against the probability of successfully delivering
it. We use Bayesian inference as a reinforcement learning method to provide a
means for the WPD to learn to balance the sensing resolution.
Comment: Single column, 31 pages
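One plausible instantiation of the fractional online allocation described above (illustrative only; the paper derives the exact fraction): in each slot, spend a share of the remaining energy that grows with the current channel gain relative to the expected gains of the slots still to come.

```python
# Fractional online energy allocation over a finite horizon of fading slots
# (the fraction h / (h + slots_left * h_bar) is an assumed heuristic form).
import math, random

def online_throughput(gains, h_bar):
    """Spend E * h_t / (h_t + slots_left * h_bar) of remaining energy per slot."""
    E = 1.0          # initial harvested energy (assumed)
    total_bits = 0.0
    n = len(gains)
    for t, h in enumerate(gains):
        slots_left = n - t - 1
        e_t = E * h / (h + slots_left * h_bar)  # fractional allocation
        total_bits += math.log2(1 + h * e_t)    # unit bandwidth and noise (assumed)
        E -= e_t
    return total_bits

random.seed(0)
gains = [random.expovariate(1.0) for _ in range(20)]  # Rayleigh-style power gains
bits = online_throughput(gains, h_bar=1.0)
```

Note that in the final slot `slots_left` is zero, so the rule spends all remaining energy, matching the intuition that unused energy at the end of the horizon is wasted.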