Joint Task Offloading and Resource Allocation for Multi-Server Mobile-Edge Computing Networks
Mobile-Edge Computing (MEC) is an emerging paradigm that provides a capillary
distribution of cloud computing capabilities to the edge of the wireless access
network, enabling rich services and applications in close proximity to the end
users. In this article, a MEC-enabled multi-cell wireless network is considered
where each Base Station (BS) is equipped with a MEC server that can assist
mobile users in executing computation-intensive tasks via task offloading. The
problem of Joint Task Offloading and Resource Allocation (JTORA) is studied in
order to maximize the users' task offloading gains, which are measured by the
reduction in task completion time and energy consumption. The considered
problem is formulated as a Mixed Integer Non-linear Program (MINLP) that
involves jointly optimizing the task offloading decision, uplink transmission
power of mobile users, and computing resource allocation at the MEC servers.
Due to the NP-hardness of this problem, solving for the optimal solution is
difficult and impractical for a large-scale network. To overcome this drawback,
our approach is to decompose the original problem into (i) a Resource
Allocation (RA) problem with fixed task offloading decision and (ii) a Task
Offloading (TO) problem that optimizes the optimal-value function corresponding
to the RA problem. We address the RA problem using convex and quasi-convex
optimization techniques, and propose a novel heuristic algorithm to the TO
problem that achieves a suboptimal solution in polynomial time. Numerical
simulation results show that our algorithm performs closely to the optimal
solution and that it significantly improves the users' offloading utility over
traditional approaches.
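The decompose-then-search structure described above can be sketched in Python. Here `ra_optimal_value` is a hypothetical stand-in for the optimal-value function of the convex RA subproblem (the paper's actual convex/quasi-convex solvers and heuristic details are not reproduced); the sketch only illustrates comparing an exponential exhaustive TO search against a polynomial greedy heuristic:

```python
import itertools

def ra_optimal_value(offload_set):
    """Hypothetical stand-in for the convex RA subproblem: given a fixed
    offloading decision, return the achievable offloading gain. We model
    per-user gains discounted by congestion at the shared MEC server."""
    if not offload_set:
        return 0.0
    return len(offload_set) / (1.0 + 0.2 * (len(offload_set) - 1))

def exhaustive_to(users):
    """Optimal baseline: enumerate all 2^N offloading decisions (exponential)."""
    best = max((frozenset(s) for r in range(len(users) + 1)
                for s in itertools.combinations(users, r)),
               key=ra_optimal_value)
    return best, ra_optimal_value(best)

def greedy_to(users):
    """Polynomial-time heuristic sketch: repeatedly add the user whose
    offloading most improves the RA optimal-value function, and stop
    when no single user yields a positive gain."""
    chosen = set()
    improved = True
    while improved:
        improved = False
        base = ra_optimal_value(chosen)
        gains = {u: ra_optimal_value(chosen | {u}) - base
                 for u in users if u not in chosen}
        if gains:
            u, g = max(gains.items(), key=lambda kv: kv[1])
            if g > 1e-9:
                chosen.add(u)
                improved = True
    return chosen, ra_optimal_value(chosen)
```

For this toy gain function the greedy heuristic matches the exhaustive optimum; in general, as the abstract states, such heuristics are only close to optimal.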
Applications of Deep Reinforcement Learning in Communications and Networking: A Survey
This paper presents a comprehensive literature review on applications of deep
reinforcement learning in communications and networking. Modern networks,
e.g., Internet of Things (IoT) and Unmanned Aerial Vehicle (UAV) networks,
are becoming more decentralized and autonomous. In such networks, network
entities need to make decisions locally to maximize network performance
under uncertainty in the network environment. Reinforcement learning has
been used effectively to enable network entities to obtain the optimal
policy, i.e., the decisions or actions to take in their given states, when
the state and action spaces are small.
However, in complex and large-scale networks, the state and action spaces are
usually large, and reinforcement learning may not be able to find the
optimal policy in a reasonable time. Therefore, deep reinforcement
learning, a combination of reinforcement learning and deep learning, has
been developed to overcome these shortcomings. In this survey, we first
give a tutorial on deep
reinforcement learning from fundamental concepts to advanced models. Then, we
review deep reinforcement learning approaches proposed to address emerging
issues in communications and networking. The issues include dynamic network
access, data rate control, wireless caching, data offloading, network security,
and connectivity preservation, which are all important to next-generation
networks such as 5G and beyond. Furthermore, we present applications of deep
reinforcement learning for traffic routing, resource sharing, and data
collection. Finally, we highlight important challenges, open issues, and future
research directions of applying deep reinforcement learning.
Comment: 37 pages, 13 figures, 6 tables, 174 reference papers
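The scalability limit that motivates deep reinforcement learning can be seen in a minimal tabular Q-learning sketch, which is workable only while the Q-table (state-action space) stays small; the environment interface `env_step(s, a) -> (next_state, reward, done)` and the toy channel-access environment below are assumptions for illustration:

```python
import random
from collections import defaultdict

def q_learning(env_step, start_state, actions, episodes=500,
               alpha=0.1, gamma=0.9, eps=0.1, seed=0):
    """Tabular Q-learning: practical only while |S| x |A| is small --
    exactly the regime the survey contrasts with deep RL."""
    rng = random.Random(seed)
    Q = defaultdict(float)
    for _ in range(episodes):
        s = start_state
        for _ in range(50):                      # episode step cap
            if rng.random() < eps:               # epsilon-greedy exploration
                a = rng.choice(actions)
            else:
                a = max(actions, key=lambda a2: Q[(s, a2)])
            s2, r, done = env_step(s, a)
            best_next = max(Q[(s2, a2)] for a2 in actions)
            # Bellman backup toward the greedy value of the next state.
            Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
            s = s2
            if done:
                break
    return Q
```

When the state grows to realistic network dimensions (channel states, queue lengths, many entities), this table explodes, which is where deep networks replace it as a function approximator.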
Computation Rate Maximization in UAV-Enabled Wireless Powered Mobile-Edge Computing Systems
Mobile edge computing (MEC) and wireless power transfer (WPT) are two
promising techniques to enhance the computation capability and to prolong the
operational time of low-power wireless devices that are ubiquitous in the
Internet of Things. However, the computation performance and the harvested
energy are
significantly impacted by the severe propagation loss. In order to address this
issue, an unmanned aerial vehicle (UAV)-enabled MEC wireless powered system is
studied in this paper. The computation rate maximization problems in a
UAV-enabled MEC wireless powered system are investigated under both partial and
binary computation offloading modes, subject to the energy harvesting causal
constraint and the UAV's speed constraint. These problems are non-convex and
challenging to solve. A two-stage algorithm and a three-stage alternative
algorithm are respectively proposed for solving the formulated problems. The
closed-form expressions for the optimal central processing unit frequencies,
user offloading time, and user transmit power are derived. The optimal
selection scheme on whether users choose to locally compute or offload
computation tasks is proposed for the binary computation offloading mode.
Simulation results show that our proposed resource allocation schemes
outperform other benchmark schemes. The results also demonstrate that the
proposed schemes converge fast and have low computational complexity.
Comment: This paper has been accepted by IEEE JSA
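The multi-stage algorithms described above follow the alternating (block-coordinate) optimization pattern: fix one block of variables, maximize over the other in closed form, and repeat. A sketch on an illustrative concave objective with closed-form per-block maximizers (not the paper's actual rate expression):

```python
def alternating_maximize(steps=50):
    """Block-coordinate ascent sketch on the illustrative concave objective
    f(x, y) = -(x - 1)^2 - (y - 2)^2 - 0.5 * x * y.
    Each stage has a closed-form maximizer, mirroring the paper's use of
    closed-form expressions inside a staged algorithm."""
    x = y = 0.0
    for _ in range(steps):
        x = 1.0 - 0.25 * y   # argmax_x f: solve df/dx = -2(x - 1) - 0.5*y = 0
        y = 2.0 - 0.25 * x   # argmax_y f: solve df/dy = -2(y - 2) - 0.5*x = 0
    return x, y
```

Because the objective is jointly concave, the sweeps contract to the unique stationary point (x, y) = (8/15, 28/15), which is why such schemes "converge fast" in practice.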
A Survey on Mobile Edge Networks: Convergence of Computing, Caching and Communications
With the explosive growth of smart devices and the advent of many new
applications, traffic volume has been growing exponentially. The traditional
centralized network architecture cannot accommodate such user demands due to
the heavy burden on backhaul links and long latency. Therefore, new
architectures which bring network functions and contents to the network edge
are proposed, i.e., mobile edge computing and caching. Mobile edge networks
provide cloud computing and caching capabilities at the edge of cellular
networks. In this survey, we present an exhaustive review of state-of-the-art
research efforts on mobile edge networks. We first give an overview of mobile
edge networks including definition, architecture and advantages. Next, a
comprehensive survey of computing, caching and communication techniques at
the network edge is presented. The applications and
use cases of mobile edge networks are discussed. Subsequently, the key enablers
of mobile edge networks such as cloud technology, SDN/NFV and smart devices are
discussed. Finally, open research challenges and future directions are
presented as well.
Delay-aware Resource Allocation in Fog-assisted IoT Networks Through Reinforcement Learning
Fog nodes in the vicinity of IoT devices are promising for provisioning
low-latency services by offloading tasks from IoT devices to them. Mobile
IoT is composed of mobile IoT devices such as vehicles, wearable devices and
smartphones. Owing to the time-varying channel conditions, traffic loads and
computing loads, it is challenging to improve the quality of service (QoS) of
mobile IoT devices. As task delay consists of both the transmission delay and
computing delay, we investigate the resource allocation (i.e., including both
radio resource and computation resource) in both the wireless channel and fog
node to minimize the delay of all tasks while their QoS constraints are
satisfied. We formulate the resource allocation problem as an integer
non-linear program, in which both the radio resource and the computation
resource are taken into account. As IoT tasks are dynamic, the resource
allocations for different tasks are coupled with each other, and future
information is impractical to obtain. Therefore, we design an online
reinforcement
learning algorithm to make sub-optimal decisions in real time based on the
system's experience replay data. The performance of the designed algorithm
has been demonstrated by extensive simulation results.
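The decide-from-replayed-experience idea can be sketched with a bounded buffer of past (state, action, reward) tuples. The averaged-reward "value estimate" below is a deliberately simple stand-in for the paper's learned estimator, and the state/action/reward model is illustrative:

```python
import random
from collections import deque

class ReplayAgent:
    """Sketch of an online agent that keeps an experience replay buffer
    and acts epsilon-greedily on averaged replayed rewards."""
    def __init__(self, actions, capacity=1000, eps=0.1, seed=0):
        self.actions = actions
        self.buffer = deque(maxlen=capacity)   # bounded replay memory
        self.eps = eps
        self.rng = random.Random(seed)

    def act(self, state):
        if self.rng.random() < self.eps or not self.buffer:
            return self.rng.choice(self.actions)   # explore
        def avg_reward(a):
            rs = [r for (s, a2, r) in self.buffer if s == state and a2 == a]
            return sum(rs) / len(rs) if rs else 0.0
        # Greedy action: highest average replayed reward in this state.
        return max(self.actions, key=avg_reward)

    def observe(self, state, action, reward):
        self.buffer.append((state, action, reward))
```

After enough interactions, the agent routes tasks to whichever resource (e.g., fog node) historically yielded the lower delay, i.e., the higher reward.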
Decentralized Computation Offloading and Resource Allocation in Heterogeneous Networks with Mobile Edge Computing
We consider a heterogeneous network with mobile edge computing, where a user
can offload its computation to one among multiple servers. In particular, we
minimize the system-wide computation overhead by jointly optimizing the
individual computation decisions, transmit power of the users, and computation
resource at the servers. The crux of the problem lies in the combinatorial
nature of multi-user offloading decisions, the complexity of the optimization
objective, and the existence of inter-cell interference. To address this, we decompose the
underlying problem into two subproblems: i) the offloading decision, which
includes two phases of user association and subchannel assignment, and ii)
joint resource allocation, which can be further decomposed into the problems of
transmit power and computation resource allocation. To enable distributed
computation offloading, we sequentially apply a many-to-one matching game for
user association and a one-to-one matching game for subchannel assignment.
Moreover, the transmit power of offloading users is found using a bisection
method with approximate inter-cell interference, and the computation
resources allocated to offloading users are obtained via the duality
approach. The proposed algorithm is shown to converge and to be stable.
Finally, we provide
simulations to validate the performance of the proposed algorithm as well as
comparisons with existing frameworks.
Comment: Submitted to IEEE Journa
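The bisection step for transmit power relies only on the rate being monotone in power once inter-cell interference is fixed at its approximate value. A generic sketch, with an assumed Shannon-style rate and an assumed interference-plus-noise level of 0.1 (both illustrative, not the paper's model):

```python
import math

def bisect_power(rate_at, target_rate, p_min=0.0, p_max=1.0, tol=1e-6):
    """Find the smallest transmit power meeting target_rate, assuming
    rate_at(p) is monotonically increasing in p (true for a single user's
    log-rate under fixed, approximated inter-cell interference)."""
    lo, hi = p_min, p_max
    if rate_at(hi) < target_rate:
        return None                      # infeasible even at full power
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if rate_at(mid) >= target_rate:
            hi = mid                     # mid already meets the target
        else:
            lo = mid
    return hi

# Illustrative rate: log2(1 + p / I), with assumed interference+noise I = 0.1.
rate = lambda p: math.log2(1.0 + p / 0.1)
```

Because each step halves the interval, the power is located to tolerance `tol` in O(log(1/tol)) evaluations, which keeps the per-user resource allocation cheap and distributable.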
TARCO: Two-Stage Auction for D2D Relay Aided Computation Resource Allocation in Hetnet
In heterogeneous cellular networks, task scheduling for computation offloading
is one of the biggest challenges. Most works focus on alleviating the heavy
burden on macro base stations by moving the computation tasks of macro-cell
user equipment (MUE) to the remote cloud or to small-cell base stations, but
the selfishness of network users is seldom considered. Motivated by cloud
edge computing, this paper provides incentives for transferring tasks from
macro-cell users to small-cell base stations. The proposed incentive scheme utilizes small
cell user equipment to provide relay service. The problem of computation
offloading is modelled as a two-stage auction, in which remote MUEs with a
common social character can form a group and then buy the computation
resources of small-cell base stations via the relay of small-cell user
equipment. A
two-stage auction scheme named TARCO is proposed to maximize the utilities
of both sellers and buyers in the network. The truthfulness, individual
rationality and budget balance of TARCO are also proved in this paper. In
addition, two
algorithms are proposed to further refine TARCO with respect to the social
welfare of the network. Extensive simulation results demonstrate that TARCO
outperforms a random algorithm by about 104.90% in terms of the average
utility of MUEs, while the performance of TARCO is further improved by up to
28.75% and 17.06% by the two proposed algorithms, respectively.
Comment: 22 pages, 9 figures, Working paper, SUBMITTED to IEEE TRANSACTIONS
ON SERVICES COMPUTIN
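Truthfulness of the kind TARCO proves builds on classic auction mechanics. The single-item second-price (Vickrey) auction is the textbook truthful rule and can be sketched as follows; this is only an illustrative building block, as TARCO itself is a two-stage combinatorial design:

```python
def second_price_auction(bids):
    """Single-item second-price (Vickrey) auction.
    bids: dict mapping bidder -> bid. Returns (winner, price charged).
    The winner pays the SECOND-highest bid, so no bidder can gain by
    misreporting their true valuation -- the truthfulness property."""
    ranked = sorted(bids, key=bids.get, reverse=True)
    winner = ranked[0]
    price = bids[ranked[1]] if len(ranked) > 1 else 0.0
    return winner, price
```

Charging the second-highest bid decouples what the winner pays from what the winner bids, which is exactly why bidding one's true value is a dominant strategy.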
Information-Centric Wireless Networks with Mobile Edge Computing
In order to better accommodate the dramatically increasing demand for data
caching and computing services, some intermediate nodes within the network
should be endowed with storage and computation capabilities. In this paper, we
design a novel virtualized heterogeneous network framework aimed at enabling
content caching and computing. With the virtualization of the whole system, the
communication, computing and caching resources can be shared among all users
associated with different virtual service providers. We formulate the virtual
resource allocation strategy as a joint optimization problem, where the gains
of not only virtualization but also caching and computing are taken into
consideration in the proposed architecture. In addition, a distributed
algorithm based on alternating direction method of multipliers is adopted to
solve the formulated problem, in order to reduce the computational complexity
and signaling overhead. Finally, extensive simulations are presented to show
the effectiveness of the proposed scheme under different system parameters.
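The alternating direction method of multipliers (ADMM) alternates closed-form primal updates with a dual ascent step on the consensus constraint, which is what makes it attractive for distributed, low-overhead resource allocation. A scalar sketch on an illustrative lasso-style objective (not the paper's virtual-resource formulation):

```python
def admm_lasso_scalar(b, lam, rho=1.0, iters=200):
    """Scalar ADMM sketch: minimize 0.5*(x - b)^2 + lam*|z| s.t. x = z.
    The x-, z-, dual-update pattern is the same one used by distributed
    ADMM resource-allocation algorithms; the objective is illustrative."""
    soft = lambda v, t: max(abs(v) - t, 0.0) * (1 if v >= 0 else -1)
    x = z = u = 0.0
    for _ in range(iters):
        x = (b + rho * (z - u)) / (1.0 + rho)   # quadratic x-update (closed form)
        z = soft(x + u, lam / rho)              # proximal (soft-threshold) z-update
        u = u + x - z                           # dual ascent on the consensus gap
    return z
```

In the distributed setting, each virtual service provider would own one primal block and only the small dual/consensus variables are exchanged, which is the source of the reduced signaling overhead the abstract mentions.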
Hierarchical Fog-Cloud Computing for IoT Systems: A Computation Offloading Game
Fog computing, which provides low-latency computing services at the network
edge, is an enabler for the emerging Internet of Things (IoT) systems. In this
paper, we study the allocation of fog computing resources to the IoT users in a
hierarchical computing paradigm including fog and remote cloud computing
services. We formulate a computation offloading game to model the competition
between IoT users and allocate the limited processing power of fog nodes
efficiently. Each user aims to maximize its own quality of experience (QoE),
which reflects its satisfaction of using computing services in terms of the
reduction in computation energy and delay. Utilizing a potential game approach,
we prove the existence of a pure Nash equilibrium and provide an upper bound
for the price of anarchy. Since the time complexity to reach the equilibrium
increases exponentially with the number of users, we further propose a
near-optimal resource allocation mechanism and prove that, in a system with
N IoT users, it can achieve an ε-Nash equilibrium in polynomial time.
Through numerical studies, we evaluate the users' QoE as well as the
equilibrium efficiency. Our results reveal that by utilizing the proposed
mechanism, more users benefit from computing services in comparison to an
existing offloading mechanism. We further show that our proposed mechanism
significantly reduces the computation delay and enables low-latency fog
computing services for delay-sensitive IoT applications.
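The potential-game argument can be illustrated with best-response dynamics in a toy offloading congestion game: congestion games are potential games, so unilateral best responses provably terminate at a pure Nash equilibrium. The cost model below (fixed local cost vs. load-proportional fog cost) is illustrative, not the paper's QoE function:

```python
def best_response_dynamics(n_users, local_cost=3.0, fog_base=1.0, rounds=20):
    """Each user picks 'fog' (cost fog_base * fog load, including itself)
    or 'local' (fixed cost). Users update one at a time with their best
    response; in a potential game this converges to a pure Nash equilibrium."""
    choice = ["local"] * n_users
    for _ in range(rounds):
        changed = False
        for i in range(n_users):
            load_others = sum(1 for j, c in enumerate(choice)
                              if c == "fog" and j != i)
            fog_cost = fog_base * (load_others + 1)   # cost if user i joins fog
            best = "fog" if fog_cost < local_cost else "local"
            if best != choice[i]:
                choice[i] = best
                changed = True
        if not changed:
            break            # no profitable deviation: pure Nash equilibrium
    return choice
```

At equilibrium exactly as many users offload as the fog can serve more cheaply than local execution, capturing the competition for limited fog processing power that the abstract models.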
Optimized Computation Offloading Performance in Virtual Edge Computing Systems via Deep Reinforcement Learning
To improve the quality of the computation experience for mobile devices,
mobile-edge computing (MEC) is a promising paradigm that provides computing
capabilities in close proximity within a sliced radio access network (RAN),
which supports both traditional communication and MEC services. Nevertheless,
the design of computation offloading policies for a virtual MEC system remains
challenging. Specifically, whether to execute a computation task at the mobile
device or to offload it for MEC server execution should adapt to the
time-varying network dynamics. In this paper, we consider MEC for a
representative mobile user in an ultra-dense sliced RAN, where multiple base
stations (BSs) are available to be selected for computation offloading. The
problem of finding an optimal computation offloading policy is modelled as a
Markov decision process, where our objective is to maximize the long-term
utility performance whereby an offloading decision is made based on the task
queue state, the energy queue state, as well as the channel qualities
between the mobile user (MU) and the BSs. To break the curse of high
dimensionality in the state space, we first
propose a double deep Q-network (DQN) based strategic computation offloading
algorithm to learn the optimal policy without a priori knowledge of the
network dynamics. Then, motivated by the additive structure of the utility
function, a Q-function decomposition technique is combined with the double
DQN, which leads to a novel learning algorithm for solving stochastic
computation offloading. Numerical experiments show that our proposed learning
algorithms achieve a significant improvement in computation offloading
performance compared with the baseline policies.
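The double-estimator idea behind double DQN can be shown in tabular form: one table selects the greedy next action while the other evaluates it, which curbs the overestimation bias of plain Q-learning. This tabular version is a stand-in for the paper's deep networks, and the environment used below is an assumption for illustration:

```python
import random
from collections import defaultdict

def double_q_update(QA, QB, s, a, r, s2, actions, alpha=0.1, gamma=0.9,
                    rng=random):
    """One tabular double Q-learning step. A fair coin picks which table is
    updated; the updated table SELECTS the greedy next action and the other
    table EVALUATES it, decoupling selection from evaluation."""
    if rng.random() < 0.5:
        sel, ev = QA, QB
    else:
        sel, ev = QB, QA
    a_star = max(actions, key=lambda a2: sel[(s2, a2)])  # select with one table
    target = r + gamma * ev[(s2, a_star)]                # evaluate with the other
    sel[(s, a)] += alpha * (target - sel[(s, a)])
```

In double DQN the two tables become the online and target networks; the decomposition trick from the abstract additionally splits the Q-function along the additive utility terms, with each piece learned the same way.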