870 research outputs found
MDP-Based Scheduling Design for Mobile-Edge Computing Systems with Random User Arrival
In this paper, we investigate the scheduling design of a mobile-edge
computing (MEC) system, where the random arrival of mobile devices with
computation tasks in both spatial and temporal domains is considered. The
binary computation offloading model is adopted. Every task is indivisible and
can be computed at either the mobile device or the MEC server. We formulate the
optimization of task offloading decision, uplink transmission device selection
and power allocation in all the frames as an infinite-horizon Markov decision
process (MDP). Due to the uncertainty in device number and location,
conventional approximate MDP approaches to addressing the curse of
dimensionality cannot be applied. A novel low-complexity sub-optimal solution
framework is then proposed. We first introduce a baseline scheduling policy,
whose value function can be derived analytically. Then, one-step policy
iteration is adopted to obtain a sub-optimal scheduling policy whose
performance can be bounded analytically. Simulation results show that the gain
of the sub-optimal policy over various benchmarks is significant.
Comment: 6 pages, 3 figures; accepted by Globecom 2019; title changed to better describe the work, introduction condensed, typos corrected
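The one-step policy iteration idea above can be illustrated on a toy MDP (all numbers below are illustrative, not from the paper): starting from a baseline policy whose value function can be computed exactly, a single greedy improvement step yields a policy whose performance is provably no worse.

```python
import numpy as np

# Toy 3-state, 2-action MDP standing in for the scheduling problem.
P = np.array([            # P[a, s, s'] transition probabilities
    [[0.7, 0.2, 0.1],
     [0.1, 0.8, 0.1],
     [0.2, 0.3, 0.5]],
    [[0.3, 0.4, 0.3],
     [0.2, 0.2, 0.6],
     [0.5, 0.4, 0.1]],
])
R = np.array([            # R[a, s] expected one-step reward
    [1.0, 0.5, 0.2],
    [0.4, 1.2, 0.9],
])
gamma = 0.9

def policy_value(pi):
    """Exact value function of a deterministic policy pi: state -> action,
    obtained by solving the linear Bellman equations (I - gamma*P_pi) v = R_pi."""
    P_pi = np.array([P[pi[s], s] for s in range(3)])
    R_pi = np.array([R[pi[s], s] for s in range(3)])
    return np.linalg.solve(np.eye(3) - gamma * P_pi, R_pi)

baseline = np.zeros(3, dtype=int)   # baseline policy: always take action 0
v_base = policy_value(baseline)     # value function derived analytically

# One-step policy iteration: act greedily with respect to v_base.
q = R + gamma * P @ v_base          # q[a, s] = one-step lookahead value
improved = q.argmax(axis=0)
v_improved = policy_value(improved)
```

By the policy improvement theorem, `v_improved` dominates `v_base` in every state, which is the analytical performance bound the abstract refers to.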
Thirty Years of Machine Learning: The Road to Pareto-Optimal Wireless Networks
Future wireless networks have a substantial potential in terms of supporting
a broad range of complex compelling applications both in military and civilian
fields, where the users are able to enjoy high-rate, low-latency, low-cost and
reliable information services. Achieving this ambitious goal requires new radio
techniques for adaptive learning and intelligent decision making because of the
complex heterogeneous nature of the network structures and wireless services.
Machine learning (ML) algorithms have achieved great success in supporting big
data analytics, efficient parameter estimation and interactive decision making.
Hence, in this article, we review the thirty-year history of ML by elaborating
on supervised learning, unsupervised learning, reinforcement learning and deep
learning. Furthermore, we investigate their employment in the compelling
applications of wireless networks, including heterogeneous networks (HetNets),
cognitive radios (CR), Internet of things (IoT), machine to machine networks
(M2M), and so on. This article aims to help readers clarify the motivation and
methodology of the various ML algorithms, so as to invoke them for hitherto
unexplored services and scenarios in future wireless networks.
Comment: 46 pages, 22 figures
Deep Reinforcement Learning for Resource Management in Network Slicing
Network slicing has emerged as a new business opportunity for operators,
allowing them to sell customized slices to various tenants at different prices. In
order to provide better-performing and cost-efficient services, network slicing
involves challenging technical issues and urgently calls for intelligent
innovations to make resource management consistent with users' activities
per slice. In that regard, deep reinforcement learning (DRL), which learns by
interacting with the environment, trying alternative actions and reinforcing
those that produce more rewarding consequences, is regarded as a promising
solution. In this paper, after briefly reviewing the
fundamental concepts of DRL, we investigate the application of DRL in solving
some typical resource management problems in network slicing scenarios, which include
radio resource slicing and priority-based core network slicing, and demonstrate
the advantage of DRL over several competing schemes through extensive
simulations. Finally, we also discuss the general challenges of applying DRL to
network slicing.
Comment: The manuscript has been accepted by IEEE Access in Nov. 201
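The kind of learned scheduling the abstract describes can be sketched with a tabular version of the idea (the tabular core of DQN, without the neural network). The environment below is an illustrative stand-in for priority-based slicing, not the paper's actual system model: two slices with random arrivals, and a scheduler that learns which slice to serve from priority-weighted rewards.

```python
import random

random.seed(0)

# Toy priority-based slicing environment: each step the scheduler grants the
# next resource block to slice 0 (high priority) or slice 1 (low priority).
# The state is the pair of queue lengths, capped at MAXQ; serving a nonempty
# slice drains one unit and earns a priority-weighted reward.
MAXQ = 3
ACTIONS = (0, 1)

def step(state, action):
    q = list(state)
    reward = 0.0
    if q[action] > 0:
        q[action] -= 1
        reward = 2.0 if action == 0 else 1.0   # high priority pays more
    for i in ACTIONS:                           # Bernoulli(0.5) arrivals
        if random.random() < 0.5:
            q[i] = min(MAXQ, q[i] + 1)
    return (q[0], q[1]), reward

# Tabular Q-learning with epsilon-greedy exploration.
alpha, gamma, eps = 0.1, 0.9, 0.1
Q = {}
state = (0, 0)
for _ in range(20000):
    if random.random() < eps:
        action = random.choice(ACTIONS)
    else:
        action = max(ACTIONS, key=lambda a: Q.get((state, a), 0.0))
    nxt, reward = step(state, action)
    best_next = max(Q.get((nxt, a), 0.0) for a in ACTIONS)
    old = Q.get((state, action), 0.0)
    Q[(state, action)] = old + alpha * (reward + gamma * best_next - old)
    state = nxt
```

A DRL agent such as DQN replaces the dictionary `Q` with a neural network so the same update rule scales to the large state spaces that arise when many slices, users and resource types are tracked at once.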