A Deep Reinforcement Learning Based Approach for Cost- and Energy-Aware Multi-Flow Mobile Data Offloading
With the rapid increase in demand for mobile data, mobile network operators
are trying to expand wireless network capacity by deploying wireless local area
network (LAN) hotspots on to which they can offload their mobile traffic.
However, these network-centric methods usually do not serve the interests of
mobile users (MUs). Taking into consideration factors such as different
applications' deadlines, monetary cost and energy consumption, how an MU
decides whether to offload its traffic to a complementary wireless LAN is an
important question. Previous studies assume the MU's mobility pattern is known in
advance, which is not always true. In this paper, we study the MU's policy for
minimizing monetary cost and energy consumption without prior knowledge of the
MU's mobility pattern. We propose a reinforcement learning technique called a
deep Q-network (DQN) for the MU to learn the optimal offloading policy from past
experiences. In the proposed DQN-based offloading algorithm, the MU's mobility
pattern is no longer needed. Furthermore, the MU's state of remaining data is
directly fed into the convolutional neural network in the DQN without
discretization.
Therefore, not only is the discretization error of previous work eliminated,
but the proposed algorithm also gains the ability to generalize from past
experiences, which is especially effective when the number of states is large.
Extensive simulations are conducted to validate our proposed offloading
algorithms.
Comment: 9 pages, 8 figures. arXiv admin note: substantial text overlap with
arXiv:1801.1015
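The DQN idea sketched above can be illustrated with a toy offloading loop. Everything below is a hypothetical stand-in, not the paper's model: the state (remaining data, WLAN availability) is fed in as continuous values, a linear Q function replaces the paper's convolutional network, and the environment's costs are invented for illustration.

```python
import random
from collections import deque

import numpy as np

# Toy DQN-style offloading sketch (hypothetical setup, not the paper's
# exact model): the state is (remaining_data, wlan_available), fed in as
# continuous values without discretization; actions are 0 = wait,
# 1 = cellular (fast, costly), 2 = WLAN (cheap, only when available).
# A linear Q function stands in for the paper's convolutional network.

N_ACTIONS, GAMMA, EPS, LR = 3, 0.9, 0.1, 0.01
rng = np.random.default_rng(0)
W = rng.normal(scale=0.1, size=(N_ACTIONS, 3))  # per-action weights, bias folded in

def features(state):
    data, wlan = state
    return np.array([data, wlan, 1.0])

def q_values(state):
    return W @ features(state)

def step(state, action):
    """Hypothetical environment: returns (next_state, reward, done)."""
    data, _ = state
    if action == 1:                        # cellular: drains data, high monetary cost
        data, reward = max(data - 0.3, 0.0), -1.0
    elif action == 2 and state[1] > 0.5:   # WLAN offload: cheap when available
        data, reward = max(data - 0.2, 0.0), -0.2
    else:                                  # wait: deadline pressure as a small penalty
        reward = -0.5
    next_state = (data, float(rng.random() < 0.5))  # WLAN comes and goes
    return next_state, reward, data <= 0.0

replay = deque(maxlen=500)
for episode in range(200):
    state, done = (1.0, 1.0), False
    for _ in range(30):
        a = int(rng.integers(N_ACTIONS)) if rng.random() < EPS \
            else int(np.argmax(q_values(state)))
        nxt, r, done = step(state, a)
        replay.append((state, a, r, nxt, done))
        state = nxt
        # sample a minibatch from past experiences and take a TD step
        for s, ai, ri, ns, d in random.sample(replay, min(8, len(replay))):
            target = ri + (0.0 if d else GAMMA * np.max(q_values(ns)))
            W[ai] += LR * (target - q_values(s)[ai]) * features(s)
        if done:
            break

print(int(np.argmax(q_values((1.0, 1.0)))))  # preferred action with WLAN up
```

The replay buffer is what lets the agent reuse past experiences, and the continuous state input is what removes the discretization step the abstract mentions.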
Applications of Deep Reinforcement Learning in Communications and Networking: A Survey
This paper presents a comprehensive literature review on applications of deep
reinforcement learning in communications and networking. Modern networks, e.g.,
Internet of Things (IoT) and Unmanned Aerial Vehicle (UAV) networks, are
becoming more decentralized and autonomous. In such networks, network entities
need to make decisions locally to maximize network performance under the
uncertainty of the network environment. Reinforcement learning has been
effectively used to enable
the network entities to obtain the optimal policy, i.e., the decisions or
actions to take given their states, when the state and action spaces are small.
However, in complex and large-scale networks, the state and action spaces are
usually large, and reinforcement learning may not be able to find the
optimal policy in reasonable time. Therefore, deep reinforcement learning, a
combination of reinforcement learning with deep learning, has been developed to
overcome these shortcomings. In this survey, we first give a tutorial on deep
reinforcement learning from fundamental concepts to advanced models. Then, we
review deep reinforcement learning approaches proposed to address emerging
issues in communications and networking. The issues include dynamic network
access, data rate control, wireless caching, data offloading, network security,
and connectivity preservation, all of which are important to next-generation
networks such as 5G and beyond. Furthermore, we present applications of deep
reinforcement learning for traffic routing, resource sharing, and data
collection. Finally, we highlight important challenges, open issues, and future
research directions for applying deep reinforcement learning.
Comment: 37 pages, 13 figures, 6 tables, 174 reference papers
Edge Intelligence: The Confluence of Edge Computing and Artificial Intelligence
Along with the rapid developments in communication technologies and the surge
in the use of mobile devices, a brand-new computation paradigm, Edge Computing,
is surging in popularity. Meanwhile, Artificial Intelligence (AI) applications
are thriving with the breakthroughs in deep learning and the many improvements
in hardware architectures. Billions of data bytes, generated at the network
edge, put massive demands on data processing and structural optimization. Thus,
there exists a strong demand to integrate Edge Computing and AI, which gives
birth to Edge Intelligence. In this paper, we divide Edge Intelligence into AI
for edge (Intelligence-enabled Edge Computing) and AI on edge (Artificial
Intelligence on Edge). The former focuses on providing better solutions to key
problems in Edge Computing with the help of popular and effective AI
technologies, while the latter studies how to carry out the entire process of
building AI models, i.e., model training and inference, on the edge. This paper
provides insights into this new inter-disciplinary field from a broader
perspective. It discusses the core concepts and the research road-map, which
should provide the necessary background for potential future research
initiatives in Edge Intelligence.
Comment: 13 pages, 3 figures
Decentralized Computation Offloading for Multi-User Mobile Edge Computing: A Deep Reinforcement Learning Approach
Mobile edge computing (MEC) has recently emerged as a promising solution to
relieve resource-limited mobile devices from computation-intensive tasks, which
enables devices to offload workloads to nearby MEC servers and improve the
quality of computation experience. Nevertheless, in the MEC system considered
in this paper, which consists of multiple mobile users with stochastic task
arrivals and wireless channels, designing computation offloading policies that
minimize the long-term average computation cost in terms of power consumption
and buffering delay is challenging. A deep reinforcement learning (DRL)
based decentralized dynamic computation offloading strategy is investigated to
build a scalable MEC system with limited feedback. Specifically, a continuous
action space-based DRL approach named deep deterministic policy gradient (DDPG)
is adopted to learn efficient computation offloading policies independently at
each mobile user. Thus, the powers of both local execution and task offloading can
be adaptively allocated by the learned policies from each user's local
observation of the MEC system. Numerical results demonstrate that efficient
policies can be learned at each user, and that the proposed DDPG-based
decentralized strategy outperforms the conventional deep Q-network (DQN) based
discrete power control strategy and some other greedy strategies, with reduced
computation cost. Besides, the power-delay tradeoff is also analyzed for both
the DDPG-based and DQN-based strategies.
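A minimal sketch of the deterministic-policy-gradient idea behind DDPG, with linear functions standing in for the actor and critic networks; the queue dynamics, buffer limit, costs, and constants below are invented for illustration and are not the paper's system model.

```python
import numpy as np

# Toy per-user power control in the spirit of DDPG (hypothetical model,
# linear stand-ins for the actor and critic networks). The state s is the
# user's task buffer backlog; the continuous action a is the allocated
# power in [0, P_MAX]; the cost penalizes power plus remaining backlog
# (a proxy for buffering delay).

P_MAX, GAMMA, LR_A, LR_C = 1.0, 0.9, 1e-3, 1e-2
rng = np.random.default_rng(1)
theta = np.array([0.0, 0.5])   # actor: mu(s) = clip(theta0*s + theta1, 0, P_MAX)
w = np.zeros(4)                # critic: Q(s, a) = w . [s, a, a*a, 1]

def mu(s):
    return float(np.clip(theta[0] * s + theta[1], 0.0, P_MAX))

def phi(s, a):
    return np.array([s, a, a * a, 1.0])

def env_step(s, a):
    """Toy queue: random arrivals, service proportional to power, finite buffer."""
    s_next = max(min(s + rng.uniform(0.0, 0.4) - 0.6 * a, 5.0), 0.0)
    return s_next, -(a + s_next)       # cost = power + backlog (delay proxy)

s = 1.0
for t in range(5000):
    a = float(np.clip(mu(s) + rng.normal(scale=0.1), 0.0, P_MAX))  # exploration noise
    s_next, r = env_step(s, a)
    # critic: one-step TD update toward r + gamma * Q(s', mu(s'))
    td = r + GAMMA * w @ phi(s_next, mu(s_next)) - w @ phi(s, a)
    w += LR_C * td * phi(s, a)
    # actor: deterministic policy gradient, dQ/da evaluated at a = mu(s)
    a_mu = mu(s)
    if 0.0 < a_mu < P_MAX:             # gradient blocked at the clip boundary
        dq_da = w[1] + 2.0 * w[2] * a_mu
        theta += LR_A * dq_da * np.array([s, 1.0])
    s = s_next

print(round(mu(1.0), 3))  # learned power for a unit backlog
```

The key contrast with the DQN baseline in the abstract is visible here: the action is a continuous power level produced by the actor, rather than one of a discrete set of power choices.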
Mobile Edge Computation Offloading Using Game Theory and Reinforcement Learning
Due to the ever-increasing popularity of resource-hungry and
delay-constrained mobile applications, the computation and storage capabilities
of the remote cloud have partially migrated towards the mobile edge, giving rise to
the concept known as Mobile Edge Computing (MEC). While MEC servers enjoy the
close proximity to the end-users to provide services at reduced latency and
lower energy costs, they suffer from limitations in computational and radio
resources, which calls for fair and efficient resource management in the MEC
servers. The problem is, however, challenging due to the ultra-high density,
distributed nature, and intrinsic randomness of next generation wireless
networks. In this article, we focus on the application of game theory and
reinforcement learning for efficient distributed resource management in MEC, in
particular, for computation offloading. We briefly review the cutting-edge
research and discuss future challenges. Furthermore, we develop a
game-theoretical model for energy-efficient distributed edge server activation
and study several learning techniques. Numerical results are provided to
illustrate the performance of these distributed learning techniques. Also, open
research issues in the context of resource management in MEC servers are
discussed.
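As a concrete (and entirely hypothetical) instance of such a game-theoretical activation model, consider servers that split a fixed demand revenue and pay an energy cost when active; iterated best response then reaches a pure Nash equilibrium:

```python
# Tiny best-response sketch of energy-efficient edge server activation
# (hypothetical utilities, not the article's exact game): each of N servers
# decides to activate (1) or sleep (0). Active servers split a total demand
# revenue D equally and each pays an energy cost C; sleeping yields 0.

N, D, C = 6, 1.0, 0.3
active = [0] * N                      # start with every server asleep

def utility(i, profile):
    if profile[i] == 0:
        return 0.0
    m = sum(profile)                  # number of active servers
    return D / m - C                  # equal demand share minus energy cost

changed = True
while changed:                        # iterated best-response dynamics
    changed = False
    for i in range(N):
        for choice in (0, 1):
            trial = list(active)
            trial[i] = choice
            if utility(i, trial) > utility(i, active) + 1e-12:
                active = trial
                changed = True

print(sum(active))  # equilibrium number of active servers → 3
```

With these numbers the dynamics stop at three active servers: a fourth would earn 1/4 - 0.3 < 0, while each of the three earns 1/3 - 0.3 > 0, so no server can improve by deviating.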
Security in Mobile Edge Caching with Reinforcement Learning
Mobile edge computing usually uses caching to support multimedia content in the
5G mobile Internet to reduce computing overhead and latency. Mobile edge
caching (MEC) systems are vulnerable to various attacks such as denial of
service attacks and rogue edge attacks. This article investigates the attack
models in MEC systems, focusing on both the mobile offloading and the caching
procedures. We then propose security solutions that apply
reinforcement learning (RL) techniques to provide secure offloading to the edge
nodes against jamming attacks. We also present light-weight authentication and
secure collaborative caching schemes to protect data privacy. We evaluate the
performance of the RL-based security solution for mobile edge caching and
discuss the challenges that need to be addressed in the future.
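The anti-jamming flavor of RL-based secure offloading can be sketched with tabular Q-learning against a sweep jammer; the channel model and rewards below are hypothetical, chosen only to make the idea concrete.

```python
import numpy as np

# Toy Q-learning sketch of jamming-resistant offloading channel selection
# (hypothetical model in the spirit of the RL-based secure offloading the
# article describes): a sweep jammer cycles through K channels; the state
# is the channel the jammer hit last, and the agent learns which channel
# to offload on next so as to dodge it.

K, ALPHA, GAMMA, EPS = 4, 0.1, 0.9, 0.1
rng = np.random.default_rng(2)
Q = np.zeros((K, K))                  # Q[state, action]

jammer = 0
state = jammer
for t in range(5000):
    a = int(rng.integers(K)) if rng.random() < EPS else int(np.argmax(Q[state]))
    jammer = (jammer + 1) % K         # sweep jammer moves to the next channel
    r = 1.0 if a != jammer else -1.0  # offloading succeeds unless jammed
    Q[state, a] += ALPHA * (r + GAMMA * np.max(Q[jammer]) - Q[state, a])
    state = jammer

# after learning, the greedy policy avoids the jammer's next channel
print(all(int(np.argmax(Q[s])) != (s + 1) % K for s in range(K)))
```

No model of the jammer is given to the agent; the avoidance behavior emerges purely from the reward signal, which is the appeal of RL for the attack models the article surveys.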
Application of Machine Learning in Wireless Networks: Key Techniques and Open Issues
As a key technique for enabling artificial intelligence, machine learning
(ML) is capable of solving complex problems without explicit programming.
Motivated by its successful applications to many practical tasks like image
recognition, both industry and the research community have advocated the
applications of ML in wireless communication. This paper comprehensively
surveys the recent advances of the applications of ML in wireless
communication, which are classified as: resource management in the MAC layer,
networking and mobility management in the network layer, and localization in
the application layer. The applications in resource management further include
power control, spectrum management, backhaul management, cache management,
beamformer design and computation resource management, while ML based
networking focuses on the applications in clustering, base station switching
control, user association and routing. Moreover, the literature on each aspect is
organized according to the adopted ML techniques. In addition, several
conditions for applying ML to wireless communication are identified to help
readers decide whether to use ML and which kind of ML techniques to use, and
traditional approaches are also summarized together with their performance
comparison with ML-based approaches, based on which the motivations of the
surveyed works to adopt ML are clarified. Given the extensiveness of the research
area, challenges and unresolved issues are presented to facilitate future
studies, where ML based network slicing, infrastructure update to support ML
based paradigms, open data sets and platforms for researchers, theoretical
guidance for ML implementation, and so on are discussed.
Comment: 34 pages, 8 figures
Delay-aware Resource Allocation in Fog-assisted IoT Networks Through Reinforcement Learning
Fog nodes in the vicinity of IoT devices are a promising way to provision
low-latency services by offloading tasks from IoT devices to them. Mobile IoT
is composed of mobile IoT devices such as vehicles, wearable devices and
smartphones. Owing to the time-varying channel conditions, traffic loads and
computing loads, it is challenging to improve the quality of service (QoS) of
mobile IoT devices. As task delay consists of both the transmission delay and
computing delay, we investigate the resource allocation (i.e., including both
radio resource and computation resource) in both the wireless channel and fog
node to minimize the delay of all tasks while their QoS constraints are
satisfied. We formulate the resource allocation problem as an integer
non-linear problem, where both the radio resource and the computation resource
are taken into account. As IoT tasks are dynamic, the resource allocations for
different tasks are coupled with each other, and future information is
impractical to obtain. Therefore, we design an online reinforcement learning
algorithm to make sub-optimal decisions in real time based on the system's
experience replay data. The performance of the designed algorithm has been
demonstrated by extensive simulation results.
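At toy scale, the offline version of such an integer non-linear allocation problem can simply be brute-forced; the task sizes, resource budgets, and deadline below are invented numbers, and the point of the paper's online RL algorithm is precisely that this offline search is infeasible when future tasks are unknown.

```python
import itertools

# Hypothetical instance of joint radio/computation allocation: three tasks
# share B = 6 radio resource blocks and F = 6 CPU units at the fog node.
# Task i's delay = transmission delay + computing delay:
#   BITS[i] / (RATE * b_i) + CYCLES[i] / (CPU * f_i),
# and we brute-force the small integer program to minimize total delay
# subject to a per-task QoS deadline.

BITS   = [4.0, 2.0, 3.0]    # Mbit to transmit
CYCLES = [6.0, 3.0, 4.0]    # Mcycles to compute
RATE, CPU = 1.0, 1.0        # Mbit/s per block, Mcycle/s per CPU unit
B, F, DEADLINE = 6, 6, 6.0

def delay(i, b, f):
    return BITS[i] / (RATE * b) + CYCLES[i] / (CPU * f)

best, best_alloc = float("inf"), None
for bs in itertools.product(range(1, B + 1), repeat=3):
    if sum(bs) > B:
        continue
    for fs in itertools.product(range(1, F + 1), repeat=3):
        if sum(fs) > F:
            continue
        delays = [delay(i, bs[i], fs[i]) for i in range(3)]
        if max(delays) <= DEADLINE and sum(delays) < best:
            best, best_alloc = sum(delays), (bs, fs)

print(best_alloc, round(best, 2))  # best (blocks, CPU units) per task
```

Because the objective is separable here, the optimum splits both resources evenly; with coupled, dynamically arriving tasks this clean structure disappears, which is where the learning approach takes over.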
Intelligent networking with Mobile Edge Computing: Vision and Challenges for Dynamic Network Scheduling
Mobile edge computing (MEC) has been considered a promising technique for the
Internet of Things (IoT). By deploying edge servers in the proximity of
devices, it is expected to provide services and process data at a relatively
low delay through intelligent networking. However, the vast number of edge
servers may face great challenges in terms of cooperation and resource
allocation. Furthermore, intelligent networking requires online implementation
in a distributed mode. In such systems, network scheduling cannot follow any
previously known rule, owing to the complicated application environment.
Statistical learning thus emerges as a promising technique for network
scheduling, where edges dynamically and cooperatively learn environmental
elements. Such learning-based methods are expected to relieve the limitations
of model-based approaches, enhancing their practical use in dynamic network
scheduling. In this paper, we investigate the vision and challenges of
intelligent IoT networking with mobile edge computing. From a systematic
viewpoint, some major research opportunities are enumerated with respect to
statistical learning.
Risk-Aware Energy Scheduling for Edge Computing with Microgrid: A Multi-Agent Deep Reinforcement Learning Approach
In recent years, multi-access edge computing (MEC) has become a key enabler
for handling the massive expansion of Internet of Things (IoT) applications
and services. However, the energy consumption of a MEC network depends on
volatile tasks that induce risk in energy demand estimation. As an energy
supplier, a
microgrid can facilitate seamless energy supply. However, the risk associated
with energy supply is also increased due to unpredictable energy generation
from renewable and non-renewable sources. In particular, the risk of an energy
shortfall involves uncertainties in both energy consumption and generation. In
this paper, we study a risk-aware energy scheduling problem for
a microgrid-powered MEC network. First, we formulate an optimization problem
considering the conditional value-at-risk (CVaR) measurement for both energy
consumption and generation, where the objective is to minimize the expected
residual of scheduled energy for the MEC network, and we show that this
problem is NP-hard. Second, we analyze our formulated problem using a
multi-agent stochastic game that ensures the joint policy Nash equilibrium, and
show the convergence of the proposed model. Third, we derive the solution by
applying a multi-agent deep reinforcement learning (MADRL)-based asynchronous
advantage actor-critic (A3C) algorithm with shared neural networks. This method
mitigates the curse of dimensionality of the state space and chooses the best
policy among the agents for the proposed problem. Finally, the experimental
results show that, by considering CVaR, the proposed model achieves a
significant performance gain in high-accuracy energy scheduling over both the
single-agent and random-agent models.
Comment: Accepted article by IEEE Transactions on Network and Service
Management, DOI: 10.1109/TNSM.2021.304938
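The CVaR measurement at the heart of the formulation can be illustrated on a small sample; the shortfall numbers below are hypothetical, and this is the standard empirical definition rather than the paper's full constrained formulation.

```python
import numpy as np

# Empirical VaR/CVaR sketch: for a loss sample and level alpha, VaR_alpha
# is the empirical alpha-quantile and CVaR_alpha is the mean loss over the
# worst (1 - alpha) tail. The paper applies this risk measure to the
# energy shortfall of a microgrid-powered MEC network.

def var_cvar(losses, alpha):
    losses = np.sort(np.asarray(losses, dtype=float))
    var = losses[int(np.ceil(alpha * len(losses))) - 1]   # empirical VaR
    tail = losses[losses >= var]                          # worst-case tail
    return var, tail.mean()

# e.g. simulated energy shortfall (kWh) over 10 scheduling slots
shortfall = [0.0, 0.1, 0.2, 0.2, 0.3, 0.4, 0.5, 0.8, 1.0, 1.5]
var90, cvar90 = var_cvar(shortfall, 0.9)
print(var90, round(cvar90, 2))  # → 1.0 1.25
```

Minimizing CVaR rather than the expected shortfall is what makes the scheduling "risk-aware": it targets the average of the worst-case slots instead of the typical one.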