Efficient Multi-User Computation Offloading for Mobile-Edge Cloud Computing
Mobile-edge cloud computing is a new paradigm to provide cloud computing
capabilities at the edge of pervasive radio access networks in close proximity
to mobile users. In this paper, we first study the multi-user computation
offloading problem for mobile-edge cloud computing in a multi-channel wireless
interference environment. We show that it is NP-hard to compute a centralized
optimal solution, and hence adopt a game theoretic approach for achieving
efficient computation offloading in a distributed manner. We formulate the
distributed computation offloading decision making problem among mobile device
users as a multi-user computation offloading game. We analyze the structural
property of the game and show that the game admits a Nash equilibrium and
possesses the finite improvement property. We then design a distributed
computation offloading algorithm that can achieve a Nash equilibrium, derive
the upper bound of the convergence time, and quantify its efficiency ratio over
the centralized optimal solutions in terms of two important performance
metrics. We further extend our study to the scenario of multi-user computation
offloading in the multi-channel wireless contention environment. Numerical
results corroborate that the proposed algorithm achieves superior
computation offloading performance and scales well as the number of users increases.
Comment: The paper has been accepted by IEEE/ACM Transactions on Networking, Sept. 2015. arXiv admin note: substantial text overlap with arXiv:1404.320
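The finite improvement property mentioned in the abstract guarantees that asynchronous best-response updates terminate at a Nash equilibrium. Below is a minimal sketch of such best-response dynamics with an invented cost model (a fixed local-execution cost, and an offloading cost that grows with the number of users sharing a channel), not the paper's actual formulation:

```python
# Toy best-response (finite improvement) dynamics for a multi-user
# computation offloading game. All costs are illustrative, not the paper's.

def user_cost(user, decision, decisions):
    """Cost of `user` given everyone's decisions.

    decision 0 = compute locally (fixed cost); decision c >= 1 = offload
    on channel c, where cost grows with the number of other users on the
    same channel (a toy model of wireless interference)."""
    if decision == 0:
        return 5.0  # illustrative local-execution cost
    sharing = sum(1 for u, d in enumerate(decisions)
                  if u != user and d == decision)
    return 1.0 + 2.0 * sharing  # offloading cost rises with interference

def best_response_dynamics(n_users, channels, max_rounds=100):
    decisions = [0] * n_users  # start with everyone computing locally
    for _ in range(max_rounds):
        improved = False
        for u in range(n_users):
            best = min(range(channels + 1),
                       key=lambda d: user_cost(u, d, decisions))
            if user_cost(u, best, decisions) < user_cost(u, decisions[u], decisions):
                decisions[u] = best
                improved = True
        if not improved:  # no user can unilaterally improve: Nash equilibrium
            return decisions
    return decisions
```

In a toy run with four users and two channels, the users spread evenly across the channels until no one can lower their own cost by unilaterally deviating.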
Joint Optimization of Radio Resources and Code Partitioning in Mobile Edge Computing
The aim of this paper is to propose a computation offloading strategy for
mobile edge computing. We exploit the concept of call graph, which models a
generic computer program as a set of procedures related to each other through a
weighted directed graph. Our goal is to derive the optimal partition of the
call graph establishing which procedures are to be executed locally or
remotely. The main novelty of our work is that the optimal partition is
obtained jointly with the selection of radio parameters, e.g., transmit power
and constellation size, in order to minimize the energy consumption at the
mobile handset, under a latency constraint taking into account transmit time
and execution time. We consider both single and multi-channel transmission
strategies and we prove that a globally optimal solution can be achieved in
both cases. We then propose a suboptimal strategy that solves a relaxed
version of the original problem in order to trade off the complexity and
performance of the proposed framework. Finally, several numerical results
illustrate under what conditions, in terms of call graph topology, communication
strategy, and computation parameters, the proposed offloading strategy provides
large performance gains.
Comment: Submitted to IEEE Transactions on Signal Processing
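For intuition, the call-graph partitioning problem can be brute-forced on a tiny instance: pick local or remote execution for each procedure, pay a transfer cost on every cut edge of the weighted call graph, and keep the minimum-energy assignment that meets the latency budget. All graph weights, energies, and times below are invented, and the paper's joint optimization of radio parameters is omitted:

```python
# Brute-force local/remote partitioning of a toy call graph.
from itertools import product

# procedure -> (local_energy_J, local_time_s, remote_time_s); values invented
procs = {"a": (4.0, 2.0, 1.0), "b": (6.0, 3.0, 1.5), "c": (2.0, 1.0, 0.5)}
# weighted call-graph edges: (caller, callee, data_MB); "dev" is the handset,
# which always stays local and supplies the program input
edges = [("dev", "a", 2.0), ("a", "b", 1.0), ("b", "c", 0.5)]
TX_ENERGY_PER_MB = 1.0  # radio energy per MB transferred (assumed)
TX_TIME_PER_MB = 0.8    # transfer time per MB (assumed)

def cost(assign):
    """Mobile energy and end-to-end latency of a local/remote assignment."""
    energy = sum(e for p, (e, _, _) in procs.items() if assign[p] == "local")
    latency = sum(tl if assign[p] == "local" else tr
                  for p, (_, tl, tr) in procs.items())
    for u, v, mb in edges:  # pay transfer cost whenever an edge is cut
        if assign[u] != assign[v]:
            energy += TX_ENERGY_PER_MB * mb
            latency += TX_TIME_PER_MB * mb
    return energy, latency

def best_partition(latency_budget):
    """Brute-force the minimum-energy partition under a latency budget."""
    best = None
    for choice in product(["local", "remote"], repeat=len(procs)):
        assign = dict(zip(procs, choice))
        assign["dev"] = "local"  # the handset itself never moves
        e, t = cost(assign)
        if t <= latency_budget and (best is None or e < best[0]):
            best = (e, assign)
    return best
```

With a loose budget the fully remote partition wins here; tightening the budget below the fastest achievable latency makes the problem infeasible.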
Edge Intelligence: Paving the Last Mile of Artificial Intelligence with Edge Computing
With the breakthroughs in deep learning, the recent years have witnessed a
boom in artificial intelligence (AI) applications and services, spanning
from personal assistants to recommendation systems to video/audio surveillance.
More recently, with the proliferation of mobile computing and the
Internet of Things (IoT), billions of mobile and IoT devices have been connected to
the Internet, generating massive volumes of data at the network edge. Driven by
this trend, there is an urgent need to push the AI frontier to the network
edge so as to fully unleash the potential of edge big data. To meet this
demand, edge computing, an emerging paradigm that pushes computing tasks and
services from the network core to the network edge, has been widely recognized
as a promising solution. The resulting interdisciplinary field, edge AI or edge
intelligence, is beginning to receive a tremendous amount of interest. However,
research on edge intelligence is still in its infancy, and a dedicated
venue for exchanging the recent advances of edge intelligence is highly desired
by both the computer system and artificial intelligence communities. To this
end, we conduct a comprehensive survey of the recent research efforts on edge
intelligence. Specifically, we first review the background and motivation for
artificial intelligence running at the network edge. We then provide an
overview of the overarching architectures, frameworks, and emerging key
technologies for deep learning model training and inference at the network
edge. Finally, we discuss future research opportunities on edge intelligence.
We believe that this survey will attract escalating attention, stimulate
fruitful discussions, and inspire further research ideas on edge intelligence.
Comment: Zhi Zhou, Xu Chen, En Li, Liekang Zeng, Ke Luo, and Junshan Zhang, "Edge Intelligence: Paving the Last Mile of Artificial Intelligence with Edge Computing," Proceedings of the IEEE
Edge Intelligence: The Confluence of Edge Computing and Artificial Intelligence
Along with the rapid developments in communication technologies and the surge
in the use of mobile devices, a brand-new computation paradigm, Edge Computing,
is surging in popularity. Meanwhile, Artificial Intelligence (AI) applications
are thriving with the breakthroughs in deep learning and the many improvements
in hardware architectures. Billions of data bytes, generated at the network
edge, put massive demands on data processing and structural optimization. Thus,
there exists a strong demand to integrate Edge Computing and AI, which gives
birth to Edge Intelligence. In this paper, we divide Edge Intelligence into AI
for edge (Intelligence-enabled Edge Computing) and AI on edge (Artificial
Intelligence on Edge). The former focuses on providing better solutions
to key problems in Edge Computing with the help of popular and effective AI
technologies, while the latter studies how to carry out the entire process of
building AI models, i.e., model training and inference, on the edge. This paper
provides insights into this new inter-disciplinary field from a broader
perspective. It discusses the core concepts and the research road-map, which
should provide the necessary background for potential future research
initiatives in Edge Intelligence.
Comment: 13 pages, 3 figures
Resource Sharing of a Computing Access Point for Multi-user Mobile Cloud Offloading with Delay Constraints
We consider a mobile cloud computing system with multiple users, a remote
cloud server, and a computing access point (CAP). The CAP serves both as the
network access gateway and a computation service provider to the mobile users.
It can either process the received tasks from mobile users or offload them to
the cloud. We jointly optimize the offloading decisions of all users, together
with the allocation of computation and communication resources, to minimize the
overall cost of energy consumption, computation, and maximum delay among users.
The joint optimization problem is formulated as a mixed-integer program. We
show that the problem can be reformulated and transformed into a non-convex
quadratically constrained quadratic program, which is NP-hard in general. We
then propose an efficient solution to this problem by semidefinite relaxation
and a novel randomization mapping method. Furthermore, when there is a strict
delay constraint on processing each user's task, we propose a
three-step algorithm to guarantee the feasibility and local optimality of the
obtained solution. Our simulation results show that the proposed solutions give
nearly optimal performance under a wide range of parameter settings, and the
addition of a CAP can significantly reduce the cost of multi-user task
offloading compared with conventional mobile cloud computing where only the
remote cloud server is available.
Comment: in IEEE Transactions on Mobile Computing, 201
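The randomization-mapping step can be illustrated in a toy form: starting from a fractional (relaxed) offloading solution, draw random binary candidates guided by the fractions and keep the cheapest draw that respects a capacity constraint. The cost model and the assumed `frac` input below are invented; the semidefinite relaxation that would actually produce `frac` is not shown:

```python
# Toy randomized rounding of a relaxed offloading solution.
import random

def total_cost(x, local_cost, offload_cost):
    """x[i] = 1 means user i offloads; 0 means local execution."""
    return sum(offload_cost[i] if xi else local_cost[i]
               for i, xi in enumerate(x))

def randomized_rounding(frac, local_cost, offload_cost, capacity,
                        trials=200, seed=0):
    """Round fractional offload decisions `frac` (values in [0, 1]) into a
    binary vector with at most `capacity` offloaders, keeping the cheapest
    feasible random draw."""
    rng = random.Random(seed)
    best_x, best_c = None, float("inf")
    for _ in range(trials):
        x = [1 if rng.random() < f else 0 for f in frac]
        if sum(x) <= capacity:  # feasibility: shared CAP/cloud capacity
            c = total_cost(x, local_cost, offload_cost)
            if c < best_c:
                best_x, best_c = x, c
    return best_x, best_c
```

With fractions close to 0 or 1, the sampling concentrates on the binary solutions the relaxation already favours, so a modest number of trials usually recovers a near-optimal feasible decision.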
All One Needs to Know about Fog Computing and Related Edge Computing Paradigms: A Complete Survey
With the Internet of Things (IoT) becoming part of our daily life and our
environment, we expect rapid growth in the number of connected devices. IoT is
expected to connect billions of devices and humans, bringing promising
advantages for us. With this growth, fog computing, along with related edge
computing paradigms such as multi-access edge computing (MEC) and cloudlets,
is seen as a promising solution for handling the large volume of
security-critical and time-sensitive data being produced by the IoT. In
this paper, we first provide a tutorial on fog computing and its related
computing paradigms, including their similarities and differences. Next, we
provide a taxonomy of research topics in fog computing, and through a
comprehensive survey, we summarize and categorize the efforts on fog computing
and its related computing paradigms. Finally, we provide challenges and future
directions for research in fog computing.
Comment: 48 pages, 7 tables, 11 figures, 450 references. The data (categories and features/objectives of the papers) of this survey are now available publicly. Accepted by Elsevier Journal of Systems Architecture
Bi-Directional Mission Offloading for Agile Space-Air-Ground Integrated Networks
Space-air-ground integrated networks (SAGIN) provide great strengths in
extending the capability of ground wireless networks. On the other hand, with
rich spectrum and computing resources, the ground networks can also assist
space-air networks to accomplish resource-intensive or power-hungry missions,
enhancing the capability and sustainability of the space-air networks.
Therefore, bi-directional mission offloading can make full use of the
advantages of SAGIN and benefits both space-air and ground networks. In this
article, we identify the key role of network reconfiguration in coordinating
heterogeneous resources in SAGIN, and study how network function virtualization
(NFV) and service function chaining (SFC) enable agile mission offloading. A
case study validates the performance gain brought by bi-directional mission
offloading. Future research issues are also outlined, as the bi-directional mission
offloading framework opens a new path toward unleashing the full potential of
SAGIN.
Comment: accepted by IEEE Wireless Communications Magazine
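As a loose illustration of how SFC could drive mission offloading across SAGIN tiers, one might greedily map each virtual function of a mission's chain onto the cheapest ground/air/space node with spare capacity. The node names, capacities, and costs below are invented; the article does not prescribe a specific algorithm:

```python
# Hypothetical greedy placement of a service function chain over SAGIN tiers.

nodes = {  # node -> {"cpu": spare capacity, "cost": cost per CPU unit}; assumed
    "ground": {"cpu": 6, "cost": 1.0},
    "air":    {"cpu": 4, "cost": 2.0},
    "space":  {"cpu": 2, "cost": 3.0},
}

def place_chain(chain, nodes):
    """Greedily place a chain of (function, cpu_demand) pairs onto nodes,
    preferring the cheapest node with enough spare capacity. Returns the
    placement, or None if some function cannot be hosted anywhere."""
    spare = {n: v["cpu"] for n, v in nodes.items()}
    placement = {}
    for fn, demand in chain:
        candidates = [n for n in nodes if spare[n] >= demand]
        if not candidates:
            return None  # mission cannot be accommodated
        best = min(candidates, key=lambda n: nodes[n]["cost"] * demand)
        spare[best] -= demand
        placement[fn] = best
    return placement
```

When the cheap ground tier fills up, later functions spill over to the air and space tiers, which is the bi-directional resource sharing the article argues for.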
Applications of Deep Reinforcement Learning in Communications and Networking: A Survey
This paper presents a comprehensive literature review on applications of deep
reinforcement learning in communications and networking. Modern networks, e.g.,
Internet of Things (IoT) and Unmanned Aerial Vehicle (UAV) networks, are becoming
more decentralized and autonomous. In such networks, network entities need to
make decisions locally to maximize network performance under an uncertain
network environment. Reinforcement learning has been used effectively to enable
network entities to obtain the optimal policy, e.g., decisions or
actions, given their states, when the state and action spaces are small.
However, in complex and large-scale networks, the state and action spaces are
usually large, and reinforcement learning may not be able to find the
optimal policy in a reasonable time. Therefore, deep reinforcement learning, a
combination of reinforcement learning with deep learning, has been developed to
overcome the shortcomings. In this survey, we first give a tutorial of deep
reinforcement learning from fundamental concepts to advanced models. Then, we
review deep reinforcement learning approaches proposed to address emerging
issues in communications and networking. The issues include dynamic network
access, data rate control, wireless caching, data offloading, network security,
and connectivity preservation which are all important to next generation
networks such as 5G and beyond. Furthermore, we present applications of deep
reinforcement learning for traffic routing, resource sharing, and data
collection. Finally, we highlight important challenges, open issues, and future
research directions of applying deep reinforcement learning.
Comment: 37 pages, 13 figures, 6 tables, 174 reference papers
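The "dynamic network access" use case the survey reviews can be boiled down to a tabular toy: an agent learns by trial and error which of two channels succeeds more often. The channel-success probabilities below are invented; deep reinforcement learning replaces the Q-table with a neural network once the state space grows large:

```python
# Minimal tabular Q-learning for a toy two-channel access problem.
import random

def run_q_learning(episodes=2000, alpha=0.1, epsilon=0.1, seed=1):
    rng = random.Random(seed)
    q = [0.0, 0.0]             # one state, two actions (channels)
    success_prob = [0.2, 0.8]  # assumed channel-success probabilities
    for _ in range(episodes):
        if rng.random() < epsilon:      # explore a random channel
            a = rng.randrange(2)
        else:                           # exploit the current best estimate
            a = 0 if q[0] >= q[1] else 1
        reward = 1.0 if rng.random() < success_prob[a] else 0.0
        q[a] += alpha * (reward - q[a])  # bandit-style Q-value update
    return q
```

After training, the Q-values approximate the two success probabilities, so the agent reliably prefers the better channel.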
Application Management in Fog Computing Environments: A Taxonomy, Review and Future Directions
The Internet of Things (IoT) paradigm is being rapidly adopted for the
creation of smart environments in various domains. The IoT-enabled
Cyber-Physical Systems (CPSs) associated with smart city, healthcare, Industry
4.0 and Agtech handle a huge volume of data and require data processing
services from different types of applications in real-time. The Cloud-centric
execution of IoT applications barely meets such requirements as the Cloud
datacentres reside at a multi-hop distance from the IoT devices. Fog
computing, an extension of the Cloud at the edge of the network, can execute these
applications closer to the data sources. Thus, Fog computing can reduce
application service delivery time and mitigate network congestion. However, the
Fog nodes are highly distributed, heterogeneous and most of them are
constrained in resources and spatial sharing. Therefore, efficient management
of applications is necessary to fully exploit the capabilities of Fog nodes. In
this work, we investigate the existing application management strategies in Fog
computing and review them in terms of architecture, placement and maintenance.
Additionally, we propose a comprehensive taxonomy and highlight the research
gaps in Fog-based application management. We also discuss a perspective model
and provide future research directions for further improvement of application
management in Fog computing.
Mobile Edge Cloud: Opportunities and Challenges
Mobile edge cloud is emerging as a promising technology for internet of
things and cyber-physical system applications such as smart homes and
intelligent video surveillance. In a smart home, various sensors are deployed
to monitor the home environment and physiological health of individuals. The
data collected by sensors are sent to an application, where numerous algorithms
for emotion and sentiment detection, activity recognition and situation
management are applied to provide healthcare- and emergency-related services
and to manage resources in the home. The execution of these algorithms requires
a vast amount of computing and storage resources. To address this issue, the
conventional approach is to send the collected data to an application on an
internet cloud. This approach has several problems such as high communication
latency, communication energy consumption and unnecessary data traffic to the
core network. To overcome the drawbacks of the conventional cloud-based
approach, a new system called mobile edge cloud is proposed. In mobile edge
cloud, multiple mobile and stationary devices interconnected through wireless
local area networks are combined to create a small cloud infrastructure at a
local physical area such as a home. Compared to traditional mobile distributed
computing systems, mobile edge cloud introduces several complex challenges due
to the heterogeneous computing environment, heterogeneous and dynamic network
environment, node mobility, and limited battery power. The real-time
requirements associated with the internet of things and cyber-physical system
applications make the problem even more challenging. In this paper, we describe
the applications and challenges associated with the design and development of
mobile edge cloud systems and propose an architecture based on a cross-layer
design approach for effective decision making.
Comment: 4th Annual Conference on Computational Science and Computational Intelligence, December 14-16, 2017, Las Vegas, Nevada, USA. arXiv admin note: text overlap with arXiv:1810.0704
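The local-versus-cloud trade-off described above can be sketched with back-of-the-envelope numbers: compare latency and device energy for executing a task locally, on a nearby edge cloud over WLAN, or on a remote internet cloud over a WAN. Every constant below is an assumed illustrative value, not a measured one:

```python
# Back-of-the-envelope comparison of local, edge-cloud, and internet-cloud
# execution of a computational task. All constants are assumed.

def execution_options(cycles, data_bits):
    """Return {option: (latency_s, device_energy_J)} for a task needing
    `cycles` CPU cycles and shipping `data_bits` of input data."""
    local_f = 1e9      # device CPU speed, cycles/s (assumed)
    edge_f = 4e9       # edge-cloud CPU speed (assumed)
    cloud_f = 16e9     # internet-cloud CPU speed (assumed)
    wlan_rate = 50e6   # WLAN uplink to the edge cloud, bits/s (assumed)
    wan_rate = 5e6     # WAN uplink to the internet cloud, bits/s (assumed)
    p_compute = 0.9    # device power while computing, W (assumed)
    p_tx = 0.3         # device power while transmitting, W (assumed)
    wan_rtt = 0.1      # extra round trip through the core network, s (assumed)

    local = (cycles / local_f, p_compute * cycles / local_f)
    t_edge = data_bits / wlan_rate + cycles / edge_f
    edge = (t_edge, p_tx * data_bits / wlan_rate)
    t_cloud = data_bits / wan_rate + cycles / cloud_f + wan_rtt
    cloud = (t_cloud, p_tx * data_bits / wan_rate)
    return {"local": local, "edge": edge, "cloud": cloud}
```

With these numbers, the slower WAN uplink and core-network round trip make the internet cloud worse than the nearby edge cloud on both latency and device energy, which is the drawback of the conventional cloud-based approach the paper highlights.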