Joint Optimal Software Caching, Computation Offloading and Communications Resource Allocation for Mobile Edge Computing
As software may be used by multiple users, caching popular software at the
wireless edge has been considered to save computation and communications
resources for mobile edge computing (MEC). However, fetching uncached software
from the core network and multicasting popular software to users have so far
been ignored; existing designs are therefore incomplete and less practical. In this
paper, we propose a joint caching, computation and communications mechanism
which involves software fetching, caching and multicasting, as well as task
input data uploading, task executing (with non-negligible time duration) and
computation result downloading, and mathematically characterize it. Then, we
optimize the joint caching, offloading and time allocation policy to minimize
the weighted sum energy consumption subject to the caching and deadline
constraints. The problem is a challenging two-timescale mixed integer nonlinear
programming (MINLP) problem, and is NP-hard in general. We convert it into an
equivalent convex MINLP problem by using some appropriate transformations and
propose two low-complexity algorithms to obtain suboptimal solutions of the
original non-convex MINLP problem. Specifically, the first suboptimal solution
is obtained by solving a relaxed convex problem using the consensus alternating
direction method of multipliers (ADMM), and then rounding its optimal solution
properly. The second suboptimal solution is proposed by obtaining a stationary
point of an equivalent difference of convex (DC) problem using the penalty
convex-concave procedure (Penalty-CCP) and ADMM. Finally, by numerical results,
we show that the proposed solutions outperform existing schemes and reveal
their advantages in efficiently utilizing storage, computation and
communications resources.
Comment: To appear in IEEE Trans. Veh. Technol., 202
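The relax-then-round idea behind the first suboptimal algorithm can be illustrated on a toy caching instance (a minimal sketch under simplified assumptions; the greedy fractional solution and all names below are illustrative, not the paper's formulation):

```python
# Toy relax-then-round for 0-1 software caching under a storage budget.
# The continuous relaxation of this knapsack-like problem is solved
# greedily by benefit density; the rounding step then drops the single
# fractional item to restore feasibility of the 0-1 constraint.
def relax_and_round(benefit, size, capacity):
    order = sorted(range(len(size)), key=lambda i: benefit[i] / size[i], reverse=True)
    x = [0.0] * len(size)
    remaining = capacity
    for i in order:
        take = min(1.0, remaining / size[i])
        x[i] = take
        remaining -= take * size[i]
        if remaining <= 0:
            break
    return [int(v >= 1.0) for v in x]  # round: keep only fully cached items

cache = relax_and_round(benefit=[8.0, 6.0, 3.0], size=[4.0, 3.0, 2.0], capacity=6.0)
print(cache)  # [1, 0, 0]: item 0 fits fully, item 1 was fractional and is dropped
```

Rounding down is always feasible here but can be loose; the paper's consensus-ADMM relaxation additionally handles the coupled offloading and time-allocation variables.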
Edge Intelligence: The Confluence of Edge Computing and Artificial Intelligence
Along with the rapid developments in communication technologies and the surge
in the use of mobile devices, a brand-new computation paradigm, Edge Computing,
is surging in popularity. Meanwhile, Artificial Intelligence (AI) applications
are thriving with the breakthroughs in deep learning and the many improvements
in hardware architectures. Billions of data bytes, generated at the network
edge, put massive demands on data processing and structural optimization. Thus,
there exists a strong demand to integrate Edge Computing and AI, which gives
birth to Edge Intelligence. In this paper, we divide Edge Intelligence into AI
for edge (Intelligence-enabled Edge Computing) and AI on edge (Artificial
Intelligence on Edge). The former focuses on providing more optimal solutions
to key problems in Edge Computing with the help of popular and effective AI
technologies while the latter studies how to carry out the entire process of
building AI models, i.e., model training and inference, on the edge. This paper
provides insights into this new inter-disciplinary field from a broader
perspective. It discusses the core concepts and the research road-map, which
should provide the necessary background for potential future research
initiatives in Edge Intelligence.
Comment: 13 pages, 3 figures
Mobile Edge Computation Offloading Using Game Theory and Reinforcement Learning
Due to the ever-increasing popularity of resource-hungry and
delay-constrained mobile applications, the computation and storage capabilities
of the remote cloud have partially migrated toward the mobile edge, giving
rise to the concept known as Mobile Edge Computing (MEC). While MEC servers
enjoy close proximity to end-users and can provide services at reduced latency
and lower energy cost, they suffer from limited computational and radio
resources, which calls for fair and efficient resource management in MEC
servers. The problem is challenging, however, due to the ultra-high density,
distributed nature, and intrinsic randomness of next generation wireless
networks. In this article, we focus on the application of game theory and
reinforcement learning for efficient distributed resource management in MEC, in
particular, for computation offloading. We briefly review the cutting-edge
research and discuss future challenges. Furthermore, we develop a
game-theoretical model for energy-efficient distributed edge server activation
and study several learning techniques. Numerical results are provided to
illustrate the performance of these distributed learning techniques. Also, open
research issues in the context of resource management in MEC servers are
discussed.
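The game-theoretic flavor of distributed edge server activation can be sketched with best-response dynamics on a toy activation game (the costs, parameters, and names below are illustrative assumptions, not the article's model):

```python
# Toy activation game: each of n servers chooses ON (1) or OFF (0).
# An ON server pays an energy cost plus a share of the load; an OFF
# server still pays a congestion cost that shrinks as more servers are
# active (a large penalty applies if no server is on). Best-response
# dynamics converge to a pure Nash equilibrium in such potential-game
# settings.
def server_cost(active, n_on, energy=1.0, load=5.0, penalty=100.0):
    if active:
        return energy + load / n_on          # n_on includes this server
    return load / n_on if n_on > 0 else penalty

def best_response_dynamics(n_servers=6, sweeps=20):
    a = [0] * n_servers
    for _ in range(sweeps):
        for i in range(n_servers):           # each server best-responds in turn
            others_on = sum(a) - a[i]
            on_cost = server_cost(True, others_on + 1)
            off_cost = server_cost(False, others_on)
            a[i] = 1 if on_cost < off_cost else 0
    return a

active = best_response_dynamics()
print(active)  # [1, 1, 0, 0, 0, 0]: exactly two servers stay ON at equilibrium
```

With these costs the equilibrium activation count (two servers) is independent of the number of players, illustrating how a simple distributed rule settles energy-efficient activation without central coordination.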
A Parallel Optimal Task Allocation Mechanism for Large-Scale Mobile Edge Computing
We consider the problem of intelligent and efficient task allocation in
large-scale mobile edge computing (MEC), aiming to reduce delay and energy
consumption through parallel and distributed optimization. In this
paper, we study a joint optimization model that considers cooperative task
management among mobile terminals (MTs), a macro cell base station (MBS), and
multiple small cell base stations (SBSs) for large-scale MEC applications. We
propose a parallel multi-block Alternating Direction Method of Multipliers
(ADMM)-based method that captures both the low-delay and low-energy
requirements of the MEC system, formulating task allocation under these
requirements as a nonlinear 0-1 integer programming problem. To solve this
problem, we develop an efficient algorithm that combines conjugate gradient,
Newton, and line search techniques with Logarithmic Smoothing (for global
variable updates) and Cyclic Block Coordinate Gradient Projection (CBGP, for
local variable updates); it guarantees convergence and reduces computational
complexity with good scalability. Numerical results demonstrate that the
proposed mechanism can effectively reduce delay and energy consumption in a
large-scale MEC system.
Comment: 15 pages, 4 figures, resource management for large-scale MEC
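The consensus form of ADMM used in such multi-block task-allocation schemes can be sketched on a toy convex objective (a minimal illustration with invented values; the real problem couples 0-1 allocation variables and is far richer):

```python
# Minimal consensus ADMM: n agents jointly minimize sum_i (x - a_i)^2
# over a shared variable x. Each agent keeps a local copy x_i; ADMM
# alternates a local proximal update, a global averaging (consensus)
# step, and a scaled dual update. The optimum is the mean of the a_i.
def consensus_admm(a, rho=1.0, iters=200):
    n = len(a)
    x = [0.0] * n          # local copies
    u = [0.0] * n          # scaled dual variables
    z = 0.0                # global (consensus) variable
    for _ in range(iters):
        # local step: argmin_x (x - a_i)^2 + (rho/2)(x - z + u_i)^2
        x = [(2 * a[i] + rho * (z - u[i])) / (2 + rho) for i in range(n)]
        # global step: average of local copies plus duals
        z = sum(x[i] + u[i] for i in range(n)) / n
        # dual step: accumulate consensus violation
        u = [u[i] + x[i] - z for i in range(n)]
    return z

z = consensus_admm([1.0, 2.0, 6.0])
print(round(z, 3))  # 3.0, the average of the local targets
```

The local steps are independent, which is what makes the multi-block variant parallelizable across base stations.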
Air-Ground Integrated Mobile Edge Networks: Architecture, Challenges and Opportunities
The ever-increasing mobile data demands have posed significant challenges in
the current radio access networks, while the emerging computation-heavy
Internet of things (IoT) applications with varied requirements demand more
flexibility and resilience from the cloud/edge computing architecture. In this
article, to address the issues, we propose a novel air-ground integrated mobile
edge network (AGMEN), where UAVs are flexibly deployed and scheduled, and
assist the communication, caching, and computing of the edge network.
Specifically, we present the detailed architecture of AGMEN and investigate the
benefits and application scenarios of drone-cells, and UAV-assisted edge
caching and computing. Furthermore, the challenging issues in AGMEN are
discussed, and potential research directions are highlighted.
Comment: Accepted by IEEE Communications Magazine. 5 figures
Heterogeneous Services Provisioning in Small Cell Networks with Cache and Mobile Edge Computing
In the area of full-duplex (FD)-enabled small cell networks, little work has
jointly considered caching and mobile edge computing (MEC).
In this paper, a virtual FD-enabled small cell network with cache and MEC is
investigated for two heterogeneous services, high-data-rate service and
computation-sensitive service. In our proposed scheme, content caching and FD
communication are closely combined to offer high-data-rate services without
consuming backhaul resources. Computation offloading is performed to guarantee the
delay requirements of users. We then formulate a virtual resource allocation
problem in which user association, power control, caching and computation
offloading policies, and resource allocation are jointly considered. Since the
original problem is a mixed combinatorial problem, variable relaxation and
reformulation are applied to transform it into a convex problem. The
alternating direction method of multipliers (ADMM) is then adopted to obtain
the optimal solution. Finally, extensive
simulations are conducted with different system configurations to verify the
effectiveness of the proposed scheme.
The edge cloud: A holistic view of communication, computation and caching
The evolution of communication networks shows a clear shift of focus from
just improving the communications aspects to enabling new important services,
from Industry 4.0 to automated driving, virtual/augmented reality, Internet of
Things (IoT), and so on. This trend is evident in the roadmap planned for the
deployment of the fifth generation (5G) communication networks. This ambitious
goal requires a paradigm shift towards a vision that looks at communication,
computation and caching (3C) resources as three components of a single holistic
system. The further step is to bring these 3C resources closer to the mobile
user, at the edge of the network, to enable very low latency and high
reliability services. The scope of this chapter is to show that signal
processing techniques can play a key role in this new vision. In particular, we
motivate the joint optimization of 3C resources. Then we show how graph-based
representations can play a key role in building effective learning methods and
devising innovative resource allocation techniques.
Comment: to appear in the book "Cooperative and Graph Signal Processing:
Principles and Applications", P. Djuric and C. Richard Eds., Academic Press,
Elsevier, 201
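The graph-based view mentioned above can be illustrated with the simplest graph-signal quantity (a toy sketch; the node signal and edges are invented for illustration):

```python
# Network entities (users, edge servers) are graph nodes, and a
# resource-related quantity (e.g. load) is a signal on the nodes. The
# Laplacian quadratic form x^T L x measures how smoothly the signal
# varies over the graph; for an unweighted graph it equals the sum of
# squared differences across edges. Smooth signals score low, which
# learning and allocation methods can exploit as a regularizer.
def laplacian_quadratic(edges, signal):
    return sum((signal[i] - signal[j]) ** 2 for i, j in edges)

path = [(0, 1), (1, 2), (2, 3)]  # a 4-node path graph
smooth = laplacian_quadratic(path, [1.0, 1.1, 1.2, 1.3])
rough = laplacian_quadratic(path, [1.0, 3.0, 0.5, 2.5])
print(smooth < rough)  # True: the smooth signal has lower Laplacian energy
```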
Stacked Auto Encoder Based Deep Reinforcement Learning for Online Resource Scheduling in Large-Scale MEC Networks
An online resource scheduling framework is proposed for minimizing the sum of weighted task latency for all Internet-of-Things (IoT) users, by optimizing offloading decisions, transmission power, and resource allocation in a large-scale mobile-edge computing (MEC) system. Toward this end, a deep reinforcement learning (DRL)-based solution is proposed, which includes the following components. First, a related and regularized stacked autoencoder (2r-SAE) with unsupervised learning is applied to perform data compression and representation for high-dimensional channel quality information (CQI) data, which reduces the state space for DRL. Second, we present an adaptive simulated annealing approach (ASA) as the action search method of DRL, in which an adaptive h-mutation is used to guide the search direction and an adaptive iteration is proposed to enhance the search efficiency during the DRL process. Third, a preserved and prioritized experience replay (2p-ER) is introduced to assist the DRL in training the policy network and finding the optimal offloading policy. Numerical results demonstrate that the proposed algorithm can achieve near-optimal performance while significantly decreasing computational time compared with existing benchmarks.
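The simulated-annealing action search idea can be sketched on a toy offloading instance (a hedged sketch: the paper's ASA adds adaptive mutation and iteration control, and all costs and names below are invented for illustration):

```python
import math
import random

# Search binary offloading decisions for N users by flipping one bit at
# a time and accepting worse moves with a temperature-controlled
# probability. The toy cost charges latency for local execution and a
# congestion-dependent cost for offloaded tasks.
def toy_cost(decision, local_delay, edge_delay):
    n_off = sum(decision)
    congestion = 1.0 + 0.3 * max(n_off - 1, 0)   # edge slows down as it fills
    return sum(edge_delay[i] * congestion if d else local_delay[i]
               for i, d in enumerate(decision))

def anneal(local_delay, edge_delay, steps=2000, t0=1.0, alpha=0.999, seed=1):
    rng = random.Random(seed)
    n = len(local_delay)
    cur = [0] * n                                 # start fully local
    cur_cost = toy_cost(cur, local_delay, edge_delay)
    best, best_cost, t = cur[:], cur_cost, t0
    for _ in range(steps):
        cand = cur[:]
        cand[rng.randrange(n)] ^= 1               # flip one offloading bit
        c = toy_cost(cand, local_delay, edge_delay)
        if c < cur_cost or rng.random() < math.exp((cur_cost - c) / t):
            cur, cur_cost = cand, c
            if c < best_cost:
                best, best_cost = cand[:], c
        t *= alpha                                # cool down
    return best, best_cost

best, cost = anneal(local_delay=[5.0, 1.0, 4.0], edge_delay=[1.0, 2.0, 1.5])
```

On this 3-user toy instance, enumerating all decisions shows the optimum is to offload users 0 and 2 (cost 4.25), which the search reliably finds.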
Mobile Edge Computing for Future Internet-of-Things
University of Technology Sydney. Faculty of Engineering and Information Technology.
Integrating sensors, the Internet, and wireless systems, Internet-of-Things (IoT) provides a new paradigm of ubiquitous connectivity and pervasive intelligence. The key enabling technology underlying IoT is mobile edge computing (MEC), which is anticipated to realize and reap the promising benefits of IoT applications by placing various cloud resources, such as computing and storage resources, closer to smart devices and objects. Challenges of designing efficient and scalable MEC platforms for future IoT arise from the physical limitations of the computing and battery resources of IoT devices, the heterogeneity of computing and wireless communication capabilities of IoT networks, the large volume of data arrivals and massive number of connections, and large-scale data storage and delivery across the edge network. To address these challenges, this thesis proposes four efficient and scalable task offloading and cooperative caching approaches.
Firstly, for the multi-user single-cell MEC scenario, the base station (BS) can only have outdated knowledge of IoT device channel conditions due to the time-varying nature of practical wireless channels. To this end, a hybrid learning approach is proposed to optimize the real-time local processing and predictive computation offloading decisions in a distributed manner.
Secondly, for the multi-user multi-cell MEC scenario, an energy-efficient resource management approach is developed based on distributed online learning to tackle the heterogeneity of computing and wireless transmission capabilities of edge servers and IoT devices. The proposed approach optimizes the decisions on task offloading, processing, and result delivery between edge servers and IoT devices to minimize the time-average energy consumption of MEC.
Thirdly, for computing resource allocation in large-scale networks, a distributed online collaborative computing approach based on Lyapunov optimization is proposed for data analysis in IoT applications, minimizing the time-average energy consumption of the network.
Finally, for storage resource allocation in large-scale networks, a distributed IoT data delivery approach based on online learning is proposed for caching in mobile applications. A new profitable cooperative region is established for every IoT data request admitted at an edge server, to avoid invalid request dispatching.
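The Lyapunov-optimization style of control underlying such time-average energy minimization can be sketched with a drift-plus-penalty toy (a minimal model with invented parameters, not the thesis's formulation):

```python
import random

# A virtual queue Q tracks backlogged IoT data; each slot we decide
# whether to run an energy-costly processing step. Greedily minimizing
# V*energy + Q*(queue growth) per slot yields a threshold rule: process
# when Q * rate > V * energy. Larger V trades longer queues for lower
# average energy, without knowing arrival statistics in advance.
def drift_plus_penalty(slots=10000, arrival=0.5, rate=1.0, energy=1.0, v=10.0, seed=0):
    rng = random.Random(seed)
    q, used = 0.0, 0.0
    for _ in range(slots):
        a = 1.0 if rng.random() < arrival else 0.0   # Bernoulli data arrival
        serve = q * rate > v * energy                # drift-plus-penalty threshold
        q = max(q + a - (rate if serve else 0.0), 0.0)
        used += energy if serve else 0.0
    return q, used / slots

q, avg_energy = drift_plus_penalty()
```

With these parameters the queue stabilizes near the threshold V while the long-run energy use approaches the arrival rate, the hallmark queue-energy trade-off of Lyapunov-based scheduling.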
Edge Intelligence: Paving the Last Mile of Artificial Intelligence with Edge Computing
With the breakthroughs in deep learning, the recent years have witnessed a
booming of artificial intelligence (AI) applications and services, spanning
from personal assistant to recommendation systems to video/audio surveillance.
More recently, with the proliferation of mobile computing and
Internet-of-Things (IoT), billions of mobile and IoT devices are connected to
the Internet, generating zillions of bytes of data at the network edge. Driven
by this trend, there is an urgent need to push the AI frontiers to the network
edge so as to fully unleash the potential of the edge big data. To meet this
demand, edge computing, an emerging paradigm that pushes computing tasks and
services from the network core to the network edge, has been widely recognized
as a promising solution. The resulting new interdisciplinary field, edge AI or
edge intelligence, is beginning to receive a tremendous amount of interest. However,
research on edge intelligence is still in its infancy stage, and a dedicated
venue for exchanging the recent advances of edge intelligence is highly desired
by both the computer system and artificial intelligence communities. To this
end, we conduct a comprehensive survey of the recent research efforts on edge
intelligence. Specifically, we first review the background and motivation for
artificial intelligence running at the network edge. We then provide an
overview of the overarching architectures, frameworks and emerging key
technologies for deep learning model training and inference at the network
edge. Finally, we discuss future research opportunities on edge intelligence.
We believe that this survey will attract escalating attention, stimulate
fruitful discussions, and inspire further research ideas on edge intelligence.
Comment: Zhi Zhou, Xu Chen, En Li, Liekang Zeng, Ke Luo, and Junshan Zhang,
"Edge Intelligence: Paving the Last Mile of Artificial Intelligence with Edge
Computing," Proceedings of the IEE