Service Capacity Enhanced Task Offloading and Resource Allocation in Multi-Server Edge Computing Environment
An edge computing environment features multiple edge servers and multiple
service clients. In this environment, mobile service providers can offload
client-side computation tasks from service clients' devices onto edge servers
to reduce service latency and power consumption experienced by the clients. A
critical issue that has yet to be properly addressed is how to allocate edge
computing resources to achieve two optimization objectives: 1) minimize the
service cost measured by the service latency and the power consumption
experienced by service clients; and 2) maximize the service capacity measured
by the number of service clients that can offload their computation tasks in
the long term. This paper formulates this long-term problem as a stochastic
optimization problem and solves it with an online algorithm based on Lyapunov
optimization. This NP-hard problem is decomposed into three sub-problems, which
are then solved with a suite of techniques. The experimental results show that
our approach significantly outperforms two baseline approaches.
Comment: This paper has been accepted by Early Submission Phase of ICWS201
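The Lyapunov-based online algorithm mentioned in this abstract can be evoked with a minimal drift-plus-penalty sketch. Everything below is an illustrative assumption, not the paper's model: the two candidate actions, their costs, the queue dynamics, and the trade-off weight `V` are all invented for demonstration.

```python
# Hedged sketch of drift-plus-penalty control (Lyapunov optimization).
# Action costs, queue growth values, and V are illustrative, not from the paper.

def drift_plus_penalty_decision(Q, candidates, V):
    """Pick the action minimizing V*cost + Q*queue_growth.

    Q          -- current virtual-queue backlog
    candidates -- list of (cost, queue_growth) pairs for feasible actions
    V          -- weight trading off immediate cost against backlog
    """
    return min(candidates, key=lambda a: V * a[0] + Q * a[1])

def simulate(slots=1000, V=10.0):
    Q = 0.0
    total_cost = 0.0
    # Two hypothetical actions per slot: serve aggressively (high cost,
    # drains the queue) or conservatively (low cost, queue grows).
    actions = [(5.0, -2.0), (1.0, +1.0)]
    for _ in range(slots):
        cost, growth = drift_plus_penalty_decision(Q, actions, V)
        total_cost += cost
        Q = max(Q + growth, 0.0)  # virtual queues stay non-negative
    return total_cost / slots, Q

avg_cost, backlog = simulate()
```

The characteristic behavior of such schemes is visible even in this toy: the controller favors the cheap action until the backlog grows large enough to tip the weighted sum, so the queue stays bounded while the time-average cost stays near its minimum.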
Energy-Efficient Joint Offloading and Wireless Resource Allocation Strategy in Multi-MEC Server Systems
Mobile edge computing (MEC) is an emerging paradigm in which mobile devices can
offload computation-intensive or latency-critical tasks to nearby MEC
servers, so as to save energy and extend battery life. Unlike a cloud server,
an MEC server is a small-scale data center deployed at a wireless access point,
and is therefore constrained in both radio and computing resources. In this
paper, we consider an Orthogonal Frequency-Division Multiplexing Access (OFDMA)
based multi-user and multi-MEC-server system, where the task offloading
strategies and wireless resource allocation are jointly investigated. Aiming
at minimizing the total energy consumption, we propose a joint offloading and
resource allocation strategy for latency-critical applications. Through a
bi-level optimization approach, the original NP-hard problem is decoupled into
a lower-level problem that allocates power and subcarriers and
the upper-level task offloading problem. Simulation results show that the
proposed algorithm achieves excellent performance in energy saving and
successful offloading probability (SOP) in comparison with conventional
schemes.
Comment: 6 pages, 5 figures, to appear in IEEE ICC 2018, May 20-2
Aqua Computing: Coupling Computing and Communications
The authors introduce a new vision for providing computing services to
connected devices. It is based on the key concept that future computing
resources will be coupled with communication resources, both to enhance the
experience of connected users and to optimise resources in the
providers' infrastructures. Such coupling is achieved by joint/cooperative
resource allocation algorithms, by integrating computing and communication
services, and by integrating hardware in networks. This type of computing, in
which computing services are delivered not independently but in dependence on
networking services, is named Aqua Computing. The authors see Aqua Computing as
a novel approach for delivering computing resources to end devices, where the
computing power of a device is enhanced automatically once it is
connected to an Aqua Computing enabled network. The process of resource
coupling is named computation dissolving. Then, an Aqua Computing architecture
is proposed for mobile edge networks, in which computing and wireless
networking resources are allocated jointly or cooperatively by a Mobile Cloud
Controller, for the benefit of the end-users and/or for the benefit of the
service providers. Finally, a working prototype of the system is shown, and the
gathered results demonstrate the performance of the Aqua Computing prototype.
Comment: A shorter version of this paper will be submitted to an IEEE magazine
Mobile Edge Computation Offloading Using Game Theory and Reinforcement Learning
Due to the ever-increasing popularity of resource-hungry and
delay-constrained mobile applications, the computation and storage capabilities
of the remote cloud have partially migrated towards the mobile edge, giving rise
to the concept known as Mobile Edge Computing (MEC). While MEC servers enjoy
close proximity to end-users, enabling them to provide services at reduced
latency and lower energy costs, they suffer from limitations in computational
and radio resources, which calls for fair and efficient resource management in
MEC servers. The problem is, however, challenging due to the ultra-high density,
distributed nature, and intrinsic randomness of next generation wireless
networks. In this article, we focus on the application of game theory and
reinforcement learning for efficient distributed resource management in MEC, in
particular, for computation offloading. We briefly review the cutting-edge
research and discuss future challenges. Furthermore, we develop a
game-theoretical model for energy-efficient distributed edge server activation
and study several learning techniques. Numerical results are provided to
illustrate the performance of these distributed learning techniques. Also, open
research issues in the context of resource management in MEC servers are
discussed.
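The game-theoretical flavor of distributed edge server activation described above can be illustrated with a toy best-response iteration. The payoff model below (equal demand sharing minus a fixed activation cost) and all numbers are illustrative assumptions, not the authors' formulation.

```python
# Hedged sketch: best-response dynamics for a server-activation game.
# Payoff model and all parameters are illustrative, not the article's model.

def best_response_equilibrium(n_servers=6, demand=10.0, cost=3.0, max_iters=100):
    """Iterate best responses until no server wants to switch.

    An active server's payoff is demand / (#active) - cost; an inactive
    server's payoff is 0. Each server switches to its best response given
    the others' current choices.
    """
    active = [False] * n_servers
    for _ in range(max_iters):
        changed = False
        for i in range(n_servers):
            others = sum(active) - (1 if active[i] else 0)
            # Activate iff joining would yield a strictly positive payoff.
            best = demand / (others + 1) - cost > 0
            if best != active[i]:
                active[i] = best
                changed = True
        if not changed:  # pure-strategy Nash equilibrium reached
            break
    return sum(active)
```

In congestion-style games of this kind the dynamics settle at the largest number of active servers that still earn a positive payoff, which is the kind of decentralized equilibrium outcome the learning techniques in the article aim to reach without central coordination.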
Edge Intelligence: The Confluence of Edge Computing and Artificial Intelligence
Along with the rapid developments in communication technologies and the surge
in the use of mobile devices, a brand-new computation paradigm, Edge Computing,
is surging in popularity. Meanwhile, Artificial Intelligence (AI) applications
are thriving with the breakthroughs in deep learning and the many improvements
in hardware architectures. Billions of data bytes, generated at the network
edge, put massive demands on data processing and structural optimization. Thus,
there exists a strong demand to integrate Edge Computing and AI, which gives
birth to Edge Intelligence. In this paper, we divide Edge Intelligence into AI
for edge (Intelligence-enabled Edge Computing) and AI on edge (Artificial
Intelligence on Edge). The former focuses on providing better solutions
to key problems in Edge Computing with the help of popular and effective AI
technologies, while the latter studies how to carry out the entire process of
building AI models, i.e., model training and inference, on the edge. This paper
provides insights into this new inter-disciplinary field from a broader
perspective. It discusses the core concepts and the research road-map, which
should provide the necessary background for potential future research
initiatives in Edge Intelligence.
Comment: 13 pages, 3 figures
Intelligent networking with Mobile Edge Computing: Vision and Challenges for Dynamic Network Scheduling
Mobile edge computing (MEC) has been considered as a promising technique for
internet of things (IoT). By deploying edge servers at the proximity of
devices, it is expected to provide services and process data at a relatively
low delay through intelligent networking. However, the vast number of edge
servers may face great challenges in terms of cooperation and resource
allocation. Furthermore, intelligent networking requires online implementation
in a distributed mode. In such systems, network scheduling cannot follow any
previously known rule, owing to the complicated application environment.
Statistical learning thus emerges as a promising technique for network
scheduling, in which edge nodes cooperatively and dynamically learn
environmental elements. Such learning-based methods are expected to relieve the
deficiencies of purely model-based approaches, enhancing their practical use in
dynamic network scheduling. In this paper, we investigate the vision and
challenges of intelligent IoT networking with mobile edge computing. From a
systematic viewpoint, some major research opportunities are enumerated with
respect to statistical learning.
The edge cloud: A holistic view of communication, computation and caching
The evolution of communication networks shows a clear shift of focus from
just improving the communications aspects to enabling new important services,
from Industry 4.0 to automated driving, virtual/augmented reality, Internet of
Things (IoT), and so on. This trend is evident in the roadmap planned for the
deployment of the fifth generation (5G) communication networks. This ambitious
goal requires a paradigm shift towards a vision that looks at communication,
computation and caching (3C) resources as three components of a single holistic
system. The further step is to bring these 3C resources closer to the mobile
user, at the edge of the network, to enable very low latency and high
reliability services. The scope of this chapter is to show that signal
processing techniques can play a key role in this new vision. In particular, we
motivate the joint optimization of 3C resources. Then we show how graph-based
representations can play a key role in building effective learning methods and
devising innovative resource allocation techniques.
Comment: to appear in the book "Cooperative and Graph Signal Processing:
Principles and Applications", P. Djuric and C. Richard Eds., Academic Press,
Elsevier, 201
Air-Ground Integrated Mobile Edge Networks: Architecture, Challenges and Opportunities
The ever-increasing mobile data demands have posed significant challenges in
the current radio access networks, while the emerging computation-heavy
Internet of things (IoT) applications with varied requirements demand more
flexibility and resilience from the cloud/edge computing architecture. In this
article, to address the issues, we propose a novel air-ground integrated mobile
edge network (AGMEN), where UAVs are flexibly deployed and scheduled, and
assist the communication, caching, and computing of the edge network.
Specifically, we present the detailed architecture of AGMEN and investigate the
benefits and application scenarios of drone-cells, and UAV-assisted edge
caching and computing. Furthermore, the challenging issues in AGMEN are
discussed, and potential research directions are highlighted.
Comment: Accepted by IEEE Communications Magazine. 5 figures
Fog Computing: A Taxonomy, Survey and Future Directions
In recent years, the number of Internet of Things (IoT) devices/sensors has
increased to a great extent. To support the computational demand of real-time
latency-sensitive applications of largely geo-distributed IoT devices/sensors,
a new computing paradigm named "Fog computing" has been introduced. Generally,
Fog computing resides closer to the IoT devices/sensors and extends the
Cloud-based computing, storage and networking facilities. In this chapter, we
comprehensively analyse the challenges in Fogs acting as an intermediate layer
between IoT devices/sensors and Cloud datacentres and review the current
developments in this field. We present a taxonomy of Fog computing according to
the identified challenges and its key features. We also map the existing works
to the taxonomy in order to identify current research gaps in the area of Fog
computing. Moreover, based on the observations, we propose future directions
for research.
Mobile Edge Intelligence and Computing for the Internet of Vehicles
The Internet of Vehicles (IoV) is an emerging paradigm, driven by recent
advancements in vehicular communications and networking. Advances in research
can now provide reliable communication links between vehicles, via
vehicle-to-vehicle communications, and between vehicles and roadside
infrastructures, via vehicle-to-infrastructure communications. Meanwhile, the
capability and intelligence of vehicles are being rapidly enhanced, and this
will have the potential of supporting a plethora of new exciting applications,
which will integrate fully autonomous vehicles, the Internet of Things (IoT),
and the environment. These trends will bring about an era of intelligent IoV,
which will heavily depend upon communications, computing, and data analytics
technologies. To store and process the massive amount of data generated by
intelligent IoV, onboard processing and Cloud computing will not be sufficient,
due to resource/power constraints and communication overhead/latency,
respectively. By deploying storage and computing resources at the wireless
network edge, e.g., radio access points, the edge information system (EIS),
including edge caching, edge computing, and edge AI, will play a key role in
the future intelligent IoV. Such a system will provide not only low-latency
content delivery and computation services, but also localized data acquisition,
aggregation and processing. This article surveys the latest development in EIS
for intelligent IoV. Key design issues, methodologies and hardware platforms
are introduced. In particular, typical use cases for intelligent vehicles are
illustrated, including edge-assisted perception, mapping, and localization. In
addition, various open research problems are identified.
Comment: 18 pages, 6 figures, submitted to Proceedings of the IEE