A Survey on Mobile Edge Networks: Convergence of Computing, Caching and Communications
With the explosive growth of smart devices and the advent of many new
applications, traffic volume has been growing exponentially. The traditional
centralized network architecture cannot accommodate such user demands due to
the heavy burden on the backhaul links and long latency. Therefore, new
architectures which bring network functions and contents to the network edge
are proposed, i.e., mobile edge computing and caching. Mobile edge networks
provide cloud computing and caching capabilities at the edge of cellular
networks. In this survey, we present an exhaustive review of the state-of-the-art
research efforts on mobile edge networks. We first give an overview of mobile
edge networks, including their definition, architecture, and advantages. Next, we
present a comprehensive survey of computing, caching, and communication
techniques at the network edge. The applications and use cases of mobile edge
networks are then discussed, followed by the key enablers of mobile edge
networks, such as cloud technology, SDN/NFV, and smart devices. Finally, open
research challenges and future directions are presented.
Performance Analysis and Modeling of Video Transcoding Using Heterogeneous Cloud Services
High-quality video streaming, whether in the form of Video-On-Demand (VOD) or live
streaming, usually requires converting (i.e., transcoding) video streams to match
the characteristics of viewers' devices (e.g., in terms of spatial resolution or
supported formats). Considering the computational cost of the transcoding
operation and the surge in video streaming demands, Streaming Service Providers
(SSPs) are becoming reliant on cloud services to guarantee Quality of Service
(QoS) of streaming for their viewers. Cloud providers offer heterogeneous
computational services in the form of different types of Virtual Machines (VMs)
at diverse prices. Effective utilization of cloud services for video
transcoding requires detailed performance analysis of different video
transcoding operations on the heterogeneous cloud VMs. In this research, for
the first time, we provide a thorough analysis of the performance of the video
stream transcoding on heterogeneous cloud VMs. Providing such analysis is
crucial for efficient prediction of transcoding time on heterogeneous VMs and
for the functionality of any scheduling methods tailored for video transcoding.
Based upon the findings of this analysis and by considering the cost difference
of heterogeneous cloud VMs, in this research, we also provide a model to
quantify the degree of suitability of each cloud VM type for various
transcoding tasks. The provided model can supply resource (VM) provisioning
methods with accurate performance and cost trade-offs to efficiently utilize
cloud services for video streaming.
Comment: 15 pages
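The suitability model described above could, in spirit, be reduced to a cost-performance score. Below is a minimal sketch, not the paper's actual model: the score, VM type names, and timing/price figures are all hypothetical assumptions for illustration.

```python
# Hypothetical sketch of a VM-suitability score (NOT the paper's model):
# suitability = 1 / (transcoding time x monetary cost of the task),
# so VM types that are both fast and cheap rank highest.

def suitability(est_time_s: float, price_per_hour: float) -> float:
    cost = est_time_s / 3600.0 * price_per_hour  # dollars to run this task
    return 1.0 / (est_time_s * cost)

# Made-up profiles: (seconds to transcode a test stream, price per hour).
vm_profiles = {
    "general-purpose":   (120.0, 0.10),
    "compute-optimized": (70.0,  0.17),
    "gpu-accelerated":   (40.0,  0.95),
}

ranked = sorted(vm_profiles, key=lambda vm: suitability(*vm_profiles[vm]),
                reverse=True)
print(ranked)  # best cost-performance trade-off first
```

A provisioning method could consume such a ranking directly, picking the top-ranked VM type that is currently available.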
A Survey on Low Latency Towards 5G: RAN, Core Network and Caching Solutions
The fifth generation (5G) wireless network technology is to be standardized
by 2020, with the main goals of improving capacity, reliability, and energy
efficiency while reducing latency and massively increasing connection density.
An integral part of 5G is the capability to transmit touch perception type
real-time communication empowered by applicable robotics and haptics equipment
at the network edge. In this regard, we need drastic changes in network
architecture including core and radio access network (RAN) for achieving
end-to-end latency on the order of 1 ms. In this paper, we present a detailed
survey on the emerging technologies to achieve low latency communications
considering three different solution domains: RAN, core network, and caching.
We also present a general overview of 5G cellular networks composed of software
defined network (SDN), network function virtualization (NFV), caching, and
mobile edge computing (MEC) capable of meeting latency and other 5G
requirements.
Comment: Accepted in IEEE Communications Surveys and Tutorials
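A back-of-the-envelope calculation shows why a 1 ms end-to-end budget pushes computation toward the network edge; every component value below is an illustrative assumption, not a measurement.

```python
# Illustrative latency budget for the ~1 ms end-to-end target. A distant
# cloud data center blows the budget on propagation alone, while an edge
# (MEC) server a few km away stays within it.

SPEED_IN_FIBER_KM_PER_MS = 200.0  # light in fiber travels roughly 200 km per ms

def one_way_latency_ms(distance_km, radio_ms, processing_ms):
    propagation_ms = distance_km / SPEED_IN_FIBER_KM_PER_MS
    return radio_ms + propagation_ms + processing_ms

cloud = one_way_latency_ms(distance_km=400.0, radio_ms=0.2, processing_ms=0.3)
edge = one_way_latency_ms(distance_km=2.0, radio_ms=0.2, processing_ms=0.3)
print(f"cloud: {cloud:.2f} ms, edge: {edge:.2f} ms")  # cloud: 2.50 ms, edge: 0.51 ms
```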
Programming Cloud Resource Orchestration Framework: Operations and Research Challenges
The emergence of cloud computing over the past five years is potentially one
of the breakthrough advances in the history of computing. It delivers hardware
and software resources as virtualization-enabled services, freeing
administrators from the burden of worrying about low-level implementation and
system administration details. Although cloud computing
offers considerable opportunities for users (e.g., application developers,
governments, new startups, administrators, consultants, scientists, business
analysts, etc.), such as no up-front investment, lower operating costs, and
near-infinite scalability, it has many unique research challenges that need to be
carefully addressed in the future. In this paper, we present a survey on key
cloud computing concepts, resource abstractions, and programming operations for
orchestrating resources and associated research challenges, wherever
applicable.
Comment: 19 pages
Base Station ON-OFF Switching in 5G Wireless Networks: Approaches and Challenges
To achieve the expected 1000x increase in data rates under the exponential
growth of traffic demand, a large number of base stations (BSs) or access
points (APs) will be deployed in fifth generation (5G) wireless systems to
support high-data-rate services and to provide seamless coverage. Although such BSs are expected
to be small-scale with lower power, the aggregated energy consumption of all
BSs would be remarkable, resulting in increased environmental and economic
concerns. In existing cellular networks, turning off the under-utilized BSs is
an efficient approach to conserve energy while preserving the quality of
service (QoS) of mobile users. However, in 5G systems with new physical layer
techniques and the highly heterogeneous network architecture, new challenges
arise in the design of BS ON-OFF switching strategies. In this article, we
begin with a discussion on the inherent technical challenges of BS ON-OFF
switching. We then provide a comprehensive review of recent advances on
switching mechanisms in different application scenarios. Finally, we present
open research problems and conclude the paper.
Comment: To appear in IEEE Wireless Communications, 201
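The idea of turning off under-utilized BSs while preserving QoS can be illustrated with a toy greedy heuristic; this sketch is not taken from the article, and the three-cell layout, loads, and capacities are made up.

```python
# Toy greedy BS switch-off heuristic (illustrative only): visit cells from
# lightest to heaviest load and power one down whenever a neighbouring cell
# can absorb its traffic without exceeding that neighbour's capacity.

def greedy_switch_off(load, capacity, neighbours):
    """load/capacity: dicts BS -> value; neighbours: dict BS -> list of BSs."""
    active = set(load)
    switched_off = []
    for bs in sorted(load, key=load.get):  # try lightly loaded cells first
        for nb in neighbours.get(bs, []):
            if nb in active and load[nb] + load[bs] <= capacity[nb]:
                load[nb] += load[bs]  # hand all traffic over to the neighbour
                load[bs] = 0.0
                active.remove(bs)
                switched_off.append(bs)
                break
    return switched_off

# Hypothetical layout: B is lightly loaded and its neighbour C has spare capacity.
load = {"A": 0.9, "B": 0.2, "C": 0.3}
capacity = {"A": 1.0, "B": 1.0, "C": 1.0}
neighbours = {"A": ["B"], "B": ["A", "C"], "C": ["B"]}
off = greedy_switch_off(load, capacity, neighbours)
print(off)  # only B can be switched off; its traffic moves to C
```

Real 5G switching strategies must additionally account for coverage holes, handover signalling, and the switching delay itself, which is what makes the problem hard.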
A Dynamic Service-Migration Mechanism in Edge Cognitive Computing
Driven by the vision of edge computing and the success of rich cognitive
services based on artificial intelligence, edge cognitive computing (ECC) is a
promising new computing paradigm that applies cognitive computing at the edge
of the network. ECC has the potential to provide cognition of users and
network environmental information, and further to provide elastic cognitive
computing services that achieve higher energy efficiency and a higher Quality
of Experience (QoE) than edge computing alone.
This paper first introduces our ECC architecture and then describes
its design issues in detail. Moreover, we propose an ECC-based dynamic service
migration mechanism to provide an insight into how cognitive computing is
combined with edge computing. In order to evaluate the proposed mechanism, a
practical platform for dynamic service migration is built up, where the
services are migrated based on the behavioral cognition of a mobile user. The
experimental results show that the proposed ECC architecture achieves ultra-low
latency and a high-quality user experience, while providing better service to
the user, saving computing resources, and achieving high energy efficiency.
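In its simplest form, a service-migration decision can be sketched as a cost-benefit rule. The reduction below is hypothetical, with made-up numbers; the paper's actual mechanism is richer, driving migration from behavioural cognition of the mobile user.

```python
# Hypothetical migration rule: migrate the service when the total predicted
# latency saving over the remaining requests outweighs the one-off cost of
# moving the service's state to the new edge node.

def should_migrate(cur_latency_ms, new_latency_ms, migration_cost_ms,
                   remaining_requests):
    saving_ms = (cur_latency_ms - new_latency_ms) * remaining_requests
    return saving_ms > migration_cost_ms

# The user moved away from the current edge node; 50 more requests expected.
decision = should_migrate(cur_latency_ms=12.0, new_latency_ms=3.0,
                          migration_cost_ms=200.0, remaining_requests=50)
print(decision)  # True: 9 ms saved per request x 50 requests > 200 ms cost
```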
All One Needs to Know about Fog Computing and Related Edge Computing Paradigms: A Complete Survey
With the Internet of Things (IoT) becoming part of our daily life and our
environment, we expect rapid growth in the number of connected devices. IoT is
expected to connect billions of devices and humans, bringing promising
advantages to us all. With this growth, fog computing, along with its related edge
computing paradigms, such as multi-access edge computing (MEC) and cloudlet,
are seen as promising solutions for handling the large volume of
security-critical and time-sensitive data that is being produced by the IoT. In
this paper, we first provide a tutorial on fog computing and its related
computing paradigms, including their similarities and differences. Next, we
provide a taxonomy of research topics in fog computing, and through a
comprehensive survey, we summarize and categorize the efforts on fog computing
and its related computing paradigms. Finally, we provide challenges and future
directions for research in fog computing.
Comment: 48 pages, 7 tables, 11 figures, 450 references. The data (categories
and features/objectives of the papers) of this survey are now available
publicly. Accepted by Elsevier Journal of Systems Architecture
Risk-Aware Energy Scheduling for Edge Computing with Microgrid: A Multi-Agent Deep Reinforcement Learning Approach
In recent years, multi-access edge computing (MEC) has become a key enabler for
handling the massive expansion of Internet of Things (IoT) applications and
services. However, the energy consumption of a MEC network depends on volatile
tasks, which induces risk in energy demand estimation. As an energy supplier, a
microgrid can facilitate seamless energy supply. However, the risk associated
with energy supply is also increased due to unpredictable energy generation
from renewable and non-renewable sources. In particular, the risk of energy
shortfall involves uncertainties in both energy consumption and
generation. In this paper, we study a risk-aware energy scheduling problem for
a microgrid-powered MEC network. First, we formulate an optimization problem
considering the conditional value-at-risk (CVaR) measurement for both energy
consumption and generation, where the objective is to minimize the expected
residual of scheduled energy for the MEC network, and we show that this problem
is NP-hard. Second, we analyze our formulated problem using a
multi-agent stochastic game that ensures the joint policy Nash equilibrium, and
show the convergence of the proposed model. Third, we derive the solution by
applying a multi-agent deep reinforcement learning (MADRL)-based asynchronous
advantage actor-critic (A3C) algorithm with shared neural networks. This method
mitigates the curse of dimensionality of the state space and chooses the best
policy among the agents for the proposed problem. Finally, the experimental
results establish that, by considering CVaR, the proposed model achieves a
significant performance gain in high-accuracy energy scheduling over both the
single-agent and random-agent models.
Comment: Accepted article by IEEE Transactions on Network and Service
Management, DOI: 10.1109/TNSM.2021.304938
GPU PaaS Computation Model in Aneka Cloud Computing Environment
Due to the surge in the volume of data generated and rapid advancement in
Artificial Intelligence (AI) techniques like machine learning and deep
learning, the existing traditional computing models have become inadequate to
process an enormous volume of data and the complex application logic for
extracting intrinsic information. Computing accelerators such as Graphics
Processing Units (GPUs) have become the de facto SIMD computing systems for
many big data and machine learning applications. Meanwhile, the traditional
computing model has gradually shifted from conventional ownership-based
computing to the subscription-based cloud computing model. However, the lack of
programming models and frameworks to develop cloud-native applications in a
seamless manner to utilize both CPU and GPU resources in the cloud has become a
bottleneck for rapid application development. To support this application
demand for simultaneous heterogeneous resource usage, programming models and
new frameworks are needed to manage the underlying resources effectively. Aneka
has emerged as a popular PaaS computing model for the development of Cloud
applications using multiple programming models, such as Thread, Task, and
MapReduce, in a single container on the .NET platform. Since Aneka addresses
MIMD application development that uses CPU-based resources, while GPU
programming models like CUDA are designed for SIMD application development,
this chapter discusses a GPU PaaS computing model for Aneka Clouds, enabling
rapid cloud application development for .NET platforms. Popular open-source
GPU libraries are utilized and integrated into the existing Aneka task
programming model. The scheduling policies are extended to automatically
identify GPU machines and schedule the respective tasks accordingly. A case
study on image processing is discussed to demonstrate the system, which has
been built using the PaaS Aneka SDKs and the CUDA library.
Comment: Submitted as a book chapter, under processing, 32 pages
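The extended scheduling policy described above, routing GPU tasks to GPU-equipped machines, can be sketched generically. This is illustrative Python, not the Aneka .NET API; the node names and task list are hypothetical.

```python
# Generic sketch of GPU-aware task scheduling: tasks flagged as GPU work may
# only run on nodes that report a GPU; CPU tasks can run on any node.

def schedule(tasks, nodes):
    """tasks: list of (name, needs_gpu); nodes: dict node -> has_gpu.

    Returns a dict mapping task name -> chosen node (None if none fits)."""
    placement = {}
    for name, needs_gpu in tasks:
        eligible = [n for n, has_gpu in nodes.items()
                    if has_gpu or not needs_gpu]
        placement[name] = eligible[0] if eligible else None
    return placement

nodes = {"cpu-node": False, "gpu-node": True}
tasks = [("resize-image", True), ("parse-log", False)]
placement = schedule(tasks, nodes)
print(placement)  # GPU task lands on the GPU node, CPU task on the first node
```

A production scheduler would also weigh load, data locality, and queue length; the sketch only captures the capability check.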
iFogSim: A Toolkit for Modeling and Simulation of Resource Management Techniques in Internet of Things, Edge and Fog Computing Environments
Internet of Things (IoT) aims to bring every object (e.g., smart cameras,
wearables, environmental sensors, home appliances, and vehicles) online, hence
generating massive amounts of data that can overwhelm storage systems and data
analytics applications. Cloud computing offers services at the infrastructure
level that can scale to IoT storage and processing requirements. However, there
are applications such as health monitoring and emergency response that require
low latency, and delay caused by transferring data to the cloud and then back
to the application can seriously impact their performances. To overcome this
limitation, Fog computing paradigm has been proposed, where cloud services are
extended to the edge of the network to decrease the latency and network
congestion. To realize the full potential of Fog and IoT paradigms for
real-time analytics, several challenges need to be addressed. The first and
most critical problem is designing resource management techniques that
determine which modules of analytics applications are pushed to each edge
device to minimize the latency and maximize the throughput. To this end, we
need an evaluation platform that enables the quantification of the performance
of resource management policies on an IoT or Fog computing infrastructure in a
repeatable manner. In this paper, we propose a simulator, called iFogSim, to
model IoT and Fog environments and measure the impact of resource management
techniques in terms of latency, network congestion, energy consumption, and
cost. We describe two case studies to demonstrate modeling of an IoT
environment and comparison of resource management policies. Moreover,
scalability of the simulation toolkit in terms of RAM consumption and execution
time is verified under different circumstances.
Comment: Cloud Computing and Distributed Systems Laboratory, The University of
Melbourne, June 6, 201