ENGINE: Cost-Effective Offloading in Mobile Edge Computing with Fog-Cloud Cooperation
Mobile Edge Computing (MEC), an emerging paradigm that uses cloudlet or fog
nodes to extend remote cloud computing to the edge of the network, is foreseen
as a key technology for next-generation wireless networks. By offloading
computation-intensive tasks from resource-constrained mobile devices to fog
nodes or the remote cloud, the energy of mobile devices can be saved and their
computation capability enhanced. Fog nodes, in turn, can rent the resource-rich
remote cloud to help them process incoming tasks from mobile devices. In this
architecture, the benefit of short communication and computation delay for
mobile devices can be fully exploited. However, existing studies
mostly assume that fog nodes possess unlimited computing capacity, which is not
practical, especially when fog nodes are themselves energy-constrained mobile
devices. To provide incentives for fog nodes and to reduce the computation cost
of mobile devices, we propose a cost-effective offloading scheme in mobile edge
computing with cooperation between fog nodes and the remote cloud under a
task-dependency constraint. The mobile devices have a limited budget and must
determine which tasks to compute locally and which to send to the fog. To address
this issue, we first formulate the offloading problem as a task-finish-time
minimization problem under the given budgets of the mobile devices, which is
NP-hard. We then devise two further algorithms to study the network performance.
Simulation results show that the proposed greedy algorithm achieves near-optimal
performance: on average, the brute-force method and the greedy algorithm
outperform the simulated annealing algorithm by about 28.13% in application
finish time. Comment: 10 pages, 9 figures, Technical Report
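A minimal sketch of the budget-constrained placement decision described above; this is not the paper's ENGINE algorithm, and all speeds, prices, and workloads are invented for illustration:

```python
# Illustrative greedy placement: for each task, pick the fastest option
# (local, fog, or cloud) that the remaining budget still allows.
# Speeds are abstract processing rates; prices are per offloaded task.

def greedy_offload(tasks, budget, local_speed=1.0, fog_speed=4.0,
                   cloud_speed=10.0, fog_price=1.0, cloud_price=3.0):
    """tasks: list of workloads. Returns (plan, total_finish_time, spent)."""
    plan, total_time, spent = [], 0.0, 0.0
    for w in tasks:
        options = [("local", w / local_speed, 0.0),
                   ("fog",   w / fog_speed,   fog_price),
                   ("cloud", w / cloud_speed, cloud_price)]
        feasible = [o for o in options if spent + o[2] <= budget]
        choice = min(feasible, key=lambda o: o[1])   # fastest affordable option
        plan.append(choice[0])
        total_time += choice[1]
        spent += choice[2]
    return plan, total_time, spent

plan, t, cost = greedy_offload([8.0, 4.0, 2.0], budget=4.0)
print(plan, round(t, 2), cost)   # ['cloud', 'fog', 'local'] 3.8 4.0
```

Once the budget is exhausted, the remaining tasks fall back to local execution, which is why a tight budget mainly hurts tasks scheduled late.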
Delay-Constrained Energy Optimization for Edge Cloud Offloading
Resource-limited user devices may offload computation to a cloud server in
order to reduce power consumption and lower the execution time. However,
communicating with the cloud server over a wireless channel consumes additional
energy for transmitting the data, and offloading the data and receiving the
response introduce extra delay. Therefore, an optimal decision must be made that
reduces the energy consumption while simultaneously satisfying the delay
constraint. In this paper, we obtain an optimal closed-form solution for these
decision variables in a multi-user scenario. Furthermore, we optimally allocate
the cloud server resources to the user devices and evaluate the minimum delay
that the system can provide for a given bandwidth and number of user devices.
Comment: Published in ICC workshop 201
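The offload-or-not decision this abstract describes can be illustrated with a toy energy comparison under a delay constraint; the power, rate, and workload figures below are assumptions, not the paper's closed-form model:

```python
# Toy per-user decision: offload only if it meets the deadline and uses
# less device energy than computing locally. The device pays radio energy
# while transmitting; the cloud's computation is free for the device.

def offload_decision(cycles, data_bits, deadline,
                     f_local=1e9, p_local=0.9,   # local CPU speed (Hz), power (W)
                     rate=5e6, p_tx=1.2,         # uplink rate (bit/s), TX power (W)
                     f_cloud=10e9):              # cloud CPU speed (Hz)
    t_local = cycles / f_local
    e_local = p_local * t_local
    t_off = data_bits / rate + cycles / f_cloud  # transmit, then remote compute
    e_off = p_tx * (data_bits / rate)
    candidates = []
    if t_local <= deadline:
        candidates.append(("local", e_local))
    if t_off <= deadline:
        candidates.append(("offload", e_off))
    if not candidates:
        return ("infeasible", None)
    return min(candidates, key=lambda c: c[1])   # lowest-energy feasible choice

print(offload_decision(cycles=2e9, data_bits=1e6, deadline=1.0))
```

With a heavy task and little input data, offloading wins; a light task with a large input flips the decision back to local execution.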
Optimized Computation Offloading Performance in Virtual Edge Computing Systems via Deep Reinforcement Learning
To improve the quality of the computation experience for mobile devices,
mobile-edge computing (MEC) is a promising paradigm that provides computing
capabilities in close proximity, within a sliced radio access network (RAN)
supporting both traditional communication and MEC services. Nevertheless,
the design of computation offloading policies for a virtual MEC system remains
challenging. Specifically, whether to execute a computation task at the mobile
device or to offload it for MEC server execution should adapt to the
time-varying network dynamics. In this paper, we consider MEC for a
representative mobile user in an ultra-dense sliced RAN, where multiple base
stations (BSs) are available to be selected for computation offloading. The
problem of finding an optimal computation offloading policy is modelled as a
Markov decision process, in which our objective is to maximize the long-term
utility performance and an offloading decision is made based on the task queue
state, the energy queue state, and the channel qualities between the mobile
user (MU) and the BSs. To break the curse of high dimensionality in the state
space, we first propose a double deep Q-network (DQN) based strategic
computation offloading algorithm to learn the optimal policy without a priori
knowledge of
network dynamics. Then, motivated by the additive structure of the utility
function, a Q-function decomposition technique is combined with the double DQN,
leading to a novel learning algorithm for solving the stochastic computation
offloading problem. Numerical experiments show that our proposed learning
algorithms achieve a significant improvement in computation offloading
performance compared with the baseline policies.
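The double-DQN step at the heart of this abstract, selecting the next action with the online network but evaluating it with the target network, fits in a few lines; the Q-values and action names (candidate BSs vs. local execution) are hypothetical:

```python
# Double DQN decouples action selection from evaluation to curb the
# overestimation bias of vanilla DQN's max operator.

def double_dqn_target(reward, gamma, q_online_next, q_target_next):
    """q_*_next: dict action -> Q(s', a). Returns the bootstrap target y."""
    best_a = max(q_online_next, key=q_online_next.get)  # select with online net
    return reward + gamma * q_target_next[best_a]       # evaluate with target net

q_online = {"local": 1.0, "bs1": 2.5, "bs2": 2.4}  # online net favours bs1
q_target = {"local": 1.1, "bs1": 1.8, "bs2": 2.6}
y = double_dqn_target(reward=0.5, gamma=0.9,
                      q_online_next=q_online, q_target_next=q_target)
print(round(y, 3))   # 0.5 + 0.9 * 1.8 = 2.12
```

A vanilla DQN target would instead take the max over q_target (2.6), giving 2.84 and a larger overestimation risk.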
A Survey on Mobile Edge Networks: Convergence of Computing, Caching and Communications
With the explosive growth of smart devices and the advent of many new
applications, traffic volume has been growing exponentially. The traditional
centralized network architecture cannot accommodate such user demands due to
heavy burden on the backhaul links and long latency. Therefore, new
architectures which bring network functions and contents to the network edge
are proposed, i.e., mobile edge computing and caching. Mobile edge networks
provide cloud computing and caching capabilities at the edge of cellular
networks. In this survey, we present an exhaustive review of state-of-the-art
research efforts on mobile edge networks. We first give an overview of mobile
edge networks, including their definition, architecture, and advantages. Next,
computing, caching, and communication techniques at the network edge are
surveyed in turn. The applications and
use cases of mobile edge networks are discussed. Subsequently, the key enablers
of mobile edge networks such as cloud technology, SDN/NFV and smart devices are
discussed. Finally, open research challenges and future directions are
presented as well.
Joint Offloading and Resource Allocation in Vehicular Edge Computing and Networks
The emergence of computation-intensive on-vehicle applications poses a
significant challenge to providing the required computation capacity while maintaining
high performance. Vehicular Edge Computing (VEC) is a new computing paradigm
with a high potential to improve vehicular services by offloading
computation-intensive tasks to the VEC servers. Nevertheless, as the
computation resource of each VEC server is limited, offloading may not be
efficient if all vehicles select the same VEC server to offload their tasks. To
address this problem, in this paper we propose joint offloading and resource
allocation. We jointly account for communication and computation to derive the
task processing delay. We formulate the problem as a system utility maximization
problem, and then develop a low-complexity algorithm to jointly optimize
offloading decision and resource allocation. Numerical results demonstrate the
superior performance of our Joint Optimization of Selection and Computation
(JOSC) algorithm compared to state-of-the-art solutions.
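A load-aware server-selection step in the spirit of this abstract can be sketched as a greedy heuristic; the delay model and all numbers are illustrative, not the JOSC algorithm itself:

```python
# Assign each task to the VEC server minimizing its delay, where delay is a
# fixed communication delay plus a compute delay that grows with the load
# already assigned to that server.

def assign_tasks(task_cycles, servers):
    """servers: name -> (comm_delay_s, capacity_cycles_per_s)."""
    load = {s: 0.0 for s in servers}
    plan = []
    for w in task_cycles:
        def delay(s):
            comm, cap = servers[s]
            return comm + (load[s] + w) / cap  # server processes its whole queue
        best = min(servers, key=delay)         # least-delay server right now
        load[best] += w
        plan.append(best)
    return plan, load

servers = {"vec1": (0.05, 10e9), "vec2": (0.10, 20e9)}
plan, load = assign_tasks([4e9, 4e9, 4e9], servers)
print(plan)   # ['vec2', 'vec1', 'vec2']
```

Without the load term, every vehicle would pick the nominally fastest server, recreating exactly the congestion the abstract warns about.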
Optimal Task Scheduling in Communication-Constrained Mobile Edge Computing Systems for Wireless Virtual Reality
Mobile edge computing (MEC) is expected to be an effective solution to
deliver 360-degree virtual reality (VR) videos over wireless networks. In
contrast to the previous computation-constrained MEC framework, which reduces the
computation-resource consumption at the mobile VR device by increasing the
communication-resource consumption, we develop a communications-constrained MEC
framework to reduce communication-resource consumption by increasing the
computation-resource consumption and exploiting the caching resources at the
mobile VR device in this paper. Specifically, according to the task
modularization, the MEC server can only deliver the components which have not
been stored in the VR device, and then the VR device uses the received
components and the corresponding cached components to construct the task,
resulting in low communication-resource consumption but high delay. The MEC
server can also compute the task by itself to reduce the delay; however, this
consumes more communication resources because the entire task must be delivered.
Therefore, we propose a task scheduling strategy to decide which computation
model the MEC server should operate in, so as to minimize the
communication-resource consumption under the delay constraint. Finally, we
discuss the tradeoffs between communications, computing, and caching in the
proposed system. Comment: submitted to APCC 201
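The choice between the two computation models can be sketched as a per-task decision minimizing delivered bits under a delay constraint; the component sizes and delays below are invented:

```python
# Per task, the MEC server either delivers only the components missing from
# the VR device's cache (fewer bits, more delay, since the device assembles
# the task) or computes the task itself and ships it whole (more bits, less
# delay). Pick the fewest-bits option that still meets the deadline.

def schedule(tasks, deadline):
    """tasks: list of (missing_bits, full_bits, assemble_delay, mec_delay)."""
    modes, total_bits = [], 0
    for missing, full, t_assemble, t_mec in tasks:
        options = []
        if t_assemble <= deadline:
            options.append(("components", missing))
        if t_mec <= deadline:
            options.append(("full", full))
        if not options:
            return "infeasible", None
        mode, bits = min(options, key=lambda o: o[1])  # fewest delivered bits
        modes.append(mode)
        total_bits += bits
    return modes, total_bits

tasks = [(2000, 10000, 0.08, 0.03),   # assembling fits the deadline
         (2000, 10000, 0.15, 0.03)]   # assembling misses it
print(schedule(tasks, deadline=0.1))  # (['components', 'full'], 12000)
```

Tightening the deadline pushes more tasks into the "full" mode, tracing out the communication-computing-caching tradeoff the abstract discusses.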
Wireless Networks for Mobile Edge Computing: Spatial Modeling and Latency Analysis (Extended version)
Next-generation wireless networks will provide users ubiquitous low-latency
computing services using devices at the network edge, called mobile edge
computing (MEC). The key operation of MEC, mobile computation offloading (MCO),
is to offload computation-intensive tasks from users. Since each edge device
comprises an access point (AP) and a computer server (CS), a MEC network can be
decomposed as a radio access network (RAN) cascaded with a CS network (CSN).
Based on this architecture, we investigate network-constrained latency
performance, namely communication latency (comm-latency) and computation
latency (comp-latency) under the constraints of RAN coverage and CSN stability.
To this end, a spatial random network is modeled featuring random node
distribution, parallel computing, non-orthogonal multiple access, and random
computation-task generation. Given the model and the said network constraints,
we derive the scaling laws of comm-latency and comp-latency with respect to
network-load parameters (density of mobiles and their task-generation rates)
and network-resource parameters (bandwidth, density of APs/CSs, CS computation
rate). Essentially, the analysis involves the interplay of theories of
stochastic geometry, queueing, and parallel computing. Combining the derived
scaling laws quantifies the tradeoffs between the latencies, network coverage
and network stability. The results provide useful guidelines for MEC-network
provisioning and planning by preventing either the cascaded RAN or the CSN from
becoming a performance bottleneck. Comment: This work has been submitted to the IEEE for possible publication
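One queueing ingredient of the comp-latency analysis can be illustrated with a toy M/M/1 view of a single CS; the densities and rates here are assumptions, not the paper's parameters:

```python
# Each CS sees an arrival rate lambda = mobile density * task rate / CS
# density; with service rate mu, the mean M/M/1 sojourn time is 1/(mu - lambda)
# when the queue is stable, and the CSN is unstable otherwise.

def mean_comp_latency(mobile_density, task_rate, cs_density, cs_service_rate):
    lam = mobile_density * task_rate / cs_density  # tasks/s offered per CS
    if lam >= cs_service_rate:
        return float("inf")                        # CSN stability violated
    return 1.0 / (cs_service_rate - lam)

print(round(mean_comp_latency(100.0, 0.2, 10.0, 4.0), 3))   # 0.5
```

Densifying CSs or raising their service rate lowers the per-CS arrival rate and hence the comp-latency, matching the qualitative role of the network-resource parameters above.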
Resource Sharing of a Computing Access Point for Multi-user Mobile Cloud Offloading with Delay Constraints
We consider a mobile cloud computing system with multiple users, a remote
cloud server, and a computing access point (CAP). The CAP serves both as the
network access gateway and a computation service provider to the mobile users.
It can either process the received tasks from mobile users or offload them to
the cloud. We jointly optimize the offloading decisions of all users, together
with the allocation of computation and communication resources, to minimize the
overall cost of energy consumption, computation, and maximum delay among users.
The joint optimization problem is formulated as a mixed-integer program. We
show that the problem can be reformulated and transformed into a non-convex
quadratically constrained quadratic program, which is NP-hard in general. We
then propose an efficient solution to this problem by semidefinite relaxation
and a novel randomization mapping method. Furthermore, when there is a strict
delay constraint for processing each user's task, we propose a three-step
algorithm to guarantee the feasibility and local optimality of the
obtained solution. Our simulation results show that the proposed solutions give
nearly optimal performance under a wide range of parameter settings, and the
addition of a CAP can significantly reduce the cost of multi-user task
offloading compared with conventional mobile cloud computing where only the
remote cloud server is available. Comment: in IEEE Transactions on Mobile Computing, 201
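The semidefinite relaxation step needs an SDP solver, but the flavor of a randomization mapping can be sketched on its own: draw binary offloading decisions from a relaxed fractional solution and keep the cheapest one. The cost model and probabilities below are invented, not the paper's formulation:

```python
import random

def randomized_mapping(probs, cost_fn, n_draws=200, seed=0):
    """probs: relaxed solution, P(user i offloads). Keep the best binary draw."""
    rng = random.Random(seed)
    best_x, best_cost = None, float("inf")
    for _ in range(n_draws):
        x = tuple(1 if rng.random() < p else 0 for p in probs)
        c = cost_fn(x)
        if c < best_cost:
            best_x, best_cost = x, c
    return best_x, best_cost

def cost(x):
    """Toy cost: offloading saves device energy, but the CAP congests past 2 users."""
    energy = sum(3.0 if xi == 0 else 1.0 for xi in x)
    congestion = 4.0 * max(0, sum(x) - 2)
    return energy + congestion

x, c = randomized_mapping([0.9, 0.8, 0.6, 0.2], cost)
print(sum(x), c)   # the best draw offloads exactly 2 users at cost 8.0
```

Randomized rounding of this kind converts a continuous relaxation back into binary decisions while keeping the cost close to the relaxed optimum.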
Mobile Edge Computation Offloading Using Game Theory and Reinforcement Learning
Due to the ever-increasing popularity of resource-hungry and
delay-constrained mobile applications, the computation and storage capabilities
of the remote cloud have partially migrated towards the mobile edge, giving rise to
the concept known as Mobile Edge Computing (MEC). While MEC servers enjoy the
close proximity to the end-users to provide services at reduced latency and
lower energy costs, they suffer from limitations in computational and radio
resources, which calls for fair and efficient resource management in the MEC
servers. The problem is however challenging due to the ultra-high density,
distributed nature, and intrinsic randomness of next generation wireless
networks. In this article, we focus on the application of game theory and
reinforcement learning for efficient distributed resource management in MEC, in
particular, for computation offloading. We briefly review the cutting-edge
research and discuss future challenges. Furthermore, we develop a
game-theoretical model for energy-efficient distributed edge server activation
and study several learning techniques. Numerical results are provided to
illustrate the performance of these distributed learning techniques. Also, open
research issues in the context of resource management in MEC servers are
discussed.
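A minimal best-response sketch of an edge-server activation game: each server activates iff its equal share of the revenue from the offered load covers its activation energy cost. The payoff model is illustrative, not the article's:

```python
# Iterate best responses until no server wants to change its decision,
# i.e. a pure-strategy Nash equilibrium of this simple congestion-style game.

def best_response_dynamics(n_servers, load_revenue, energy_cost, max_rounds=100):
    active = [True] * n_servers
    for _ in range(max_rounds):
        changed = False
        for i in range(n_servers):
            others = sum(active) - active[i]
            payoff_on = load_revenue / (others + 1) - energy_cost
            want_on = payoff_on > 0          # activate only if profitable
            if want_on != active[i]:
                active[i] = want_on
                changed = True
        if not changed:                      # equilibrium reached
            return active
    return active

active = best_response_dynamics(n_servers=5, load_revenue=10.0, energy_cost=3.0)
print(sum(active))   # 3 servers stay active: 10/3 - 3 > 0, but 10/4 - 3 < 0
```

The fixed point self-limits the number of active servers, giving the kind of distributed, energy-efficient activation the article studies with learning techniques.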
Joint Optimization of Radio Resources and Code Partitioning in Mobile Edge Computing
The aim of this paper is to propose a computation offloading strategy for
mobile edge computing. We exploit the concept of call graph, which models a
generic computer program as a set of procedures related to each other through a
weighted directed graph. Our goal is to derive the optimal partition of the
call graph establishing which procedures are to be executed locally or
remotely. The main novelty of our work is that the optimal partition is
obtained jointly with the selection of radio parameters, e.g., transmit power
and constellation size, in order to minimize the energy consumption at the
mobile handset, under a latency constraint taking into account transmit time
and execution time. We consider both single and multi-channel transmission
strategies and we prove that a globally optimal solution can be achieved in
both cases. We then propose a suboptimal strategy that solves a relaxed version
of the original problem in order to trade off complexity and performance within
the proposed framework. Finally, several numerical results
illustrate under what conditions in terms of call graph topology, communication
strategy, and computation parameters, the proposed offloading strategy provides
large performance gains. Comment: Submitted to IEEE Transactions on Signal Processing
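For a small call graph, the optimal partition can be found by brute force over local/remote assignments; the graph, costs, and latency budget below are invented, and the paper's joint radio-parameter selection is omitted:

```python
# Each procedure runs locally (device pays energy) or remotely (faster, free
# for the device); edges cut by the partition pay a transfer energy and time.
# Minimize device energy subject to a latency budget.
from itertools import product

def best_partition(procs, edges, deadline, pinned=()):
    """procs: name -> (e_local, t_local, t_remote); edges: (u, v, e_tx, t_tx).
    pinned: procedures that must stay on the handset (e.g. UI code)."""
    names = sorted(procs)
    best = None
    for assign in product([0, 1], repeat=len(names)):      # 1 = offload
        where = dict(zip(names, assign))
        if any(where[p] for p in pinned):
            continue
        energy = sum(procs[p][0] for p in names if not where[p])
        time = sum(procs[p][2] if where[p] else procs[p][1] for p in names)
        for u, v, e_tx, t_tx in edges:                     # cut edges cost extra
            if where[u] != where[v]:
                energy += e_tx
                time += t_tx
        if time <= deadline and (best is None or energy < best[0]):
            best = (energy, where)
    return best

procs = {"a": (5.0, 2.0, 0.5), "b": (1.0, 0.5, 0.2), "c": (4.0, 1.5, 0.4)}
edges = [("a", "b", 0.5, 0.3), ("b", "c", 0.5, 0.3)]
print(best_partition(procs, edges, deadline=3.0, pinned=("a",)))
```

Here the only deadline-feasible choice offloads b and c together, so only the a-b edge is cut; a real partitioner exploits the graph structure instead of enumerating all 2^n assignments.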