1,729 research outputs found
A Survey on Mobile Edge Networks: Convergence of Computing, Caching and Communications
With the explosive growth of smart devices and the advent of many new
applications, traffic volume has been growing exponentially. The traditional
centralized network architecture cannot accommodate such user demands due to
the heavy burden on backhaul links and long latency. Therefore, new
architectures which bring network functions and contents to the network edge
are proposed, i.e., mobile edge computing and caching. Mobile edge networks
provide cloud computing and caching capabilities at the edge of cellular
networks. In this survey, we present an exhaustive review of state-of-the-art
research efforts on mobile edge networks. We first give an overview of mobile
edge networks, covering their definition, architecture and advantages. Next, we
present a comprehensive survey of computing, caching and communication
techniques at the network edge, followed by the applications and
use cases of mobile edge networks. Subsequently, the key enablers
of mobile edge networks, such as cloud technology, SDN/NFV and smart devices, are
discussed. Finally, open research challenges and future directions are
presented.
Decentralized Computation Offloading Game For Mobile Cloud Computing
Mobile cloud computing is envisioned as a promising approach to augment
computation capabilities of mobile devices for emerging resource-hungry mobile
applications. In this paper, we propose a game theoretic approach for achieving
efficient computation offloading for mobile cloud computing. We formulate the
decentralized computation offloading decision making problem among mobile
device users as a decentralized computation offloading game. We analyze the
structural property of the game and show that the game always admits a Nash
equilibrium. We then design a decentralized computation offloading mechanism
that can achieve a Nash equilibrium of the game and quantify its efficiency
ratio over the centralized optimal solution. Numerical results demonstrate that
the proposed mechanism can achieve efficient computation offloading performance
and scale well as the system size increases.
Comment: The paper has been accepted by IEEE Transactions on Parallel and
Distributed Systems (TPDS), Vol. 26, No. 4, pp. 974-983, March 201
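The equilibrium-seeking procedure this abstract describes can be illustrated with a toy congestion-style offloading game. The cost model below (each offloader's cost grows linearly with the number of users sharing the wireless channel) is an assumption for illustration, not the paper's exact formulation; iterated best response terminates because such games possess the finite improvement property.

```python
# Toy decentralized offloading game: each user picks local computing (0)
# or offloading (1). Offloading cost grows with channel congestion.
def best_response_dynamics(local_cost, offload_base, max_rounds=100):
    n = len(local_cost)
    decision = [0] * n  # 0 = compute locally, 1 = offload
    for _ in range(max_rounds):
        changed = False
        for i in range(n):
            k_others = sum(decision) - decision[i]  # other offloaders
            # cost of offloading given the others' current choices
            off_cost = offload_base[i] * (k_others + 1)
            best = 1 if off_cost < local_cost[i] else 0
            if best != decision[i]:
                decision[i] = best
                changed = True
        if not changed:  # no user can improve: a Nash equilibrium
            return decision
    return decision
```

With local costs [10, 10, 1] and per-offloader bases [2, 2, 5], the dynamics settle at [1, 1, 0]: the two users with expensive local computation offload, and no user can reduce its cost by unilaterally deviating.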
ENGINE: Cost Effective Offloading in Mobile Edge Computing with Fog-Cloud Cooperation
Mobile Edge Computing (MEC) as an emerging paradigm utilizing cloudlet or fog
nodes to extend remote cloud computing to the edge of the network, is foreseen
as a key technology towards next generation wireless networks. By offloading
computation intensive tasks from resource constrained mobile devices to fog
nodes or the remote cloud, the energy of mobile devices can be saved and the
computation capability can be enhanced. For fog nodes, they can rent the
resource rich remote cloud to help them process incoming tasks from mobile
devices. In this architecture, the benefits of short computation and communication
delays for mobile devices can be fully exploited. However, existing studies
mostly assume fog nodes possess unlimited computing capacity, which is not
practical, especially when fog nodes are also energy constrained mobile
devices. To provide incentives for fog nodes and reduce the computation cost of
mobile devices, we propose a cost-effective offloading scheme in mobile edge
computing with the cooperation between fog nodes and the remote cloud with task
dependency constraint. The mobile devices have limited budget and have to
determine which tasks should be computed locally and which sent to fog nodes. To address
this issue, we first formulate the offloading problem as a task finish time
minimization problem under the given budgets of mobile devices, which is NP-hard. We
then devise several algorithms to study the network performance. Simulation
results show that the proposed greedy algorithm can achieve the near optimal
performance. On average, the brute-force method and the greedy algorithm
outperform the simulated annealing algorithm by about 28.13% in application
finish time.
Comment: 10 pages, 9 figures, Technical Report
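A minimal sketch of the budget-constrained greedy idea, ignoring the paper's task-dependency constraints and assuming sequential execution with hypothetical per-task fog prices: offload the tasks with the largest time saving per unit of budget until the budget runs out.

```python
def greedy_offload(tasks, budget):
    """tasks: list of (local_time, fog_time, fog_price) tuples.
    Greedily offload tasks with the largest time saving per unit
    price while the budget allows; return the total finish time
    (sequential execution assumed for simplicity)."""
    gains = [((lt - ft) / price, idx)
             for idx, (lt, ft, price) in enumerate(tasks)
             if lt > ft]                  # offload only if it saves time
    gains.sort(reverse=True)              # best saving-per-price first
    offloaded = set()
    for _, idx in gains:
        price = tasks[idx][2]
        if price <= budget:
            budget -= price
            offloaded.add(idx)
    return sum(ft if i in offloaded else lt
               for i, (lt, ft, _) in enumerate(tasks))
```

For tasks [(10, 2, 3), (8, 4, 2), (5, 5, 1)] and a budget of 4, only the first task is offloaded, giving a finish time of 15; raising the budget to 5 also offloads the second task, giving 11.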
A Survey on Modeling Energy Consumption of Cloud Applications: Deconstruction, State of the Art, and Trade-off Debates
Given the complexity and heterogeneity in Cloud computing scenarios, the
modeling approach has widely been employed to investigate and analyze the
energy consumption of Cloud applications, by abstracting real-world objects and
processes that are difficult to observe or understand directly. By nature, such
abstraction sacrifices, and usually does not need, a complete
reflection of the reality being modeled. Consequently, current energy
consumption models vary in terms of purposes, assumptions, application
characteristics and environmental conditions, with possible overlaps between
different research works. Therefore, it would be necessary and valuable to
reveal the state-of-the-art of the existing modeling efforts, so as to weave
different models together to facilitate comprehending and further investigating
application energy consumption in the Cloud domain. By systematically
selecting, assessing and synthesizing 76 relevant studies, we rationalized and
organized over 30 energy consumption models with unified notations. To help
investigate the existing models and facilitate future modeling work, we
deconstructed the runtime execution and deployment environment of Cloud
applications, and identified 18 environmental factors and 12 workload factors
that would be influential on the energy consumption. In particular, there are
complicated trade-offs and even debates when dealing with the combinational
impacts of multiple factors.
Comment: in press
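As a concrete instance of the kind of model this survey organizes: a linear, utilization-based power model is among the most common forms in the Cloud energy literature. The parameter values below are illustrative assumptions, not figures from the survey.

```python
def server_power(util, p_idle=100.0, p_max=200.0):
    """Linear utilization-based power model (watts), a form that
    recurs across such studies: P(u) = P_idle + (P_max - P_idle) * u."""
    assert 0.0 <= util <= 1.0
    return p_idle + (p_max - p_idle) * util

def app_energy(intervals):
    """Energy (joules) of a Cloud application, summed over execution
    intervals given as (cpu_utilization, duration_seconds) pairs."""
    return sum(server_power(u) * t for u, t in intervals)
```

Under these assumed parameters, app_energy([(0.5, 10), (1.0, 5)]) evaluates to 2500.0 J: ten seconds at 150 W plus five seconds at 200 W.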
Efficient Multi-User Computation Offloading for Mobile-Edge Cloud Computing
Mobile-edge cloud computing is a new paradigm to provide cloud computing
capabilities at the edge of pervasive radio access networks in close proximity
to mobile users. In this paper, we first study the multi-user computation
offloading problem for mobile-edge cloud computing in a multi-channel wireless
interference environment. We show that it is NP-hard to compute a centralized
optimal solution, and hence adopt a game theoretic approach for achieving
efficient computation offloading in a distributed manner. We formulate the
distributed computation offloading decision making problem among mobile device
users as a multi-user computation offloading game. We analyze the structural
property of the game and show that the game admits a Nash equilibrium and
possesses the finite improvement property. We then design a distributed
computation offloading algorithm that can achieve a Nash equilibrium, derive
the upper bound of the convergence time, and quantify its efficiency ratio over
the centralized optimal solutions in terms of two important performance
metrics. We further extend our study to the scenario of multi-user computation
offloading in the multi-channel wireless contention environment. Numerical
results corroborate that the proposed algorithm can achieve superior
computation offloading performance and scale well as the user size increases.
Comment: The paper has been accepted by IEEE/ACM Transactions on Networking,
Sept. 2015. arXiv admin note: substantial text overlap with arXiv:1404.320
All One Needs to Know about Fog Computing and Related Edge Computing Paradigms: A Complete Survey
With the Internet of Things (IoT) becoming part of our daily life and our
environment, we expect rapid growth in the number of connected devices. IoT is
expected to connect billions of devices and humans, bringing promising
advantages to us. With this growth, fog computing, along with its related edge
computing paradigms, such as multi-access edge computing (MEC) and cloudlet,
are seen as promising solutions for handling the large volume of
security-critical and time-sensitive data that is being produced by the IoT. In
this paper, we first provide a tutorial on fog computing and its related
computing paradigms, including their similarities and differences. Next, we
provide a taxonomy of research topics in fog computing, and through a
comprehensive survey, we summarize and categorize the efforts on fog computing
and its related computing paradigms. Finally, we provide challenges and future
directions for research in fog computing.
Comment: 48 pages, 7 tables, 11 figures, 450 references. The data (categories
and features/objectives of the papers) of this survey are now available
publicly. Accepted by Elsevier Journal of Systems Architecture
Dynamic Computation Offloading for Mobile-Edge Computing with Energy Harvesting Devices
Mobile-edge computing (MEC) is an emerging paradigm to meet the
ever-increasing computation demands from mobile applications. By offloading the
computationally intensive workloads to the MEC server, the quality of
computation experience, e.g., the execution latency, could be greatly improved.
Nevertheless, as the on-device battery capacities are limited, computation
would be interrupted when the battery energy runs out. To provide satisfactory
computation performance as well as to achieve green computing, it is of
significant importance to seek renewable energy sources to power mobile devices
via energy harvesting (EH) technologies. In this paper, we will investigate a
green MEC system with EH devices and develop an effective computation
offloading strategy. The execution cost, which addresses both the execution
latency and task failure, is adopted as the performance metric. A
low-complexity online algorithm, namely, the Lyapunov optimization-based
dynamic computation offloading (LODCO) algorithm is proposed, which jointly
decides the offloading decision, the CPU-cycle frequencies for mobile
execution, and the transmit power for computation offloading. A unique
advantage of this algorithm is that the decisions depend only on the
instantaneous side information without requiring distribution information of
the computation task request, the wireless channel, and EH processes. The
implementation of the algorithm only requires solving a deterministic problem
in each time slot, for which the optimal solution can be obtained either in
closed form or by bisection search. Moreover, the proposed algorithm is shown
to be asymptotically optimal via rigorous analysis. Sample simulation results
shall be presented to verify the theoretical analysis as well as validate the
effectiveness of the proposed algorithm.
Comment: 33 pages, 11 figures, submitted to IEEE Journal on Selected Areas in
Communications
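The flavor of such a per-slot decision can be sketched as follows. This is an illustrative rule in the spirit of battery-aware Lyapunov weighting, not the paper's LODCO algorithm; every parameter name and value here is an assumption.

```python
def per_slot_decision(task_arrived, battery, channel_gain,
                      e_local, t_local, e_tx_per_gain, t_tx,
                      phi=2.0, perturb=1.0):
    """One slot of a simplified battery-aware offloading rule:
    weight energy spending by the (perturbed) battery deficit, so a
    low battery discourages energy use, and pick the cheapest mode.
    phi is the penalty for dropping the task; modes whose energy
    exceeds the battery level are infeasible."""
    if not task_arrived:
        return "idle"
    weight = perturb - battery          # negative when battery is ample
    e_off = e_tx_per_gain / channel_gain  # better channel, less tx energy
    costs = {
        "drop": phi,
        "local": (weight * e_local + t_local
                  if battery >= e_local else float("inf")),
        "offload": (weight * e_off + t_tx
                    if battery >= e_off else float("inf")),
    }
    return min(costs, key=costs.get)
```

Note how the same task flips from local execution to being dropped as the battery drains: with a full battery the energy term is effectively rewarded, while a nearly empty battery makes both execution modes infeasible.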
Optimal Task Offloading and Resource Allocation in Mobile-Edge Computing with Inter-user Task Dependency
Mobile-edge computing (MEC) has recently emerged as a cost-effective paradigm
to enhance the computing capability of hardware-constrained wireless devices
(WDs). In this paper, we first consider a two-user MEC network, where each WD
has a sequence of tasks to execute. In particular, we consider task dependency
between the two WDs, where the input of a task at one WD requires the final
task output at the other WD. Under the considered task-dependency model, we
study the optimal task offloading policy and resource allocation (e.g., on
offloading transmit power and local CPU frequencies) that minimize the weighted
sum of the WDs' energy consumption and task execution time. The problem is
challenging due to the combinatorial nature of the offloading decisions among
all tasks and the strong coupling with resource allocation. To tackle this
problem, we first assume that the offloading decisions are given and derive the
closed-form expressions of the optimal offloading transmit power and local CPU
frequencies. Then, an efficient bisection search method is proposed to obtain
the optimal solutions. Furthermore, we prove that the optimal offloading
decisions follow a one-climb policy, based on which a reduced-complexity Gibbs
Sampling algorithm is proposed to obtain the optimal offloading decisions. We
then extend the investigation to a general multi-user scenario, where the input
of a task at one WD requires the final task outputs from multiple other WDs.
Numerical results show that the proposed method can significantly outperform
the other representative benchmarks and efficiently achieve low complexity with
respect to the call graph size.
Comment: This paper has been accepted for publication in IEEE Transactions on
Wireless Communications
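A generic Gibbs sampler over binary offloading vectors conveys the idea (the paper's reduced-complexity variant additionally prunes the search space using the one-climb property, which is not modeled here). The congestion-style cost function below is an assumption for illustration.

```python
import math
import random

def gibbs_offload(local, base, temperature=1e-3, sweeps=20, seed=0):
    """Gibbs sampling over binary offloading decisions x in {0,1}^n.
    cost(x): a local user pays local[i]; an offloader pays base[i]
    times the number of offloaders (channel congestion). At a low
    temperature the sampler concentrates on low-cost decisions."""
    rng = random.Random(seed)
    n = len(local)

    def cost(x):
        k = sum(x)
        return sum(local[i] if x[i] == 0 else base[i] * k
                   for i in range(n))

    x = [0] * n
    for _ in range(sweeps):
        for i in range(n):           # resample one coordinate at a time
            x[i] = 0; c0 = cost(x)
            x[i] = 1; c1 = cost(x)
            d = (c1 - c0) / temperature
            if d > 700:              # guard math.exp overflow
                p1 = 0.0
            elif d < -700:
                p1 = 1.0
            else:
                p1 = 1.0 / (1.0 + math.exp(d))
            x[i] = 1 if rng.random() < p1 else 0
    return x, cost(x)
```

With local costs [10, 10, 1] and bases [2, 2, 5], the sampler settles on the decision [1, 1, 0] with total cost 9, which is the minimum over all eight binary vectors for this instance.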
Stochastic Control of Computation Offloading to a Helper with a Dynamically Loaded CPU
Due to the densification of wireless networks, there exists an abundance of idle
computation resources at edge devices. These resources can be scavenged by
offloading heavy computation tasks from small IoT devices in proximity, thereby
overcoming their limitations and lengthening their battery lives. However,
unlike dedicated servers, the spare resources offered by edge helpers are
random and intermittent. Thus, it is essential for a user to intelligently
control the amounts of data for offloading and local computing so as to ensure
that a computation task can be finished in time while consuming minimum energy. In this
paper, we design energy-efficient control policies in a computation offloading
system with a random channel and a helper with a dynamically loaded CPU.
Specifically, the policy adopted by the helper aims at determining the sizes of
offloaded and locally-computed data for a given task in different slots such
that the total energy consumption for transmission and local CPU is minimized
under a task-deadline constraint. As a result, the policies endow an
offloading user with robustness against channel-and-helper randomness, in addition
to balancing offloading and local computing. By modeling the channel and
helper-CPU as Markov chains, the problem of offloading control is converted
into a Markov-decision process. Though dynamic programming (DP) for numerically
solving the problem does not yield the optimal policies in closed form, we
leverage the procedure to quantify the optimal policy structure and apply the
result to design optimal or sub-optimal policies. For different cases ranging
from zero to large buffers, the low-complexity of the policies overcomes the
"curse-of-dimensionality" in DP arising from joint consideration of channel,
helper CPU and buffer states.
Comment: This ongoing work has been submitted to the IEEE for possible
publication
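Generic value iteration for the kind of finite Markov decision process the offloading control problem is converted into. The interface is standard dynamic programming; in this setting a state would abstract the joint channel / helper-CPU condition, and any concrete states, costs, and transition probabilities are modeling assumptions.

```python
def value_iteration(states, actions, P, cost, gamma=0.9, tol=1e-6):
    """Finite-horizon-free (discounted) value iteration.
    P[s][a]: list of (prob, next_state); cost[s][a]: stage cost.
    Returns the converged value function V[s]."""
    V = {s: 0.0 for s in states}
    while True:
        V_new = {}
        for s in states:
            # Bellman update: best action = min expected discounted cost
            V_new[s] = min(
                cost[s][a] + gamma * sum(p * V[t] for p, t in P[s][a])
                for a in actions
            )
        if max(abs(V_new[s] - V[s]) for s in states) < tol:
            return V_new
        V = V_new
```

For instance, with two channel states where offloading is cheap in the "good" state and local computing is cheaper in the "bad" state, the converged values reflect how often each state is visited under the transition probabilities.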
Multi-Antenna NOMA for Computation Offloading in Multiuser Mobile Edge Computing Systems
This paper studies a multiuser mobile edge computing (MEC) system, in which
one base station (BS) serves multiple users with intensive computation tasks.
We exploit the multi-antenna non-orthogonal multiple access (NOMA) technique
for multiuser computation offloading, such that different users can
simultaneously offload their computation tasks to the multi-antenna BS over the
same time/frequency resources, and the BS can employ successive interference
cancellation (SIC) to efficiently decode all users' offloaded tasks for remote
execution. We aim to minimize the weighted sum-energy consumption at all users
subject to their computation latency constraints, by jointly optimizing the
communication and computation resource allocation as well as the BS's decoding
order for SIC. For the case with partial offloading, the weighted sum-energy
minimization is a convex optimization problem, for which an efficient algorithm
based on the Lagrange duality method is presented to obtain the globally
optimal solution. For the case with binary offloading, the weighted sum-energy
minimization corresponds to a {\em mixed Boolean convex problem} that is
generally more difficult to solve. We first use the branch-and-bound (BnB)
method to obtain the globally optimal solution, and then develop two
low-complexity algorithms based on the greedy method and the convex relaxation,
respectively, to find suboptimal solutions with high quality in practice. Via
numerical results, it is shown that the proposed NOMA-based computation
offloading design significantly improves the energy efficiency of the multiuser
MEC system as compared to other benchmark schemes. It is also shown that for
the case with binary offloading, the proposed greedy method performs close to
the optimal BnB based solution, and the convex relaxation based solution
achieves a suboptimal performance but with lower implementation complexity.
Comment: 33 pages, 12 figures, as well as correcting the typos in equations
(4) and (5) in the previous version
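The SIC step at the BS can be illustrated for a single receive antenna (the paper's multi-antenna setting is more involved): users are decoded in a chosen order, and each user's signal suffers interference only from the users decoded after it, whose signals have not yet been cancelled. All channel gains and powers below are assumed values.

```python
import math

def sic_rates(gains, powers, order, noise=1.0, bandwidth=1.0):
    """Achievable uplink Shannon rates under NOMA with successive
    interference cancellation at a single-antenna receiver.
    gains/powers: dict user -> channel gain / transmit power;
    order: decoding order at the BS."""
    rates = {}
    for pos, u in enumerate(order):
        # users decoded later are still present as interference
        interference = sum(gains[v] * powers[v] for v in order[pos + 1:])
        sinr = gains[u] * powers[u] / (noise + interference)
        rates[u] = bandwidth * math.log2(1 + sinr)
    return rates
```

With gains {0: 4.0, 1: 1.0}, powers {0: 1.0, 1: 3.0}, and unit noise, decoding user 0 first yields rates of 1.0 and 2.0 bits/s/Hz: user 0 is decoded against user 1's interference, after which user 1 enjoys an interference-free channel.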