Socially Trusted Collaborative Edge Computing in Ultra Dense Networks
Small cell base stations (SBSs) endowed with cloud-like computing
capabilities are considered a key enabler of edge computing (EC), which
provides ultra-low latency and location-awareness for a variety of emerging
mobile applications and the Internet of Things. However, due to the limited
computation resources of an individual SBS, providing computation services of
high quality to its users faces significant challenges when it is overloaded
with an excessive amount of computation workload. In this paper, we propose
collaborative edge computing among SBSs by forming SBS coalitions to share
computation resources with each other, thereby accommodating more computation
workload in the edge system and reducing reliance on the remote cloud. A novel
SBS coalition formation algorithm is developed based on the coalitional game
theory to cope with various new challenges in small-cell-based edge systems,
including the co-provisioning of radio access and computing services,
cooperation incentives, and potential security risks. To address these
challenges, the proposed method (1) allows collaboration at both the user-SBS
association stage and the SBS peer offloading stage by exploiting the ultra
dense deployment of SBSs, (2) develops a payment-based incentive mechanism that
implements proportionally fair utility division to form stable SBS coalitions,
and (3) builds a social trust network for managing security risks among SBSs
due to collaboration. Systematic simulations in practical scenarios are carried
out to evaluate the efficacy and performance of the proposed method, showing
that substantial edge computing performance improvements can be achieved.
Joint Computation Offloading and Prioritized Scheduling in Mobile Edge Computing
With the rapid development of smartphones, enormous amounts of data are generated that usually require intensive, real-time computation. Nevertheless, quality of service (QoS) is hard to meet due to the tension between resource-limited devices (battery, CPU power) and computation-intensive applications. Mobile-edge computing (MEC) is emerging as a promising technique to cope with the stringent requirements of mobile applications. By offloading computationally intensive workloads to an edge server and applying efficient task scheduling, the energy cost of mobile devices can be significantly reduced, thereby greatly improving QoS, e.g., latency. This paper proposes a joint computation offloading and prioritized task scheduling scheme for a multi-user mobile-edge computing system. We investigate an energy-minimizing task offloading strategy on mobile devices and develop an effective priority-based task scheduling algorithm on the edge server. The execution time, energy consumption, execution cost, and bonus score against both task data size and latency requirement are adopted as the performance metrics. Performance evaluation results show that the proposed algorithm significantly reduces task completion time and edge server VM usage cost, and improves QoS in terms of bonus score. Moreover, dynamic prioritized task scheduling is also discussed; results show that dynamic threshold setting realizes optimal task scheduling. We believe that this work is significant to the emerging mobile-edge computing paradigm and can be applied to other Internet of Things (IoT)-edge applications.
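The core of a priority-based scheduler on the edge server can be sketched with a heap keyed on each task's latency requirement. This is a minimal earliest-deadline-first sketch, not the paper's actual algorithm: it assumes a single edge VM executing tasks sequentially, with each task given as a (deadline, execution time, name) tuple.

```python
import heapq

def schedule_by_priority(tasks):
    """Pop the task with the tightest deadline first on one edge VM.
    Returns {name: (finish_time, met_deadline)}. Illustrative stand-in
    for the paper's priority-based task scheduling algorithm."""
    heap = list(tasks)
    heapq.heapify(heap)  # min-heap ordered by the first field (deadline)
    clock = 0.0
    result = {}
    while heap:
        deadline, exec_time, name = heapq.heappop(heap)
        clock += exec_time  # tasks run back-to-back on the VM
        result[name] = (clock, clock <= deadline)
    return result
```

For example, tasks with deadlines 10, 4, and 7 are served in order of urgency (4, 7, 10), which lets all three finish within their latency requirements.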
A Time-driven Data Placement Strategy for a Scientific Workflow Combining Edge Computing and Cloud Computing
Compared to traditional distributed computing environments such as grids,
cloud computing provides a more cost-effective way to deploy scientific
workflows. Each task of a scientific workflow requires several large datasets
that are located in different datacenters of the cloud computing environment,
resulting in serious data transmission delays. Edge computing reduces the data
transmission delays and supports fixed storage of scientific workflow private
datasets, but its storage capacity is a bottleneck.
It is a challenge to combine the advantages of both edge computing and cloud
computing to rationalize the data placement of scientific workflows and
optimize the data transmission time across different datacenters. Traditional
data placement strategies maintain load balancing with a given number of
datacenters, which results in a large data transmission time. In this study, a
self-adaptive discrete particle swarm optimization algorithm with genetic
algorithm operators (GA-DPSO) was proposed to optimize the data transmission
time when placing data for a scientific workflow. This approach considered the
characteristics of data placement combining edge computing and cloud computing.
In addition, it considered the factors affecting transmission delay, such as
the bandwidth between datacenters, the number of edge datacenters, and the
storage capacity of edge datacenters. The crossover operator and mutation
operator of the genetic algorithm were adopted to avoid the premature
convergence of the traditional particle swarm optimization algorithm, which
enhanced the diversity of population evolution and effectively reduced the data
transmission time. The experimental results show that the data placement
strategy based on GA-DPSO can effectively reduce the data transmission time
during workflow execution in a combined edge and cloud computing environment.
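The shape of a discrete PSO with genetic operators can be sketched as follows. This is a minimal sketch under stated assumptions, not the paper's GA-DPSO: a particle encodes a placement (dataset index to datacenter index), positions are updated by crossing over with the personal and global bests and then mutating, and `fitness` is any caller-supplied placement cost (lower is better). All parameter names here are illustrative.

```python
import random

def gadpso(fitness, n_datasets, n_dcs, swarm=20, iters=50, pm=0.1, seed=0):
    """Discrete PSO with GA operators: positions evolve via crossover
    toward pbest/gbest plus random-reset mutation for diversity."""
    rng = random.Random(seed)

    def crossover(a, b):
        # single-point crossover, as in a genetic algorithm
        cut = rng.randrange(1, n_datasets)
        return a[:cut] + b[cut:]

    def mutate(p):
        # random-reset mutation avoids premature convergence
        return [rng.randrange(n_dcs) if rng.random() < pm else g for g in p]

    pop = [[rng.randrange(n_dcs) for _ in range(n_datasets)]
           for _ in range(swarm)]
    pbest = list(pop)
    gbest = min(pop, key=fitness)
    for _ in range(iters):
        for i, p in enumerate(pop):
            # pull the particle toward its personal best, then the global best
            p = mutate(crossover(crossover(p, pbest[i]), gbest))
            pop[i] = p
            if fitness(p) < fitness(pbest[i]):
                pbest[i] = p
            if fitness(p) < fitness(gbest):
                gbest = p
    return gbest
```

A real fitness function would model transmission time given inter-datacenter bandwidth and edge storage limits; a toy cost such as `lambda p: sum(p)` (prefer datacenter 0) is enough to exercise the search loop.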
CRIME: Input-Dependent Collaborative Inference for Recurrent Neural Networks
The excellent accuracy of Recurrent Neural Networks (RNNs) for time-series and natural language processing comes at the cost of computational complexity. Therefore, the choice between edge and cloud computing for RNN inference, with the goal of minimizing response time or energy consumption, is not trivial. An edge approach must deal with the aforementioned complexity, while a cloud solution pays large time and energy costs for data transmission. Collaborative inference is a technique that tries to obtain the best of both worlds by splitting the inference task among a network of collaborating devices. While already investigated for other types of neural networks, collaborative inference for RNNs poses completely new challenges, such as the strong influence of input length on processing time and energy, and remains largely unexplored. In this paper, we introduce a Collaborative RNN Inference Mapping Engine (CRIME), which automatically selects the best inference device for each input. CRIME is flexible with respect to the connection topology among collaborating devices, and adapts to changes in connection statuses and device loads. With experiments on several RNNs and datasets, we show that CRIME can reduce the execution time (or end-node energy) by more than 25% compared to any single-device approach.
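Per-input device selection of the kind CRIME performs can be sketched with a simple cost model. This is an illustrative assumption, not CRIME's actual predictor: each device is scored by a fixed cost (startup plus data transfer) plus a per-token processing cost scaled by input length, and the cheapest device wins.

```python
def pick_device(input_len, devices):
    """Select the cheapest device for this input under a linear
    latency model. The cost fields are hypothetical, not from CRIME."""
    def cost(dev):
        return dev["fixed_cost"] + dev["per_token_cost"] * input_len
    return min(devices, key=cost)["name"]

devices = [
    # edge: no transfer cost, slow per-token processing
    {"name": "edge",  "fixed_cost": 0.0,  "per_token_cost": 2.0},
    # cloud: large fixed transfer cost, fast per-token processing
    {"name": "cloud", "fixed_cost": 50.0, "per_token_cost": 0.5},
]
```

This captures the abstract's key observation: input length strongly influences the best choice, so short sequences favor the edge device while long ones amortize the cloud's transfer cost.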