Asynchronous FDRL-based low-latency computation offloading for integrated terrestrial and non-terrestrial power IoT
Integrated terrestrial and non-terrestrial power internet of things (IPIoT) has emerged as a paradigm shift to three-dimensional vertical communication networks for power systems in the 6G era. Computation offloading plays a key role in enabling real-time data processing and analysis for electric services. However, computation offloading in IPIoT still faces the challenges of coupling between task offloading and computation resource allocation, resource heterogeneity and dynamics, and degraded model training caused by electromagnetic interference (EMI). In this article, we propose an asynchronous federated deep reinforcement learning (AFDRL)-based computation offloading framework for IPIoT, where models are uploaded asynchronously for federated averaging to relieve network congestion and improve global model training. We then propose the Asynchronous fedeRated deep reinforcemenT learnIng-baSed low-laTency computation offloading algorithm (ARTIST) to realize low-latency computation offloading through joint optimization of task offloading and computation resource allocation. In particular, ARTIST adopts EMI-aware federated set determination to remove aberrant local models from federated averaging and improve training accuracy. Finally, a case study is developed to validate the performance of ARTIST in reducing task offloading and total queuing delays.
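The abstract does not give code; a minimal sketch of the two ideas it names, asynchronous federated averaging with staleness-aware mixing and EMI-aware filtering of aberrant local models, is shown below. All function names, the staleness-decay rule, and the distance threshold are illustrative assumptions, not the paper's actual method.

```python
import math

def async_fed_update(global_w, local_w, staleness, alpha=0.5):
    """Mix one asynchronously uploaded local model into the global model.
    The mixing weight decays with staleness, so late uploads count less
    (an assumed rule; the paper's exact weighting may differ)."""
    mix = alpha / (1.0 + staleness)
    return [(1.0 - mix) * g + mix * l for g, l in zip(global_w, local_w)]

def emi_aware_federated_set(local_models, global_w, threshold=2.0):
    """Drop aberrant (e.g. EMI-corrupted) local models whose Euclidean
    distance from the current global model exceeds a threshold."""
    def dist(w):
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(w, global_w)))
    return [w for w in local_models if dist(w) < threshold]
```

With this sketch, a model corrupted by interference (far from the global weights) is simply excluded before averaging, while healthy but stale uploads still contribute with reduced weight.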
Cloud Chaser: Real Time Deep Learning Computer Vision on Low Computing Power Devices
Internet of Things(IoT) devices, mobile phones, and robotic systems are often
denied the power of deep learning algorithms due to their limited computing
power. However, to provide time-critical services such as emergency response,
home assistance, surveillance, etc., these devices often need real-time analysis
of their camera data. This paper strives to offer a viable approach to
integrate high-performance deep learning-based computer vision algorithms with
low-resource and low-power devices by leveraging the computing power of the
cloud. By offloading the computation work to the cloud, no dedicated hardware
is needed to enable deep neural networks on existing low computing power
devices. A Raspberry Pi based robot, Cloud Chaser, is built to demonstrate the
power of using cloud computing to perform real-time vision tasks. Furthermore,
to reduce latency and improve real-time performance, compression algorithms are
proposed and evaluated for streaming real-time video frames to the cloud.
Comment: Accepted to The 11th International Conference on Machine Vision (ICMV 2018). Project site: https://zhengyiluo.github.io/projects/cloudchaser
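The paper's compression algorithms are not reproduced here; the sketch below only illustrates the general idea of compressing frames before streaming them to the cloud, using stdlib zlib as a stand-in codec. The function names and the bandwidth model are assumptions for illustration.

```python
import zlib

def compress_frame(frame_bytes, level=6):
    """Compress one raw frame before streaming it to the cloud; a higher
    level trades device CPU time for less uplink bandwidth."""
    return zlib.compress(frame_bytes, level)

def upload_time_estimate(frame_bytes, uplink_bps, level=6):
    """Rough per-frame upload time at a given uplink bandwidth (bits/s),
    after compression."""
    payload = compress_frame(frame_bytes, level)
    return len(payload) * 8 / uplink_bps
```

In practice a lossy image codec (e.g. JPEG) would be used for camera frames rather than zlib, but the latency calculation has the same shape: payload size divided by available uplink bandwidth.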
DeepBrain: Experimental Evaluation of Cloud-Based Computation Offloading and Edge Computing in the Internet-of-Drones for Deep Learning Applications
This article belongs to the Special Issue Time-Sensitive Networks for Unmanned Aircraft Systems. Unmanned Aerial Vehicles (UAVs) have been very effective in collecting aerial image data for various Internet-of-Things (IoT)/smart-city applications such as search and rescue, surveillance, vehicle detection and counting, and intelligent transportation systems, to name a few. However, the real-time processing of collected data at the edge in the context of the Internet-of-Drones remains an open challenge, because UAVs have limited energy capabilities, while computer vision techniques consume excessive energy and require abundant resources. This is even more critical when deep learning algorithms, such as convolutional neural networks (CNNs), are used for classification and detection. In this paper, we first propose a system architecture of computation offloading for Internet-connected drones. Then, we conduct a comprehensive experimental study to evaluate the performance in terms of energy, bandwidth, and delay of the cloud computation offloading approach versus the edge computing approach for deep learning applications in the context of UAVs. In particular, we experimentally investigate the tradeoff between the communication cost and the computation cost of the two candidate approaches. The main results demonstrate that the computation offloading approach provides much higher throughput (i.e., frames per second) than the edge computing approach, despite the larger communication delays.
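The communication-versus-computation tradeoff the study measures can be captured in a back-of-the-envelope model: offloading wins when transmit time plus cloud inference time beats on-board inference time. The sketch below is an illustrative assumption, not the paper's experimental model, and all parameter names are hypothetical.

```python
def offload_delay(frame_bits, uplink_bps, cloud_fps):
    """End-to-end time per frame when offloading: transmission plus
    cloud-side inference."""
    return frame_bits / uplink_bps + 1.0 / cloud_fps

def edge_delay(edge_fps):
    """Time per frame when the CNN runs on the drone itself."""
    return 1.0 / edge_fps

def prefer_offloading(frame_bits, uplink_bps, cloud_fps, edge_fps):
    """True when offloading yields lower per-frame delay than edge
    computing under this simplified model."""
    return offload_delay(frame_bits, uplink_bps, cloud_fps) < edge_delay(edge_fps)
```

For example, a 1 MB frame over a 50 Mbit/s uplink to a 30 fps cloud model beats a 2 fps on-board model, but the same frame over a 1 Mbit/s uplink does not, matching the paper's qualitative finding that offloading pays off when bandwidth is adequate.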
TPD: Temporal and Positional Computation Offloading with Dynamic and Dependent Tasks
With the rapid development of wireless communication technologies and the proliferation of the urban Internet of Things (IoT), the paradigm of mobile computing has been shifting from centralized clouds to edge networks. As an enabling paradigm for computation-intensive and latency-sensitive tasks, mobile edge computing (MEC) can provide in-proximity computing services for resource-constrained IoT devices. Nevertheless, it remains challenging to optimize computation offloading from IoT devices to heterogeneous edge servers, considering complex intertask dependency, limited bandwidth, and dynamic networks. In this paper, we address the above challenges in MEC with TPD, that is, temporal and positional computation offloading with dynamic and dependent tasks. In particular, we investigate channel interference and intertask dependency by considering the position and moment of computation offloading simultaneously. We define a novel criterion for assessing the criticality of each task, and we identify the critical path based on a directed acyclic graph of all tasks. Furthermore, we propose an online algorithm for finding the optimal computation offloading strategy under intertask dependency and adjusting the strategy in real time when facing dynamic tasks. Extensive simulation results show that our algorithm significantly reduces the time to complete all tasks, by 30-60% in different scenarios, and takes less time to adjust the offloading strategy in dynamic MEC systems.
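The critical-path idea the abstract mentions can be made concrete: on a task DAG with per-task durations, the tasks whose delay directly extends total completion time lie on the longest-duration path. A minimal sketch (the paper's criticality criterion and online adjustment are not reproduced; the function name and input format are assumptions):

```python
def critical_path_length(tasks, deps):
    """Longest-duration path through a task DAG.
    tasks: {name: duration}; deps: {name: [predecessor names]}.
    A task can only start once all of its predecessors have finished."""
    memo = {}
    def finish(t):
        # Earliest finish time of t = its duration plus the latest
        # finish time among its predecessors (0 if it has none).
        if t not in memo:
            memo[t] = tasks[t] + max((finish(p) for p in deps.get(t, [])),
                                     default=0.0)
        return memo[t]
    return max(finish(t) for t in tasks)
```

An offloading scheduler would prioritize tasks on this path, since shortening any off-path task cannot reduce the makespan.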
Vehicular Fog Computing Enabled Real-time Collision Warning via Trajectory Calibration
Vehicular fog computing (VFC) has been envisioned as a promising paradigm for
enabling a variety of emerging intelligent transportation systems (ITS).
However, due to inevitable and non-negligible issues in wireless
communication, including transmission latency and packet loss, it remains
challenging to implement safety-critical applications, such as real-time
collision warning, in vehicular networks. In this paper, we present a vehicular
fog computing architecture, aiming at supporting effective and real-time
collision warning by offloading computation and communication overheads to
distributed fog nodes. With the system architecture, we further propose a
trajectory calibration based collision warning (TCCW) algorithm along with
tailored communication protocols. Specifically, an application-layer
vehicular-to-infrastructure (V2I) communication delay is fitted by the Stable
distribution with real-world field testing data. Then, a packet loss detection
mechanism is designed. Finally, TCCW calibrates real-time vehicle trajectories
based on received vehicle status including GPS coordinates, velocity,
acceleration, heading direction, as well as the estimation of communication
delay and the detection of packet loss. For performance evaluation, we build
the simulation model and implement conventional solutions including cloud-based
warning and fog-based warning without calibration for comparison. Real-vehicle
trajectories are extracted as the input, and the simulation results demonstrate
the effectiveness of TCCW, which achieves the highest precision and recall across
a wide range of scenarios.
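The trajectory calibration step can be illustrated with constant-acceleration dead reckoning: project the reported GPS fix forward by the estimated V2I communication delay so the warning logic reasons about where the vehicle is now, not where it was when the packet was sent. This is a simplified sketch under assumed kinematics, not TCCW's exact calibration.

```python
import math

def calibrate_position(x, y, speed, heading_rad, accel, delay):
    """Project a reported (x, y) fix forward by the estimated delay,
    assuming constant acceleration along a fixed heading.
    speed in m/s, accel in m/s^2, delay in s, heading in radians."""
    dist = speed * delay + 0.5 * accel * delay ** 2
    return (x + dist * math.cos(heading_rad),
            y + dist * math.sin(heading_rad))
```

When a packet is detected as lost, the same projection can be applied over a longer horizon using the last successfully received status, at the cost of growing position uncertainty.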
Study and evaluation of platforms for distributing intensive computation over external systems for embedded systems
Nowadays, the capabilities of embedded systems are constantly increasing, and they have a wide range of applications. However, there are many computation-intensive tasks that, because of their hardware limitations, these systems are unable to perform successfully. Moreover, there are innumerable tasks with strict deadlines to meet (e.g. real-time systems). Because of that, the use of external platforms with sufficient computing power is becoming widespread, especially thanks to the advent of Cloud Computing in recent years. Its use for knowledge sharing and information storage has been demonstrated innumerable times in the literature. However, its core properties, such as dynamic scalability, energy efficiency, and virtually unlimited resources, also make it the perfect candidate for computation offloading. In this sense, this thesis demonstrates this fact by applying Cloud Computing in the area of Robotics (Cloud Robotics). This is done by building a 3D Point Cloud Extraction Platform, where robots can offload the complex stereo vision task of obtaining a 3D Point Cloud (3DPC) from stereo frames. In addition, the platform was applied to a typical robotics application: a navigation assistant. Using this case, the core challenges of computation offloading were thoroughly analyzed: the role of communication technologies (with special focus on 802.11ac), the role of offloading models, how to overcome the problem of communication delays by using predictive time corrections, and to what extent offloading is a better choice than processing on board. Furthermore, real navigation tests were performed, showing that better navigation results are obtained when using computation offloading. This experience was a starting point for the final research of this thesis: an extension of Amdahl's Law for Cloud Computing. This provides a better understanding of the inherent factors of computation offloading, with special focus on time and energy speedups, and helps to make some predictions regarding the future of Cloud Computing and computation offloading.
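The shape of an Amdahl-style speedup bound for offloading can be sketched as follows. This is an assumed generic form, not the thesis's actual extension: if a fraction f of the work is offloaded to a resource that is s times faster, and communication adds an overhead expressed as a fraction of the original on-board execution time, then:

```python
def offloading_speedup(f, s, overhead_ratio):
    """Amdahl-style speedup for computation offloading (assumed form):
    1 / ((1 - f) + f/s + overhead), where
      f              = offloadable fraction of the task,
      s              = remote-to-local speed ratio,
      overhead_ratio = communication cost as a fraction of the
                       original on-board execution time."""
    return 1.0 / ((1.0 - f) + f / s + overhead_ratio)
```

For instance, offloading 90% of a task to a 10x-faster cloud with 1% communication overhead gives a 5x speedup; the non-offloadable fraction and the communication term bound the benefit, just as the serial fraction does in the classical law.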
Joint Computation Offloading and Prioritized Scheduling in Mobile Edge Computing
With the rapid development of smartphones, enormous amounts of data are generated and usually require intensive and real-time computation. Nevertheless, quality of service (QoS) is hard to meet due to the tension between resource-limited (battery, CPU power) devices and computation-intensive applications. Mobile-edge computing (MEC), emerging as a promising technique, can be used to cope with the stringent requirements of mobile applications. By offloading computationally intensive workloads to an edge server and applying efficient task scheduling, the energy cost of mobile devices can be significantly reduced, greatly improving QoS, e.g., latency. This paper proposes a joint computation offloading and prioritized task scheduling scheme in a multi-user mobile-edge computing system. We investigate an energy-minimizing task offloading strategy on mobile devices and develop an effective priority-based task scheduling algorithm on the edge server. The execution time, energy consumption, execution cost, and bonus score against both the task data sizes and the latency requirement are adopted as the performance metrics. Performance evaluation results show that the proposed algorithm significantly reduces task completion time and edge server VM usage cost, and improves QoS in terms of bonus score. Moreover, dynamic prioritized task scheduling is also discussed herein; results show that dynamic threshold setting realizes the optimal task scheduling. We believe that this work is significant to the emerging mobile-edge computing paradigm and can be applied to other Internet of Things (IoT)-Edge applications.
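Priority-based serving of offloaded tasks on an edge server can be sketched with a binary heap. This single-VM, non-preemptive sketch is an illustrative assumption; the paper's scheduler, its priority definition, and its dynamic thresholds are not reproduced here.

```python
import heapq

def prioritized_schedule(tasks):
    """Serve offloaded tasks in priority order on a single edge VM.
    tasks: list of (priority, name, exec_time); a lower priority value
    means more urgent. Returns (service_order, completion_times)."""
    heap = list(tasks)
    heapq.heapify(heap)
    clock, order, completion = 0.0, [], {}
    while heap:
        _, name, exec_time = heapq.heappop(heap)  # most urgent first
        clock += exec_time
        order.append(name)
        completion[name] = clock
    return order, completion
```

A dynamic variant would re-evaluate priorities (e.g. against latency-deadline thresholds) as new tasks arrive, which is the direction the abstract's "dynamic prioritized task scheduling" points to.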