Vehicle Speed Aware Computing Task Offloading and Resource Allocation Based on Multi-Agent Reinforcement Learning in a Vehicular Edge Computing Network
For in-vehicle applications, vehicles traveling at different speeds have
different delay requirements. However, vehicle speed has not been extensively
explored, which may cause a mismatch between a vehicle's speed and its
allocated computation and wireless resources. In this paper, we propose a
vehicle speed aware task offloading and resource allocation strategy to
decrease the energy cost of
executing tasks without exceeding the delay constraint. First, we establish the
vehicle speed aware delay constraint model based on different speeds and task
types. Then, the delay and energy cost of task execution are calculated for
both the VEC server and the local terminal. Next, we formulate a joint
optimization of task
offloading and resource allocation to minimize vehicles' energy cost subject to
delay constraints. The multi-agent deep deterministic policy gradient (MADDPG)
method is employed to obtain the offloading and resource allocation strategy.
Simulation results show that our algorithm achieves superior performance in
energy cost and task completion delay.
Comment: 8 pages, 6 figures. Accepted by IEEE International Conference on Edge
Computing 202
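The speed-aware delay constraint established in the first step can be pictured with a minimal sketch: a faster vehicle leaves an RSU's coverage sooner, so its effective deadline is the tighter of the task's own deadline and its residence time in coverage. The coverage length, speeds, and the min() cap below are illustrative assumptions, not the paper's exact model.

```python
def residence_time(coverage_m: float, speed_mps: float) -> float:
    """Time (s) the vehicle remains inside the serving node's coverage."""
    return coverage_m / speed_mps

def delay_constraint(task_deadline_s: float, coverage_m: float,
                     speed_mps: float) -> float:
    """Effective delay bound: the task deadline capped by the residence time.

    A fast vehicle crossing a short coverage area gets a tighter bound than
    its nominal deadline, which is the mismatch the abstract points to.
    """
    return min(task_deadline_s, residence_time(coverage_m, speed_mps))
```

For example, a vehicle at 30 m/s crossing 50 m of coverage has only about 1.67 s of residence time, so a 2 s task deadline is effectively tightened to 1.67 s.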
Efficient RSU Selection Approaches for Load Balancing in Vehicular Ad Hoc Networks
Due to advances in wireless communication technologies, wireless transmissions are gradually replacing traditional wired data transmissions. In recent years, vehicles on the move can also enjoy the convenience of wireless communication technologies by assisting each other in message exchange, forming an interconnecting network, namely a Vehicular Ad Hoc Network (VANET). In a VANET, each vehicle is capable of communicating with nearby vehicles and accessing information provided by the network. There are two basic communication models in VANETs: vehicle-to-vehicle (V2V) and vehicle-to-infrastructure (V2I). Vehicles equipped with wireless transceivers can communicate with other vehicles (V2V) or with roadside units (RSUs) (V2I). RSUs acting as gateways are entry points to the Internet for vehicles. Naturally, vehicles tend to choose nearby RSUs as serving gateways. However, due to the uneven density distribution and high mobility of vehicles, load imbalance among RSUs can occur. In this paper, we study the RSU load-balancing problem and propose two solutions. In the first solution, the whole network is divided into sub-regions based on RSUs' locations. An RSU provides Internet access for vehicles in its sub-region, and the boundaries between sub-regions change dynamically to adapt to load migration. In the second solution, vehicles choose their serving RSUs in a distributed manner by taking their future trajectories and RSUs' loading information into consideration. Simulation results show that the proposed methods improve packet delivery ratio, packet delay, and load balance among RSUs.
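The second (distributed) solution can be sketched in a few lines: each vehicle scores candidate RSUs by combining proximity with the RSU's advertised load and picks the lowest score. The weight `alpha` and the load-to-distance scaling are illustrative assumptions; the paper additionally uses future-trajectory information, which this sketch omits.

```python
from math import hypot

def select_rsu(vehicle_xy, rsus, alpha=0.5):
    """Pick a serving RSU for one vehicle, trading distance against load.

    rsus: list of (rsu_id, (x, y), load) with load normalized to [0, 1].
    Returns the id of the RSU with the lowest combined score.
    """
    def score(rsu):
        rsu_id, (x, y), load = rsu
        dist = hypot(vehicle_xy[0] - x, vehicle_xy[1] - y)
        # Scale load into the same order of magnitude as distance (metres);
        # the factor 1000 is an assumed tuning constant.
        return (1 - alpha) * dist + alpha * load * 1000.0
    return min(rsus, key=score)[0]
```

With this scoring, a vehicle near a heavily loaded RSU will detour to a farther but lightly loaded one, which is exactly the load-migration behavior the paper targets.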
An Optimized Multi-Layer Resource Management in Mobile Edge Computing Networks: A Joint Computation Offloading and Caching Solution
Nowadays, data caching is increasingly used as a high-speed data storage layer
in mobile edge computing networks employing flow control methodologies. This
study shows how to discover the best architecture for backhaul networks with
caching capability using a distributed offloading technique. This article uses
a continuous power flow analysis to achieve the
optimum load constraints, wherein the power of macro base stations with various
caching capacities is supplied by either an intelligent grid network or
renewable energy systems. This work proposes ubiquitous connectivity between
users at the cell edge and offloading the macro cells so as to provide features
the macro cell itself cannot cope with, such as extreme changes in the required
user data rate and energy efficiency. The offloading framework is then recast
as a neural weighted framework that satisfies the convergence and Lyapunov
stability requirements of mobile edge computing under Karush-Kuhn-Tucker (KKT)
optimization constraints in order to obtain accurate solutions. The cell-layer
performance is analyzed at the boundary and at the center of the cells.
The analytical and simulation results show that the suggested method
outperforms other energy-saving techniques. Moreover, compared with other
solutions studied in the literature, the proposed approach yields a two- to
three-fold increase in both the throughput of cell-edge users and the
aggregate throughput per cluster.
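The abstract's KKT-constrained optimization is not spelled out, but a classic instance of such conditions in wireless resource allocation is water-filling power allocation: maximizing the sum rate sum_i log(1 + p_i / n_i) subject to sum_i p_i = P gives, from the KKT conditions, p_i = max(0, mu - n_i) for a "water level" mu. This is a generic illustration under assumed notation, not the paper's exact formulation.

```python
def water_filling(noise, total_power, iters=100):
    """Water-filling power allocation derived from the KKT conditions.

    noise: per-channel effective noise levels n_i.
    Returns powers p_i = max(0, mu - n_i), with mu found by bisection so
    that the total-power constraint sum(p_i) = total_power is met.
    """
    lo, hi = 0.0, max(noise) + total_power  # mu lies in this interval
    for _ in range(iters):
        mu = (lo + hi) / 2.0
        used = sum(max(0.0, mu - n) for n in noise)
        if used > total_power:
            hi = mu  # water level too high, reduce it
        else:
            lo = mu  # constraint slack, raise the level
    return [max(0.0, mu - n) for n in noise]
```

For noise levels [1, 2, 3] and a power budget of 3, the level settles at mu = 3, giving powers [2, 1, 0]: the worst channel is switched off, which is the hallmark of the KKT complementary-slackness condition.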
Deep Reinforcement Learning-Based Offloading Scheduling for Vehicular Edge Computing
This is the author accepted manuscript. The final version is available from IEEE via the DOI in this record.
Vehicular edge computing (VEC) is a new computing paradigm that has great potential to enhance the capability of vehicle terminals (VT) to support resource-hungry in-vehicle applications with low latency and high energy efficiency. In this paper, we investigate an important computation offloading scheduling problem in a typical VEC scenario, where a VT traveling along an expressway intends to schedule its tasks waiting in the queue to minimize the long-term cost in terms of a trade-off between task latency and energy consumption. Due to diverse task characteristics, a dynamic wireless environment, and frequent handover events caused by vehicle movements, an optimal solution should take into account both where to schedule (i.e., local computation or offloading) and when to schedule (i.e., the order and time for execution) each task. To solve such a complicated stochastic optimization problem, we model it by a carefully designed Markov decision process (MDP) and resort to deep reinforcement learning (DRL) to deal with the enormous state space. Our DRL implementation is designed based on the state-of-the-art proximal policy optimization (PPO) algorithm. A parameter-shared network architecture combined with a convolutional neural network (CNN) is utilized to approximate both the policy and the value function, which can effectively extract representative features. A series of adjustments to the state and reward representations are made to further improve the training efficiency. Extensive simulation experiments and comprehensive comparisons with six known baseline algorithms and their heuristic combinations clearly demonstrate the advantages of the proposed DRL-based offloading scheduling method.
European Commission
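The "where to schedule" half of the decision described above can be reduced to its simplest non-learned baseline: compare the weighted latency-plus-energy cost of local execution against offloading for each task. The transmit power, CMOS energy model, and weight `w` below are illustrative assumptions, and this sketch ignores the queueing and handover dynamics that the paper's DRL agent learns to handle.

```python
def task_cost(cycles, size_bits, f_hz, rate_bps, kappa, offload):
    """Latency (s) and energy (J) for one task, local or offloaded."""
    if offload:
        latency = size_bits / rate_bps + cycles / f_hz  # upload + edge compute
        energy = 0.1 * (size_bits / rate_bps)           # assumed 0.1 W transmit power
    else:
        latency = cycles / f_hz
        energy = kappa * f_hz ** 2 * cycles             # common CMOS energy model
    return latency, energy

def decide(cycles, size_bits, f_local, f_edge, rate_bps, kappa=1e-27, w=0.5):
    """Greedy per-task choice minimizing w * latency + (1 - w) * energy."""
    l_loc, e_loc = task_cost(cycles, size_bits, f_local, rate_bps, kappa, False)
    l_off, e_off = task_cost(cycles, size_bits, f_edge, rate_bps, kappa, True)
    local_cost = w * l_loc + (1 - w) * e_loc
    offload_cost = w * l_off + (1 - w) * e_off
    return "offload" if offload_cost < local_cost else "local"
```

Under these assumed parameters, compute-heavy tasks favor the faster edge server while small tasks stay local to avoid the upload delay; the paper's contribution is learning this trade-off jointly with the execution order over time, rather than deciding greedily per task.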