Reinforcement Learning-based Dynamic Service Placement in Vehicular Networks
The emergence of technologies such as 5G and mobile edge computing has
enabled provisioning of different types of services with different resource and
service requirements to the vehicles in a vehicular network. The growing
complexity of traffic mobility patterns and the dynamics of requests for
different types of services have made service placement a challenging task. A
typical static placement solution is not effective as it does not consider the
traffic mobility and service dynamics. In this paper, we propose a
reinforcement learning-based dynamic (RL-Dynamic) service placement framework
to find the optimal placement of services at the edge servers while considering
vehicle mobility and the dynamics of requests for different types of
services. We use SUMO and MATLAB to carry out simulation experiments. In our
learning framework, the decision module considers two alternative objective
functions: minimizing service delay and minimizing edge-server utilization. We
develop an ILP-based problem formulation for the two objective functions. The
experimental results show that 1) compared to static service placement,
RL-based dynamic service placement achieves fair utilization of edge server
resources and low service delay, and 2) compared to delay-optimized placement,
server utilization optimized placement utilizes resources more effectively,
achieving higher fairness with lower edge-server utilization.
Comment: Accepted and presented at the IEEE 93rd Vehicular Technology
Conference, VTC2021-Spring.
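The two alternative objectives described in the abstract could, in generic form, be sketched as the following ILP. This is an illustrative reconstruction only: the decision variables x, the delay terms d, and the resource terms r and C are assumptions, not the paper's actual formulation.

```latex
% Hypothetical sketch, not the paper's model. x_{s,e} \in \{0,1\} places
% service s on edge server e; d_{s,e} is the service delay when s is served
% from e; r_s and C_e are resource demand and server capacity.

% Objective 1: minimize total service delay
\min \sum_{s} \sum_{e} x_{s,e}\, d_{s,e}

% Objective 2: minimize the maximum edge-server utilization
\min \; U_{\max}
\quad \text{s.t.} \quad
\frac{\sum_{s} r_s\, x_{s,e}}{C_e} \le U_{\max} \quad \forall e

% Common constraints: each requested service is placed on exactly one
% server, and no server's capacity is exceeded.
\sum_{e} x_{s,e} = 1 \quad \forall s,
\qquad
\sum_{s} r_s\, x_{s,e} \le C_e \quad \forall e
```

Under a sketch like this, the delay objective tends to concentrate services on nearby servers, while the utilization objective spreads load, which is consistent with the fairness/delay trade-off the abstract reports.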
DRLD-SP: A Deep Reinforcement Learning-based Dynamic Service Placement in Edge-Enabled Internet of Vehicles
The growth of 5G and edge computing has enabled the emergence of the Internet
of Vehicles (IoV), which supports different types of services with different
resource and service requirements. However, limited resources at the edge,
high mobility of vehicles, increasing demand, and dynamics in service request
types have made service placement a challenging task. A static placement solution is
not effective as it does not consider the traffic mobility and service
dynamics. Handling these dynamics for service placement in IoV is an important
and challenging problem, and it is the primary focus of this paper. We
propose a Deep Reinforcement Learning-based Dynamic Service Placement (DRLD-SP)
framework with the objective of minimizing the maximum edge resource usage and
service delay while considering vehicle mobility, varying demand, and dynamics
in the requests for different types of services. We use SUMO and
MATLAB to carry out simulation experiments. The experimental results show that
the proposed DRLD-SP approach is effective and outperforms other static and
dynamic placement approaches.
Comment: Submitted to IEEE Internet of Things Journal.
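As a rough illustration of the dynamic-placement idea, the sketch below uses a tabular Q-learning agent as a stand-in for the paper's DRL agent: it places each incoming service request on one of several edge servers and is rewarded for keeping both service delay and peak server utilization low. The server count, capacities, delays, state encoding, and reward shape are all invented for this sketch and do not come from the paper.

```python
import random

# Toy stand-in for learning-based dynamic service placement (NOT DRLD-SP
# itself): an agent assigns each request to an edge server, trading off the
# server's delay against the resulting maximum utilization.

N_SERVERS = 3
CAPACITY = 10.0          # assumed resource units per edge server
DELAY = [1.0, 2.0, 3.0]  # assumed base service delay per server

def reward(server, load):
    # Penalize both the chosen server's delay and the maximum utilization
    # across servers (mirroring the two objectives in the abstract).
    util = [l / CAPACITY for l in load]
    return -(DELAY[server] + max(util))

def train(episodes=2000, requests_per_episode=8, alpha=0.1, eps=0.1, seed=0):
    rng = random.Random(seed)
    # State: index of the currently most-loaded server; action: target server.
    q = [[0.0] * N_SERVERS for _ in range(N_SERVERS)]
    for _ in range(episodes):
        load = [0.0] * N_SERVERS
        for _ in range(requests_per_episode):
            state = max(range(N_SERVERS), key=lambda s: load[s])
            if rng.random() < eps:
                action = rng.randrange(N_SERVERS)      # explore
            else:
                action = max(range(N_SERVERS), key=lambda a: q[state][a])
            load[action] += 1.0                        # place the request
            # One-step Q-update toward the observed (negative) reward.
            q[state][action] += alpha * (reward(action, load) - q[state][action])
    return q

q = train()
```

A deep RL approach such as the paper's would replace the Q-table with a neural network so the agent can generalize over richer states (vehicle positions, demand mix) rather than a single load index.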