Deep Reinforcement Learning for Resource Allocation in V2V Communications
In this article, we develop a decentralized resource allocation mechanism for
vehicle-to-vehicle (V2V) communication systems based on deep reinforcement
learning. Each V2V link is treated as an agent that makes its own decisions to
find the optimal sub-band and power level for transmission. Since the proposed
method is decentralized, no agent requires global information to make its
decisions, so the transmission overhead is small. Simulation results show that
each agent can learn to satisfy the V2V constraints while minimizing the
interference to vehicle-to-infrastructure (V2I) communications.
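The decentralized scheme described above can be sketched as follows. This is a minimal illustrative stand-in, not the paper's implementation: a tabular Q-learner replaces the deep network, and the action space, reward shape, and all names (`V2VAgent`, `local_reward`, the band/power counts) are assumptions.

```python
import random

random.seed(0)

SUB_BANDS = 4
POWER_LEVELS = 3
# each action is a (sub-band, power-level) pair
ACTIONS = [(b, p) for b in range(SUB_BANDS) for p in range(POWER_LEVELS)]

class V2VAgent:
    """One V2V link acting as an independent agent with only local knowledge."""
    def __init__(self, alpha=0.1, epsilon=0.1):
        self.q = {a: 0.0 for a in ACTIONS}   # local value table, no global info
        self.alpha, self.epsilon = alpha, epsilon

    def act(self):
        if random.random() < self.epsilon:        # explore
            return random.choice(ACTIONS)
        return max(self.q, key=self.q.get)        # exploit

    def update(self, action, reward):
        # one-step update toward the locally observed reward
        self.q[action] += self.alpha * (reward - self.q[action])

def local_reward(action, v2i_band):
    band, power = action
    # toy reward: higher power helps the V2V link, but transmitting on the
    # band used by the V2I uplink is penalized as interference
    interference = power if band == v2i_band else 0
    return power - 2 * interference

agents = [V2VAgent() for _ in range(3)]
v2i_band = 0
for _ in range(2000):
    for ag in agents:
        a = ag.act()
        ag.update(a, local_reward(a, v2i_band))
```

After training, each agent's greedy action avoids the V2I band while using high power, mirroring the behavior the abstract reports, though here it emerges from a deliberately simplified reward.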
Reinforcement Learning Scheduler for Vehicle-to-Vehicle Communications Outside Coverage
Radio resources in vehicle-to-vehicle (V2V) communication can be scheduled
either by a centralized scheduler residing in the network (e.g., a base station
in cellular systems) or by a distributed scheduler, where the vehicles
autonomously select the resources. The former approach yields considerably
higher resource utilization when network coverage is uninterrupted. However,
under intermittent or absent coverage, vehicles receive no input from the
centralized scheduler and must revert to distributed scheduling. Motivated by
recent advances in reinforcement learning (RL), we investigate whether a
centralized learning scheduler can be taught to efficiently pre-assign
resources to vehicles for out-of-coverage V2V communication. Specifically, we
use the actor-critic RL algorithm to train the centralized scheduler to provide
non-interfering resources to vehicles before they enter the out-of-coverage
area. Our initial results show that an RL-based scheduler can match, and often
outperform, the state-of-the-art distributed scheduler. Furthermore, the
learning process completes within a reasonable time (from a few hundred to a
few thousand epochs), making the RL-based scheduler a promising solution for
V2V communications with intermittent network coverage.
Comment: Article published in IEEE VNC 201
Multi-Agent Reinforcement Learning for Joint Channel Assignment and Power Allocation in Platoon-Based C-V2X Systems
We consider the problem of joint channel assignment and power allocation in
underlaid cellular vehicle-to-everything (C-V2X) systems, where multiple
vehicle-to-infrastructure (V2I) uplinks share the time-frequency resources with
multiple vehicle-to-vehicle (V2V) platoons that enable groups of connected and
autonomous vehicles to travel closely together. Because channels vary rapidly
in vehicular environments, traditional centralized optimization approaches that
rely on global channel information may not be viable in C-V2X systems with a
large number of users. We therefore propose a distributed resource allocation
(RA) algorithm based on reinforcement learning (RL) to overcome this challenge.
Specifically, we model the RA problem as a multi-agent system in which each
platoon leader acts as an agent: using only local channel information, the
agents interact with one another and select the optimal combination of sub-band
and power level for transmitting their signals. To this end, we use the double
deep Q-learning algorithm to jointly train the agents with the objective of
maximizing the V2I sum-rate while satisfying each V2V link's packet delivery
probability within a desired latency limit. Simulation results show that the
proposed RL-based algorithm achieves performance close to that of the
well-known exhaustive search algorithm.
Comment: 6 pages, 4 figures
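The distinguishing step of double deep Q-learning used above is how the bootstrap target is built: the online network selects the next action and the target network evaluates it. The sketch below shows that target computation with plain lists standing in for the two networks; the toy Q-values, reward, and discount are illustrative assumptions.

```python
GAMMA = 0.9  # discount factor (illustrative)

def double_q_target(reward, q_online_next, q_target_next):
    """Double DQN target for one transition.

    q_online_next / q_target_next: next-state Q-values from the online
    and target networks respectively (here, plain lists).
    """
    # action selection by the online network ...
    a_star = max(range(len(q_online_next)), key=lambda a: q_online_next[a])
    # ... evaluation by the target network; decoupling selection from
    # evaluation reduces the overestimation bias of vanilla Q-learning
    return reward + GAMMA * q_target_next[a_star]

q_online_next = [1.0, 3.0, 2.0]   # online net prefers action 1
q_target_next = [1.5, 2.0, 4.0]   # target net values action 1 at 2.0
target = double_q_target(0.5, q_online_next, q_target_next)
# target = 0.5 + 0.9 * 2.0 = 2.3, not 0.5 + 0.9 * 4.0 as plain
# max-over-target Q-learning would give
```

In the multi-agent setting of the abstract, each platoon-leader agent would regress its online network toward such targets computed from its own (sub-band, power) action space.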