Traffic Light Control Using Deep Policy-Gradient and Value-Function Based Reinforcement Learning
Recent advances in combining deep neural network architectures with
reinforcement learning techniques have shown promising results in
solving complex control problems with high-dimensional state and action spaces.
Inspired by these successes, in this paper, we build two kinds of reinforcement
learning algorithms: deep policy-gradient and value-function based agents which
can predict the best possible traffic signal for a traffic intersection. At
each time step, these adaptive traffic light control agents receive a snapshot
of the current state of a graphical traffic simulator and produce control
signals. The policy-gradient based agent maps its observation directly to the
control signal, whereas the value-function based agent first estimates values
for all legal control signals and then selects the control action with the
highest value. Our methods show promising results in a traffic network
simulated in the SUMO traffic simulator, without suffering from instability
issues during the training process.
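The value-function based agent described above can be sketched minimally as follows. This is a hypothetical illustration, not the paper's implementation: the dictionary of Q-values stands in for the output of a trained deep value network, and the signal names are invented for the example.

```python
# Hypothetical sketch: a value-function based agent scores every legal
# control signal for the current intersection state and picks the one
# with the highest estimated value.
def select_signal(q_values, legal_signals):
    # Restrict to legal phases, then take the argmax over what remains.
    legal = {s: q_values[s] for s in legal_signals}
    return max(legal, key=legal.get)

# Example: estimated values for three (invented) signal phases.
q = {"NS_green": 0.7, "EW_green": 0.4, "all_red": 0.1}
best = select_signal(q, legal_signals=["NS_green", "EW_green"])
print(best)  # NS_green
```

A policy-gradient agent would instead output the signal (or a distribution over signals) directly from the observation, skipping the explicit value estimates.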
Longitudinal Dynamic versus Kinematic Models for Car-Following Control Using Deep Reinforcement Learning
The majority of current studies on autonomous vehicle control via deep
reinforcement learning (DRL) utilize point-mass kinematic models, neglecting
vehicle dynamics, which include acceleration delay and acceleration command
dynamics. The acceleration delay, which results from sensing and actuation
delays, leads to delayed execution of the control inputs. The acceleration
command dynamics dictate that the actual vehicle acceleration does not reach
the commanded acceleration instantaneously. In this
work, we investigate the feasibility of applying DRL controllers trained using
vehicle kinematic models to more realistic driving control with vehicle
dynamics. We consider a particular longitudinal car-following control, i.e.,
Adaptive Cruise Control (ACC), problem solved via DRL using a point-mass
kinematic model. When such a controller is applied to car following with
vehicle dynamics, we observe significantly degraded car-following performance.
Therefore, we redesign the DRL framework to accommodate the acceleration delay
and acceleration command dynamics by adding the delayed control inputs and the
actual vehicle acceleration to the reinforcement learning environment state,
respectively. The training results show that the redesigned DRL controller
results in near-optimal control performance of car following with vehicle
dynamics considered when compared with dynamic programming solutions.
Comment: Accepted to 2019 IEEE Intelligent Transportation Systems Conference
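The state redesign described above can be sketched as follows. This is a hypothetical toy model, not the paper's code: the class name, the first-order lag constant `tau`, and the chosen delay length are all assumptions made for illustration.

```python
from collections import deque

# Hypothetical sketch: augment the RL state with (a) the control inputs
# still "in flight" due to actuation delay and (b) the actual vehicle
# acceleration, which lags the executed command.
class AugmentedState:
    def __init__(self, delay_steps):
        # Commands issued but not yet executed (actuation delay buffer).
        self.pending = deque([0.0] * delay_steps, maxlen=delay_steps)
        self.actual_accel = 0.0

    def step(self, command, tau=0.5, dt=0.1):
        executed = self.pending[0]     # oldest command takes effect now
        self.pending.append(command)   # newest command enters the queue
        # First-order lag toward the executed command (command dynamics).
        self.actual_accel += dt / tau * (executed - self.actual_accel)
        return executed

    def observe(self, gap, rel_speed):
        # Kinematic features plus delayed commands and actual acceleration.
        return (gap, rel_speed, *self.pending, self.actual_accel)

s = AugmentedState(delay_steps=3)
s.step(1.0)
obs = s.observe(gap=20.0, rel_speed=-1.5)
print(len(obs))  # 2 kinematic features + 3 pending commands + 1 accel = 6
```

A kinematics-only controller would see just the first two features; exposing the pending commands and actual acceleration is what lets the redesigned controller compensate for the delay and lag.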