Longitudinal Dynamic versus Kinematic Models for Car-Following Control Using Deep Reinforcement Learning
The majority of current studies on autonomous vehicle control via deep
reinforcement learning (DRL) use point-mass kinematic models, neglecting
vehicle dynamics such as acceleration delay and acceleration command
dynamics. The acceleration delay, caused by sensing and actuation delays,
leads to delayed execution of the control inputs. The acceleration command
dynamics dictate that the actual vehicle acceleration cannot reach the
commanded acceleration instantaneously. In this
work, we investigate the feasibility of applying DRL controllers trained using
vehicle kinematic models to more realistic driving control with vehicle
dynamics. We consider a particular longitudinal car-following control
problem, Adaptive Cruise Control (ACC), solved via DRL using a point-mass
kinematic model. When such a controller is applied to car following with
vehicle dynamics, we observe significantly degraded car-following performance.
Therefore, we redesign the DRL framework to accommodate the acceleration delay
and acceleration command dynamics by adding the delayed control inputs and the
actual vehicle acceleration to the reinforcement learning environment state,
respectively. The training results show that the redesigned DRL controller
achieves near-optimal car-following performance, with vehicle dynamics
considered, when compared with dynamic programming solutions.

Comment: Accepted to the 2019 IEEE Intelligent Transportation Systems Conference
Reinforcement Learning, Intelligent Control and their Applications in Connected and Autonomous Vehicles
Reinforcement learning (RL) has attracted considerable attention over the past few years. Recently, we developed a data-driven algorithm to solve predictive cruise control (PCC) and game output regulation problems. This work integrates our recent contributions on the application of RL to game theory, output regulation problems, robust control, small-gain theory, and PCC. The algorithm was developed for adaptive optimal output regulation of uncertain linear systems and uncertain partially linear systems, rejecting disturbances and forcing the output of the systems to asymptotically track a reference. In the PCC problem, we determined the reference velocity for each autonomous vehicle in the platoon using the traffic information broadcast from the traffic lights to reduce the vehicles' trip time. We then employed the algorithm to design an approximately optimal controller for the vehicles. This controller is able to regulate the headway, velocity, and acceleration of each vehicle to the desired values. Simulation results validate the effectiveness of the algorithms.
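As a toy illustration of the regulation goal (not of the data-driven RL algorithm itself), the sketch below drives a follower's headway, velocity, and acceleration errors to zero under a fixed, hand-picked linear feedback gain standing in for the learned optimal gain; all names and gains are illustrative:

```python
def regulation_step(headway, velocity, accel, h_ref, v_ref, dt=0.1,
                    k_h=0.5, k_v=1.0, k_a=0.8):
    """One Euler step of a follower with first-order acceleration lag under
    linear error feedback u = k_h*(h - h_ref) + k_v*(v_ref - v) - k_a*a.
    The lead vehicle is assumed to travel at the reference velocity v_ref."""
    u = k_h * (headway - h_ref) + k_v * (v_ref - velocity) - k_a * accel
    accel = accel + dt * (u - accel)              # lag toward the command
    velocity = velocity + dt * accel
    headway = headway - dt * (velocity - v_ref)   # gap shrinks when faster than lead
    return headway, velocity, accel

# Regulate headway, velocity, and acceleration to desired values
# from an offset initial condition (60 s of simulated time):
h, v, a = 40.0, 10.0, 0.0
for _ in range(600):
    h, v, a = regulation_step(h, v, a, h_ref=25.0, v_ref=15.0)
```

For this gain choice the closed-loop characteristic polynomial is Hurwitz, so all three errors decay; the learned controller in the work above obtains such a gain from data rather than by hand.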
Development of an Adaptive Model Predictive Control for Platooning Safety in Battery Electric Vehicles
The recent and continuous improvement in the transportation field provides several opportunities for enhancing safety and comfort in passenger vehicles. In this context, Adaptive Cruise Control (ACC) can provide additional benefits, including smoother traffic flow and collision avoidance. In addition, Vehicle-to-Vehicle (V2V) communication may be exploited in the car-following model to obtain further improvements in safety and comfort by guaranteeing fast response to critical events. In this paper, an Adaptive Model Predictive Control was first developed for managing the Cooperative ACC scenario of two vehicles; as a second step, the safety analysis during a cut-in maneuver was performed, extending the platoon to four vehicles. The effectiveness of the proposed methodology was assessed in different driving scenarios, such as diverse cruising speeds, steep accelerations, and aggressive decelerations. Moreover, the controller was validated by considering various speed profiles of the leader vehicle, including a real drive cycle obtained using a random drive cycle generator software. Results demonstrated that the proposed control strategy was capable of ensuring safety in virtually all test cases and quickly responding to unexpected cut-in maneuvers. Different scenarios were tested, including acceleration and deceleration phases at high speeds, where the control strategy successfully avoided any collision and stabilized the vehicle platoon approximately 20–30 s after the sudden cut-in. Concerning comfort, improvements were demonstrated in the aggressive drive cycle, whereas in the random cycle the outcome depended on where the cut-in maneuver occurred.
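The receding-horizon spacing logic behind such an MPC can be sketched in a much simplified form: enumerate a small candidate set of accelerations held over a short horizon, simulate the gap dynamics, and reject any candidate that leads to a collision. The constant time-gap policy, candidate set, weights, and names are illustrative stand-ins, not the paper's adaptive MPC formulation:

```python
def mpc_accel(gap, ego_v, lead_v, dt=0.1, horizon=20,
              time_gap=1.5, standstill=5.0,
              candidates=(-3.0, -1.5, 0.0, 1.5, 3.0)):
    """Pick the candidate acceleration minimizing spacing error and control
    effort over the horizon, with collision as a hard constraint."""
    best_u, best_cost = 0.0, float("inf")
    for u in candidates:
        g, ve, cost = gap, ego_v, 0.0
        for _ in range(horizon):
            ve = max(0.0, ve + u * dt)        # ego speed cannot go negative
            g += (lead_v - ve) * dt           # lead assumed at constant speed
            if g <= 0.0:                      # hard safety constraint
                cost = float("inf")
                break
            desired = standstill + time_gap * ve   # constant time-gap policy
            cost += (g - desired) ** 2 + 0.1 * u ** 2
        if cost < best_cost:
            best_u, best_cost = u, cost
    return best_u
```

In a cut-in scenario the gap suddenly shrinks, the no-braking candidates predict a collision inside the horizon and are discarded, and the controller brakes, which is the qualitative behavior the safety analysis above verifies for the full formulation.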
Enhancing the performance of a safe controller via supervised learning for truck lateral control
Correct-by-construction techniques, such as control barrier functions (CBFs),
can be used to guarantee closed-loop safety by acting as a supervisor of an
existing or legacy controller. However, supervisory-control intervention
typically compromises the performance of the closed-loop system. On the other
hand, machine learning has been used to synthesize controllers that inherit
good properties from a training dataset, though safety is typically not
guaranteed due to the difficulty of analyzing the associated neural network. In
this paper, supervised learning is combined with CBFs to synthesize controllers
that enjoy good performance with provable safety. A training set is generated
by trajectory optimization that incorporates the CBF constraint for an
interesting range of initial conditions of the truck model. A control policy is
obtained via supervised learning that maps a feature representing the initial
conditions to a parameterized desired trajectory. The learning-based controller
is used as the performance controller and a CBF-based supervisory controller
guarantees safety. A case study of lane keeping for articulated trucks shows
that the controller trained by supervised learning inherits the good
performance of the training set and rarely requires intervention by the CBF
supervisor.

Comment: Submitted to IEEE Transactions on Control Systems Technology
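The supervisory architecture (a learned performance controller whose commands pass through a CBF filter) can be sketched in one dimension. With single-integrator lateral dynamics y' = u and barriers h1 = y_max - y, h2 = y + y_max, the minimal-intervention CBF quadratic program reduces to clamping the command; every name and gain below is an illustrative stand-in for the paper's articulated-truck model:

```python
def cbf_supervisor(u_nominal, y, y_max=1.8, alpha=2.0):
    """Return the input closest to u_nominal satisfying both CBF conditions
    h_i' >= -alpha * h_i; for y' = u these are just box bounds on u."""
    u_hi = alpha * (y_max - y)      # from h1 = y_max - y
    u_lo = -alpha * (y + y_max)     # from h2 = y + y_max
    return min(max(u_nominal, u_lo), u_hi)

def learned_policy(y, y_ref=0.0, k=1.5):
    # Stand-in for the supervised-learning performance controller
    # (a simple proportional law here).
    return k * (y_ref - y)

# Closed loop with a deliberately unsafe reference beyond the lane bound:
y, dt = 1.0, 0.05
for _ in range(200):
    y += dt * cbf_supervisor(learned_policy(y, y_ref=3.0), y)
```

When the nominal command is already safe the supervisor passes it through unchanged, which is why a well-trained performance controller, as reported above, rarely triggers intervention.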