
    Longitudinal Dynamic versus Kinematic Models for Car-Following Control Using Deep Reinforcement Learning

    The majority of current studies on autonomous vehicle control via deep reinforcement learning (DRL) use point-mass kinematic models, neglecting vehicle dynamics such as acceleration delay and acceleration command dynamics. The acceleration delay, which stems from sensing and actuation latency, causes the control inputs to be executed late. The acceleration command dynamics mean that the actual vehicle acceleration does not rise to the commanded acceleration instantaneously. In this work, we investigate the feasibility of applying DRL controllers trained with vehicle kinematic models to more realistic driving control with vehicle dynamics. We consider a particular longitudinal car-following control problem, Adaptive Cruise Control (ACC), solved via DRL using a point-mass kinematic model. When such a controller is applied to car following with vehicle dynamics, we observe significantly degraded car-following performance. We therefore redesign the DRL framework to accommodate the acceleration delay and the acceleration command dynamics by adding the delayed control inputs and the actual vehicle acceleration, respectively, to the reinforcement learning environment state. The training results show that the redesigned DRL controller achieves near-optimal car-following performance with vehicle dynamics considered, when compared with dynamic programming solutions.
    Comment: Accepted to the 2019 IEEE Intelligent Transportation Systems Conference
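
    As a minimal sketch (not the authors' code), the redesign described above can be illustrated by a longitudinal plant with an actuation-delay buffer and first-order acceleration command dynamics, whose observation vector is augmented with the pending delayed commands and the actual acceleration. The constants DT, DELAY_STEPS, and TAU_LAG are illustrative assumptions.

    ```python
    import numpy as np

    DT = 0.1          # control period [s] (assumed)
    DELAY_STEPS = 4   # actuation delay of 0.4 s, in steps (assumed)
    TAU_LAG = 0.5     # time constant of the command dynamics [s] (assumed)

    class LongitudinalPlant:
        """Point-mass longitudinal model with actuation delay and
        first-order acceleration command dynamics."""
        def __init__(self):
            self.v = 0.0                         # ego speed [m/s]
            self.a = 0.0                         # actual acceleration [m/s^2]
            self.cmd_buf = [0.0] * DELAY_STEPS   # pending (delayed) commands

        def step(self, a_cmd):
            # A command issued now takes effect DELAY_STEPS later.
            self.cmd_buf.append(a_cmd)
            delayed_cmd = self.cmd_buf.pop(0)
            # First-order lag: actual acceleration approaches the command.
            self.a += DT * (delayed_cmd - self.a) / TAU_LAG
            self.v += DT * self.a
            return self.v, self.a

        def observation(self, gap, v_lead):
            # Redesigned RL state: kinematic quantities plus the delayed
            # control inputs and the actual acceleration.
            return np.array([gap, v_lead - self.v, self.v, self.a,
                             *self.cmd_buf])
    ```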

    Cooperative Adaptive Cruise Control Based on Reinforcement Learning for Heavy-Duty BEVs

    Advanced driver assistance systems (ADAS) are playing an increasingly important role in supporting the driver and creating safer, more efficient driving conditions. Among all ADAS, adaptive cruise control (ACC) is a system that provides consistent assistance, especially in highway driving: it guarantees safety by automatically adjusting the vehicle's velocity and maintaining the correct spacing, thereby minimizing the risk of collision due to speed variations of the vehicle in front. In theory, this type of system also makes it possible to optimize road throughput, increasing capacity and reducing traffic congestion. In practice, however, the current generation of ACC systems does not guarantee the so-called string stability of a vehicle platoon and can therefore lead to an actual decrease in traffic capacity. To overcome these issues, new cooperative adaptive cruise control (CACC) systems are being proposed that exploit vehicle-to-vehicle (V2V) connectivity, which can provide additional safety and robustness guarantees and introduce the possibility of concretely improving traffic flow stability.
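
    A minimal sketch (not from the paper) of the contrast the abstract draws: a plain ACC law reacts only to the measured gap and relative speed, while a CACC law adds a feedforward term from the predecessor's acceleration received over V2V, which is what helps damp disturbances along the platoon. The gains KP, KD, KA, the time gap H, and the standstill distance D0 are assumed values.

    ```python
    H = 1.2    # desired time gap [s] (assumed)
    D0 = 5.0   # standstill distance [m] (assumed)
    KP, KD, KA = 0.3, 0.6, 1.0   # illustrative gains

    def acc_command(gap, v_ego, v_lead):
        """ACC: react only to measured gap and relative speed."""
        gap_err = gap - (D0 + H * v_ego)
        return KP * gap_err + KD * (v_lead - v_ego)

    def cacc_command(gap, v_ego, v_lead, a_lead_v2v):
        """CACC: add a feedforward term from the predecessor's
        acceleration broadcast over V2V, helping damp speed
        disturbances as they propagate down the platoon."""
        return acc_command(gap, v_ego, v_lead) + KA * a_lead_v2v
    ```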

    Acceleration control strategy for Battery Electric Vehicle based on Deep Reinforcement Learning in V2V driving

    The transportation sector is witnessing the rise of one of its most promising technologies: autonomous driving (AD). In particular, Cooperative Adaptive Cruise Control (CACC) systems offer higher levels of both safety and comfort while also reducing energy consumption. In this framework, a real-time velocity planner for a Battery Electric Vehicle has been developed, based on a Deep Reinforcement Learning algorithm called Deep Deterministic Policy Gradient (DDPG). It aims to maximize energy savings and improve comfort by exchanging information on distance, speed, and acceleration through vehicle-to-vehicle (V2V) technology. The DDPG algorithm relies on a multi-objective reward function that adapts to different driving cycles. Simulation results show that the agent performs well both on standard cycles, such as WLTP, UDDS, and AUDC, and on real-world driving cycles. Moreover, it displays great adaptability to driving cycles different from the one used for training.
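
    A minimal sketch (an assumption, not the paper's actual reward) of what a multi-objective reward for such a planner could look like, trading off battery power draw, comfort (squared jerk), and tracking of a V2V-informed spacing target. The weights W_E, W_C, W_G and the spacing parameters are illustrative.

    ```python
    W_E, W_C, W_G = 1.0, 0.5, 0.8   # illustrative objective weights
    H, D0 = 1.5, 4.0                # assumed time gap [s] and standstill distance [m]

    def reward(p_batt_kw, jerk, gap, v_ego):
        """Higher is better: penalize battery power draw, jerk, and
        deviation from the desired V2V-informed spacing."""
        gap_err = gap - (D0 + H * v_ego)
        return -(W_E * max(p_batt_kw, 0.0)   # energy consumption
                 + W_C * jerk ** 2           # comfort (squared jerk)
                 + W_G * gap_err ** 2)       # spacing tracking
    ```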

    Weakly Supervised Reinforcement Learning for Autonomous Highway Driving via Virtual Safety Cages

    The use of neural networks and reinforcement learning has become increasingly popular in autonomous vehicle control. However, the opaqueness of the resulting control policies presents a significant barrier to deploying neural network-based control in autonomous vehicles. In this paper, we present a reinforcement learning based approach to autonomous vehicle longitudinal control, where rule-based safety cages provide enhanced safety for the vehicle as well as weak supervision to the reinforcement learning agent. By guiding the agent toward meaningful states and actions, this weak supervision improves convergence during training and enhances the safety of the final trained policy. The rule-based supervisory controller has the further advantage of being fully interpretable, enabling traditional validation and verification approaches to ensure the safety of the vehicle. We compare models with and without safety cages, as well as models with optimal and constrained model parameters, and show that the weak supervision consistently improves the safety of exploration, the speed of convergence, and model performance. Additionally, we show that when the model parameters are constrained or sub-optimal, the safety cages can enable a model to learn a safe driving policy even when the model could not be trained to drive through reinforcement learning alone.
    Comment: Published in Sensors
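
    A minimal sketch (illustrative, not the paper's implementation) of a rule-based safety cage wrapped around an RL longitudinal policy: when the time-to-collision falls below a threshold, the cage overrides the agent's action with hard braking, and the intervention flag can double as a weak supervision signal during training. TTC_MIN and A_BRAKE are assumed values.

    ```python
    TTC_MIN = 2.0    # minimum allowed time-to-collision [s] (assumed)
    A_BRAKE = -6.0   # override deceleration [m/s^2] (assumed)

    def safety_cage(action, gap, v_ego, v_lead):
        """Return (possibly overridden action, intervention flag)."""
        closing_speed = v_ego - v_lead
        if closing_speed > 0.0 and gap / closing_speed < TTC_MIN:
            return A_BRAKE, True   # cage intervenes: emergency braking
        return action, False       # agent's action passes through
    ```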