36 research outputs found

    Navigating Occluded Intersections with Autonomous Vehicles using Deep Reinforcement Learning

    Providing an efficient strategy to navigate safely through unsignaled intersections is a difficult task that requires determining the intent of other drivers. We explore the effectiveness of Deep Reinforcement Learning for handling intersection problems. Using recent advances in Deep RL, we learn policies that surpass the performance of a commonly-used heuristic approach on several metrics, including task completion time and goal success rate, although they show only a limited ability to generalize. We then explore the system's ability to learn active sensing behaviors that enable safe navigation in the presence of occlusions. Our analysis provides insight into the intersection handling problem: the solutions learned by the network point out several shortcomings of current rule-based methods, and the failures of our current deep reinforcement learning system point to future research directions.
    Comment: IEEE International Conference on Robotics and Automation (ICRA 2018)
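
    The abstract does not spell out its formulation, but the decision structure it describes can be illustrated with a toy example. The sketch below is a minimal tabular Q-learning loop over an assumed occluded-intersection MDP; the action set {wait, creep, go}, the transition model, and the rewards are illustrative assumptions, not the paper's Deep RL setup. Creeping forward plays the role of the active sensing behavior: it raises the chance of observing cross traffic before committing.

```python
import random
from collections import defaultdict

# Toy occluded-intersection MDP (an illustrative assumption, not the
# paper's environment). State = (creep_steps, gap_seen); creeping
# improves the view past the occlusion before the agent commits to "go".
ACTIONS = ["wait", "creep", "go"]

def step(state, action):
    creep, seen = state
    if action == "creep" and creep < 2:
        creep += 1
    # A better vantage point makes cross traffic more likely to be
    # observed each step (assumed sensing model).
    if not seen and random.random() < 0.3 * creep:
        seen = True
    if action == "go":
        # Entering blind risks a collision; entering on a seen gap succeeds.
        reward = 1.0 if seen or random.random() < 0.5 else -10.0
        return (creep, seen), reward, True
    return (creep, seen), -0.05, False  # small penalty for lost time

Q = defaultdict(float)

def policy(s, eps=0.1):
    if random.random() < eps:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(s, a)])

for episode in range(3000):
    s = (0, False)
    for t in range(50):  # cap episode length
        a = policy(s)
        s2, r, done = step(s, a)
        best_next = 0.0 if done else max(Q[(s2, b)] for b in ACTIONS)
        Q[(s, a)] += 0.1 * (r + 0.95 * best_next - Q[(s, a)])
        s = s2
        if done:
            break

# After training, the greedy policy creeps to gain visibility, then goes.
print(policy((0, False), eps=0.0), policy((2, True), eps=0.0))
```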

    Longitudinal Dynamic versus Kinematic Models for Car-Following Control Using Deep Reinforcement Learning

    The majority of current studies on autonomous vehicle control via deep reinforcement learning (DRL) utilize point-mass kinematic models, neglecting vehicle dynamics, which include acceleration delay and acceleration command dynamics. The acceleration delay, caused by sensing and actuation latencies, leads to delayed execution of the control inputs. The acceleration command dynamics dictate that the actual vehicle acceleration does not reach the commanded acceleration instantaneously. In this work, we investigate the feasibility of applying DRL controllers trained with vehicle kinematic models to more realistic driving control with vehicle dynamics. We consider a particular longitudinal car-following control problem, i.e., Adaptive Cruise Control (ACC), solved via DRL using a point-mass kinematic model. When such a controller is applied to car following with vehicle dynamics, we observe significantly degraded performance. We therefore redesign the DRL framework to accommodate the acceleration delay and the acceleration command dynamics by adding the delayed control inputs and the actual vehicle acceleration, respectively, to the reinforcement learning environment state. The training results show that the redesigned DRL controller achieves near-optimal car-following performance with vehicle dynamics considered, when compared with dynamic programming solutions.
    Comment: Accepted to the 2019 IEEE Intelligent Transportation Systems Conference
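
    The state redesign described above is concrete enough to sketch. Below is a minimal illustration, assuming a k-step actuation delay and a first-order lag for the acceleration command dynamics; the names, the delay length, and the time constant are assumptions rather than the paper's exact values.

```python
from collections import deque
import numpy as np

K_DELAY = 3         # assumed number of control steps in the actuation delay
TAU, DT = 0.5, 0.1  # assumed lag time constant and control period [s]

def accel_dynamics(actual, commanded):
    # First-order lag (a common simplification, assumed here): the actual
    # acceleration approaches the commanded value with time constant TAU.
    return actual + (commanded - actual) * DT / TAU

class DelayAwareState:
    """Builds the augmented RL state: kinematic car-following features
    plus the measured acceleration and the commands still 'in flight'."""

    def __init__(self):
        self.pending = deque([0.0] * K_DELAY, maxlen=K_DELAY)

    def build(self, gap, rel_speed, ego_speed, measured_accel, new_cmd):
        state = np.array(
            [gap, rel_speed, ego_speed, measured_accel, *self.pending],
            dtype=np.float32,
        )
        self.pending.append(new_cmd)  # takes effect K_DELAY steps later
        return state

builder = DelayAwareState()
s = builder.build(gap=30.0, rel_speed=-1.2, ego_speed=25.0,
                  measured_accel=0.3, new_cmd=0.5)
print(s.shape)  # (7,): 4 kinematic/dynamic features + 3 delayed commands
```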

    Graph Neural Networks and Reinforcement Learning for Behavior Generation in Semantic Environments

    Most reinforcement learning approaches used in behavior generation utilize vectorial information as input. However, this requires the network to have a pre-defined input size -- in semantic environments this means assuming a maximum number of vehicles. Additionally, such a vectorial representation is not invariant to the order and number of vehicles. To mitigate these disadvantages, we propose combining graph neural networks with actor-critic reinforcement learning. Because graph neural networks apply the same network to every vehicle and aggregate incoming edge information, they are invariant to the number and order of vehicles. This makes them ideal candidates for semantic environments -- environments consisting of object lists. Graph neural networks exhibit further advantages in this setting: the relational information is given explicitly and does not have to be inferred, and information propagated through the graph can capture higher-degree relations. We demonstrate our approach on a highway lane-change scenario and compare the performance of graph neural networks to conventional ones, showing that graph neural networks can handle scenarios with a varying number and order of vehicles during both training and application.
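
    The invariance argument can be made concrete in a few lines of PyTorch. The sketch below, with assumed layer sizes and mean aggregation (the paper's exact architecture may differ), encodes every vehicle with the same MLP and aggregates edge messages symmetrically, so the output is unaffected by the number or order of surrounding vehicles.

```python
import torch
import torch.nn as nn

class VehicleGraphEncoder(nn.Module):
    """Shared per-vehicle encoder with symmetric aggregation (a minimal
    sketch; layer sizes and mean pooling are assumptions)."""

    def __init__(self, feat_dim=4, hidden=64):
        super().__init__()
        self.node_mlp = nn.Sequential(nn.Linear(feat_dim, hidden), nn.ReLU())
        self.edge_mlp = nn.Sequential(nn.Linear(2 * hidden, hidden), nn.ReLU())

    def forward(self, ego_feats, other_feats):
        # ego_feats: (feat_dim,), other_feats: (n_vehicles, feat_dim)
        ego = self.node_mlp(ego_feats)
        others = self.node_mlp(other_feats)  # same weights for every vehicle
        pairs = torch.cat([ego.expand_as(others), others], dim=-1)
        messages = self.edge_mlp(pairs)      # one message per incoming edge
        return messages.mean(dim=0)          # symmetric, so order/count invariant

enc = VehicleGraphEncoder()
h3 = enc(torch.randn(4), torch.randn(3, 4))  # 3 surrounding vehicles
h7 = enc(torch.randn(4), torch.randn(7, 4))  # 7 vehicles, same output shape
```

    Calling the encoder with three or with seven surrounding vehicles yields an embedding of the same shape, which is what lets a single trained policy cope with varying traffic.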

    Intelligent Roundabout Insertion using Deep Reinforcement Learning

    An important topic in autonomous driving research is the development of maneuver planning systems. Vehicles have to interact and negotiate with each other so that optimal choices, in terms of time and safety, are made. For this purpose, we present a maneuver planning module able to negotiate entry into busy roundabouts. The proposed module is based on a neural network trained to predict when and how to enter the roundabout throughout the whole duration of the maneuver. Our model is trained with a novel implementation of A3C, which we call Delayed A3C (D-A3C), in a synthetic environment where vehicles move in a realistic manner and are able to interact with each other. In addition, the system is trained such that each agent features a unique, tunable behavior, emulating real-world scenarios where drivers have their own driving styles. Similarly, the maneuver can be performed at different aggressiveness levels, which is particularly useful for managing busy scenarios where conservative rule-based policies would result in indefinite waits.
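
    One simple way to realize the tunable behavior described above is to condition the policy on an aggressiveness scalar sampled once per training episode. The sketch below illustrates that idea; the observation layout, the [0, 1] range, and the reward shaping are assumptions, not the paper's D-A3C formulation.

```python
import numpy as np

def make_observation(sensor_readings, aggressiveness):
    # The behavior knob is simply appended to the policy input, so a
    # single network learns the whole spectrum of driving styles.
    return np.concatenate([sensor_readings, [aggressiveness]])

def shaped_reward(progress, time_penalty, aggressiveness):
    # Higher aggressiveness discounts the cost of waiting less, pushing
    # the agent toward earlier entries (one plausible shaping; the
    # paper's exact reward is not reproduced here).
    return progress - (1.0 - aggressiveness) * time_penalty

rng = np.random.default_rng(0)
aggr = rng.uniform(0.0, 1.0)  # sampled once per training episode
obs = make_observation(rng.normal(size=8), aggr)
print(obs.shape)  # (9,): 8 assumed sensor features + 1 behavior scalar
```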

    Deep Reinforcement Learning for Supply Chain Synchronization

    Supply chain synchronization can prevent the “bullwhip effect” and significantly mitigate ripple effects caused by operational failures. This paper demonstrates how deep reinforcement learning agents based on the proximal policy optimization algorithm can synchronize inbound and outbound flows if end-to-end visibility is provided. The paper concludes that the proposed solution has the potential to perform adaptive control in complex supply chains. Furthermore, the proposed approach is general, task-unspecific, and adaptive in the sense that no prior knowledge about the system is required.
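
    As a rough illustration of what synchronizing inbound and outbound flows under end-to-end visibility can look like as a reinforcement learning environment, here is a minimal single-echelon sketch; the lead time, demand model, and cost weights are assumptions, not the paper's case study. A proximal policy optimization agent would be trained to choose order_qty (a hypothetical action name) from the returned observation.

```python
import numpy as np

class EchelonEnv:
    """Toy single-echelon inventory environment (an assumed setup)."""

    def __init__(self, lead_time=2):
        self.rng = np.random.default_rng(0)
        self.lead_time = lead_time
        self.reset()

    def reset(self):
        self.stock = 20.0
        self.pipeline = [0.0] * self.lead_time  # inbound orders in transit
        return self._obs(demand=0.0)

    def _obs(self, demand):
        # End-to-end visibility: local stock, in-transit units, and the
        # downstream demand signal are all exposed to the agent.
        return np.array([self.stock, *self.pipeline, demand], dtype=np.float32)

    def step(self, order_qty):
        demand = float(self.rng.poisson(10))
        self.stock += self.pipeline.pop(0)      # oldest delivery arrives
        self.pipeline.append(float(order_qty))  # new inbound order placed
        shipped = min(self.stock, demand)
        self.stock -= shipped
        # Penalize holding and lost sales; synchronized inbound/outbound
        # flows keep both terms small.
        reward = -(0.1 * self.stock + 1.0 * (demand - shipped))
        return self._obs(demand), reward

env = EchelonEnv()
obs = env.reset()
obs, r = env.step(order_qty=10.0)
```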

    Controlling an Autonomous Vehicle with Deep Reinforcement Learning

    We present a control approach for autonomous vehicles based on deep reinforcement learning. A neural network agent is trained to map its estimated state to acceleration and steering commands, given the objective of reaching a specific target state while considering detected obstacles. Learning is performed using state-of-the-art proximal policy optimization in combination with a simulated environment; training from scratch takes five to nine hours. The resulting agent is evaluated within simulation and subsequently applied to control a full-size research vehicle. For this, the autonomous exploration of a parking lot is considered, including turning maneuvers and obstacle avoidance. Altogether, this work is among the first examples of successfully applying deep reinforcement learning to a real vehicle.
    Comment: Awarded Best Student Paper at the IEEE Intelligent Vehicles Symposium (IV), 2019
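
    The state-to-command mapping described above can be sketched as a Gaussian policy head of the kind proximal policy optimization uses for continuous control; the layer sizes and the 16-dimensional state layout below are assumptions, not the authors' network.

```python
import torch
import torch.nn as nn

class DrivingPolicy(nn.Module):
    """Gaussian policy emitting [acceleration, steering] (a sketch with
    assumed dimensions, not the paper's architecture)."""

    def __init__(self, state_dim=16):
        super().__init__()
        self.body = nn.Sequential(
            nn.Linear(state_dim, 128), nn.Tanh(),
            nn.Linear(128, 128), nn.Tanh(),
        )
        self.mu = nn.Linear(128, 2)               # [acceleration, steering]
        self.log_std = nn.Parameter(torch.zeros(2))

    def forward(self, state):
        mean = torch.tanh(self.mu(self.body(state)))  # bounded command means
        return torch.distributions.Normal(mean, self.log_std.exp())

# The estimated state would combine ego pose/velocity, target state, and
# detected-obstacle features (an assumed layout).
policy = DrivingPolicy()
dist = policy(torch.randn(16))
accel_cmd, steer_cmd = dist.sample()
```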