Conditional Affordance Learning for Driving in Urban Environments
Most existing approaches to autonomous driving fall into one of two
categories: modular pipelines, which build an extensive model of the
environment, and imitation learning approaches, which map images directly to
control outputs. A recently proposed third paradigm, direct perception, aims to
combine the advantages of both by using a neural network to learn appropriate
low-dimensional intermediate representations. However, existing direct
perception approaches are restricted to simple highway situations, lacking the
ability to navigate intersections, stop at traffic lights or respect speed
limits. In this work, we propose a direct perception approach which maps video
input to intermediate representations suitable for autonomous navigation in
complex urban environments given high-level directional inputs. Compared to
state-of-the-art reinforcement and conditional imitation learning approaches,
we achieve an improvement of up to 68% in goal-directed navigation on the
challenging CARLA simulation benchmark. In addition, our approach is the first
to handle traffic lights and speed signs by using image-level labels only, as
well as smooth car-following, resulting in a significant reduction of traffic
accidents in simulation.
Comment: Accepted at the Conference on Robot Learning (CoRL) 2018
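The direct-perception pipeline described above splits driving into two stages: a network regresses a handful of low-dimensional affordances from video, and a classical controller consumes them. A minimal sketch of the control side follows; the affordance names, thresholds, and gains here are illustrative stand-ins, not the paper's actual controller:

```python
# Toy longitudinal controller over direct-perception affordances.
# All names and constants are hypothetical, for illustration only.

def longitudinal_control(affordances):
    """Map low-dimensional affordances to a throttle/brake command."""
    # Stop for a red light regardless of other state.
    if affordances["red_light"]:
        return {"throttle": 0.0, "brake": 1.0}
    # Keep a safe gap to the lead vehicle (smooth car-following).
    if affordances["distance_to_lead_m"] < 10.0:
        return {"throttle": 0.0, "brake": 0.5}
    # Otherwise track the speed limit with a simple proportional term.
    err = affordances["speed_limit_mps"] - affordances["speed_mps"]
    throttle = max(0.0, min(1.0, 0.1 * err))
    return {"throttle": throttle, "brake": 0.0}

print(longitudinal_control(
    {"red_light": False, "distance_to_lead_m": 50.0,
     "speed_limit_mps": 8.3, "speed_mps": 5.0}))
```

Because the controller is hand-designed, behaviors such as stopping at lights or respecting speed limits follow directly from the predicted affordances rather than being learned end-to-end.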
Virtual to Real Reinforcement Learning for Autonomous Driving
Reinforcement learning is considered a promising direction for driving
policy learning. However, training an autonomous vehicle with
reinforcement learning in a real environment involves unaffordable
trial and error. It is more desirable to first train in a virtual environment
and then transfer to the real environment. In this paper, we propose a novel
realistic translation network that makes a model trained in a virtual
environment workable in the real world. The proposed network converts
non-realistic virtual image input into a realistic image with a similar scene
structure. Given realistic frames as input, a driving policy trained by
reinforcement learning can adapt to real-world driving. Experiments show that
our proposed virtual-to-real (VR) reinforcement learning (RL) approach works
well. To our knowledge, this is the first successful case of a driving policy
trained by reinforcement learning that can adapt to real-world driving data.
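At deployment time the transfer scheme amounts to composing two learned functions: a translation network that renders frames realistic, and a policy trained on realistic frames. The sketch below shows only that composition; both functions are stand-in stubs (the real translator and policy are neural networks):

```python
def translate(virtual_frame):
    """Stand-in for the realistic translation network: here it only
    relabels the frame's domain; the real model is an image-to-image
    network that preserves scene structure."""
    return {"pixels": virtual_frame["pixels"], "domain": "realistic"}

def policy(frame):
    """Stand-in driving policy trained on realistic frames."""
    assert frame["domain"] == "realistic", "policy expects realistic input"
    return {"steer": 0.0, "throttle": 0.5}

def drive(virtual_frame):
    # Virtual-to-real transfer: translate first, then act.
    return policy(translate(virtual_frame))

print(drive({"pixels": [0, 1, 2], "domain": "virtual"}))
```

The design point is that the policy never sees raw virtual imagery at test time; the translator closes the domain gap before control is computed.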
End-to-end Driving via Conditional Imitation Learning
Deep networks trained on demonstrations of human driving have learned to
follow roads and avoid obstacles. However, driving policies trained via
imitation learning cannot be controlled at test time. A vehicle trained
end-to-end to imitate an expert cannot be guided to take a specific turn at an
upcoming intersection. This limits the utility of such systems. We propose to
condition imitation learning on high-level command input. At test time, the
learned driving policy functions as a chauffeur that handles sensorimotor
coordination but continues to respond to navigational commands. We evaluate
different architectures for conditional imitation learning in vision-based
driving. We conduct experiments in realistic three-dimensional simulations of
urban driving and on a 1/5 scale robotic truck that is trained to drive in a
residential area. Both systems drive based on visual input yet remain
responsive to high-level navigational commands. The supplementary video can be
viewed at https://youtu.be/cFtnflNe5fM
Comment: Published at the International Conference on Robotics and Automation
(ICRA), 2018
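Conditioning on a high-level command is commonly realized as a branched network: a shared perception trunk produces features, and the command selects which output head maps those features to controls. A minimal sketch with a hypothetical feature extractor and linear heads (the actual architectures in the paper are convolutional networks):

```python
# Branched conditional policy: the navigational command selects an
# output head over shared features. Weights are illustrative only.

FEATURE_DIM = 3

BRANCHES = {
    "left":     [0.8, 0.0, 0.1],   # per-command steering weights
    "right":    [-0.8, 0.0, 0.1],
    "straight": [0.0, 0.0, 0.0],
}

def shared_trunk(observation):
    """Stand-in for the perception network: returns a feature vector."""
    return observation[:FEATURE_DIM]

def conditional_policy(observation, command):
    features = shared_trunk(observation)
    weights = BRANCHES[command]          # command picks the branch
    return sum(w * f for w, f in zip(weights, features))

print(conditional_policy([1.0, 0.5, 0.2], "left"))
```

Branching keeps sensorimotor coordination shared across commands while letting the same observation yield different actions, which is exactly what an unconditional imitation policy cannot do.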
Exploring the Limitations of Behavior Cloning for Autonomous Driving
Driving requires reacting to a wide variety of complex environment conditions
and agent behaviors. Explicitly modeling each possible scenario is unrealistic.
In contrast, imitation learning can, in theory, leverage data from large fleets
of human-driven cars. Behavior cloning in particular has been successfully used
to learn simple visuomotor policies end-to-end, but scaling to the full
spectrum of driving behaviors remains an unsolved problem. In this paper, we
propose a new benchmark to experimentally investigate the scalability and
limitations of behavior cloning. We show that behavior cloning leads to
state-of-the-art results, including in unseen environments, executing complex
lateral and longitudinal maneuvers without these reactions being explicitly
programmed. However, we confirm well-known limitations (due to dataset bias and
overfitting), identify new generalization issues (due to dynamic objects and
the lack of a causal model), and observe training instability, all of which
require further research before behavior cloning can graduate to real-world
driving. The code of the studied
behavior cloning approaches can be found at
https://github.com/felipecode/coiltraine
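Behavior cloning reduces policy learning to supervised regression: minimize the error between the policy's action and the expert's demonstrated action over recorded pairs. A minimal sketch of that objective with a stand-in linear policy (the studied approaches train deep networks on images):

```python
# Behavior-cloning objective: mean squared error between predicted
# and demonstrated actions over (observation, action) pairs.

def predict(weights, obs):
    """Stand-in linear policy; the actual policies are deep networks."""
    return sum(w * o for w, o in zip(weights, obs))

def bc_loss(weights, demos):
    errs = [(predict(weights, obs) - act) ** 2 for obs, act in demos]
    return sum(errs) / len(errs)

demos = [([1.0, 0.0], 0.5), ([0.0, 1.0], -0.5)]
print(bc_loss([0.5, -0.5], demos))
```

Because the loss only matches actions on states the expert visited, it says nothing about states the learned policy drifts into, which is one source of the dataset-bias and causal-confusion issues the benchmark investigates.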