End-to-end Driving via Conditional Imitation Learning
Deep networks trained on demonstrations of human driving have learned to
follow roads and avoid obstacles. However, driving policies trained via
imitation learning cannot be controlled at test time. A vehicle trained
end-to-end to imitate an expert cannot be guided to take a specific turn at an
upcoming intersection. This limits the utility of such systems. We propose to
condition imitation learning on high-level command input. At test time, the
learned driving policy functions as a chauffeur that handles sensorimotor
coordination but continues to respond to navigational commands. We evaluate
different architectures for conditional imitation learning in vision-based
driving. We conduct experiments in realistic three-dimensional simulations of
urban driving and on a 1/5 scale robotic truck that is trained to drive in a
residential area. Both systems drive based on visual input yet remain
responsive to high-level navigational commands. The supplementary video can be
viewed at https://youtu.be/cFtnflNe5fM
Comment: Published at the International Conference on Robotics and Automation
(ICRA), 2018
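The command-conditional policy this abstract describes is commonly realized as a branched network: a shared perception module produces a feature vector, and the high-level command selects which output head maps those features to controls. Below is a minimal NumPy sketch of that selection mechanism under these assumptions; the function names, shapes, and command encoding are illustrative, not the paper's actual architecture.

```python
import numpy as np

def conditional_policy(features, command, branch_heads):
    """Select a command-specific output head.

    features: perception feature vector (e.g. from a CNN backbone)
    command: high-level navigational command index, e.g.
             0 = follow lane, 1 = turn left, 2 = go straight, 3 = turn right
    branch_heads: one linear head (W, b) per command, each mapping
                  the shared features to a control vector
                  such as [steering, throttle]
    """
    W, b = branch_heads[command]
    return W @ features + b

# Toy setup: 4 commands, 3-dim features, 2-dim controls.
rng = np.random.default_rng(0)
heads = [(rng.standard_normal((2, 3)), np.zeros(2)) for _ in range(4)]

feats = np.ones(3)
left = conditional_policy(feats, 1, heads)    # "turn left" head
right = conditional_policy(feats, 3, heads)   # "turn right" head
# The same perception features yield different controls depending
# on the command, which is what makes the policy steerable at test time.
```

The key design point is that gradients during training only flow through the head matching the logged command, so each branch specializes to one maneuver while the perception module stays shared.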
On Offline Evaluation of Vision-based Driving Models
Autonomous driving models should ideally be evaluated by deploying them on a
fleet of physical vehicles in the real world. Unfortunately, this approach is
not practical for the vast majority of researchers. An attractive alternative
is to evaluate models offline, on a pre-collected validation dataset with
ground truth annotation. In this paper, we investigate the relation between
various online and offline metrics for evaluation of autonomous driving models.
We find that offline prediction error is not necessarily correlated with
driving quality, and two models with identical prediction error can differ
dramatically in their driving performance. We show that the correlation of
offline evaluation with driving quality can be significantly improved by
selecting an appropriate validation dataset and suitable offline metrics. The
supplementary video can be viewed at
https://www.youtube.com/watch?v=P8K8Z-iF0cY
Comment: Published at the ECCV 2018 conference
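The core finding, that two models with identical offline prediction error can drive very differently, is easy to illustrate numerically: a model that is slightly wrong everywhere and a model that is perfect except for a few large mistakes can share the same mean absolute error, yet only the second misses turns. The sketch below uses a hypothetical thresholded error rate as an example of a metric that separates the two; it is an illustration of the abstract's point, not the metrics proposed in the paper.

```python
import numpy as np

def mean_absolute_error(pred, gt):
    """Standard offline metric: average steering error per frame."""
    return float(np.mean(np.abs(pred - gt)))

def large_error_rate(pred, gt, tau=0.1):
    """Fraction of frames whose error exceeds tau. Rare large errors
    (e.g. a missed turn) matter more for driving quality than many
    tiny ones, so a thresholded metric can track online performance
    better than plain MAE."""
    return float(np.mean(np.abs(pred - gt) > tau))

gt = np.zeros(100)                 # ground-truth steering (toy data)
pred_a = np.full(100, 0.02)        # model A: small error on every frame
pred_b = np.zeros(100)             # model B: perfect ...
pred_b[:4] = 0.5                   # ... except 4 large mistakes

mae_a = mean_absolute_error(pred_a, gt)   # 0.02
mae_b = mean_absolute_error(pred_b, gt)   # 0.02 -- identical MAE
ler_a = large_error_rate(pred_a, gt)      # 0.0
ler_b = large_error_rate(pred_b, gt)      # 0.04 -- B's failures exposed
```

This is why the choice of validation metric (and of validation data) matters: the aggregate error hides exactly the failure mode that determines driving quality.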
Exploring the Limitations of Behavior Cloning for Autonomous Driving
Driving requires reacting to a wide variety of complex environment conditions
and agent behaviors. Explicitly modeling each possible scenario is unrealistic.
In contrast, imitation learning can, in theory, leverage data from large fleets
of human-driven cars. Behavior cloning in particular has been successfully used
to learn simple visuomotor policies end-to-end, but scaling to the full
spectrum of driving behaviors remains an unsolved problem. In this paper, we
propose a new benchmark to experimentally investigate the scalability and
limitations of behavior cloning. We show that behavior cloning leads to
state-of-the-art results, including in unseen environments, executing complex
lateral and longitudinal maneuvers without these reactions being explicitly
programmed. However, we confirm well-known limitations (due to dataset bias and
overfitting), new generalization issues (due to dynamic objects and the lack of
a causal model), and training instability requiring further research before
behavior cloning can graduate to real-world driving. The code of the studied
behavior cloning approaches can be found at
https://github.com/felipecode/coiltraine
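Behavior cloning, as studied above, reduces control to supervised learning: minimize the discrepancy between the policy's action and the expert's demonstrated action on logged (observation, action) pairs. A minimal sketch with a linear policy and gradient descent, purely to show the objective; the paper's models are CNNs and all names here are illustrative:

```python
import numpy as np

def bc_loss(w, observations, expert_actions):
    """Behavior-cloning objective: mean squared error between the
    policy's predicted action and the expert's action."""
    preds = observations @ w
    return float(np.mean((preds - expert_actions) ** 2))

# Toy demonstration data: a linear "expert" we try to recover.
rng = np.random.default_rng(1)
obs = rng.standard_normal((32, 4))        # logged observations
expert_w = np.array([0.5, -0.2, 0.1, 0.0])
acts = obs @ expert_w                     # expert's demonstrated actions

w = np.zeros(4)
for _ in range(200):
    # Gradient of the MSE loss with respect to the policy weights.
    grad = 2 * obs.T @ (obs @ w - acts) / len(obs)
    w -= 0.1 * grad
# On this toy problem w converges toward the expert's weights.
```

The limitations the abstract lists follow from this formulation: the loss only measures agreement on states the expert visited (dataset bias), and nothing in the objective encodes why the expert acted (no causal model), so small distribution shifts at deployment can compound.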