End-to-End Learning of Driving Models with Surround-View Cameras and Route Planners
For human drivers, rear- and side-view mirrors are vital for safe driving:
they deliver a more complete view of what is happening around the car.
Human drivers also heavily exploit their mental map for navigation.
Nonetheless, several methods have been published that learn driving models with
only a front-facing camera and without a route planner. This lack of
information renders the self-driving task quite intractable. We investigate the
problem in a more realistic setting, which consists of a surround-view camera
system with eight cameras, a route planner, and a CAN bus reader. In
particular, we develop a sensor setup that provides data for a 360-degree view
of the area surrounding the vehicle, the driving route to the destination, and
low-level driving maneuvers (e.g., steering angle and speed) executed by human drivers.
With such a sensor setup we collect a new driving dataset, covering diverse
driving scenarios and varying weather/illumination conditions. Finally, we
learn a novel driving model by integrating information from the surround-view
cameras and the route planner. Two route-planner representations are
exploited: 1) the planned route on OpenStreetMap represented as a stack of
GPS coordinates, and 2) the planned route rendered on TomTom Go Mobile and
recorded as a video. Our experiments show that: 1) 360-degree
surround-view cameras help avoid failures made with a single front-view camera,
in particular for city driving and intersection scenarios; and 2) route
planners help the driving task significantly, especially for steering angle
prediction.

Comment: to be published at ECCV 2018
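As a rough illustration of the fusion the abstract describes, below is a minimal PyTorch sketch that combines features from eight surround-view cameras with a planned route given as a stack of GPS coordinates to predict steering angle and speed. The class name, backbone layers, feature sizes, and route encoder are all illustrative assumptions for this sketch, not the paper's actual architecture.

import torch
import torch.nn as nn


class SurroundViewDrivingModel(nn.Module):
    """Fuses features from 8 surround-view cameras with a planned-route
    encoding to predict low-level maneuvers (steering angle and speed).

    Hypothetical sketch: the backbone, feature sizes, and route encoder
    are illustrative choices, not the architecture from the paper.
    """

    def __init__(self, num_cameras=8, route_points=32, feat_dim=128):
        super().__init__()
        # Shared per-camera image encoder, applied to each of the 8 views.
        self.image_encoder = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, feat_dim),
        )
        # Encodes the planned route given as a stack of GPS coordinates
        # (route_points pairs of latitude/longitude relative to the car).
        self.route_encoder = nn.Sequential(
            nn.Linear(route_points * 2, feat_dim), nn.ReLU(),
        )
        # Fusion head maps concatenated features to [steering_angle, speed].
        self.head = nn.Sequential(
            nn.Linear(num_cameras * feat_dim + feat_dim, 256), nn.ReLU(),
            nn.Linear(256, 2),
        )

    def forward(self, images, route):
        # images: (batch, num_cameras, 3, H, W); route: (batch, route_points, 2)
        b = images.shape[0]
        views = images.flatten(0, 1)                       # (b*cams, 3, H, W)
        img_feat = self.image_encoder(views).view(b, -1)   # (b, cams*feat_dim)
        route_feat = self.route_encoder(route.flatten(1))  # (b, feat_dim)
        return self.head(torch.cat([img_feat, route_feat], dim=1))


# Example forward pass on random data.
model = SurroundViewDrivingModel()
imgs = torch.randn(2, 8, 3, 128, 256)   # 8 surround-view frames per sample
route = torch.randn(2, 32, 2)           # planned route as a GPS coordinate stack
steer_speed = model(imgs, route)        # (2, 2): steering angle and speed

The second route representation in the paper, a rendered TomTom Go Mobile navigation video, would instead be fed through an image encoder like the camera streams rather than the coordinate MLP sketched here.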