VIENA2: A Driving Anticipation Dataset
Action anticipation is critical in scenarios where one needs to react before
the action is finalized. This is, for instance, the case in automated driving,
where a car needs to, e.g., avoid hitting pedestrians and respect traffic
lights. While solutions have been proposed to tackle subsets of the driving
anticipation tasks, by making use of diverse, task-specific sensors, there is
no single dataset or framework that addresses them all in a consistent manner.
In this paper, we therefore introduce a new, large-scale dataset, called
VIENA2, covering 5 generic driving scenarios, with a total of 25 distinct
action classes. It contains more than 15K full HD, 5s long videos acquired in
various driving conditions, weather, times of day, and environments, complemented
with a common and realistic set of sensor measurements. This amounts to more
than 2.25M frames, each annotated with an action label, corresponding to 600
samples per action class. We discuss our data acquisition strategy and the
statistics of our dataset, and benchmark state-of-the-art action anticipation
techniques, including a new multi-modal LSTM architecture with an effective
loss function for action anticipation in driving scenarios.
Comment: Accepted in ACCV 201
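The abstract does not spell out the anticipation loss. As a rough sketch of one common choice for this setting, the snippet below implements a time-weighted cross-entropy that penalizes wrong per-frame predictions more heavily as more of the video is observed; the linear weighting and all names here are assumptions, not the paper's exact formulation:

```python
import numpy as np

def anticipation_loss(frame_logits, true_class):
    """Time-weighted cross-entropy over per-frame class scores.

    frame_logits: (T, C) array, one row of logits per observed frame.
    true_class:   index of the ground-truth action.
    Early frames get a small weight and late frames a large one, so the
    model is pushed to commit to the right action as evidence accumulates.
    (Hypothetical linear weighting; the paper's loss may differ.)
    """
    T = frame_logits.shape[0]
    weights = np.arange(1, T + 1) / T  # t/T ramps from 1/T up to 1
    # Log-softmax per frame, computed in a numerically stable way.
    shifted = frame_logits - frame_logits.max(axis=1, keepdims=True)
    log_probs = shifted - np.log(np.exp(shifted).sum(axis=1, keepdims=True))
    nll = -log_probs[:, true_class]  # per-frame negative log-likelihood
    return float((weights * nll).sum() / weights.sum())
```

Under this weighting, a late mistake costs more than an early one, which matches the anticipation goal of deciding correctly before the action completes.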
LIDAR-based Driving Path Generation Using Fully Convolutional Neural Networks
In this work, a novel learning-based approach has been developed to generate
driving paths by integrating LIDAR point clouds, GPS-IMU information, and
Google driving directions. The system is based on a fully convolutional neural
network that jointly learns to carry out perception and path generation from
real-world driving sequences and that is trained using automatically generated
training examples. Several combinations of input data were tested in order to
assess the performance gain provided by specific information modalities. The
fully convolutional neural network trained using all the available sensors
together with driving directions achieved the best MaxF score of 88.13% when
considering a region of interest of 60x60 meters. By considering a smaller
region of interest, the agreement between predicted paths and ground-truth
increased to 92.60%. The positive results obtained in this work indicate that
the proposed system may help fill the gap between low-level scene parsing and
behavior-reflex approaches by generating outputs that are close to vehicle
control and at the same time human-interpretable.
Comment: Changed title, formerly "Simultaneous Perception and Path Generation Using Fully Convolutional Neural Networks"
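The reported MaxF score is the maximum pixel-wise F1 measure over confidence thresholds, as used in road/path-detection benchmarks. A minimal sketch of that metric, assuming a flat array of per-pixel path probabilities (the benchmark's exact implementation, e.g. its threshold sweep, may differ):

```python
import numpy as np

def max_f_score(confidence, ground_truth, thresholds=None):
    """Maximum pixel-wise F1 score over a sweep of confidence thresholds.

    confidence:   flat array of per-pixel path probabilities in [0, 1].
    ground_truth: flat binary array, 1 where the pixel belongs to the path.
    At each threshold the prediction is binarized and F1 = 2PR / (P + R)
    is computed; the best value over all thresholds is returned.
    (Simplified sketch of the MaxF metric.)
    """
    if thresholds is None:
        thresholds = np.linspace(0.0, 1.0, 101)
    gt = ground_truth.astype(bool)
    best = 0.0
    for t in thresholds:
        pred = confidence >= t
        tp = np.sum(pred & gt)
        fp = np.sum(pred & ~gt)
        fn = np.sum(~pred & gt)
        if tp == 0:
            continue  # no true positives: F1 is 0 at this threshold
        precision = tp / (tp + fp)
        recall = tp / (tp + fn)
        best = max(best, 2 * precision * recall / (precision + recall))
    return best
```

Because the best threshold is chosen per evaluation, MaxF reports the operating point at which the predicted path agrees best with the ground truth.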
Event-based Vision meets Deep Learning on Steering Prediction for Self-driving Cars
Event cameras are bio-inspired vision sensors that naturally capture the
dynamics of a scene, filtering out redundant information. This paper presents a
deep neural network approach that unlocks the potential of event cameras on a
challenging motion-estimation task: prediction of a vehicle's steering angle.
To make the best out of this sensor-algorithm combination, we adapt
state-of-the-art convolutional architectures to the output of event sensors and
extensively evaluate the performance of our approach on a publicly available
large-scale event-camera dataset (~1000 km). We present qualitative and
quantitative explanations of why event cameras allow robust steering prediction
even in cases where traditional cameras fail, e.g. challenging illumination
conditions and fast motion. Finally, we demonstrate the advantages of
leveraging transfer learning from traditional to event-based vision, and show
that our approach outperforms state-of-the-art algorithms based on standard
cameras.
Comment: 9 pages, 8 figures, 6 tables. Video: https://youtu.be/_r_bsjkJTH
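To adapt convolutional architectures to event data, asynchronous events are typically accumulated into frame-like tensors before being fed to the network. The sketch below builds a simple two-channel polarity histogram, a common event-camera preprocessing step offered here as an assumption; the paper's exact input representation may differ:

```python
import numpy as np

def events_to_frame(events, height, width):
    """Accumulate a packet of events into a 2-channel histogram image.

    events: iterable of (x, y, polarity) tuples with polarity in {-1, +1}.
    Returns an array of shape (2, height, width): channel 0 counts
    positive-polarity events, channel 1 negative ones. The resulting
    frame can be consumed by a standard image CNN.
    (Hypothetical preprocessing sketch, not the paper's exact pipeline.)
    """
    frame = np.zeros((2, height, width), dtype=np.float32)
    for x, y, p in events:
        channel = 0 if p > 0 else 1
        frame[channel, int(y), int(x)] += 1.0
    return frame
```

Representations like this preserve the spatial layout and per-pixel event counts while discarding exact timestamps, which is often sufficient for short accumulation windows.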