LIDAR-Camera Fusion for Road Detection Using Fully Convolutional Neural Networks
In this work, a deep learning approach has been developed to carry out road
detection by fusing LIDAR point clouds and camera images. An unstructured and
sparse point cloud is first projected onto the camera image plane and then
upsampled to obtain a set of dense 2D images encoding spatial information.
Several fully convolutional neural networks (FCNs) are then trained to carry
out road detection, either by using data from a single sensor, or by using
three fusion strategies: early, late, and the newly proposed cross fusion.
Whereas in the first two fusion approaches, the integration of multimodal
information is carried out at a predefined depth level, the cross fusion FCN is
designed to directly learn from data where to integrate information; this is
accomplished by using trainable cross connections between the LIDAR and the
camera processing branches.
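The cross-connection idea above can be illustrated with a minimal NumPy sketch. All shapes, layer counts, and the zero initialization of the connection weights are assumptions for illustration, not the paper's actual architecture: two toy branches exchange features at every depth through scalar weights that, in a real network, would be learned.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy feature maps for the two processing branches (channels x H x W).
cam = rng.standard_normal((8, 4, 4))
lidar = rng.standard_normal((8, 4, 4))

def branch_layer(x, w):
    """Placeholder 1x1 'convolution': channel mixing followed by ReLU."""
    return np.maximum(np.einsum('oc,chw->ohw', w, x), 0.0)

depth = 3
cam_w = [rng.standard_normal((8, 8)) * 0.1 for _ in range(depth)]
lid_w = [rng.standard_normal((8, 8)) * 0.1 for _ in range(depth)]

# Cross-connection scalars (trainable in a real FCN; zero-initialized here,
# so the model starts out as two independent single-sensor branches).
a_cl = np.zeros(depth)  # lidar -> camera
a_lc = np.zeros(depth)  # camera -> lidar

for d in range(depth):
    cam_in = cam + a_cl[d] * lidar   # inject LIDAR features into camera branch
    lid_in = lidar + a_lc[d] * cam   # inject camera features into LIDAR branch
    cam = branch_layer(cam_in, cam_w[d])
    lidar = branch_layer(lid_in, lid_w[d])

fused = cam + lidar  # final fusion before a decoder head (sketch only)
print(fused.shape)
```

Because the cross weights are applied at every depth, gradient descent can effectively choose at which levels the two modalities are mixed, which is the stated motivation for cross fusion over fixed early or late fusion.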
To further highlight the benefits of using a multimodal system for road
detection, a data set consisting of visually challenging scenes was extracted
from driving sequences of the KITTI raw data set. It was then demonstrated
that, as expected, a purely camera-based FCN severely underperforms on this
data set. A multimodal system, on the other hand, is still able to provide high
accuracy. Finally, the proposed cross fusion FCN was evaluated on the KITTI
road benchmark where it achieved excellent performance, with a MaxF score of
96.03%, ranking it among the top-performing approaches.
Combining LiDAR Space Clustering and Convolutional Neural Networks for Pedestrian Detection
Pedestrian detection is an important component for the safety of autonomous
vehicles, as well as for traffic and street surveillance. There are extensive
benchmarks on this topic, and it has been shown to be a challenging problem
when applied to real use-case scenarios. In purely image-based pedestrian
detection
approaches, the state-of-the-art results have been achieved with convolutional
neural networks (CNNs), and surprisingly few detection frameworks have been built
upon multi-cue approaches. In this work, we develop a new pedestrian detector
for autonomous vehicles that exploits LiDAR data, in addition to visual
information. In the proposed approach, LiDAR data is utilized to generate
region proposals by processing the three-dimensional point cloud that it
provides. These candidate regions are then further processed by a
state-of-the-art CNN classifier that we have fine-tuned for pedestrian
detection. We have extensively evaluated the proposed detection process on the
KITTI dataset. The experimental results show that the proposed LiDAR space
clustering approach provides a very efficient way of generating region
proposals leading to higher recall rates and fewer misses for pedestrian
detection. This indicates that LiDAR data can provide auxiliary information for
CNN-based approaches.
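Generating proposals by clustering the point cloud can be sketched with a simple single-linkage Euclidean clustering pass; the `eps` and `min_pts` values below are illustrative assumptions, not the paper's parameters:

```python
import numpy as np

def euclidean_cluster(points, eps=0.5, min_pts=3):
    """Greedy single-linkage clustering: BFS over points within `eps` of
    each other. Returns a cluster label per point; -2 marks noise."""
    n = len(points)
    labels = -np.ones(n, dtype=int)
    cluster = 0
    for i in range(n):
        if labels[i] != -1:
            continue
        stack, members = [i], []
        labels[i] = cluster
        while stack:
            j = stack.pop()
            members.append(j)
            d = np.linalg.norm(points - points[j], axis=1)
            for k in np.nonzero((d < eps) & (labels == -1))[0]:
                labels[k] = cluster
                stack.append(k)
        if len(members) < min_pts:
            labels[np.array(members)] = -2  # too small: treat as noise
        else:
            cluster += 1
    return labels

# Two separated toy clusters plus one isolated point.
pts = np.array([[0, 0, 0], [0.1, 0, 0], [0, 0.1, 0],
                [5, 5, 0], [5.1, 5, 0], [5, 5.1, 0],
                [10, 10, 10]], dtype=float)
labels = euclidean_cluster(pts)
print(labels)  # [0 0 0 1 1 1 -2]
```

Each resulting cluster can then be bounded and projected into the image to form a candidate region for the CNN classifier; because clustering touches only the geometry, it is cheap relative to sliding-window proposal generation.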
A LiDAR Point Cloud Generator: from a Virtual World to Autonomous Driving
3D LiDAR scanners are playing an increasingly important role in autonomous
driving as they can generate depth information of the environment. However,
creating large 3D LiDAR point cloud datasets with point-level labels requires a
significant amount of manual annotation. This jeopardizes the efficient
development of supervised deep learning algorithms which are often data-hungry.
We present a framework to rapidly create point clouds with accurate point-level
labels from a computer game. The framework supports data collection from both
auto-driving scenes and user-configured scenes. Point clouds from auto-driving
scenes can be used as training data for deep learning algorithms, while point
clouds from user-configured scenes can be used to systematically test the
vulnerability of a neural network, and use the falsifying examples to make the
neural network more robust through retraining. In addition, scene images can be
captured simultaneously to support sensor fusion tasks, and a method is
proposed to automatically calibrate between the point clouds and the captured
scene images. We show a significant improvement in accuracy (+9%) in point
cloud segmentation by augmenting the training dataset with the generated
synthesized data. Our experiments also show that, by testing and retraining the
network using point clouds from user-configured scenes, the weaknesses and
blind spots of the neural network can be fixed.