Deterministic Guided LiDAR Depth Map Completion
Accurate dense depth estimation is crucial for autonomous vehicles to analyze
their environment. This paper presents a non-deep learning-based approach to
densify a sparse LiDAR-based depth map using a guidance RGB image. To achieve
this goal, the RGB image is first cleansed of most camera-LiDAR
misalignment artifacts. Afterward, it is over-segmented and a plane is
approximated for each superpixel. If a superpixel is not well represented by
a plane, a plane is instead approximated for the convex hull of the largest inlier set. Finally,
the pinhole camera model is used for the interpolation process and the
remaining areas are interpolated. The evaluation of this work is executed using
the KITTI depth completion benchmark, which validates the proposed work and
shows that it outperforms the state-of-the-art non-deep learning-based methods,
in addition to several deep learning-based methods.

Comment: Submitted to 2021 IEEE Intelligent Vehicles Symposium (IV21). This
work has been submitted to the IEEE for possible publication. Copyright may
be transferred without notice, after which this version may no longer be
accessible.
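The per-superpixel plane fitting with an inlier fallback can be sketched as follows. This is a minimal illustration, not the paper's implementation: the least-squares fit, the residual threshold, and the refit-on-inliers rule are assumptions standing in for the paper's convex-hull-of-inliers step.

```python
import numpy as np

def fit_plane(points):
    """Least-squares fit of z = a*x + b*y + c to an (N, 3) point array.
    Returns the coefficients (a, b, c) and the absolute residual per point."""
    A = np.column_stack([points[:, 0], points[:, 1], np.ones(len(points))])
    coef, *_ = np.linalg.lstsq(A, points[:, 2], rcond=None)
    return coef, np.abs(A @ coef - points[:, 2])

def fit_superpixel_plane(points, inlier_thresh=1.0):
    """Fit a plane to the LiDAR points of one superpixel; if some points
    fall outside the residual threshold, refit on the inliers only.
    (Illustrative stand-in for the paper's convex-hull-of-inliers step.)"""
    coef, res = fit_plane(points)
    inliers = res < inlier_thresh
    if inliers.sum() >= 3 and not inliers.all():
        coef, _ = fit_plane(points[inliers])
    return coef
```

In the full pipeline each fitted plane would then be sampled through the pinhole camera model to assign a depth to every pixel of the superpixel; the sketch above covers only the fitting step.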
Environment reconstruction on depth images using Generative Adversarial Networks
Robust perception systems are essential for autonomous vehicle safety. To
navigate in a complex urban environment, precise sensors with reliable data
are necessary. The task of understanding the surroundings is hard by itself;
for intelligent vehicles, it is even more critical due to the high speed at
which the vehicle travels. To navigate successfully in an urban environment,
the perception system must quickly receive, process, and execute an action to
guarantee both passenger and pedestrian safety. Stereo cameras collect
environment information at many levels, e.g., depth, color, texture, shape,
which guarantee ample knowledge about the surroundings. Even so, compared
to humans, computational methods lack the ability to deal with missing
information, i.e., occlusions. For many perception tasks, this lack of data can
be a hindrance because of the resulting incomplete view of the environment. In this paper, we
address this problem and discuss recent methods for inferring the content of
occluded areas. We then introduce a loss function focused on disparity and
environment depth data reconstruction, and a Generative Adversarial Network
(GAN) architecture able to infer occluded information. Our
results present a coherent reconstruction on depth maps, estimating regions
occluded by different obstacles. Our final contribution is a loss function
focused on disparity data and a GAN able to extract depth features and estimate
depth data by inpainting disparity images.

Comment: 12 pages; 10 figures; open sourced; code and demo available in
https://github.com/nuneslu/VeIGA
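A generator-side loss combining an adversarial term with disparity reconstruction can be sketched as below. This is a hedged illustration, not the paper's published formulation: the non-saturating adversarial term, the L1 reconstruction restricted to occluded pixels, and the weight `lam` are all assumptions chosen for clarity.

```python
import numpy as np

def inpainting_loss(pred_disp, gt_disp, occl_mask, disc_score, lam=100.0):
    """Sketch of a generator loss for disparity inpainting:
    -log D(G(x)) (non-saturating adversarial term) plus a weighted L1
    reconstruction term over the occluded pixels only. Terms and weight
    are illustrative assumptions, not the paper's exact loss."""
    eps = 1e-7
    # Adversarial term: pushes the discriminator's score toward 1 (real).
    adv = -np.log(np.clip(disc_score, eps, 1.0))
    # L1 reconstruction on the occluded region, where inpainting happens.
    masked = occl_mask > 0
    l1 = np.abs(pred_disp - gt_disp)[masked].mean() if masked.any() else 0.0
    return float(adv + lam * l1)
```

Restricting the reconstruction term to the occlusion mask focuses the generator on the regions it must hallucinate, while the adversarial term keeps the inpainted disparities statistically consistent with real depth maps.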