
    Real Time Dense Depth Estimation by Fusing Stereo with Sparse Depth Measurements

    We present an approach to depth estimation that fuses information from a stereo pair with sparse range measurements derived from a LIDAR sensor or a range camera. The goal of this work is to exploit the complementary strengths of the two sensor modalities: the accurate but sparse range measurements and the ambiguous but dense stereo information. These two sources are effectively and efficiently fused by combining ideas from anisotropic diffusion and semi-global matching. We evaluate our approach on the KITTI 2015 and Middlebury 2014 datasets, using randomly sampled ground-truth range measurements as our sparse depth input. We achieve significant performance improvements with a small fraction of range measurements on both datasets. We also provide qualitative results from our platform using the PMDTec Monstar sensor. Our entire pipeline runs on an NVIDIA TX-2 platform at 5 Hz on 1280x1024 stereo images with 128 disparity levels. Comment: 7 pages, 5 figures, 2 tables
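The fusion idea can be illustrated with a minimal NumPy sketch (not the authors' implementation): sparse disparities are diffused into a dense prior by constrained neighborhood averaging, a crude stand-in for anisotropic diffusion, and that prior then biases a matching cost volume, loosely in the spirit of a semi-global-matching penalty. All function names, the averaging scheme, and the penalty form are illustrative assumptions.

```python
import numpy as np

def diffuse_sparse(vals, known, iters=100):
    """Spread sparse disparity values into a dense prior by Jacobi-style
    neighborhood averaging; known pixels act as fixed constraints.
    A crude, isotropic stand-in for the paper's anisotropic diffusion."""
    dense = vals.astype(float).copy()
    mask = known.astype(float)          # 1 where a value is available
    shifts = [(1, 0), (-1, 0), (1, 1), (-1, 1)]
    for _ in range(iters):
        num = sum(np.roll(dense * mask, s, axis=a) for s, a in shifts)
        den = sum(np.roll(mask, s, axis=a) for s, a in shifts)
        upd = den > 0                   # pixels with at least one filled neighbor
        fill = ~known & upd             # never overwrite the measurements
        dense[fill] = num[fill] / den[fill]
        mask[upd] = 1.0
    return dense

def biased_cost(cost_volume, prior, weight=0.5):
    """Add a penalty growing with distance from the prior disparity
    (hypothetical linear penalty, not the paper's exact formulation)."""
    d = np.arange(cost_volume.shape[2])[None, None, :]
    return cost_volume + weight * np.abs(d - prior[..., None])
```

With a flat cost volume, the biased cost's argmin simply follows the diffused prior; in a real pipeline the stereo matching costs and the prior penalty would compete.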

    Mesh-based 3D Textured Urban Mapping

    In the era of autonomous driving, urban mapping is a core step in letting vehicles interact with the urban context. Successful mapping algorithms proposed in the last decade build the map from the data of a single sensor. The focus of the system presented in this paper is twofold: the joint estimation of a 3D map from lidar data and images, based on a 3D mesh, and its texturing. Indeed, even though most surveying vehicles for mapping are equipped with cameras and lidar, existing mapping algorithms usually rely on either images or lidar data; moreover, both image-based and lidar-based systems often represent the map as a point cloud, while a continuous textured mesh representation would be useful for visualization and navigation purposes. In the proposed framework, we combine the accuracy of the 3D lidar data with the dense information and appearance carried by the images: we estimate a visibility-consistent map upon the lidar measurements and refine it photometrically through the acquired images. We evaluate the proposed framework on the KITTI dataset and show the performance improvement with respect to two state-of-the-art urban mapping algorithms and two widely used surface reconstruction algorithms in computer graphics. Comment: accepted at IROS 201
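A basic building block of any lidar-plus-camera texturing pipeline is projecting 3D lidar points into the image to sample appearance. A minimal sketch under standard pinhole-camera assumptions; the intrinsics `K` and lidar-to-camera extrinsics `R`, `t` are hypothetical placeholders, not the paper's calibration:

```python
import numpy as np

def project_points(points_lidar, K, R, t):
    """Project (N, 3) lidar points into an image via the pinhole model:
    transform to the camera frame with (R, t), apply intrinsics K, and
    divide by depth. Returns (N, 2) pixel coordinates and a mask of
    points in front of the camera (only those projections are valid)."""
    cam = points_lidar @ R.T + t        # points in the camera frame
    in_front = cam[:, 2] > 0            # positive depth only
    uvw = cam @ K.T                     # homogeneous pixel coordinates
    uv = uvw[:, :2] / uvw[:, 2:3]       # perspective division
    return uv, in_front
```

A texturing step would then sample image colors at the valid `uv` locations and attach them to the corresponding mesh vertices or faces.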

    Instance Segmentation and Object Detection in Road Scenes using Inverse Perspective Mapping of 3D Point Clouds and 2D Images

    Instance segmentation and object detection are important tasks in smart car applications. Recently, a variety of neural network-based approaches have been proposed. One of the challenges is that a scene contains objects at various scales, which requires the neural network to have a large receptive field to deal with the scale variations. In other words, the neural network must have a deep architecture, which slows down computation. In smart car applications, accurate detection and segmentation of vehicles and pedestrians is critical. Besides, 2D images lack distance information but offer rich visual appearance, whereas 3D point clouds provide strong evidence of the existence of objects. Fusing 2D images and 3D point clouds can therefore provide more information for finding objects in a scene. This paper proposes a series of fronto-parallel virtual planes, and inverse perspective mapping of an input image onto those planes, to deal with scale variations. I use 3D point clouds obtained from a LiDAR sensor and 2D images obtained from stereo cameras on top of a vehicle to estimate the ground area of the scene and to define the virtual planes. A certain height above the ground area in the 2D images is cropped to focus on objects on flat roads. The point cloud is then used to filter out false alarms among the over-detection results generated by an off-the-shelf deep neural network, Mask R-CNN. Experimental results show that the proposed approach outperforms Mask R-CNN without pre-processing on a benchmark dataset, the KITTI dataset [9].
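The false-alarm filtering step can be sketched as follows: project lidar returns into the image and keep only detections whose bounding box contains enough points. This is a simplified illustration, not the paper's exact criterion; the `(x1, y1, x2, y2)` box format and the `min_points` threshold are assumptions.

```python
import numpy as np

def filter_boxes(boxes, lidar_uv, min_points=5):
    """Keep only detections supported by at least `min_points`
    projected lidar returns inside the box. `lidar_uv` holds (N, 2)
    pixel coordinates of lidar points already projected into the image."""
    kept = []
    for (x1, y1, x2, y2) in boxes:
        inside = ((lidar_uv[:, 0] >= x1) & (lidar_uv[:, 0] <= x2) &
                  (lidar_uv[:, 1] >= y1) & (lidar_uv[:, 1] <= y2))
        if inside.sum() >= min_points:
            kept.append((x1, y1, x2, y2))
    return kept
```

A box hallucinated on texture alone (sky, shadows, billboards) typically contains few or no lidar returns, so this simple count already rejects many over-detections.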

    Multi-modal Experts Network for Autonomous Driving

    End-to-end learning from sensory data has shown promising results in autonomous driving. While employing many sensors enhances world perception and should lead to more robust and reliable behavior of autonomous vehicles, training and deploying such a network is challenging, and at least two problems arise in the considered setting. The first is the increase in computational complexity with the number of sensing devices. The other is the phenomenon of the network overfitting to the simplest and most informative input. We address both challenges with a novel, carefully tailored multi-modal experts network architecture and propose a multi-stage training procedure. The network contains a gating mechanism, which selects the most relevant input at each inference time step using a mixed discrete-continuous policy. We demonstrate the plausibility of the proposed approach on our 1/6 scale truck equipped with three cameras and one LiDAR. Comment: Published at the International Conference on Robotics and Automation (ICRA), 202
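The gating idea can be illustrated with a toy mixture-of-experts combiner: a gate produces logits over per-sensor experts, then either mixes their features softly (the continuous part of the policy) or selects the single argmax expert (the discrete part). This sketch omits the learned gate and expert networks as well as the multi-stage training; all names are illustrative.

```python
import numpy as np

def gate_experts(expert_feats, gate_logits, hard=False):
    """Combine per-sensor expert feature vectors using gating logits.
    Soft mode returns a softmax-weighted mixture; hard mode picks the
    argmax expert only (one-hot weights). Returns (output, weights)."""
    w = np.exp(gate_logits - gate_logits.max())   # stable softmax
    w /= w.sum()
    if hard:
        w = (w == w.max()).astype(float)          # one-hot selection
    out = sum(wi * f for wi, f in zip(w, expert_feats))
    return out, w
```

Hard selection saves computation at inference (only one expert branch needs to run), while the soft mixture keeps the gate differentiable during training; a mixed discrete-continuous policy trades between the two.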

    Robust Fusion of LiDAR and Wide-Angle Camera Data for Autonomous Mobile Robots

    Autonomous robots that assist humans in day-to-day living tasks are becoming increasingly popular. Autonomous mobile robots operate by sensing and perceiving their surrounding environment to make accurate driving decisions. A combination of several different sensors, such as LiDAR, radar, ultrasound sensors and cameras, is utilized to sense the surrounding environment of autonomous vehicles. These heterogeneous sensors simultaneously capture various physical attributes of the environment. Such multimodality and redundancy of sensing need to be positively utilized for reliable and consistent perception of the environment through sensor data fusion. However, these multimodal sensor data streams differ from each other in many ways, such as temporal and spatial resolution, data format, and geometric alignment. For the subsequent perception algorithms to utilize the diversity offered by multimodal sensing, the data streams need to be spatially, geometrically and temporally aligned with each other. In this paper, we address the problem of fusing the outputs of a Light Detection and Ranging (LiDAR) scanner and a wide-angle monocular image sensor for free space detection. The outputs of the LiDAR scanner and the image sensor are of different spatial resolutions and need to be aligned with each other. A geometrical model is used to spatially align the two sensor outputs, followed by a Gaussian Process (GP) regression-based resolution matching algorithm to interpolate the missing data with quantifiable uncertainty. The results indicate that the proposed sensor data fusion framework significantly aids the subsequent perception steps, as illustrated by the performance improvement of an uncertainty-aware free space detection algorithm.
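The GP-based resolution matching can be sketched in one dimension: fit a GP with an RBF kernel to sparse depth samples and query dense locations, obtaining both an interpolated mean and a variance that quantifies the uncertainty of each interpolated value. This is a textbook GP regression sketch, not the paper's model; the kernel choice, unit prior variance, length scale, and noise level are all assumptions.

```python
import numpy as np

def gp_interpolate(x_train, y_train, x_query, length=1.0, noise=1e-3):
    """Minimal 1-D GP regression with an RBF kernel of unit prior
    variance. Returns the posterior mean and variance at x_query;
    variance is near zero at observed points and grows toward the
    prior variance (1.0) far from any observation."""
    def rbf(a, b):
        return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / length ** 2)
    K = rbf(x_train, x_train) + noise * np.eye(len(x_train))
    Ks = rbf(x_query, x_train)
    mean = Ks @ np.linalg.solve(K, y_train)
    var = 1.0 - np.einsum('ij,ji->i', Ks, np.linalg.solve(K, Ks.T))
    return mean, var
```

The returned variance is what makes the interpolation useful downstream: an uncertainty-aware free space detector can discount interpolated depths whose variance is high.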