
    LIDAR-Camera Fusion for Road Detection Using Fully Convolutional Neural Networks

    In this work, a deep learning approach has been developed to carry out road detection by fusing LIDAR point clouds and camera images. An unstructured and sparse point cloud is first projected onto the camera image plane and then upsampled to obtain a set of dense 2D images encoding spatial information. Several fully convolutional neural networks (FCNs) are then trained to carry out road detection, either by using data from a single sensor, or by using three fusion strategies: early, late, and the newly proposed cross fusion. Whereas in the former two fusion approaches the integration of multimodal information is carried out at a predefined depth level, the cross fusion FCN is designed to learn directly from data where to integrate information; this is accomplished by using trainable cross connections between the LIDAR and the camera processing branches. To further highlight the benefits of using a multimodal system for road detection, a data set consisting of visually challenging scenes was extracted from driving sequences of the KITTI raw data set. It was then demonstrated that, as expected, a purely camera-based FCN severely underperforms on this data set, whereas a multimodal system is still able to provide high accuracy. Finally, the proposed cross fusion FCN was evaluated on the KITTI road benchmark, where it achieved excellent performance, with a MaxF score of 96.03%, ranking it among the top-performing approaches.
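    The first step the abstract describes, projecting a sparse LIDAR point cloud onto the camera image plane, can be sketched as follows. This is a minimal NumPy sketch under standard pinhole-camera assumptions; the extrinsic transform `T_cam_lidar` and intrinsic matrix `K` are generic calibration inputs, not quantities taken from the paper:

    ```python
    import numpy as np

    def project_lidar_to_image(points, T_cam_lidar, K, image_shape):
        """Project 3D LIDAR points (N, 3) onto the camera image plane.

        T_cam_lidar: 4x4 extrinsic transform from the LIDAR to the camera frame.
        K: 3x3 camera intrinsic matrix.
        Returns integer pixel coordinates and depths for points in view.
        """
        # Homogeneous coordinates, transformed into the camera frame
        pts_h = np.hstack([points, np.ones((points.shape[0], 1))])
        cam = (T_cam_lidar @ pts_h.T).T[:, :3]

        # Keep only points in front of the camera
        in_front = cam[:, 2] > 0.1
        cam = cam[in_front]

        # Perspective projection with the intrinsics
        uv = (K @ cam.T).T
        uv = uv[:, :2] / uv[:, 2:3]

        # Keep pixels inside the image bounds
        h, w = image_shape
        px = np.round(uv).astype(int)
        valid = (px[:, 0] >= 0) & (px[:, 0] < w) & (px[:, 1] >= 0) & (px[:, 1] < h)
        return px[valid], cam[valid, 2]
    ```

    Writing the resulting depths into a single-channel image at the returned pixel locations yields the sparse depth map that the paper's upsampling stage would then densify.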

    Integrating data from 3D CAD and 3D cameras for Real-Time Modeling

    In a reversal of historic trends, the capital facilities industry is expressing an increasing desire for automation of equipment and construction processes. Simultaneously, the industry has become conscious that higher levels of interoperability are key to higher productivity and safer projects. In complex, dynamic, and rapidly changing three-dimensional (3D) environments such as facilities sites, cutting-edge 3D sensing technologies and processing algorithms are one area of development that can dramatically impact those project factors. New 3D technologies are now being developed, among them the 3D camera. The main focus here is an investigation of the feasibility of rapidly combining and comparing (integrating) 3D sensed data from a 3D camera with 3D CAD data. Such a capability could improve construction quality assessment and facility aging assessment, as well as rapid environment reconstruction and construction automation. Some preliminary results are presented here. They deal with the challenge of fusing sensed and CAD data that are completely different in nature.
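    One simple way to compare sensed data against a CAD model, in the spirit of the quality-assessment use case above, is a per-point deviation check against a point sampling of the CAD surfaces. This is a hypothetical brute-force NumPy sketch (the abstract does not specify the comparison method; `tol` and the metre units are illustrative assumptions):

    ```python
    import numpy as np

    def deviation_map(sensed, cad_points, tol=0.05):
        """Compare sensed 3D points (N, 3) against CAD sample points (M, 3).

        Computes, for each sensed point, the distance to its nearest CAD
        sample (brute force). Returns the per-point deviations and a
        boolean mask of points within the tolerance (metres, assumed).
        """
        # Pairwise distances between every sensed point and every CAD sample
        diffs = sensed[:, None, :] - cad_points[None, :, :]
        dists = np.linalg.norm(diffs, axis=2)
        nearest = dists.min(axis=1)
        return nearest, nearest <= tol
    ```

    For realistic point counts a spatial index (e.g. a k-d tree) would replace the brute-force distance matrix, but the output, a deviation per sensed point, is the same.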

    3D modeling of indoor environments by a mobile platform with a laser scanner and panoramic camera

    One major challenge of 3DTV is content acquisition. Here, we present a method to acquire a realistic, visually convincing 3D model of indoor environments based on a mobile platform that is equipped with a laser range scanner and a panoramic camera. The data of the 2D laser scans are used to solve the simultaneous localization and mapping problem and to extract walls. Textures for walls and floor are built from the images of a calibrated panoramic camera. Multiresolution blending is used to hide seams in the generated textures. The scene is further enriched by 3D geometry calculated from a graph cut stereo technique. We present experimental results from a moderately large real environment.
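    The multiresolution blending mentioned above (hiding seams between adjacent texture patches) is classically done with a Laplacian pyramid: each image is split into band-pass levels, the levels are mixed with a progressively smoothed mask, and the result is reconstructed. The following is a minimal grayscale NumPy sketch, using 2x2 average pooling as a crude stand-in for Gaussian reduce/expand; it is not the paper's implementation:

    ```python
    import numpy as np

    def _downsample(img):
        # 2x2 average pooling (crude stand-in for Gaussian reduce)
        return img.reshape(img.shape[0] // 2, 2, img.shape[1] // 2, 2).mean(axis=(1, 3))

    def _upsample(img):
        # Nearest-neighbour expand back to double resolution
        return np.repeat(np.repeat(img, 2, axis=0), 2, axis=1)

    def blend(a, b, mask, levels=2):
        """Multiresolution blend of two grayscale images of equal size.

        mask is 1.0 where image `a` should dominate, 0.0 where `b` should.
        Image dimensions must be divisible by 2**levels.
        """
        la, lb, gm = [], [], [mask.astype(float)]
        ca, cb = a.astype(float), b.astype(float)
        for _ in range(levels):
            da, db = _downsample(ca), _downsample(cb)
            la.append(ca - _upsample(da))   # band-pass (Laplacian) level of a
            lb.append(cb - _upsample(db))   # band-pass (Laplacian) level of b
            ca, cb = da, db
            gm.append(_downsample(gm[-1]))  # progressively smoothed mask
        # Blend the coarsest level, then reconstruct while blending each band
        out = gm[-1] * ca + (1 - gm[-1]) * cb
        for i in range(levels - 1, -1, -1):
            out = _upsample(out) + gm[i] * la[i] + (1 - gm[i]) * lb[i]
        return out
    ```

    Because low frequencies are mixed over a wide region and high frequencies over a narrow one, the seam between two texture patches becomes invisible without ghosting.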