
    Automatic Registration of Optical Aerial Imagery to a LiDAR Point Cloud for Generation of City Models

    This paper presents a framework for automatic registration of both the optical and 3D structural information extracted from oblique aerial imagery to a Light Detection and Ranging (LiDAR) point cloud, without prior knowledge of an initial alignment. The framework employs a coarse-to-fine strategy in the estimation of the registration parameters. First, a dense 3D point cloud and the associated relative camera parameters are extracted from the optical aerial imagery using a state-of-the-art 3D reconstruction algorithm. Next, a digital surface model (DSM) is generated from both the LiDAR and the optical imagery-derived point clouds. Coarse registration parameters are then computed from salient features extracted from the LiDAR and optical imagery-derived DSMs. The registration parameters are further refined using the iterative closest point (ICP) algorithm to minimize the global error between the registered point clouds. The novelty of the proposed approach lies in the computation of salient features from the DSMs, and in the selection of matching salient features using geometric invariants coupled with Normalized Cross Correlation (NCC) match validation. The feature extraction and matching process enables the automatic estimation of the coarse registration parameters required for initializing the fine registration process. The registration framework is tested on a simulated scene and on aerial datasets acquired in real urban environments. Results demonstrate the robustness of the framework for registering optical and 3D structural information extracted from aerial imagery to a LiDAR point cloud when no initial registration parameters are available.
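The fine-registration step the abstract describes can be sketched with a minimal point-to-point ICP loop: match each source point to its nearest target point, solve the closed-form rigid transform (Kabsch/SVD), and repeat. This is a generic illustration of ICP, not the paper's implementation; the function name and defaults are hypothetical, and the coarse alignment is assumed to have been done already.

```python
import numpy as np
from scipy.spatial import cKDTree

def icp_refine(source, target, iterations=20):
    """Minimal point-to-point ICP: rigidly aligns `source` onto `target`.

    Hypothetical sketch of the fine-registration step; in the paper's
    framework, the coarse registration supplies the initial alignment.
    """
    src = source.copy()
    tree = cKDTree(target)
    R_total, t_total = np.eye(3), np.zeros(3)
    for _ in range(iterations):
        # Match every source point to its closest target point.
        _, idx = tree.query(src)
        matched = target[idx]
        # Closed-form rigid transform (Kabsch/SVD) between the matched sets.
        mu_s, mu_t = src.mean(axis=0), matched.mean(axis=0)
        H = (src - mu_s).T @ (matched - mu_t)
        U, _, Vt = np.linalg.svd(H)
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
        R = Vt.T @ D @ U.T
        t = mu_t - R @ mu_s
        src = src @ R.T + t
        # Compose the incremental transform into the running total.
        R_total, t_total = R @ R_total, R @ t_total + t
    return R_total, t_total, src
```

Because plain ICP only converges from a nearby starting pose, the coarse DSM-feature matching the paper proposes is what makes this refinement viable without manual initialization.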

    Hierarchical shape-based surface reconstruction for dense multi-view stereo

    The recent widespread availability of urban imagery has led to a growing demand for automatic modeling from multiple images. However, modern image-based modeling research has focused either on highly detailed reconstructions of mostly small objects or on human-assisted simplified modeling. This paper presents a novel algorithm which automatically outputs a simplified, segmented model of a scene from a set of calibrated input images, capturing its essential geometric features. Our approach combines three successive steps. First, a dense point cloud is created from sparse depth maps computed from the input images. Then, shapes are robustly extracted from this set of points. Finally, a compact model of the scene is built from a spatial subdivision induced by these structures: this model is a global minimum of an energy accounting for the visibility of the final surface. The effectiveness of our method is demonstrated through several results on both synthetic and real data sets, illustrating the various benefits of our algorithm, its robustness, and its relevance for architectural scenes.
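The robust shape-extraction step can be illustrated with RANSAC plane detection, the standard robust estimator for this task. This is a generic sketch rather than the paper's algorithm; the threshold and iteration count are made-up defaults.

```python
import numpy as np

def ransac_plane(points, threshold=0.02, iterations=500, rng=None):
    """Robustly fit one plane to a noisy 3D point cloud via RANSAC.

    Illustrative stand-in for the 'shapes are robustly extracted' step;
    the parameter defaults are assumptions, not the paper's values.
    """
    rng = rng or np.random.default_rng(0)
    best_inliers = np.zeros(len(points), dtype=bool)
    best_model = None
    for _ in range(iterations):
        # Three random points define a candidate plane.
        sample = points[rng.choice(len(points), 3, replace=False)]
        normal = np.cross(sample[1] - sample[0], sample[2] - sample[0])
        norm = np.linalg.norm(normal)
        if norm < 1e-12:          # degenerate (collinear) sample, skip it
            continue
        normal /= norm
        d = -normal @ sample[0]
        # Points within `threshold` of the plane vote for this candidate.
        dist = np.abs(points @ normal + d)
        inliers = dist < threshold
        if inliers.sum() > best_inliers.sum():
            best_inliers, best_model = inliers, (normal, d)
    return best_model, best_inliers
```

In a full pipeline this would be run repeatedly, removing each detected shape's inliers before searching for the next, and the resulting primitives would induce the spatial subdivision the abstract describes.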

    Multi-view urban scene reconstruction in non-uniform volume

    This paper presents a new fully automatic approach for multi-view urban scene reconstruction. Our algorithm is based on the Manhattan-World assumption, which yields compact models while preserving the fidelity of man-made architecture. Starting from a dense point cloud, we extract its main axes by global optimization and construct a non-uniform volume based on them. A graph model is created from volume facets rather than voxels. Appropriate edge weights are defined to ensure the validity and quality of the surface reconstruction. Compared with common point-cloud-to-model methods, the proposed methodology exploits image information to unveil the real structures of holes in the point cloud. Experiments demonstrate the encouraging performance of the algorithm. © 2013 SPIE
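Under the Manhattan-World assumption, extracting the main axes reduces to finding one dominant azimuth (the two horizontal axes are perpendicular, the third is vertical). A minimal sketch, assuming unit facade normals are available, is an exhaustive 1-D search standing in for the paper's global optimization:

```python
import numpy as np

def manhattan_azimuth(normals, step_deg=0.5):
    """Estimate the dominant horizontal axis angle of a Manhattan scene.

    `normals` holds the horizontal (x, y) components of unit facade normals.
    Hypothetical exhaustive search standing in for the paper's global
    optimization: score each candidate azimuth by how well every normal
    aligns with it or its perpendicular, and keep the best.
    """
    az = np.arctan2(normals[:, 1], normals[:, 0])      # normal azimuths
    candidates = np.deg2rad(np.arange(0.0, 90.0, step_deg))
    # A normal supports axis theta when it is parallel to theta or to
    # theta + 90 deg, i.e. when cos(2*(az - theta)) is close to +/-1.
    scores = np.abs(np.cos(2.0 * (az[None, :] - candidates[:, None]))).sum(axis=1)
    return candidates[np.argmax(scores)]
```

The recovered axes would then define the orientation of the non-uniform volume whose facets form the graph described in the abstract.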

    Semantic Segmentation of 3D Textured Meshes for Urban Scene Analysis

    Classifying 3D measurement data has become a core problem in photogrammetry and 3D computer vision since the rise of modern multi-view geometry techniques, combined with affordable range sensors. We introduce a Markov Random Field-based approach for segmenting textured meshes generated via multi-view stereo into urban classes of interest. The input mesh is first partitioned into small clusters, referred to as superfacets, from which geometric and photometric features are computed. A random forest is then trained to predict the class of each superfacet as well as its similarity with the neighboring superfacets. Similarity is used to assign the weights of the Markov Random Field pairwise potential and accounts for contextual information between the classes. The experimental results illustrate the efficacy and accuracy of the proposed framework.
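The per-superfacet classification step can be sketched with scikit-learn's random forest. The feature set (mean height, verticality, mean colour) and the class list are assumptions for illustration, not the paper's descriptors; the predicted class probabilities would serve as the MRF unary potentials.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Hypothetical per-superfacet features: [mean height, verticality, mean colour].
# These stand in for the geometric and photometric features the paper computes.
rng = np.random.default_rng(3)
n = 400
labels = rng.integers(0, 3, n)            # assumed classes: 0=ground, 1=facade, 2=roof
centres = np.array([[0.0, 0.1, 0.3],      # ground: low, flat, dark
                    [5.0, 0.9, 0.5],      # facade: mid-height, vertical
                    [9.0, 0.2, 0.7]])     # roof: high, flat, bright
features = centres[labels] + rng.normal(0, 0.3, (n, 3))

# Train on half the superfacets, predict class probabilities on the rest;
# in the full pipeline these probabilities feed the MRF unary potentials.
clf = RandomForestClassifier(n_estimators=50, random_state=0)
clf.fit(features[:200], labels[:200])
proba = clf.predict_proba(features[200:])
accuracy = (proba.argmax(axis=1) == labels[200:]).mean()
```

The pairwise side of the model (the learned superfacet similarity weighting the MRF smoothness term) is what this sketch leaves out.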

    Toward 3D reconstruction of outdoor scenes using an MMW radar and a monocular vision sensor

    In this paper, we introduce a geometric method for 3D reconstruction of the outdoor environment using a panoramic microwave radar and a camera. We rely on the complementarity of these two sensors: the radar's robustness to environmental conditions and depth-detection ability, on the one hand, and the high spatial resolution of a vision sensor, on the other. First, geometric modeling of each sensor and of the entire system is presented. Second, we address the global calibration problem, which consists of finding the exact transformation between the sensors' coordinate systems. Two implementation methods are proposed and compared, based on the optimization of a non-linear criterion obtained from a set of radar-to-image target correspondences. Unlike existing methods, no special configuration of the 3D points is required for calibration, which makes the methods flexible and easy to use by a non-expert operator. Finally, we present a very simple, yet robust, 3D reconstruction method based on the sensors' geometry. This method reconstructs observed features in 3D from a single acquisition (static sensor), a capability not commonly available in state-of-the-art outdoor scene reconstruction. The proposed methods have been validated with synthetic and real data.
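The calibration step's structure, nonlinear least squares over a geometric residual built from target correspondences, can be shown with a deliberately simplified planar version. The real problem estimates a full 3D radar-to-camera transform from radar-to-image correspondences; here, as an assumption for illustration, only a 2-D rigid transform (theta, tx, ty) is recovered from matched point pairs.

```python
import numpy as np
from scipy.optimize import least_squares

def calibrate_radar_camera(radar_xy, cam_xy, x0=(0.0, 0.0, 0.0)):
    """Estimate the planar rigid transform (theta, tx, ty) taking radar
    coordinates into the camera frame from point correspondences.

    Simplified 2-D sketch of the calibration described in the abstract;
    the paper optimizes a full 3-D transform, but the machinery
    (nonlinear least squares on a geometric residual) is the same.
    """
    def residual(p):
        theta, tx, ty = p
        R = np.array([[np.cos(theta), -np.sin(theta)],
                      [np.sin(theta),  np.cos(theta)]])
        # Residual: transformed radar targets minus their camera-frame matches.
        return (radar_xy @ R.T + [tx, ty] - cam_xy).ravel()

    sol = least_squares(residual, x0)
    return sol.x
```

As in the paper's formulation, the targets can lie in any configuration; the least-squares criterion needs only enough correspondences to constrain the transform.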