
    Automatic Registration of Optical Aerial Imagery to a LiDAR Point Cloud for Generation of City Models

    This paper presents a framework for automatic registration of both the optical and 3D structural information extracted from oblique aerial imagery to a Light Detection and Ranging (LiDAR) point cloud without prior knowledge of an initial alignment. The framework employs a coarse-to-fine strategy in the estimation of the registration parameters. First, a dense 3D point cloud and the associated relative camera parameters are extracted from the optical aerial imagery using a state-of-the-art 3D reconstruction algorithm. Next, a digital surface model (DSM) is generated from both the LiDAR and the optical imagery-derived point clouds. Coarse registration parameters are then computed from salient features extracted from the LiDAR and optical imagery-derived DSMs. The registration parameters are further refined using the iterative closest point (ICP) algorithm to minimize the global error between the registered point clouds. The novelty of the proposed approach lies in the computation of salient features from the DSMs and the selection of matching salient features using geometric invariants coupled with Normalized Cross Correlation (NCC) match validation. The feature extraction and matching process enables the automatic estimation of the coarse registration parameters required for initializing the fine registration process. The registration framework is tested on a simulated scene and on aerial datasets acquired in real urban environments. The results demonstrate the robustness of the framework for registering optical and 3D structural information extracted from aerial imagery to a LiDAR point cloud when no initial registration parameters are available.
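    A minimal sketch of the fine-registration stage described above, i.e. point-to-point ICP refining a coarse alignment between the imagery-derived and LiDAR point clouds: closest-point correspondences come from a k-d tree and the rigid update is a least-squares (Kabsch) fit. The function names and the NumPy/SciPy tooling are illustrative assumptions, not taken from the paper.

        import numpy as np
        from scipy.spatial import cKDTree

        def best_rigid_transform(src, dst):
            """Least-squares rotation R and translation t mapping src onto dst (Kabsch)."""
            cs, cd = src.mean(axis=0), dst.mean(axis=0)
            H = (src - cs).T @ (dst - cd)
            U, _, Vt = np.linalg.svd(H)
            R = Vt.T @ U.T
            if np.linalg.det(R) < 0:           # guard against reflections
                Vt[-1] *= -1
                R = Vt.T @ U.T
            return R, cd - R @ cs

        def icp(source, target, iters=50, tol=1e-6):
            """Point-to-point ICP; assumes the clouds are already coarsely aligned."""
            tree = cKDTree(target)
            src = source.copy()
            R_total, t_total = np.eye(3), np.zeros(3)
            prev_err = np.inf
            for _ in range(iters):
                dist, idx = tree.query(src)    # closest-point correspondences
                R, t = best_rigid_transform(src, target[idx])
                src = src @ R.T + t
                R_total, t_total = R @ R_total, R @ t_total + t
                err = dist.mean()
                if abs(prev_err - err) < tol:  # global error has stopped decreasing
                    break
                prev_err = err
            return R_total, t_total            # refined registration parameters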

    Robust Building-based Registration of Airborne LiDAR Data and Optical Imagery on Urban Scenes

    This paper addresses the problem of registering airborne LiDAR data and optical aerial or satellite imagery acquired from different platforms, at different times, with different points of view and levels of detail. We present a robust registration method based on building regions, which are extracted from optical images using mean shift segmentation and from LiDAR data using a 3D point cloud filtering process. The matching of the extracted building segments is then carried out using Graph Transformation Matching (GTM), which makes it possible to determine a common pattern of relative positions of segment centers. Thanks to this registration, the relative shifts between the data sets are significantly reduced, which enables a subsequent fine registration and a resulting high-quality data fusion.
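    To picture the structural matching step, the following is a simplified, GTM-style outlier-rejection sketch over index-aligned candidate pairs of building-segment centers: k-nearest-neighbour graphs are built on both sides and the correspondence whose local neighbourhood disagrees most is dropped until the two graphs coincide. The helper names and the choice of k are assumptions for illustration, not the paper's implementation.

        import numpy as np

        def knn_adjacency(points, k):
            """Boolean adjacency linking each point to its k nearest neighbours."""
            d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
            np.fill_diagonal(d, np.inf)
            adj = np.zeros((len(points), len(points)), dtype=bool)
            for i, nbrs in enumerate(np.argsort(d, axis=1)[:, :k]):
                adj[i, nbrs] = True
            return adj

        def gtm_filter(centers_img, centers_lidar, k=4):
            """Drop correspondences until the two k-NN graphs agree (GTM-style)."""
            keep = np.arange(len(centers_img))
            while len(keep) > k + 1:
                a = knn_adjacency(centers_img[keep], k)
                b = knn_adjacency(centers_lidar[keep], k)
                if np.array_equal(a, b):       # common relative-position pattern found
                    break
                disagreement = np.sum(a != b, axis=1)
                keep = np.delete(keep, np.argmax(disagreement))
            return keep                        # indices of structurally consistent matches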

    Towards Automatic SAR-Optical Stereogrammetry over Urban Areas using Very High Resolution Imagery

    In this paper, we discuss the potential and challenges of SAR-optical stereogrammetry for urban areas using very-high-resolution (VHR) remote sensing imagery. Since we approach this mainly from a geometrical point of view, we first analyze the height reconstruction accuracy to be expected for different stereogrammetric configurations. Then, we propose a strategy for simultaneous tie point matching and 3D reconstruction that exploits an epipolar-like search window constraint. To drive the matching and ensure some robustness, we combine several established handcrafted similarity measures. For the experiments, we use real test data acquired by the Worldview-2, TerraSAR-X and MEMPHIS sensors. Our results show that SAR-optical stereogrammetry using VHR imagery is generally feasible, with 3D positioning accuracies in the meter domain, although the matching of these strongly heterogeneous multi-sensor data remains very challenging.
    Keywords: Synthetic Aperture Radar (SAR), optical images, remote sensing, data fusion, stereogrammetry.
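    As a rough illustration of matching under an epipolar-like search window constraint, the sketch below scores a fixed template from one image against candidate positions restricted to a search line in the other image, using zero-mean normalised cross-correlation as one possible handcrafted similarity measure. The actual combination of measures and the SAR-optical geometry handling in the paper are more involved; all names here are illustrative.

        import numpy as np

        def ncc(a, b):
            """Zero-mean normalised cross-correlation of two equally sized patches."""
            a = a - a.mean()
            b = b - b.mean()
            denom = np.sqrt((a * a).sum() * (b * b).sum())
            return float((a * b).sum() / denom) if denom > 0 else 0.0

        def match_along_search_line(template, search_img, line_pts):
            """Find the best-scoring candidate centre restricted to a search line."""
            h = template.shape[0] // 2          # template assumed square with odd size
            best_score, best_pt = -1.0, None
            for r, c in line_pts:               # candidate centres on the epipolar-like line
                patch = search_img[r - h:r + h + 1, c - h:c + h + 1]
                if patch.shape != template.shape:
                    continue                    # skip candidates too close to the border
                score = ncc(template, patch)
                if score > best_score:
                    best_score, best_pt = score, (r, c)
            return best_pt, best_score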

    A Synergistic Approach for Recovering Occlusion-Free Textured 3D Maps of Urban Facades from Heterogeneous Cartographic Data

    In this paper, we present a practical approach for generating an occlusion-free textured 3D map of urban facades by the synergistic use of terrestrial images, 3D point clouds and area-based information. Particularly in dense urban environments, the frequent presence of urban objects in front of the facades causes significant difficulties for several stages of computational building modeling. Major challenges lie, on the one hand, in extracting complete 3D facade quadrilateral delimitations and, on the other hand, in generating occlusion-free facade textures. For these reasons, we describe a straightforward approach for completing and recovering facade geometry and textures by exploiting the data complementarity of terrestrial multi-source imagery and area-based information.
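    The texture-completion idea can be pictured with a small sketch: given several terrestrial views rectified to the same facade plane, each with a per-view occlusion mask (e.g. obtained by projecting foreground objects from the 3D point cloud), pixels occluded in one view are filled from views where they are visible. This is only a hypothetical illustration of exploiting data complementarity, not the paper's pipeline; all inputs and names are assumed.

        import numpy as np

        def fuse_facade_textures(textures, occlusion_masks):
            """Fill occluded pixels in a rectified facade texture from other views.
            textures: list of HxWx3 arrays rectified to the same facade plane.
            occlusion_masks: list of HxW boolean arrays (True = pixel occluded)."""
            fused = textures[0].astype(float)
            holes = occlusion_masks[0].copy()
            for tex, occ in zip(textures[1:], occlusion_masks[1:]):
                fill = holes & ~occ            # pixels visible only in this view
                fused[fill] = tex[fill]
                holes &= occ                   # still occluded in every view so far
            return fused, holes                # remaining holes would need inpainting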

    Automatic Registration of Terrestrial Laser Scanning Point Clouds using Panoramic Reflectance Images

    This paper presents a new approach to the automatic registration of terrestrial laser scanning (TLS) point clouds using panoramic reflectance images. The approach follows a two-step procedure that includes both pair-wise registration and global registration. The pair-wise registration consists of image matching (pixel-to-pixel correspondence) and point cloud registration (point-to-point correspondence), as the correspondence between the image and the point cloud (pixel-to-point) is inherent to the reflectance images. False correspondences are removed by a geometric invariance check. The pixel-to-point correspondence and the computation of the rigid transformation parameters (RTPs) are integrated into an iterative process that allows the pair-wise registration to be optimised. The global registration of all point clouds is obtained by a bundle adjustment using a circular self-closure constraint. Our approach is tested with both indoor and outdoor scenes acquired by a FARO LS 880 laser scanner with angular resolutions of 0.036° and 0.045°, respectively. The results show that the pair-wise and global registration accuracies are of millimetre and centimetre order, respectively, and that the process is fully automatic and converges quickly.
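    A small sketch of how the circular self-closure constraint used in the global registration can be checked: composing the pair-wise rigid transformation parameters around the closed loop of scans should yield the identity, and the residual gives the closure error that a bundle adjustment would distribute over the pairs. The 4x4 matrix convention below is an assumption for illustration, not the paper's formulation.

        import numpy as np

        def loop_closure_error(pairwise_T):
            """Compose 4x4 pair-wise rigid transforms around a closed scan loop.
            pairwise_T[i] maps scan i into scan i+1, with the last entry mapping
            the final scan back to the first; a consistent loop gives the identity."""
            M = np.eye(4)
            for T in pairwise_T:
                M = T @ M
            cos_angle = np.clip((np.trace(M[:3, :3]) - 1.0) / 2.0, -1.0, 1.0)
            rot_err_deg = np.degrees(np.arccos(cos_angle))
            trans_err = np.linalg.norm(M[:3, 3])
            return rot_err_deg, trans_err      # residual rotation (deg) and translation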
