
    Analysis of mobile laser scanning data and multi-view image reconstruction

    The combination of laser scanning (LS; active, direct 3D measurement of the object surface) and photogrammetry (high geometric and radiometric resolution) is widely applied for object reconstruction (e.g. architecture, topography, monitoring, archaeology). The results are usually a coloured point cloud or a textured mesh: the geometry is typically generated from the laser scanning point cloud, while the radiometric information comes from image acquisition. In recent years, alongside significant developments in static (terrestrial LS) and kinematic LS (airborne and mobile LS) hardware and software, research in computer vision and photogrammetry has led to advanced automated procedures for image orientation and image matching. These methods allow highly automated generation of 3D geometry based solely on image data. Built on advanced feature detectors such as SIFT (Scale Invariant Feature Transform), very robust techniques for image orientation have been established (cf. Bundler). In a subsequent step, dense multi-view stereo reconstruction algorithms generate very dense 3D point clouds that represent the scene geometry (cf. Patch-based Multi-View Stereo, PMVS2). This paper studies the use of mobile laser scanning (MLS) and simultaneously acquired image data for an advanced integrated scene reconstruction. For the analysis, the geometry of a scene is generated by both techniques independently. The paper then focuses on the quality assessment of both techniques. This includes a quality analysis of the individual surface models and a comparison of the direct georeferencing of the images, using position and orientation data of the on-board GNSS/INS system, with the indirect georeferencing of the imagery by automatic image orientation. For the practical evaluation a dataset from an archaeological monument is utilised. Based on the gained knowledge, a discussion of the results is provided and a future strategy for the integration of both techniques is proposed.
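    A quality analysis of two independently generated surface models typically reduces to a cloud-to-cloud comparison. As a minimal illustrative sketch (the paper does not prescribe this exact procedure), the image-based point cloud can be rigidly aligned to the MLS reference with the Kabsch algorithm and scored by the post-alignment RMSE; the function name and the assumption of known point correspondences are illustrative:

    ```python
    import numpy as np

    def align_and_rmse(cloud, reference):
        """Rigidly align `cloud` to `reference` (known point-to-point
        correspondences assumed) and return the post-alignment RMSE,
        a simple cloud-to-cloud quality figure."""
        c_mean, r_mean = cloud.mean(axis=0), reference.mean(axis=0)
        C, R0 = cloud - c_mean, reference - r_mean

        # Kabsch: optimal rotation from the SVD of the 3x3 covariance matrix
        U, _, Vt = np.linalg.svd(C.T @ R0)
        d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflections
        Rot = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T

        aligned = (Rot @ C.T).T + r_mean
        return np.sqrt(np.mean(np.sum((aligned - reference) ** 2, axis=1)))
    ```

    In practice the correspondences between an MLS cloud and a dense image-matching cloud are unknown, so an ICP-style nearest-neighbour association would precede this alignment step.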

    Assessment of a photogrammetric approach for urban DSM extraction from tri-stereoscopic satellite imagery

    Built-up environments are extremely complex for 3D surface modelling purposes. The main distortions that hamper 3D reconstruction from 2D imagery are image dissimilarities, concealed areas, shadows, height discontinuities and discrepancies between smooth terrain and man-made features. A methodology is proposed to improve automatic photogrammetric extraction of an urban surface model from high resolution satellite imagery, with the emphasis on strategies to reduce the effects of the cited distortions and to make image matching more robust. Instead of a standard stereoscopic approach, a digital surface model is derived from tri-stereoscopic satellite imagery. This is based on an extensive multi-image matching strategy that fully benefits from the geometric and radiometric information contained in the three images. The bundled triplet consists of an IKONOS along-track pair and an additional near-nadir IKONOS image. For the tri-stereoscopic study a densely built-up area, extending from the centre of Istanbul to the urban fringe, is selected. The accuracy of the model extracted from the IKONOS triplet, as well as that of the model extracted from only the along-track stereo pair, is assessed by comparison with 3D check points and 3D building vector data.
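    The assessment against 3D check points can be sketched as follows. This is a minimal illustrative version, assuming a regular DSM grid with a known origin and cell size and simple nearest-cell sampling; the grid conventions and function name are assumptions, not taken from the paper:

    ```python
    import numpy as np

    def dsm_check_point_errors(dsm, origin, cell, check_points):
        """Return (mean error, RMSE) of DSM heights at check-point locations.

        dsm          : 2D array of heights, row 0 at the `origin` northing
        origin       : (east0, north0) of the grid's upper-left cell centre
        cell         : ground sampling distance in metres
        check_points : (N, 3) array of (east, north, height)
        """
        # Nearest grid cell for each check point (bilinear sampling would
        # be a natural refinement)
        cols = np.round((check_points[:, 0] - origin[0]) / cell).astype(int)
        rows = np.round((origin[1] - check_points[:, 1]) / cell).astype(int)

        diffs = dsm[rows, cols] - check_points[:, 2]
        return diffs.mean(), np.sqrt(np.mean(diffs ** 2))
    ```

    The mean error exposes a systematic height bias between the extracted model and the check points, while the RMSE summarises the overall vertical accuracy.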

    Virtual Rephotography: Novel View Prediction Error for 3D Reconstruction

    The ultimate goal of many image-based modeling systems is to render photo-realistic novel views of a scene without visible artifacts. Existing evaluation metrics and benchmarks focus mainly on the geometric accuracy of the reconstructed model, which is, however, a poor predictor of visual accuracy. Furthermore, using only geometric accuracy by itself does not allow evaluating systems that either lack a geometric scene representation or utilize coarse proxy geometry. Examples include light field or image-based rendering systems. We propose a unified evaluation approach based on novel view prediction error that is able to analyze the visual quality of any method that can render novel views from input images. One of the key advantages of this approach is that it does not require ground truth geometry. This dramatically simplifies the creation of test datasets and benchmarks. It also allows us to evaluate the quality of an unknown scene during the acquisition and reconstruction process, which is useful for acquisition planning. We evaluate our approach on a range of methods including standard geometry-plus-texture pipelines as well as image-based rendering techniques, compare it to existing geometry-based benchmarks, and demonstrate its utility for a range of use cases.

    Comment: 10 pages, 12 figures, paper was submitted to ACM Transactions on Graphics for review
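    The core idea of novel view prediction error can be sketched in a few lines: render the scene from a held-out camera, then score the rendering against the real photograph taken from that pose. The plain per-pixel RMSE below is an illustrative stand-in for whatever image comparison metric is chosen (the paper's evaluation is more elaborate), and the mask parameter is an assumed convenience for pixels the renderer did not cover:

    ```python
    import numpy as np

    def view_prediction_error(rendered, photo, mask=None):
        """Per-pixel RMSE between a rendered novel view and the real image
        captured from the same (held-out) viewpoint.

        mask : optional boolean array marking pixels the renderer actually
               covered; None compares all pixels.
        """
        diff = rendered.astype(np.float64) - photo.astype(np.float64)
        if mask is not None:
            diff = diff[mask]
        return np.sqrt(np.mean(diff ** 2))
    ```

    Because the score is computed purely in image space, it applies equally to mesh-plus-texture pipelines and to image-based rendering systems with no explicit geometry, which is exactly the property the abstract emphasises.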