
    Wide baseline stereo matching with convex bounded-distortion constraints

    Finding correspondences in wide-baseline setups is a challenging problem. Existing approaches have focused largely on developing better feature descriptors for correspondence and on accurately recovering epipolar line constraints. This paper focuses on the problem of finding correspondences once approximate epipolar constraints are given. We introduce a novel method that integrates a deformation model: specifically, we formulate the problem as finding the largest number of corresponding points related by a bounded-distortion map that obeys the given epipolar constraints. We show that, while the set of bounded-distortion maps is not convex, the subset of maps that obey the epipolar line constraints is convex, allowing us to introduce an efficient matching algorithm. We further employ a robust cost function for matching and optimize it by majorization-minimization. Our experiments indicate that our method finds significantly more accurate maps than existing approaches.
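    As a concrete illustration of the epipolar side of this formulation, the sketch below filters candidate correspondences by their symmetric point-to-epipolar-line distance. This is only a hedged pre-filtering step under an assumed fundamental matrix F and homogeneous Nx3 point arrays, not the paper's bounded-distortion optimization itself; the names epipolar_residuals, filter_by_epipolar, and tol are illustrative.

        import numpy as np

        def epipolar_residuals(F, pts1, pts2):
            """Symmetric point-to-epipolar-line distances for homogeneous Nx3 points."""
            l2 = pts1 @ F.T   # epipolar lines in image 2 (rows are (F @ x1)^T)
            l1 = pts2 @ F     # epipolar lines in image 1 (rows are (F^T @ x2)^T)
            d2 = np.abs(np.sum(pts2 * l2, axis=1)) / np.linalg.norm(l2[:, :2], axis=1)
            d1 = np.abs(np.sum(pts1 * l1, axis=1)) / np.linalg.norm(l1[:, :2], axis=1)
            return d1 + d2

        def filter_by_epipolar(F, pts1, pts2, tol=2.0):
            """Keep candidate pairs whose symmetric epipolar distance is below tol pixels."""
            keep = epipolar_residuals(F, pts1, pts2) < tol
            return pts1[keep], pts2[keep]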

    Euclidean reconstruction of natural underwater scenes using optic imagery sequence

    The development of maritime applications requires detailed, close-range monitoring, study, and preservation of the underwater seafloor and objects. Stereo vision offers an advanced and relatively inexpensive technology for building 3D models from overlapping 2D still images. However, while stereo image matching is a necessary step in the 3D reconstruction procedure, even the most robust dense matching techniques are not guaranteed to work for underwater images because of the challenging aquatic environment. In this thesis, in addition to a detailed introduction to and study of the key components of building 3D models from optical images, a robust modified quasi-dense matching algorithm for underwater images, based on correspondence propagation and adaptive least-squares matching, is proposed and applied to several typical underwater image datasets. The experiments demonstrate the robustness and good performance of the proposed matching approach.
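    The propagation idea can be sketched as a best-first region-growing loop over a heap of scored matches, with candidates accepted by zero-mean normalized cross-correlation (ZNCC). This is a minimal, assumed reconstruction of generic quasi-dense propagation, without the thesis's adaptive least-squares refinement; the function names, the seed format (x1, y1, x2, y2, score), and the thresholds are illustrative.

        import heapq
        import numpy as np

        def zncc(a, b):
            """Zero-mean normalized cross-correlation of two equal-size patches."""
            a = a - a.mean()
            b = b - b.mean()
            denom = np.linalg.norm(a) * np.linalg.norm(b)
            return float((a * b).sum() / denom) if denom > 0 else -1.0

        def propagate(img1, img2, seeds, win=2, thresh=0.8):
            """Grow a quasi-dense match set from scored seeds (x1, y1, x2, y2, score)."""
            heap = [(-s[4], tuple(s[:4])) for s in seeds]  # best score popped first
            heapq.heapify(heap)
            used1, used2, matches = set(), set(), []
            while heap:
                _, (x1, y1, x2, y2) = heapq.heappop(heap)
                # Inspect the 3x3 neighborhood, assuming a small disparity gradient
                for dx in (-1, 0, 1):
                    for dy in (-1, 0, 1):
                        u1, v1, u2, v2 = x1 + dx, y1 + dy, x2 + dx, y2 + dy
                        if (u1, v1) in used1 or (u2, v2) in used2:
                            continue
                        if min(u1, v1, u2, v2) < win:  # stay inside the near border
                            continue
                        p1 = img1[v1 - win:v1 + win + 1, u1 - win:u1 + win + 1]
                        p2 = img2[v2 - win:v2 + win + 1, u2 - win:u2 + win + 1]
                        full = (2 * win + 1, 2 * win + 1)
                        if p1.shape != full or p2.shape != full:
                            continue  # truncated at the far border
                        score = zncc(p1, p2)
                        if score > thresh:
                            used1.add((u1, v1))
                            used2.add((u2, v2))
                            matches.append((u1, v1, u2, v2))
                            heapq.heappush(heap, (-score, (u1, v1, u2, v2)))
            return matches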

    A Framework for SAR-Optical Stereogrammetry over Urban Areas

    Currently, numerous remote sensing satellites provide a huge volume of diverse earth observation data. As these data differ in resolution, accuracy, coverage, and spectral imaging ability, fusion techniques are required to integrate the properties of each sensor and produce useful information. For example, synthetic aperture radar (SAR) data can be fused with optical imagery to produce 3D information using stereogrammetric methods. The main focus of this study is to investigate the possibility of applying a stereogrammetry pipeline to very-high-resolution (VHR) SAR-optical image pairs. For this purpose, the applicability of semi-global matching is investigated in this unconventional multi-sensor setting. To support the image matching by reducing the search space and accelerating the identification of correct, reliable matches, the possibility of establishing an epipolarity constraint for VHR SAR-optical image pairs is investigated as well. In addition, it is shown that the absolute geolocation accuracy of VHR optical imagery with respect to VHR SAR imagery, such as that provided by TerraSAR-X, can be improved by a multi-sensor block adjustment formulation based on rational polynomial coefficients. Finally, the feasibility of generating point clouds with a median accuracy of about 2 m is demonstrated, confirming the potential of 3D reconstruction from SAR-optical image pairs over urban areas.
    Comment: this is the pre-acceptance version; for the final version, please see the ISPRS Journal of Photogrammetry and Remote Sensing on ScienceDirect.
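    The block-adjustment step can be loosely illustrated by a common image-space bias-compensation idea for rational polynomial coefficients: fit a small affine correction that aligns RPC-projected tie points with their observed positions. This is an assumed simplification, not the paper's multi-sensor formulation; fit_affine_bias, apply_bias, and the Nx2 coordinate arrays are hypothetical.

        import numpy as np

        def fit_affine_bias(projected, observed):
            """Least-squares affine map from RPC-projected to observed coords (Nx2)."""
            A = np.hstack([projected, np.ones((len(projected), 1))])  # rows: [x, y, 1]
            px, *_ = np.linalg.lstsq(A, observed[:, 0], rcond=None)
            py, *_ = np.linalg.lstsq(A, observed[:, 1], rcond=None)
            return px, py

        def apply_bias(px, py, xy):
            """Apply the fitted affine correction to projected image coordinates."""
            A = np.hstack([xy, np.ones((len(xy), 1))])
            return np.column_stack([A @ px, A @ py])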

    Quantitative 3d reconstruction from scanning electron microscope images based on affine camera models

    Scanning electron microscopes (SEMs) are versatile imaging devices for the micro- and nanoscale that find application in various disciplines, such as the characterization of biological, mineral, or mechanical specimens. Even though the acquired images provide the specimen's two-dimensional (2D) properties, detailed morphological characterization requires knowledge of the three-dimensional (3D) surface structure. To overcome this limitation, a reconstruction routine is presented that allows quantitative depth reconstruction from SEM image sequences. Based on the SEM's imaging properties, which are well described by an affine camera, the proposed algorithms rely on affine epipolar geometry, self-calibration via factorization, and triangulation from dense correspondences. To achieve the highest robustness and accuracy, different sub-models of the affine camera are applied to the SEM images, and the obtained results are compared directly with confocal laser scanning microscope (CLSM) measurements to identify the ideal parametrization and underlying algorithms. To solve the rectification problem for stereo image pairs of an affine camera, so that dense matching algorithms can be applied, existing approaches are adapted and extended to further enhance the results. The evaluations of this study specify the applicability of the affine camera models to SEM images and the accuracies that can be expected from reconstruction routines based on self-calibration and dense matching algorithms.
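    The factorization step mentioned above follows the classic Tomasi-Kanade scheme for affine cameras: a centered 2F x P measurement matrix of tracked points has rank 3 and factors via SVD into motion and shape. The sketch below shows only this core decomposition under that assumption; the metric upgrade performed by self-calibration is omitted, and affine_factorization is an illustrative name.

        import numpy as np

        def affine_factorization(W):
            """Factor a 2F x P measurement matrix into affine motion M and shape S."""
            t = W.mean(axis=1, keepdims=True)      # per-view image centroids
            Wc = W - t                             # register each row to its centroid
            U, s, Vt = np.linalg.svd(Wc, full_matrices=False)
            # Enforce the rank-3 constraint implied by the affine camera model
            M = U[:, :3] * np.sqrt(s[:3])          # 2F x 3 affine cameras
            S = np.sqrt(s[:3])[:, None] * Vt[:3]   # 3 x P scene points
            return M, S, t                         # valid up to a 3x3 affine ambiguity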

    Dense Point Cloud Extraction From Oblique Imagery

    With the increasing availability of low-cost digital cameras with small or medium-sized sensors, more and more high-resolution airborne images are available, which improves the prospects for establishing three-dimensional models of urban areas. Highly accurate representation of buildings in urban areas is required for asset valuation and disaster recovery. Many automatic methods for modeling and reconstruction are applied to aerial images together with Light Detection and Ranging (LiDAR) data; if LiDAR data are not available, manual steps must be applied, which results in a semi-automated technique. The automated extraction of 3D urban models can be aided by the automatic extraction of dense point clouds: the denser the point cloud, the easier the modeling and the higher the accuracy. Oblique aerial imagery also provides more facade information, such as building height and texture, than nadir images, so a method for automatically extracting dense point clouds from oblique images is desirable.

    In this thesis, a modified workflow for the automated extraction of dense point clouds from oblique images is proposed and tested. The original workflow was established by previous research at the Rochester Institute of Technology (RIT) for point cloud extraction from nadir images. For oblique images, the first modification replaces the Scale-Invariant Feature Transform (SIFT) algorithm with the Affine Scale-Invariant Feature Transform (ASIFT) algorithm in the feature detection step. To obtain a very dense point cloud, the second modification implements the Semi-Global Matching (SGM) algorithm to compute a disparity map from a stereo image pair, which is then used to reproject pixels back into a point cloud. A noise removal step is added as the third modification. The resulting point cloud is much denser than that from the original workflow, and a very dense point cloud can be extracted from only two oblique images with slightly higher accuracy in flat areas.

    Finally, an accuracy assessment evaluates the point cloud extracted from the modified workflow. In two flat areas, subsets of points are selected from the outputs of both the original and the modified workflow, planes are fitted to each subset, and the Mean Squared Error (MSE) of the points with respect to the fitted plane is compared. The point subsets from the modified workflow have slightly lower MSEs than those from the original workflow. This suggests that a denser and more accurate point cloud can yield clearer roof borders for roof extraction and improve the prospects for 3D feature detection in 3D point cloud registration.
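    The flat-area accuracy check described above can be sketched as an orthogonal least-squares plane fit followed by the mean squared point-to-plane distance. This is an assumed reading of the evaluation (the thesis abstract does not specify the fitting method); plane_mse and the Nx3 points array are illustrative.

        import numpy as np

        def plane_mse(points):
            """Orthogonal least-squares plane fit; MSE of point-to-plane distances."""
            centroid = points.mean(axis=0)
            _, _, Vt = np.linalg.svd(points - centroid)
            normal = Vt[-1]                    # direction of least variance
            d = (points - centroid) @ normal   # signed distances to the plane
            return float(np.mean(d ** 2))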