
    A Framework for SAR-Optical Stereogrammetry over Urban Areas

    Currently, numerous remote sensing satellites provide a huge volume of diverse earth observation data. As these data differ in resolution, accuracy, coverage, and spectral imaging capability, fusion techniques are required to integrate the different properties of each sensor and produce useful information. For example, synthetic aperture radar (SAR) data can be fused with optical imagery to produce 3D information using stereogrammetric methods. The main focus of this study is to investigate the possibility of applying a stereogrammetry pipeline to very-high-resolution (VHR) SAR-optical image pairs. For this purpose, the applicability of semi-global matching is investigated in this unconventional multi-sensor setting. To support the image matching by reducing the search space and accelerating the identification of correct, reliable matches, the possibility of establishing an epipolarity constraint for VHR SAR-optical image pairs is investigated as well. In addition, it is shown that the absolute geolocation accuracy of VHR optical imagery with respect to VHR SAR imagery, such as that provided by TerraSAR-X, can be improved by a multi-sensor block adjustment formulation based on rational polynomial coefficients. Finally, the feasibility of generating point clouds with a median accuracy of about 2 m is demonstrated, confirming the potential of 3D reconstruction from SAR-optical image pairs over urban areas.
    Comment: This is the pre-acceptance version; for the final version, please go to the ISPRS Journal of Photogrammetry and Remote Sensing on ScienceDirect.
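
    As a rough illustration of the dense matching step mentioned above, the sketch below runs OpenCV's semi-global matching implementation on an epipolarly rectified image pair. The file names and matcher parameters are placeholder assumptions; the paper's multi-sensor epipolarity model and RPC-based block adjustment are not reproduced here.

```python
# Minimal sketch: semi-global matching on an epipolarly rectified image pair.
# File names and parameter values are illustrative placeholders, not the
# settings used in the paper.
import cv2
import numpy as np

# Load the rectified optical and SAR images as 8-bit grayscale.
left = cv2.imread("optical_rectified.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("sar_rectified.png", cv2.IMREAD_GRAYSCALE)

# Configure OpenCV's semi-global block matcher.
num_disparities = 128          # must be divisible by 16
block_size = 5
sgbm = cv2.StereoSGBM_create(
    minDisparity=0,
    numDisparities=num_disparities,
    blockSize=block_size,
    P1=8 * block_size ** 2,    # smoothness penalty for small disparity changes
    P2=32 * block_size ** 2,   # smoothness penalty for large disparity changes
    uniquenessRatio=10,
    speckleWindowSize=100,
    speckleRange=2,
)

# Disparities are returned as fixed-point values scaled by 16.
disparity = sgbm.compute(left, right).astype(np.float32) / 16.0
disparity[disparity < 0] = np.nan   # mark invalid matches
```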

    Linear Features in Photogrammetry

    This research addresses the task of including points as well as linear features in photogrammetric applications. Straight lines in object space can be utilized to perform aerial triangulation, while irregular linear features (natural lines) in object space can be utilized to perform single photo resection and automatic relative orientation. When working with these primitives, it is important to develop appropriate representations in image and object space; these representations must accommodate the perspective projection relating the two spaces. There are various options for representing linear features in the above applications. These options have been explored, and an optimal representation has been chosen. An aerial triangulation technique that utilizes points and straight lines for frame and linear array scanners has been implemented. For this task, the MSAT (Multi Sensor Aerial Triangulation) software, developed at the Ohio State University and supporting both frame and linear array scanners, has been extended to handle straight lines. In this research, natural lines were utilized to perform single photo resection and automatic relative orientation. In single photo resection, the problem is approached with no knowledge of the correspondence of natural lines between image space and object space. In automatic relative orientation, the problem is approached without knowledge of conjugate linear features in the overlap of the stereopair. In both cases, the matching problem and the appropriate parameters are determined using a modified generalized Hough transform. These techniques were tested using simulated and real data sets for frame imagery.
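
    The image-object relationship underlying these applications is the standard collinearity model; the short sketch below projects the two endpoints of an object-space straight line into a frame image with it. The camera parameters and coordinates are made-up values, and the two-endpoint representation shown is only one of the options discussed above, not necessarily the one chosen in the thesis.

```python
# Sketch: project the endpoints of an object-space straight line into a frame
# image via the collinearity equations. All numbers are illustrative.
import numpy as np

def rotation_matrix(omega, phi, kappa):
    """R = R_kappa @ R_phi @ R_omega (angles in radians)."""
    co, so = np.cos(omega), np.sin(omega)
    cp, sp = np.cos(phi), np.sin(phi)
    ck, sk = np.cos(kappa), np.sin(kappa)
    r_omega = np.array([[1, 0, 0], [0, co, -so], [0, so, co]])
    r_phi = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    r_kappa = np.array([[ck, -sk, 0], [sk, ck, 0], [0, 0, 1]])
    return r_kappa @ r_phi @ r_omega

def collinearity(point, camera_center, R, focal_length):
    """Image coordinates (x, y) of an object point for a frame camera."""
    d = R @ (point - camera_center)        # point in camera coordinates
    return -focal_length * d[:2] / d[2]

R = rotation_matrix(0.01, -0.02, 0.5)
camera_center = np.array([1000.0, 2000.0, 1500.0])   # exposure station (m)
focal_length = 0.153                                  # metres

# Two endpoints defining a straight line in object space.
p1 = np.array([1100.0, 2050.0, 300.0])
p2 = np.array([1250.0, 2180.0, 305.0])
print(collinearity(p1, camera_center, R, focal_length))
print(collinearity(p2, camera_center, R, focal_length))
```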

    Improving ICP with Easy Implementation for Free Form Surface Matching

    Automatic range image registration and matching is an attractive but unresolved problem in both the machine vision and pattern recognition literature. Because automatic range image registration and matching is inherently a very difficult problem, the algorithms developed for it have become more and more complicated. In this paper, we propose a novel, practical algorithm for automatic free-form surface matching. This method directly manipulates the possible point matches established by the traditional ICP criterion, based on both collinearity and closeness constraints, without any feature extraction, image pre-processing, or motion estimation from outlier-corrupted data. A comparative study based on a large number of real range images has shown the accuracy and robustness of the novel algorithm.
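
    The paper's collinearity-and-closeness filtering of the point matches is not reproduced here, but the sketch below shows the basic ICP loop it builds on: nearest-neighbour correspondences followed by an SVD-based rigid-transform update. The point sets and iteration count are placeholder assumptions.

```python
# Sketch of a basic ICP loop (nearest-neighbour correspondences + SVD-based
# rigid transform). The paper's additional match filtering is not included;
# data and iteration count are placeholders.
import numpy as np
from scipy.spatial import cKDTree

def best_rigid_transform(src, dst):
    """Least-squares rotation R and translation t mapping src onto dst."""
    src_mean, dst_mean = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_mean).T @ (dst - dst_mean)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:            # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = dst_mean - R @ src_mean
    return R, t

def icp(source, target, iterations=30):
    tree = cKDTree(target)
    current = source.copy()
    for _ in range(iterations):
        _, idx = tree.query(current)            # closest-point correspondences
        R, t = best_rigid_transform(current, target[idx])
        current = current @ R.T + t
    return current

# Toy example: register a rotated, shifted copy of a random surface patch.
rng = np.random.default_rng(0)
target = rng.random((500, 3))
angle = np.deg2rad(5.0)
Rz = np.array([[np.cos(angle), -np.sin(angle), 0],
               [np.sin(angle),  np.cos(angle), 0],
               [0, 0, 1]])
source = target @ Rz.T + np.array([0.05, -0.02, 0.01])
aligned = icp(source, target)
```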

    Disparity map generation based on trapezoidal camera architecture for multiview video

    Visual content acquisition is a strategic functional block of any visual system. Despite its wide possibilities, the arrangement of cameras for the acquisition of good-quality visual content for use in multi-view video remains a major challenge. This paper presents the mathematical description of the trapezoidal camera architecture and the relationships that facilitate the determination of camera positions for visual content acquisition and depth map generation in multi-view video. The strength of the trapezoidal camera architecture is that it allows an adaptive camera topology in which points within the scene, especially occluded ones, can be optically and geometrically viewed from several different viewpoints either on the edges of the trapezoid or inside it. The concept of a maximum independent set, the characteristics of the trapezoid, and the fact that the camera positions (with the exception of a few) differ in their vertical coordinate can be used to address occlusion, which continues to be a major problem in computer vision with regard to depth map generation.
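
    The paper's exact geometric relationships are not given in this abstract, so the sketch below only illustrates the general idea of distributing cameras along the edges of a trapezoid (plus one inside it) so that the viewpoints differ in their vertical coordinate. The vertex coordinates, camera counts, and spacing are entirely made-up assumptions.

```python
# Illustrative sketch only: distribute camera positions along the edges of a
# trapezoid (and one inside it). Vertices, counts, and spacing are made-up;
# the paper's actual relationships are not reproduced.
import numpy as np

# Trapezoid vertices in the plane (x, y), counter-clockwise.
vertices = np.array([[0.0, 0.0],    # bottom-left
                     [8.0, 0.0],    # bottom-right
                     [6.0, 3.0],    # top-right
                     [2.0, 3.0]])   # top-left

def cameras_on_edge(a, b, count):
    """Evenly spaced camera positions on the edge from vertex a to vertex b."""
    t = np.linspace(0.0, 1.0, count)
    return a + t[:, None] * (b - a)

positions = np.vstack([
    cameras_on_edge(vertices[0], vertices[1], 4),   # long parallel edge
    cameras_on_edge(vertices[3], vertices[2], 3),   # short parallel edge
    vertices.mean(axis=0, keepdims=True),           # one camera inside
])

# Cameras on the two parallel edges differ in their vertical (y) coordinate,
# giving several distinct viewpoints of a scene point and helping to observe
# regions occluded from any single baseline.
print(positions)
```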

    A Stereo Vision Framework for 3-D Underwater Mosaicking


    Robust Temporally Coherent Laplacian Protrusion Segmentation of 3D Articulated Bodies

    In motion analysis and understanding, it is important to be able to fit a suitable model or structure to the temporal series of observed data, in order to describe motion patterns in a compact way and to discriminate between them. In an unsupervised context, i.e., when no prior model of the moving object(s) is available, such a structure has to be learned from the data in a bottom-up fashion. In recent times, volumetric approaches, in which the motion is captured from a number of cameras and a voxel-set representation of the body is built from the camera views, have gained ground due to attractive features such as inherent view-invariance and robustness to occlusions. Automatic, unsupervised segmentation of moving bodies along entire sequences, in a temporally coherent and robust way, has the potential to provide a means of constructing a bottom-up model of the moving body and of tracking motion cues that may later be exploited for motion classification. Spectral methods such as locally linear embedding (LLE) can be useful in this context, as they preserve the "protrusions" of articulated shapes, i.e., high-curvature regions of the 3D volume, while improving their separation in a lower-dimensional space, making them easier to cluster. In this paper we therefore propose a spectral approach to unsupervised and temporally coherent body-protrusion segmentation along time sequences. Volumetric shapes are clustered in an embedding space, clusters are propagated in time to ensure coherence, and clusters are merged or split to accommodate changes in the body's topology. Experiments on both synthetic and real sequences of dense voxel-set data support the ability of the proposed method to cluster body parts consistently over time in a totally unsupervised fashion, its robustness to sampling density and shape quality, and its potential for bottom-up model construction.
    Comment: 31 pages, 26 figures.
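
    As a minimal sketch of the embed-then-cluster idea, the code below applies scikit-learn's locally linear embedding to the occupied voxel coordinates of a single synthetic frame and then clusters the embedded points with k-means. The temporal propagation and cluster split/merge logic described above are not included, and the toy voxel data and parameter values are placeholder assumptions.

```python
# Sketch: embed the occupied voxels of one frame with LLE, then cluster the
# embedded points. Temporal propagation and split/merge handling are omitted;
# the voxel set and all parameters are placeholders.
import numpy as np
from sklearn.manifold import LocallyLinearEmbedding
from sklearn.cluster import KMeans

# Toy voxel set: a central blob with two protruding "limbs".
rng = np.random.default_rng(1)
torso = rng.normal([0.0, 0.0, 0.0], 0.5, size=(400, 3))
limb_a = rng.normal([2.0, 0.0, 0.0], 0.2, size=(150, 3))
limb_b = rng.normal([0.0, 2.0, 0.0], 0.2, size=(150, 3))
voxels = np.vstack([torso, limb_a, limb_b])

# LLE stretches out high-curvature protrusions, improving their separation
# in the lower-dimensional embedding space.
embedding = LocallyLinearEmbedding(n_neighbors=12, n_components=2)
embedded = embedding.fit_transform(voxels)

# Cluster the embedded points into body parts.
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(embedded)
print(np.bincount(labels))
```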

    Reliability in Constrained Gauss-Markov Models: An Analytical and Differential Approach with Applications in Photogrammetry

    This report was prepared by Jackson Cothren, a graduate research associate in the Department of Civil and Environmental Engineering and Geodetic Science at the Ohio State University, under the supervision of Professor Burkhard Schaffrin. The report was also submitted to the Graduate School of the Ohio State University as a dissertation in partial fulfillment of the requirements for the Ph.D. degree.

    Reliability analysis explains the contribution of each observation in an estimation model to the overall redundancy of the model, taking into account the geometry of the network as well as the precision of the observations themselves. It is principally used to design networks resistant to outliers in the observations by making the outliers more detectable with standard statistical tests. It has been studied extensively, and principally, in Gauss-Markov models. We show how the same analysis may be extended to various constrained Gauss-Markov models and present preliminary work for its use in unconstrained Gauss-Helmert models. In particular, we analyze the reliability matrix of the constrained model to separate the contribution of the constraints to the redundancy of the observations from that of the observations themselves. In addition, we make extensive use of matrix differential calculus to find the Jacobian of the reliability matrix with respect to the parameters that define the network through both the original design and constraint matrices. The resulting Jacobian matrix reveals the sensitivity of the reliability matrix elements, highlighting weak areas in the network where changes in observations may result in unreliable observations. We apply the analytical framework to photogrammetric networks in which exterior orientation parameters are directly observed by GPS/INS systems. Tie-point observations provide some redundancy, and even a few collinear tie-point and tie-point distance constraints improve the reliability of these direct observations by as much as 33%. Using the same theory, we compare networks in which tie-points are observed on multiple images (n-fold points) with networks in which tie-points are observed in photo pairs only (two-fold points). The use of two-fold tie-points apparently does not significantly degrade the reliability of the direct exterior orientation observations, and coplanarity constraints added to the common two-fold points do not add significantly to that reliability either. The differential calculus results may also be used to provide a new measure of redundancy-number stability in networks. We show that a typical photogrammetric network with n-fold tie-points was less stable with respect to at least some tie-point movement than an equivalent network with the n-fold tie-points decomposed into many two-fold tie-points.
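
    For reference, in the standard (unconstrained) Gauss-Markov model the redundancy numbers are the diagonal entries of R = I - A (A^T P A)^{-1} A^T P; the sketch below computes them for a made-up design matrix and weight matrix. The constrained-model analysis and the Jacobian of the reliability matrix developed in the report are not reproduced here.

```python
# Sketch: redundancy numbers in an (unconstrained) Gauss-Markov model,
# r_i = diag(I - A (A^T P A)^{-1} A^T P). The design and weight matrices are
# made-up; the constrained-model extension in the report is not reproduced.
import numpy as np

rng = np.random.default_rng(2)
n_obs, n_params = 12, 4
A = rng.normal(size=(n_obs, n_params))      # design matrix
P = np.diag(rng.uniform(0.5, 2.0, n_obs))   # observation weight matrix

N = A.T @ P @ A                              # normal-equation matrix
R = np.eye(n_obs) - A @ np.linalg.solve(N, A.T @ P)

redundancy_numbers = np.diag(R)
print(redundancy_numbers)
print("total redundancy:", redundancy_numbers.sum())   # equals n_obs - n_params
```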

    Disambiguating Multi–Modal Scene Representations Using Perceptual Grouping Constraints

    In its early stages, the visual system suffers from considerable ambiguity and noise that severely limit the performance of early vision algorithms. This article presents feedback mechanisms between early visual processes, such as perceptual grouping, stereopsis and depth reconstruction, that allow the system to reduce this ambiguity and improve the early representation of visual information. In the first part, the article proposes a local perceptual grouping algorithm that, in addition to commonly used geometric information, makes use of a novel multi–modal measure between local edge/line features. The grouping information is then used to 1) disambiguate stereopsis by enforcing that stereo matches preserve groups, and 2) correct the reconstruction error due to image pixel sampling using a linear interpolation over the groups. The integration of mutual feedback between early vision processes is shown to considerably reduce ambiguity and noise without the need for global constraints.
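
    The sketch below illustrates only the group-preservation idea in a much simplified form: among candidate disparities for edge features that belong to one perceptual group, keep for each feature the candidate closest to the group's consensus. The candidate values and threshold are placeholder assumptions and do not reproduce the article's multi-modal grouping measure.

```python
# Simplified sketch of group-preserving disambiguation: features in the same
# perceptual group should receive mutually consistent disparities. Candidate
# values and the threshold are placeholders.
import numpy as np

# Candidate disparities per edge feature (rows); all features belong to the
# same perceptual group.
candidates = np.array([
    [12.1, 25.3, 11.8],
    [11.9, 30.2,  5.4],
    [40.0, 12.3, 12.0],
    [11.7, 11.9, 50.1],
])

# Group consensus: median over all candidate disparities in the group.
consensus = np.median(candidates)

# For each feature, keep the candidate closest to the consensus; reject the
# feature if even its best candidate deviates too much (ambiguous match).
threshold = 3.0
best = candidates[np.arange(len(candidates)),
                  np.abs(candidates - consensus).argmin(axis=1)]
selected = np.where(np.abs(best - consensus) <= threshold, best, np.nan)
print(selected)
```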