    A Robust Quasi-dense Matching Approach for Underwater Images

    While techniques for finding dense correspondences in images taken in air have achieved significant success, applying them to underwater imagery remains a serious challenge, especially in the case of “monocular stereo”, where the images constituting a stereo pair are acquired asynchronously. This is largely due to the poor image quality inherent to aquatic environments (blurriness, range-dependent brightness and color variations, time-varying water column disturbances, etc.). The goal of this research is to develop a technique that yields the maximal number of successful matches (conjugate points) in two overlapping images. We propose a quasi-dense matching approach that works reliably for underwater imagery. The approach starts with a sparse set of highly robust matches (seeds) and expands pair-wise matches into their neighborhoods. Adaptive Least Squares Matching (ALSM) is used during the search process to establish new matches, increasing the robustness of the solution and avoiding mismatches. Experiments on a typical underwater image dataset demonstrate promising results.
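
    The seed-and-expand strategy summarized in this abstract can be sketched as a best-first match propagation. The following is a minimal illustration only, assuming same-size grayscale NumPy images, a plain ZNCC similarity, and 8-neighbourhood propagation with a fixed correlation threshold; the ALSM refinement used in the paper is not reproduced here.

```python
# Illustrative sketch of seed-and-expand quasi-dense matching via best-first
# propagation. Window radius, threshold and ZNCC scoring are assumptions; a
# full implementation would also search a small disparity window in image 2
# and refine each candidate with Adaptive Least Squares Matching (ALSM).
import heapq
import numpy as np

def zncc(a, b):
    """Zero-mean normalised cross-correlation of two equal-size patches."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum()) + 1e-12
    return float((a * b).sum() / denom)

def patch(img, x, y, r):
    return img[y - r:y + r + 1, x - r:x + r + 1].astype(np.float64)

def propagate(img1, img2, seeds, r=3, t_corr=0.8):
    """Expand sparse seeds (x1, y1, x2, y2), at least r px from borders,
    into a quasi-dense set of matches with scores."""
    h, w = img1.shape
    matched1 = np.zeros(img1.shape, dtype=bool)
    matched2 = np.zeros(img2.shape, dtype=bool)
    heap, result = [], []
    for (x1, y1, x2, y2) in seeds:
        c = zncc(patch(img1, x1, y1, r), patch(img2, x2, y2, r))
        heapq.heappush(heap, (-c, x1, y1, x2, y2))      # best score first
    while heap:
        c, x1, y1, x2, y2 = heapq.heappop(heap)
        if matched1[y1, x1] or matched2[y2, x2]:
            continue
        matched1[y1, x1], matched2[y2, x2] = True, True
        result.append((x1, y1, x2, y2, -c))
        # examine the 8-neighbourhood of the accepted match
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                u1, v1, u2, v2 = x1 + dx, y1 + dy, x2 + dx, y2 + dy
                if not (r <= u1 < w - r and r <= v1 < h - r and
                        r <= u2 < w - r and r <= v2 < h - r):
                    continue
                if matched1[v1, u1] or matched2[v2, u2]:
                    continue
                score = zncc(patch(img1, u1, v1, r), patch(img2, u2, v2, r))
                if score >= t_corr:
                    heapq.heappush(heap, (-score, u1, v1, u2, v2))
    return result
```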

    Semi-automated geomorphological mapping applied to landslide hazard analysis

    Computer-assisted three-dimensional (3D) mapping using stereo and multi-image (“softcopy”) photogrammetry is shown to enhance the visual interpretation of geomorphology in steep terrain with the direct benefit of greater locational accuracy than traditional manual mapping. This would benefit multi-parameter correlations between terrain attributes and landslide distribution in both direct and indirect forms of landslide hazard assessment. Case studies involve synthetic models of a landslide, and field studies of a rock slope and steep undeveloped hillsides with both recently formed and partly degraded, old landslide scars. Diagnostic 3D morphology was generated semi-automatically both using a terrain-following cursor under stereo-viewing and from high resolution digital elevation models created using area-based image correlation, further processed with curvature algorithms. Laboratory-based studies quantify limitations of area-based image correlation for measurement of 3D points on planar surfaces with varying camera orientations. The accuracy of point measurement is shown to be non-linear with limiting conditions created by both narrow and wide camera angles and moderate obliquity of the target plane. Analysis of the results with the planar surface highlighted problems with the controlling parameters of the area-based image correlation process when used for generating DEMs from images obtained with a low-cost digital camera. Although the specific cause of the phase-wrapped image artefacts identified was not found, the procedure would form a suitable method for testing image correlation software, as these artefacts may not be obvious in DEMs of non-planar surfaces. Modelling of synthetic landslides shows that Fast Fourier Transforms are an efficient method for removing noise, as produced by errors in measurement of individual DEM points, enabling diagnostic morphological terrain elements to be extracted. Component landforms within landslides are complex entities and conversion of the automatically-defined morphology into geomorphology was only achieved with manual interpretation; however, this interpretation was facilitated by softcopy-driven stereo viewing of the morphological entities across the hillsides. In the final case study of a large landslide within a man-made slope, landslide displacements were measured using a photogrammetric model consisting of 79 images captured with a helicopter-borne, hand-held, small format digital camera. Displacement vectors and a thematic geomorphological map were superimposed over an animated, 3D photo-textured model to aid non-stereo visualisation and communication of results.
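
    As context for the FFT-based noise removal mentioned above, the following is a minimal sketch of frequency-domain smoothing of a gridded DEM with NumPy; the Gaussian low-pass cutoff is an assumed parameter, not a value taken from the study.

```python
# Minimal sketch of frequency-domain smoothing of a gridded DEM, in the
# spirit of the FFT noise removal described above. The cutoff sigma (in
# cycles per grid cell) is an assumption chosen for illustration.
import numpy as np

def fft_lowpass_dem(dem, sigma=0.05):
    """Suppress high-frequency measurement noise in a 2D DEM array."""
    F = np.fft.fft2(dem)
    fy = np.fft.fftfreq(dem.shape[0])[:, None]   # cycles per cell, rows
    fx = np.fft.fftfreq(dem.shape[1])[None, :]   # cycles per cell, cols
    gauss = np.exp(-(fx ** 2 + fy ** 2) / (2.0 * sigma ** 2))
    return np.real(np.fft.ifft2(F * gauss))      # smoothed elevations
```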

    Plane extraction for indoor place recognition

    In this paper, we present an image-based plane extraction method well suited to real-time operation. Our approach exploits the assumption that the surrounding scene is mainly composed of planes oriented in known directions. Planes are detected from a single image using a voting scheme that takes the vanishing lines into account. Candidate planes are then validated and merged with a region-growing approach to detect, in real time, the planes inside an unknown indoor environment. Using the related plane homographies, it is possible to remove the perspective distortion, enabling standard place recognition algorithms to work in a viewpoint-invariant setup. Quantitative experiments on real-world images show the effectiveness of our approach compared with a very popular method.
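
    To illustrate the homography-based removal of perspective distortion described above, here is a minimal OpenCV sketch; the four corner points, output size, and function name are illustrative placeholders, and the paper's vanishing-line voting and region-growing steps are not reproduced.

```python
# Minimal sketch: warp a detected planar region to a fronto-parallel view
# using a plane homography (OpenCV). Corner ordering and output size are
# assumptions made for illustration.
import cv2
import numpy as np

def rectify_plane(image, plane_corners, out_w=256, out_h=256):
    """Warp the quadrilateral `plane_corners` (TL, TR, BR, BL) to a rectangle."""
    src = np.asarray(plane_corners, dtype=np.float32)          # 4 x 2 points
    dst = np.float32([[0, 0], [out_w, 0], [out_w, out_h], [0, out_h]])
    H = cv2.getPerspectiveTransform(src, dst)                  # 3x3 homography
    return cv2.warpPerspective(image, H, (out_w, out_h))       # rectified patch
```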

    Image-based metric heritage modeling in the near-infrared spectrum

    Digital photogrammetry and spectral imaging are widely used in heritage sciences for the comprehensive recording, understanding, and protection of historical artifacts and artworks. The availability of consumer-grade modified cameras for spectral acquisition, as an alternative to expensive multispectral sensors and multi-sensor apparatuses, along with semi-automatic software implementations of Structure-from-Motion (SfM) and Multiple-View-Stereo (MVS) algorithms, has made the combination of these techniques more feasible than ever. In the research presented here, the authors assess image-based modeling from near-infrared (NIR) imagery acquired with modified consumer-grade cameras, with applications to tangible heritage. Three-dimensional (3D) meshes, textured with the non-visible data, are produced and evaluated. Specifically, metric evaluations are conducted through extensive comparisons with models produced by image-based modeling from visible (VIS) imagery and by structured light scanning, to check the accuracy of the results. Furthermore, the authors observe and discuss how the implemented NIR modeling approach affects the surface of the reconstructed models and may counteract specific problems arising from lighting conditions during VIS acquisition. The radiometric properties of the produced results are evaluated, in comparison with the respective results in the visible spectrum, on their capacity to enhance observation towards characterizing the surface and under-surface state of preservation and, consequently, to support conservation interventions.
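
    One common way such metric comparisons between 3D models are carried out is via nearest-neighbour (cloud-to-cloud) distances between a test point cloud and a reference cloud; the short sketch below illustrates that generic approach with SciPy and does not reproduce the specific evaluation software or settings used in this study.

```python
# Hedged sketch of a generic cloud-to-cloud accuracy check: per-point
# nearest-neighbour distances from a test cloud (e.g. an NIR-derived model)
# to a reference cloud (e.g. a structured light scan), both as N x 3 arrays.
import numpy as np
from scipy.spatial import cKDTree

def cloud_to_cloud_stats(test_pts, ref_pts):
    """Return mean, RMS and maximum per-point distance (same units as input)."""
    d, _ = cKDTree(ref_pts).query(test_pts)
    return d.mean(), np.sqrt((d ** 2).mean()), d.max()
```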

    From deposit to point cloud: a study of low-cost computer vision approaches for the straightforward documentation of archaeological excavations

    Stratigraphic archaeological excavations demand high-resolution documentation techniques for 3D recording. Today, this is typically accomplished using total stations or terrestrial laser scanners. This paper demonstrates the potential of another technique that is low-cost and easy to execute. It takes advantage of software using Structure from Motion (SfM) algorithms, which are known for their ability to reconstruct camera pose and three-dimensional scene geometry (rendered as a sparse point cloud) from a series of overlapping photographs captured by a camera moving around the scene. When complemented by stereo matching algorithms, detailed 3D surface models can be built from such relatively oriented photo collections in a fully automated way. The absolute orientation of the model can be derived by the manual measurement of control points. The approach is extremely flexible and appropriate for a wide variety of imagery, because this computer vision approach can also work with imagery resulting from a randomly moving camera (i.e. uncontrolled conditions), and calibrated optics are not a prerequisite. For a few years now, these algorithms have been embedded in several free and low-cost software packages. This paper outlines how such a program can be applied to map archaeological excavations in a fast and uncomplicated way, using imagery shot with a standard compact digital camera (even if the images were not taken for this purpose). Archived data from previous excavations of VIAS-University of Vienna was chosen, and the derived digital surface models and orthophotos were examined for their usefulness for archaeological applications. The absolute georeferencing of the resulting surface models was performed with the manual identification of fourteen control points. In order to express the positional accuracy of the generated 3D surface models, the NSSDA guidelines were applied. Simultaneously acquired terrestrial laser scanning data, processed in our standard workflow, was used to independently check the results. The vertical accuracy of the surface models generated by SfM was found to be within 0.04 m at the 95% confidence level, whereas several visual assessments indicated a very high horizontal positional accuracy as well.
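
    For reference, the NSSDA vertical accuracy statistic referred to above is the root-mean-square error of the check-point elevation differences multiplied by 1.9600 (assuming normally distributed errors). A minimal sketch follows, with placeholder inputs rather than data from the study.

```python
# Minimal sketch of the NSSDA vertical accuracy statistic:
# accuracy_z (95 % confidence) = 1.9600 * RMSE_z.
import numpy as np

def nssda_vertical_accuracy(z_model, z_check):
    """Both inputs are elevations (metres) at the same check points."""
    rmse_z = np.sqrt(np.mean((np.asarray(z_model) - np.asarray(z_check)) ** 2))
    return 1.9600 * rmse_z

# e.g. nssda_vertical_accuracy(dsm_heights_at_checkpoints, laser_scan_heights)
# (the argument names are hypothetical placeholders)
```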

    Landscapes of polyphase glaciation: eastern Hellas Planitia, Mars

    Get PDF
    The mid-latitudes of Mars host numerous ice-related landforms that bear many similarities to terrestrial ice masses. This collection of landforms, termed viscous flow features (VFFs), is composed primarily of H₂O ice and shows evidence of viscous deformation. Recent work has hypothesised that VFFs are the diminishing remains of once larger ice masses, formed during one or more previous ice ages, and that the landscape therefore records evidence of polyphase glaciation. However, debate persists concerning the former extent and volume of ice, and the style of former glaciations. The accompanying map (1:100,000 scale) presents a geomorphic and structural assessment of a glacial landscape in eastern Hellas Planitia, Mars. Here, we present a description of the features identified, comprising four geomorphic units (plains, lobate debris apron, degraded glacial material, and glacier-like form) and 16 structures (craters, moraine-like ridges, flow unit boundaries, arcuate transverse structures, longitudinal surface structures, ring-mold craters, terraces, medial moraine-like ridges, raised textured areas, flow-parallel and flow-transverse lineations, crevasses and crevasse traces, and ridge clusters).

    Semantically Informed Multiview Surface Refinement

    We present a method to jointly refine the geometry and semantic segmentation of 3D surface meshes. Our method alternates between updating the shape and the semantic labels. In the geometry refinement step, the mesh is deformed with variational energy minimization, such that it simultaneously maximizes photo-consistency and the compatibility of the semantic segmentations across a set of calibrated images. Label-specific shape priors account for interactions between the geometry and the semantic labels in 3D. In the semantic segmentation step, the labels on the mesh are updated with MRF inference, such that they are compatible with the semantic segmentations in the input images. Also, this step includes prior assumptions about the surface shape of different semantic classes. The priors induce a tight coupling, where semantic information influences the shape update and vice versa. Specifically, we introduce priors that favor (i) adaptive smoothing, depending on the class label; (ii) straightness of class boundaries; and (iii) semantic labels that are consistent with the surface orientation. The novel mesh-based reconstruction is evaluated in a series of experiments with real and synthetic data. We compare both to state-of-the-art, voxel-based semantic 3D reconstruction, and to purely geometric mesh refinement, and demonstrate that the proposed scheme yields improved 3D geometry as well as an improved semantic segmentation
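
    The alternating scheme described in this abstract can be summarised, at a purely schematic level, as the loop below; the energy gradient, label-inference routine, step size, and iteration counts are placeholders supplied by the caller, and the paper's specific variational energy, label-specific priors, and MRF model are not reproduced.

```python
# Schematic sketch of alternating geometry refinement and semantic labelling.
# `energy_grad` stands in for the gradient of a combined photo-consistency +
# semantic-compatibility energy, and `infer_labels` for discrete (MRF-style)
# inference on the updated mesh; both are assumptions of this sketch.
import numpy as np

def alternate_refine(vertices, labels, energy_grad, infer_labels,
                     n_outer=10, n_geom=50, step=0.05):
    """vertices: (V, 3) array; labels: (F,) per-face class ids."""
    for _ in range(n_outer):
        # 1) geometry step: gradient descent on E(vertices; labels), where the
        #    energy couples photo-consistency with label-dependent shape priors
        for _ in range(n_geom):
            vertices = vertices - step * energy_grad(vertices, labels)
        # 2) label step: re-run discrete inference given the refined geometry,
        #    keeping labels compatible with the 2D segmentations and the
        #    surface orientation
        labels = infer_labels(vertices, labels)
    return vertices, labels
```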

    Three-Dimensional Thermal Mapping from IRT Images for Rapid Architectural Heritage NDT

    Thermal infrared imaging is fundamental to the non-destructive diagnostics of architectural heritage. However, the low spatial resolution of thermal sensors allows capturing only very localized phenomena. At the same time, thermal images are commonly collected without geometric reference, meaning that no measurements can be performed on them. These issues have occasionally been addressed with approaches integrating multi-sensor instrumentation, at the cost of high expense and long computational times. The presented work tackles these problems by proposing a workflow for cost-effective three-dimensional thermographic modeling using a thermal camera and a consumer-grade RGB camera. The approach exploits the RGB-spectrum images captured with the optical sensor of the thermal camera, together with image-based multi-view stereo techniques, to reconstruct the geometry of architectural features. The thermal and optical sensors are calibrated using custom-made, low-cost targets. Subsequently, the geometric transformations needed to place the undistorted thermal infrared and optical images in the photogrammetric scene are calculated, and the models are mapped with thermal texture. The method's metric accuracy is evaluated through comparisons with different sensors, and its efficiency by assessing how the results assist the interpretation of the thermal phenomena present. The conducted application demonstrates the metric and radiometric performance of the proposed approach, its straightforward implementability for thermographic surveys, and its usefulness for cost-effective historical building assessments.
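
    As one common way to realise the kind of thermal-to-optical registration described above, the sketch below undistorts the thermal image with its calibration and warps it into the optical image frame via a homography estimated from matched target points (OpenCV). The variable names are assumptions, and a single homography is only exact for planar targets or nearly coincident optical centres; the paper's own transformation model is not reproduced here.

```python
# Hedged sketch: bring an undistorted thermal image into the geometry of the
# corresponding optical image using matched calibration-target points.
import cv2
import numpy as np

def thermal_to_optical(thermal_img, K_t, dist_t, pts_thermal, pts_optical,
                       optical_size):
    """K_t, dist_t: thermal intrinsics/distortion; pts_thermal are target
    points measured in the undistorted thermal image; optical_size is
    (width, height) of the optical image."""
    undist = cv2.undistort(thermal_img, K_t, dist_t)
    H, _ = cv2.findHomography(np.float32(pts_thermal),
                              np.float32(pts_optical), cv2.RANSAC)
    return cv2.warpPerspective(undist, H, optical_size)   # thermal in RGB frame
```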