
    MAMUD : contribution of HR satellite imagery to a better monitoring, modeling and understanding of urban dynamics

    This treatise introduces the methodology and results of semi-automatic city DSM extraction from an Ikonos triplet. Built-up areas are known to be complex for photogrammetric purposes, partly because of the steep changes in elevation caused by buildings and other urban features. To make DSM extraction more robust and to cope with the specific problems of height displacement, concealed areas and shadow, a multi-image approach is followed. For the VHR tri-stereoscopic study, an area extending from the centre of Istanbul to the urban fringe was chosen. Research will concentrate, in a first phase, on the development of methods to optimize the extraction of photogrammetric products from the bundled Ikonos triplet. Optimal methods need to be found to improve the radiometry and geometry of the imagery, the semi-automatic derivation of DSMs, and the post-processing of the products. Secondly, we will also investigate the possibilities of creating stereo models out of images from the same sensor taken on different dates, e.g. one image of the stereo pair combined with the third image. Finally, the photogrammetric products derived from the Ikonos stereo pair, as well as the products created from the triplet and the constructed stereo models, will be evaluated by comparison with a 3D reference. This evaluation should show the increase in accuracy when multi-imagery is used instead of stereo pairs.

    A Framework for SAR-Optical Stereogrammetry over Urban Areas

    Currently, numerous remote sensing satellites provide a huge volume of diverse earth observation data. As these data differ in resolution, accuracy, coverage, and spectral imaging ability, fusion techniques are required to integrate the different properties of each sensor and produce useful information. For example, synthetic aperture radar (SAR) data can be fused with optical imagery to produce 3D information using stereogrammetric methods. The main focus of this study is to investigate the possibility of applying a stereogrammetry pipeline to very-high-resolution (VHR) SAR-optical image pairs. For this purpose, the applicability of semi-global matching is investigated in this unconventional multi-sensor setting. To support the image matching by reducing the search space and accelerating the identification of correct, reliable matches, the possibility of establishing an epipolarity constraint for VHR SAR-optical image pairs is investigated as well. In addition, it is shown that the absolute geolocation accuracy of VHR optical imagery with respect to VHR SAR imagery, such as that provided by TerraSAR-X, can be improved by a multi-sensor block adjustment formulation based on rational polynomial coefficients. Finally, the feasibility of generating point clouds with a median accuracy of about 2 m is demonstrated, confirming the potential of 3D reconstruction from SAR-optical image pairs over urban areas. (This is the pre-acceptance version; for the final version, see the ISPRS Journal of Photogrammetry and Remote Sensing on ScienceDirect.)
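The block adjustment above is formulated on rational polynomial coefficients (RPCs), which map ground coordinates to image coordinates as ratios of polynomials. A minimal sketch of evaluating such a model, with a truncated polynomial and hypothetical coefficient values (a real RPC uses 20-term cubic polynomials per numerator and denominator):

```python
import numpy as np

def rpc_sample(P, L, H, num, den):
    """Evaluate one image coordinate as a ratio of two polynomials in the
    normalized ground coordinates (P = lat, L = lon, H = height).
    Truncated to four terms for brevity."""
    terms = np.array([1.0, L, P, H])
    return terms @ num / (terms @ den)

# Hypothetical coefficients: a near-identity mapping plus a small bias.
num = np.array([0.01, 1.0, 0.0, 0.0])
den = np.array([1.0, 0.0, 0.0, 0.0])
row = rpc_sample(0.2, 0.5, 0.1, num, den)  # -> 0.51 (normalized row)
```

In a block adjustment, small corrections to such models are estimated jointly for all images so that conjugate points project consistently.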

    Semi-automated geomorphological mapping applied to landslide hazard analysis

    Computer-assisted three-dimensional (3D) mapping using stereo and multi-image (“softcopy”) photogrammetry is shown to enhance the visual interpretation of geomorphology in steep terrain, with the direct benefit of greater locational accuracy than traditional manual mapping. This would benefit multi-parameter correlations between terrain attributes and landslide distribution in both direct and indirect forms of landslide hazard assessment. Case studies involve synthetic models of a landslide, and field studies of a rock slope and of steep undeveloped hillsides with both recently formed and partly degraded, old landslide scars. Diagnostic 3D morphology was generated semi-automatically, both using a terrain-following cursor under stereo viewing and from high-resolution digital elevation models (DEMs) created using area-based image correlation and further processed with curvature algorithms. Laboratory-based studies quantify the limitations of area-based image correlation for the measurement of 3D points on planar surfaces with varying camera orientations. The accuracy of point measurement is shown to be non-linear, with limiting conditions created by both narrow and wide camera angles and by moderate obliquity of the target plane. Analysis of the results for the planar surface highlighted problems with the controlling parameters of the area-based image correlation process when used for generating DEMs from images obtained with a low-cost digital camera. Although the specific cause of the phase-wrapped image artefacts identified was not found, the procedure would form a suitable method for testing image correlation software, as these artefacts may not be obvious in DEMs of non-planar surfaces. Modelling of synthetic landslides shows that Fast Fourier Transforms are an efficient method for removing noise, as produced by errors in the measurement of individual DEM points, enabling diagnostic morphological terrain elements to be extracted.
Component landforms within landslides are complex entities, and conversion of the automatically defined morphology into geomorphology was only achieved with manual interpretation; however, this interpretation was facilitated by softcopy-driven stereo viewing of the morphological entities across the hillsides. In the final case study, of a large landslide within a man-made slope, landslide displacements were measured using a photogrammetric model consisting of 79 images captured with a helicopter-borne, hand-held, small-format digital camera. Displacement vectors and a thematic geomorphological map were superimposed over an animated, 3D photo-textured model to aid non-stereo visualisation and communication of results.
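The FFT-based noise removal described above can be illustrated on synthetic data: low-pass filtering a noisy DEM in the frequency domain suppresses high-frequency measurement noise while preserving the broad terrain elements needed for morphological extraction. All values here are synthetic, and the cutoff frequency is an arbitrary illustrative choice:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 64
x = np.linspace(0, 2 * np.pi, n)
terrain = np.outer(np.sin(x), np.cos(x))           # smooth synthetic surface
dem = terrain + 0.3 * rng.standard_normal((n, n))  # add point-measurement noise

spectrum = np.fft.fft2(dem)
fy = np.fft.fftfreq(n)[:, None]                    # cycles per sample
fx = np.fft.fftfreq(n)[None, :]
keep = np.hypot(fx, fy) < 0.05                     # retain low frequencies only
smoothed = np.fft.ifft2(spectrum * keep).real

# The filtered DEM is much closer to the underlying terrain than the noisy one.
err_noisy = np.abs(dem - terrain).mean()
err_smooth = np.abs(smoothed - terrain).mean()
```

In practice the cutoff must be chosen so that genuine short-wavelength landforms are not removed along with the noise.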

    Trying to break new ground in aerial archaeology

    Aerial reconnaissance continues to be a vital tool for landscape-oriented archaeological research. Although a variety of remote sensing platforms operate within the earth’s atmosphere, the majority of aerial archaeological information is still derived from oblique photographs collected during observer-directed reconnaissance flights, a prospection approach which has dominated archaeological aerial survey for the past century. The resulting highly biased imagery is generally catalogued in sub-optimal (spatial) databases, if at all, after which a small selection of images is orthorectified and interpreted. For decades, this has been the standard approach. Although many innovations, including digital cameras, inertial units, photogrammetry and computer vision algorithms, geographic(al) information systems and computing power have emerged, their potential has not yet been fully exploited in order to re-invent and highly optimise this crucial branch of landscape archaeology. The authors argue that a fundamental change is needed to transform the way aerial archaeologists approach data acquisition and image processing. By addressing the very core concepts of geographically biased aerial archaeological photographs and proposing new imaging technologies, data handling methods and processing procedures, this paper gives a personal opinion on how the methodological components of aerial archaeology, and specifically aerial archaeological photography, should evolve during the next decade if developing a more reliable record of our past is to be our central aim. In this paper, a possible practical solution is illustrated by outlining a turnkey aerial prospection system for total coverage survey together with a semi-automated back-end pipeline that takes care of photograph correction and image enhancement as well as the management and interpretative mapping of the resulting data products. 
In this way, the proposed system addresses one of many bias issues in archaeological research: the bias we impart to the visual record as a result of selective coverage. While the total coverage approach outlined here may not altogether eliminate survey bias, it can vastly increase the amount of useful information captured during a single reconnaissance flight while mitigating the discriminating effects of observer-based, on-the-fly target selection. Furthermore, the information contained in this paper should make it clear that with current technology it is feasible to do so. This can radically alter the basis for aerial prospection and move landscape archaeology forward, beyond the inherently biased patterns that are currently created by airborne archaeological prospection.

    Refined Equivalent Pinhole Model for Large-scale 3D Reconstruction from Spaceborne CCD Imagery

    In this study, we present a large-scale earth surface reconstruction pipeline for linear-array charge-coupled device (CCD) satellite imagery. While mainstream satellite image-based reconstruction approaches perform exceptionally well, the rational functional model (RFM) is subject to several limitations. For example, the RFM has no rigorous physical interpretation and differs significantly from the pinhole imaging model; hence, it cannot be directly applied to learning-based 3D reconstruction networks or to more novel reconstruction pipelines in computer vision. In this study, we therefore introduce a method that makes the RFM equivalent to the pinhole camera model (PCM), meaning that the internal and external parameters of a pinhole camera are used instead of the rational polynomial coefficient parameters. We then derive, for the first time, an error formula for this equivalent pinhole model, demonstrating the influence of image size on reconstruction accuracy. In addition, we propose a polynomial image refinement model that minimizes the equivalent errors via the least squares method. The experiments were conducted using four image datasets: WHU-TLC, DFC2019, ISPRS-ZY3, and GF7. The results demonstrated that the reconstruction accuracy was proportional to the image size. Our polynomial image refinement model significantly enhanced the accuracy and completeness of the reconstruction, and achieved more significant improvements for larger-scale images. (24 pages.)
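The refinement step can be sketched as an ordinary least-squares fit: systematic residuals between the RFM projection and its equivalent pinhole projection are modelled by a low-order polynomial in image coordinates and subtracted. The residual surface below is synthetic, and the affine correction is an illustrative simplification of the paper's polynomial model:

```python
import numpy as np

rng = np.random.default_rng(1)
u = rng.uniform(0, 1000, 200)                 # sampled image columns (px)
v = rng.uniform(0, 1000, 200)                 # sampled image rows (px)
resid = 1e-3 * u + 5e-4 * v + 0.2             # synthetic systematic error (px)

# Fit an affine correction model resid ~ c0 + c1*u + c2*v by least squares.
A = np.column_stack([np.ones_like(u), u, v])
coef, *_ = np.linalg.lstsq(A, resid, rcond=None)
corrected = resid - A @ coef                  # residual after refinement
```

Because the synthetic error here is exactly affine, the corrected residuals vanish to machine precision; real residuals would shrink rather than disappear.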

    Coordinates and maps of the Apollo 17 landing site

    We carried out an extensive cartographic analysis of the Apollo 17 landing site and determined and mapped positions of the astronauts, their equipment, and lunar landmarks with accuracies of better than ±1 m in most cases. To determine coordinates in a lunar body-fixed coordinate frame, we applied least squares (2-D) network adjustments to angular measurements made in astronaut imagery (Hasselblad frames). The measured angular networks were accurately tied to lunar landmarks provided by a 0.5 m/pixel, controlled Lunar Reconnaissance Orbiter Camera (LROC) Narrow Angle Camera (NAC) orthomosaic of the entire Taurus-Littrow Valley. Furthermore, by applying triangulation to measurements made in Hasselblad frames providing stereo views, we were able to relate individual instruments of the Apollo Lunar Surface Experiment Package (ALSEP) to specific features captured in LROC imagery and also to determine coordinates of astronaut equipment or other surface features not captured in the orbital images, for example, the deployed geophones and Explosive Packages (EPs) of the Lunar Seismic Profiling Experiment (LSPE) or the Lunar Roving Vehicle (LRV) at major sampling stops. Our results were integrated into a new LROC NAC-based Apollo 17 Traverse Map and also used to generate a series of large-scale maps of all nine traverse stations and of the ALSEP area. In addition, we provide crater measurements, profiles of the navigated traverse paths, and improved ranges of the sources and receivers of the active seismic experiment LSPE.
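The geometric core of such an angular network adjustment is the intersection of direction measurements from known stations. A minimal 2-D sketch with hypothetical station coordinates and azimuths:

```python
import numpy as np

def intersect(p1, az1, p2, az2):
    """Intersect two rays defined by station positions and azimuths
    (radians, measured clockwise from north = +y)."""
    d1 = np.array([np.sin(az1), np.cos(az1)])
    d2 = np.array([np.sin(az2), np.cos(az2)])
    # Solve p1 + t1*d1 = p2 + t2*d2 for the ray parameters t1, t2.
    A = np.column_stack([d1, -d2])
    t = np.linalg.solve(A, np.array(p2) - np.array(p1))
    return np.array(p1) + t[0] * d1

# Rays from (0, 0) at 45 deg and from (10, 0) at -45 deg meet at (5, 5).
target = intersect((0.0, 0.0), np.pi / 4, (10.0, 0.0), -np.pi / 4)
```

With more than two stations the system becomes overdetermined, and a least squares adjustment (as used in the study) yields both the position and its uncertainty.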

    Robust Modular Feature-Based Terrain-Aided Visual Navigation and Mapping

    The visual feature-based Terrain-Aided Navigation (TAN) system presented in this thesis addresses the problem of constraining the inertial drift introduced into the location estimate of Unmanned Aerial Vehicles (UAVs) in GPS-denied environments. The presented TAN system utilises salient visual features representing semantic or human-interpretable objects (roads, forest and water boundaries) from onboard aerial imagery and associates them with a database of reference features created a priori by applying the same feature detection algorithms to satellite imagery. Correlation of the detected features with the reference features, via a series of robust data association steps, allows a localisation solution to be achieved with a finite absolute error bound defined by the certainty of the reference dataset. The feature-based Visual Navigation System (VNS) presented in this thesis was originally developed for a navigation application using simulated multi-year satellite image datasets. The extension of the system into the mapping domain, in turn, has been based on real (not simulated) flight data and imagery. The mapping study demonstrated the full potential of the system as a versatile tool for enhancing the accuracy of information derived from aerial imagery. Not only have visual features such as road networks, shorelines and water bodies been used to obtain a position ’fix’; they have also been used in reverse for accurate mapping of vehicles detected on the roads into inertial space with improved precision. The combined correction of geo-coding errors and improved aircraft localisation formed a robust solution for the defense mapping application. A system of the proposed design would provide a complete independent navigation solution to an autonomous UAV and additionally give it object-tracking capability.
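The data-association step above can be sketched as gated nearest-neighbour matching: each detected feature is paired with its closest reference feature and rejected if it lies beyond a gating distance, so spurious detections cannot corrupt the position fix. The coordinates and gate value below are hypothetical:

```python
import numpy as np

def associate(detected, reference, gate=5.0):
    """Return (detected_index, reference_index) pairs within the gate."""
    pairs = []
    for i, d in enumerate(detected):
        dists = np.linalg.norm(reference - d, axis=1)
        j = int(np.argmin(dists))
        if dists[j] < gate:                   # gating test rejects outliers
            pairs.append((i, j))
    return pairs

reference = np.array([[0.0, 0.0], [100.0, 0.0], [0.0, 100.0]])
detected = np.array([[1.0, -1.0], [98.5, 0.5], [300.0, 300.0]])
matches = associate(detected, reference)      # third detection is rejected
```

The thesis describes a more elaborate, multi-step robust association; this sketch shows only the basic gating idea.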

    Heritage Recording and 3D Modeling with Photogrammetry and 3D Scanning

    The importance of landscape and heritage recording and documentation with optical remote sensing sensors is well recognized at the international level. The continuous development of new sensors, data capture methodologies and multi-resolution 3D representations contributes significantly to the digital 3D documentation, mapping, conservation and representation of landscapes and heritage, and to the growth of research in this field. This article reviews current optical 3D measurement sensors and 3D modeling techniques, with their limitations and potentialities, requirements and specifications. Examples of 3D surveying and modeling of heritage sites and objects are shown throughout the paper.

    The Mesoamerican Corpus of Formative Period Art and Writing

    This project explores the origins and development of the first writing in the New World by constructing a comprehensive database of Formative period (1500-400 BCE) iconography and a suite of database-driven digital tools. In collaboration with two of the largest repositories of Formative period Mesoamerican art in Mexico, the project integrates the work of archaeologists, art historians, and scientific computing specialists to plan and begin the production of a database, digital assets, and visual search software that permit the visualization of spatial, chronological, and contextual relationships among iconographic and archaeological datasets. These resources will eventually support mobile and web-based applications that allow for the search, comparison, and analysis of a corpus of material currently only partially documented. The start-up phase will generate a functional prototype database, a project website, wireframe user interfaces, and a report summarizing project development.
