    Towards Automatic SAR-Optical Stereogrammetry over Urban Areas using Very High Resolution Imagery

    In this paper we discuss the potential and challenges of SAR-optical stereogrammetry for urban areas, using very-high-resolution (VHR) remote sensing imagery. Since we do this mainly from a geometrical point of view, we first analyze the height reconstruction accuracy to be expected for different stereogrammetric configurations. Then, we propose a strategy for simultaneous tie point matching and 3D reconstruction, which exploits an epipolar-like search window constraint. To drive the matching and ensure some robustness, we combine different established handcrafted similarity measures. For the experiments, we use real test data acquired by the WorldView-2, TerraSAR-X and MEMPHIS sensors. Our results show that SAR-optical stereogrammetry using VHR imagery is generally feasible with 3D positioning accuracies in the meter domain, although the matching of these strongly heterogeneous multi-sensor data remains very challenging.
    Keywords: Synthetic Aperture Radar (SAR), optical images, remote sensing, data fusion, stereogrammetry
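The abstract's core idea, restricting candidate matches to an epipolar-like search window and scoring them with a handcrafted similarity measure, can be sketched minimally. The function names and the choice of zero-normalized cross-correlation as the single similarity measure are illustrative assumptions, not the authors' actual method (which combines several measures):

```python
import numpy as np

def ncc(a, b):
    """Zero-normalized cross-correlation between two equally sized patches."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a ** 2).sum() * (b ** 2).sum())
    return float((a * b).sum() / denom) if denom > 0 else 0.0

def match_along_epipolar(opt_img, sar_img, pt, half=7, search=20):
    """Find the best SAR match for an optical keypoint `pt` = (row, col),
    restricting candidates to a 1D epipolar-like window along the same row."""
    r, c = pt
    ref = opt_img[r - half:r + half + 1, c - half:c + half + 1]
    best_score, best_col = -1.0, None
    for dc in range(-search, search + 1):
        cc = c + dc
        cand = sar_img[r - half:r + half + 1, cc - half:cc + half + 1]
        if cand.shape != ref.shape:   # candidate window falls off the image
            continue
        s = ncc(ref, cand)
        if s > best_score:
            best_score, best_col = s, cc
    return best_col, best_score
```

In practice the search window would follow the actual (curved) epipolar geometry between the SAR and optical sensor models rather than a single image row, and several similarity measures would be fused, but the window constraint itself works as above.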

    Airborne photogrammetry and LIDAR for DSM extraction and 3D change detection over an urban area : a comparative study

    A digital surface model (DSM) extracted from stereoscopic aerial images, acquired in March 2000, is compared with a DSM derived from airborne light detection and ranging (lidar) data collected in July 2009. Three densely built-up study areas in the city centre of Ghent, Belgium, are selected, each covering approximately 0.4 km². The surface models, generated from the two different 3D acquisition methods, are compared qualitatively and quantitatively to assess how suitable they are for modelling an urban environment, in particular for the 3D reconstruction of buildings. Then the data sets, which are acquired at two different epochs t1 and t2, are investigated to determine to what extent 3D (building) changes can be detected and modelled over the time interval. A difference model, generated by pixel-wise subtraction of the two DSMs, indicates changes in elevation. Filters are proposed to differentiate 'real' building changes from false alarms provoked by model noise, outliers, vegetation, etc. A final 3D building change model maps all demolished and newly constructed buildings within the time interval t2 - t1. Based on the change model, the surface and volume of the building changes can be quantified.
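The pipeline described above, pixel-wise DSM differencing followed by filtering out small noisy responses, can be sketched as follows. The thresholds and the minimum-blob-size filter are hypothetical stand-ins for the paper's actual filters (which also handle vegetation and outliers):

```python
import numpy as np
from collections import deque

def building_change_mask(dsm_t1, dsm_t2, height_thresh=2.5, min_pixels=4):
    """Difference two co-registered DSMs and keep only 4-connected regions
    whose absolute elevation change exceeds `height_thresh` (metres) and
    whose size is at least `min_pixels` (drops isolated noisy pixels)."""
    diff = dsm_t2 - dsm_t1
    cand = np.abs(diff) > height_thresh
    keep = np.zeros_like(cand)
    seen = np.zeros_like(cand)
    rows, cols = cand.shape
    for r in range(rows):
        for c in range(cols):
            if cand[r, c] and not seen[r, c]:
                # BFS over the 4-connected candidate blob
                blob, q = [], deque([(r, c)])
                seen[r, c] = True
                while q:
                    y, x = q.popleft()
                    blob.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and cand[ny, nx] and not seen[ny, nx]):
                            seen[ny, nx] = True
                            q.append((ny, nx))
                if len(blob) >= min_pixels:
                    for y, x in blob:
                        keep[y, x] = True
    return diff, keep
```

Positive values in `diff` inside the kept mask indicate newly constructed buildings; negative values indicate demolitions, so surface and volume of each change follow directly from pixel count and summed height difference.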

    Learning Aerial Image Segmentation from Online Maps

    This study deals with semantic segmentation of high-resolution (aerial) images, where a semantic class label is assigned to each pixel via supervised classification as a basis for automatic map generation. Recently, deep convolutional neural networks (CNNs) have shown impressive performance and have quickly become the de facto standard for semantic segmentation, with the added benefit that task-specific feature design is no longer necessary. However, a major downside of deep learning methods is that they are extremely data-hungry, which aggravates the perennial bottleneck of supervised classification: obtaining enough annotated training data. On the other hand, it has been observed that they are rather robust against noise in the training labels. This opens up the intriguing possibility of avoiding the annotation of huge amounts of training data, and instead training the classifier from existing legacy data or crowd-sourced maps, which can exhibit high levels of noise. The question addressed in this paper is: can training with large-scale, publicly available labels replace a substantial part of the manual labeling effort and still achieve sufficient performance? Such data will inevitably contain a significant portion of errors, but in return virtually unlimited quantities of it are available in large parts of the world. We adapt a state-of-the-art CNN architecture for semantic segmentation of buildings and roads in aerial images, and compare its performance when using different training data sets, ranging from manually labeled, pixel-accurate ground truth of the same city to automatic training data derived from OpenStreetMap data from distant locations. Our results indicate that satisfying performance can be obtained with significantly less manual annotation effort, by exploiting noisy large-scale training data.
    Comment: Published in IEEE Transactions on Geoscience and Remote Sensing
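A key step implied by the abstract is turning vector map data (e.g. OpenStreetMap building footprints) into per-pixel training labels. A minimal sketch of that rasterization step is shown below; the function and the even-odd (ray casting) point-in-polygon rule are illustrative assumptions, as real pipelines would use a GIS rasterizer and handle projection and holes:

```python
import numpy as np

def rasterize_polygon(poly, height, width):
    """Burn one polygon (list of (row, col) vertices) into a binary label
    mask, testing each pixel centre with the even-odd (ray casting) rule."""
    mask = np.zeros((height, width), dtype=bool)
    n = len(poly)
    for r in range(height):
        for c in range(width):
            y, x = r + 0.5, c + 0.5          # pixel centre
            inside = False
            for i in range(n):
                y1, x1 = poly[i]
                y2, x2 = poly[(i + 1) % n]
                if (y1 > y) != (y2 > y):      # edge crosses the scanline
                    xcross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
                    if x < xcross:
                        inside = not inside
            mask[r, c] = inside
    return mask
```

Label noise in such automatically derived masks (misaligned or outdated footprints) is exactly what the paper's experiments quantify the CNN's robustness against.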

    Fisheye Photogrammetry to Survey Narrow Spaces in Architecture and a Hypogea Environment

    Nowadays, the increasing computational power of commercial-grade processors has led to a wide spread of image-based reconstruction software and its application across different disciplines. As a result, new frontiers in the use of photogrammetry for a broad range of investigation activities are being explored. This paper investigates the use of fisheye lenses in non-classical survey activities, along with the related issues. Fisheye lenses stand out because of their large field of view. This characteristic alone can be a game changer in reducing the amount of data required, thus speeding up the photogrammetric process when needed. Although they come at a cost, field of view (FOV), speed and manoeuvrability are key to the success of these optics, as shown by two of the presented case studies: the survey of a very narrow spiral staircase in the Duomo di Milano and the survey of a very narrow hypogeal structure in Rome. A third case study, which deals with low-cost sensors, presents the metric evaluation of a commercial spherical camera equipped with fisheye lenses.

    VIRTUAL TOURS FOR SMART CITIES: A COMPARATIVE PHOTOGRAMMETRIC APPROACH FOR LOCATING HOT-SPOTS IN SPHERICAL PANORAMAS

    This paper investigates the possibilities of using panorama-based VR to present survey data related to the planning and management activities of urban areas that belong to Smart City strategies. The core of our workflow is to facilitate the visualization of the data produced by Smart City infrastructures. A graphical interface based on spherical panoramas, instead of complex three-dimensional models, could help the user/citizen better understand the operation of control units spread across the urban area. From a methodological point of view, three different kinds of spherical panorama acquisition were tested and compared in order to identify a semi-automatic procedure for locating homologous points on two or more spherical images, starting from a point cloud obtained from the same images. The points thus identified allow the same hot-spot to be quickly located on multiple images simultaneously. The comparison shows that all three systems proved useful for the purposes of the research, but only one proved geometrically reliable for identifying the locators needed to construct the virtual tour.
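The core geometric operation behind placing the same hot-spot on multiple panoramas, projecting a 3D point from the point cloud into equirectangular pixel coordinates of each panorama, can be sketched as follows. The function name and the assumed convention (world frame with z up, panorama centred and axis-aligned at the camera position) are illustrative; a real pipeline would also apply each panorama's orientation:

```python
import numpy as np

def project_to_equirectangular(point, cam_center, width, height):
    """Project a 3D world point (x, y, z, z up) seen from a spherical
    panorama centred at `cam_center` into equirectangular pixel coords.
    Returns (u, v): u grows with azimuth, v grows downward from the zenith."""
    v3 = np.asarray(point, float) - np.asarray(cam_center, float)
    lon = np.arctan2(v3[1], v3[0])                    # azimuth in [-pi, pi]
    lat = np.arcsin(v3[2] / np.linalg.norm(v3))       # elevation in [-pi/2, pi/2]
    u = (lon / (2 * np.pi) + 0.5) * width
    vpix = (0.5 - lat / np.pi) * height
    return u, vpix
```

Running the same 3D hot-spot through this projection once per panorama (with each panorama's own centre) yields consistent locator positions on all images simultaneously, which is exactly the behaviour the comparison in the paper evaluates.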