
    Buildings Detection in VHR SAR Images Using Fully Convolution Neural Networks

    This paper addresses the highly challenging problem of automatically detecting man-made structures, especially buildings, in very high resolution (VHR) synthetic aperture radar (SAR) images. In this context, the paper has two major contributions. Firstly, it presents a novel and generic workflow that initially classifies spaceborne TomoSAR point clouds, generated by processing VHR SAR image stacks using advanced interferometric techniques known as SAR tomography (TomoSAR), into buildings and non-buildings with the aid of auxiliary information (i.e., either using openly available 2-D building footprints or adopting an optical image classification scheme), and later back-projects the extracted building points onto the SAR imaging coordinates to produce automatic large-scale benchmark labelled (buildings/non-buildings) SAR datasets. Secondly, these labelled datasets (i.e., building masks) have been utilized to construct and train a state-of-the-art deep Fully Convolution Neural Network with an additional Conditional Random Field represented as a Recurrent Neural Network to detect building regions in a single VHR SAR image. Such a cascaded formation has been successfully employed in computer vision and remote sensing for optical image classification but, to our knowledge, has not been applied to SAR images. The results of the building detection are illustrated and validated over a TerraSAR-X VHR spotlight SAR image covering approximately 39 km², almost the whole city of Berlin, with a mean pixel accuracy of around 93.84%. Comment: Accepted for publication in IEEE TGRS.
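
    As a hedged illustration of the second contribution, the minimal sketch below sets up a fully convolutional segmentation network for a two-class (building / non-building) problem using torchvision's FCN backbone as a stand-in; the class count, tile size, and tensor names are illustrative assumptions, and the CRF-as-RNN refinement stage of the paper is omitted.

        # Minimal sketch: binary (building / non-building) semantic segmentation with an
        # off-the-shelf fully convolutional network (FCN). This is a stand-in for the
        # paper's FCN + CRF-as-RNN cascade; the CRF-RNN stage and the SAR-specific
        # training pipeline are omitted.
        import torch
        from torchvision.models.segmentation import fcn_resnet50

        # Two output classes: building / non-building (assumption for illustration).
        model = fcn_resnet50(weights=None, num_classes=2)
        model.eval()

        # A single-channel SAR amplitude tile replicated to 3 channels to match the
        # RGB-style backbone input (assumption).
        sar_tile = torch.rand(1, 1, 256, 256)          # hypothetical 256x256 tile
        x = sar_tile.repeat(1, 3, 1, 1)                # 1 -> 3 channels

        with torch.no_grad():
            logits = model(x)["out"]                   # shape: (1, 2, 256, 256)
        building_mask = logits.argmax(dim=1)           # per-pixel class labels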

    Building change detection in multitemporal very high resolution SAR images


    Development of inventory datasets through remote sensing and direct observation data for earthquake loss estimation

    This report summarizes the lessons learnt in extracting exposure information for the three study sites, Thessaloniki, Vienna and Messina, that were addressed in SYNER-G. Fine-scale information on exposed elements, which for SYNER-G include buildings, civil engineering works and population, is one of the variables used to quantify risk. Collecting data and creating exposure inventories is a very time-consuming task, and all possible data-gathering techniques should be used to address the problem of data shortcomings. This report focuses on combining direct observation and remote sensing data for the development of exposure models for seismic risk assessment. A summary of the methods for collecting, processing and archiving inventory datasets is provided in Chapter 2. Chapter 3 deals with the integration of different data sources for optimum inventory datasets, whilst Chapters 4, 5 and 6 provide case studies where combinations of direct observation and remote sensing have been used. The cities of Vienna (Austria), Thessaloniki (Greece) and Messina (Italy) have been chosen to test the proposed approaches. JRC.G.5 - European Laboratory for Structural Assessment

    Damage mapping after the 2017 Puebla Earthquake in Mexico using high-resolution ALOS-2 PALSAR-2 data

    On September 19, 2017, the Mw 7.1 Puebla earthquake caused significant destruction in several cities in central Mexico. In this paper, two pre-event and one post-event ALOS-2 PALSAR-2 scenes were used to detect the damaged areas around the town of Izucar de Matamoros in Mexico. First, we identify the built-up areas using the pre-event data. Second, we evaluate the earthquake-induced damage areas using an RGB color-coded image constructed from the pre- and co-event coherence images. Our analysis showed that the green and red bands display great potential for discriminating the damaged areas. Accepted manuscript.
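
    A minimal numpy sketch of the colour-composite step described above, assuming two co-registered coherence rasters are already available as arrays; the array names, band assignment, and damage threshold are illustrative assumptions, not the authors' exact recipe.

        # Minimal sketch: RGB composite of pre-event and co-event InSAR coherence for
        # visual damage screening. Assumes coherence rasters in [0, 1] that are already
        # co-registered; data loading and geocoding are omitted.
        import numpy as np

        pre_coherence = np.random.rand(512, 512)   # hypothetical pre-event pair coherence
        co_coherence = np.random.rand(512, 512)    # hypothetical pre/post-event pair coherence

        # Red = pre-event coherence, Green/Blue = co-event coherence (one common
        # convention; the band assignment is an assumption).
        rgb = np.dstack([pre_coherence, co_coherence, co_coherence])

        # Pixels that were coherent before the event but lost coherence across it appear
        # reddish; a simple difference threshold flags candidate damage (threshold assumed).
        damage_candidates = (pre_coherence - co_coherence) > 0.3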

    A Novel Technique Based on the Combination of Labeled Co-Occurrence Matrix and Variogram for the Detection of Built-up Areas in High-Resolution SAR Images

    Interest in synthetic aperture radar (SAR) data analysis is driven by the constantly increasing spatial resolution of the acquired images, in which the geometries of scene objects can be better defined than in lower-resolution data. This paper addresses the problem of built-up area extraction in high-resolution (HR) SAR images, which can provide a wealth of information to characterize urban environments. Strong backscattering behavior is one of the distinct characteristics of built-up areas in a SAR image. However, in practical applications, only a small portion of the pixels characterizing the built-up areas appears bright. Thus, specific texture measures should be considered for identifying these areas. This paper presents a novel texture measure by combining the proposed labeled co-occurrence matrix technique with the specific spatial variability structure of the considered land-cover type in the framework of fuzzy set theory. The spatial variability is analyzed by means of the variogram, which reflects the spatial correlation or dissimilarity associated with a particular terrain surface. The parameters derived from the variograms are used to establish fuzzy functions that characterize the built-up and non-built-up classes separately. The proposed technique was tested on TerraSAR-X images acquired over Nanjing (China) and Barcelona (Spain), and on a COSMO-SkyMed image acquired over Hangzhou (China). The obtained classification accuracies point out the effectiveness of the proposed technique in identifying and detecting built-up areas.
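
    As a hedged illustration of the variogram component of this approach, the sketch below computes an empirical semivariogram of image intensities along one direction with numpy and feeds a derived statistic into a simple fuzzy membership; the lag range, sill estimate, and trapezoidal membership parameters are illustrative assumptions, not the paper's settings.

        # Minimal sketch: empirical semivariogram along the row direction, plus a simple
        # fuzzy membership built from it. Lag range and membership shape are assumptions.
        import numpy as np

        def semivariogram(image, max_lag=20):
            """gamma(h) = 0.5 * mean((z(x) - z(x+h))^2) for horizontal lags h."""
            gammas = []
            for h in range(1, max_lag + 1):
                diff = image[:, h:] - image[:, :-h]
                gammas.append(0.5 * np.mean(diff ** 2))
            return np.array(gammas)

        def trapezoidal_membership(value, a, b):
            """Fuzzy membership rising linearly from 0 at a to 1 at b (assumed shape)."""
            return float(np.clip((value - a) / (b - a), 0.0, 1.0))

        sar_patch = np.random.rand(128, 128)       # hypothetical intensity patch
        gamma = semivariogram(sar_patch)
        sill_estimate = gamma[-5:].mean()          # rough sill from the largest lags
        built_up_score = trapezoidal_membership(sill_estimate, a=0.05, b=0.15)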

    A Framework for SAR-Optical Stereogrammetry over Urban Areas

    Currently, numerous remote sensing satellites provide a huge volume of diverse earth observation data. As these data show different features regarding resolution, accuracy, coverage, and spectral imaging ability, fusion techniques are required to integrate the different properties of each sensor and produce useful information. For example, synthetic aperture radar (SAR) data can be fused with optical imagery to produce 3D information using stereogrammetric methods. The main focus of this study is to investigate the possibility of applying a stereogrammetry pipeline to very-high-resolution (VHR) SAR-optical image pairs. For this purpose, the applicability of semi-global matching is investigated in this unconventional multi-sensor setting. To support the image matching by reducing the search space and accelerating the identification of correct, reliable matches, the possibility of establishing an epipolarity constraint for VHR SAR-optical image pairs is investigated as well. In addition, it is shown that the absolute geolocation accuracy of VHR optical imagery with respect to VHR SAR imagery such as that provided by TerraSAR-X can be improved by a multi-sensor block adjustment formulation based on rational polynomial coefficients. Finally, the feasibility of generating point clouds with a median accuracy of about 2 m is demonstrated, confirming the potential of 3D reconstruction from SAR-optical image pairs over urban areas. Comment: This is the pre-acceptance version; for the final version, please go to the ISPRS Journal of Photogrammetry and Remote Sensing on ScienceDirect.
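
    A minimal sketch of the dense matching step, using OpenCV's semi-global block matching as a stand-in for the paper's multi-sensor pipeline; it assumes an already epipolar-rectified, radiometrically comparable image pair (here synthetic arrays), which in the real SAR-optical case would require the epipolarity constraint and block adjustment discussed above. All parameter values are illustrative assumptions.

        # Minimal sketch: semi-global (block) matching on an epipolar-rectified pair with
        # OpenCV, as a stand-in for the paper's SAR-optical matching. Rectification, RPC
        # block adjustment, and SAR preprocessing are assumed to have been done already.
        import cv2
        import numpy as np

        left = (np.random.rand(512, 640) * 255).astype(np.uint8)    # stand-in for rectified optical image
        right = (np.random.rand(512, 640) * 255).astype(np.uint8)   # stand-in for rectified SAR image

        sgbm = cv2.StereoSGBM_create(
            minDisparity=0,
            numDisparities=128,      # must be a multiple of 16
            blockSize=7,
            P1=8 * 7 * 7,            # smoothness penalties (illustrative values)
            P2=32 * 7 * 7,
            uniquenessRatio=10,
        )
        disparity = sgbm.compute(left, right).astype(np.float32) / 16.0   # fixed-point to pixels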

    Automated Building Information Extraction and Evaluation from High-resolution Remotely Sensed Data

    The two-dimensional (2D) footprints and three-dimensional (3D) structures of buildings are of great importance to city planning, natural disaster management, and virtual environmental simulation. As traditional manual methodologies for collecting 2D and 3D building information are often both time consuming and costly, automated methods are required for efficient large-area mapping. It is challenging to extract building information from remotely sensed data, considering the complex nature of urban environments and their associated intricate building structures. Most 2D evaluation methods focus on classification accuracy, while other dimensions of extraction accuracy are ignored. To assess 2D building extraction methods, a multi-criteria evaluation system has been designed. The proposed system consists of matched rate, shape similarity, and positional accuracy. Experimentation with four methods demonstrates that the proposed multi-criteria system is more comprehensive and effective than traditional accuracy assessment metrics. Building height is critical for building 3D structure extraction. As data sources for height estimation, digital surface models (DSMs) derived from stereo images using existing software typically provide low-accuracy results in terms of rooftop elevations. Therefore, a new image matching method is proposed that adds building footprint maps as constraints. Validation demonstrates that the proposed matching method can estimate building rooftop elevation with one third of the error encountered when using current commercial software. With an ideal input DSM, building height can be estimated by the elevation contrast inside and outside a building footprint. However, occlusions and shadows cause indistinct building edges in the DSMs generated from stereo images. Therefore, a “building-ground elevation difference model” (EDM) has been designed, which describes the trend of the elevation difference between a building and its neighbours, in order to find elevation values at bare ground. Experiments using this novel approach yield estimated building heights with a residual of 1.5 m, which outperforms conventional filtering methods. Finally, 3D buildings are digitally reconstructed and evaluated. Current 3D evaluation methods do not adequately capture the differences between 2D and 3D evaluation, and wall accuracy is traditionally ignored. To address these problems, this thesis designs an evaluation system with three components: volume, surface, and point. The resultant multi-criteria system provides an improved evaluation method for building reconstruction.
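
    A minimal numpy sketch of the simple case described above, where building height is taken as the elevation contrast inside versus just outside a footprint; the ring buffer, the use of medians, and the synthetic DSM are illustrative assumptions and do not reproduce the thesis's EDM.

        # Minimal sketch: building height from a DSM as the contrast between the median
        # rooftop elevation inside a footprint and the median ground elevation in a thin
        # ring around it. The ring buffer and medians are assumptions, not the EDM.
        import numpy as np
        from scipy.ndimage import binary_dilation

        dsm = np.random.rand(200, 200) * 5.0 + 100.0      # hypothetical DSM (metres)
        footprint = np.zeros((200, 200), dtype=bool)      # hypothetical footprint mask
        footprint[80:120, 90:140] = True
        dsm[footprint] += 12.0                            # synthetic 12 m building

        ring = binary_dilation(footprint, iterations=5) & ~footprint   # ~5-pixel buffer
        roof_elevation = np.median(dsm[footprint])
        ground_elevation = np.median(dsm[ring])
        building_height = roof_elevation - ground_elevation            # ~12 m here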

    A Global Human Settlement Layer from optical high resolution imagery - Concept and first results

    A general framework for processing high and very-high resolution imagery to create a Global Human Settlement Layer (GHSL) is presented, together with a discussion of the results of the first operational test of the production workflow. The test involved mapping 24.3 million square kilometres of the Earth's surface spread over four continents, corresponding to an estimated population of 1.3 billion people in 2010. The resolution of the input image data ranges from 0.5 to 10 meters, collected by a heterogeneous set of platforms including the SPOT (2 and 5), CBERS-2B, RapidEye (2 and 4), WorldView (1 and 2), GeoEye-1, QuickBird-2 and Ikonos-2 satellites, as well as airborne sensors. Several imaging modes were tested, including panchromatic, multispectral and pan-sharpened images. A new fully automatic image information extraction, generalization and mosaicking workflow is presented that is based on multiscale textural and morphological image feature extraction. New image feature compression and optimization techniques are introduced, together with new learning and classification techniques allowing for the processing of HR/VHR image data using low-resolution thematic layers as reference. A new systematic approach for quality control and validation allowing global spatial and thematic consistency checking is proposed and applied. The quality of the results is discussed by sensor, band, resolution, and eco-region. Critical points, lessons learned and next steps are highlighted. JRC.G.2 - Global Security and Crisis Management
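
    A minimal sketch of the kind of multiscale textural and morphological features mentioned above, using scikit-image; the patch size, grey-level quantization, offsets, and structuring-element radii are illustrative assumptions, not the GHSL production settings.

        # Minimal sketch: grey-level co-occurrence contrast plus a small morphological
        # opening/closing profile, as examples of textural and morphological features.
        # Quantization levels, offsets, and disk radii are illustrative assumptions.
        import numpy as np
        from skimage.feature import graycomatrix, graycoprops
        from skimage.morphology import opening, closing, disk

        patch = (np.random.rand(64, 64) * 255).astype(np.uint8)   # hypothetical panchromatic patch

        # Texture: co-occurrence contrast for a 1-pixel horizontal offset.
        glcm = graycomatrix(patch, distances=[1], angles=[0], levels=256,
                            symmetric=True, normed=True)
        contrast = graycoprops(glcm, "contrast")[0, 0]

        # Morphology: opening/closing with increasing structuring elements gives a
        # simple multiscale profile sensitive to bright/dark structures.
        profile = [opening(patch, disk(r)) for r in (1, 2, 4)] + \
                  [closing(patch, disk(r)) for r in (1, 2, 4)]
        per_pixel_features = np.stack([p.astype(float) for p in profile], axis=-1)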