
    A Comprehensive Survey of Deep Learning in Remote Sensing: Theories, Tools and Challenges for the Community

    In recent years, deep learning (DL), a re-branding of neural networks (NNs), has risen to the top in numerous areas, namely computer vision (CV), speech recognition, natural language processing, etc. Whereas remote sensing (RS) possesses a number of unique challenges, primarily related to sensors and applications, RS inevitably draws on many of the same theories as CV, e.g., statistics, fusion, and machine learning, to name a few. This means that the RS community should be aware of, if not at the leading edge of, advancements like DL. Herein, we provide the most comprehensive survey of state-of-the-art RS DL research. We also review recent new developments in the DL field that can be used in DL for RS. Namely, we focus on theories, tools and challenges for the RS community. Specifically, we focus on unsolved challenges and opportunities as they relate to (i) inadequate data sets, (ii) human-understandable solutions for modelling physical phenomena, (iii) Big Data, (iv) non-traditional heterogeneous data sources, (v) DL architectures and learning algorithms for spectral, spatial and temporal data, (vi) transfer learning, (vii) an improved theoretical understanding of DL systems, (viii) high barriers to entry, and (ix) training and optimizing DL models. Comment: 64 pages, 411 references. To appear in Journal of Applied Remote Sensing

    Semi-automatic city model extraction from tri-stereoscopic VHR satellite imagery

    In this paper, a methodology and results for semi-automatic city DSM extraction from an Ikonos triplet are introduced. Built-up areas are known to be complex for photogrammetric purposes, mainly because of the steep changes in elevation caused by buildings and urban features. To make surface model extraction more robust and to cope with the specific problems of height displacement, concealed areas and shadow, a multi-image based approach is followed. For the VHR tri-stereoscopic study, an area extending from the centre of Istanbul to the urban fringe was chosen. Research concentrates on the development of methods to optimize the extraction of a surface model from the bundled Ikonos triplet over an urban area, without manual plotting of buildings. Optimal methods need to be found to improve the radiometry and geometric alignment of the multi-temporal imagery, to optimize the semi-automatic derivation of DSMs of an urban environment, and to enhance the quality of the resulting surface model, especially by reducing smoothing effects through the application of spatial filters.
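    The abstract above mentions spatial filtering to reduce smoothing in the derived DSM without specifying the filter. As a hedged illustration only (the paper's actual filters are not stated), a 3x3 median filter is one common choice, since it suppresses matching noise while preserving the sharp elevation steps at building edges better than mean smoothing:

```python
import numpy as np

def median_filter3(dsm):
    """Apply a 3x3 median filter to a DSM grid (illustrative sketch).

    The DSM is padded by replicating edge values so the output keeps
    the input shape.
    """
    padded = np.pad(dsm, 1, mode="edge")
    # Collect the 9 shifted views covering each pixel's 3x3
    # neighbourhood, then take the per-pixel median across that stack.
    rows, cols = dsm.shape
    shifts = [padded[i:i + rows, j:j + cols]
              for i in range(3) for j in range(3)]
    return np.median(np.stack(shifts), axis=0)
```

    A single-pixel elevation spike (a typical stereo-matching blunder) is replaced by the median of its neighbourhood, while a large flat roof plateau is left untouched.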

    Development of inventory datasets through remote sensing and direct observation data for earthquake loss estimation

    This report summarizes the lessons learnt in extracting exposure information for the three study sites addressed in SYNER-G: Thessaloniki, Vienna and Messina. Fine-scale information on exposed elements, which for SYNER-G include buildings, civil engineering works and population, is one of the variables used to quantify risk. Collecting data and creating exposure inventories is a very time-demanding job, and all possible data-gathering techniques should be used to address the data-shortage problem. This report focuses on combining direct observation and remote sensing data for the development of exposure models for seismic risk assessment. Chapter 2 summarizes the methods for collecting, processing and archiving inventory datasets. Chapter 3 deals with the integration of different data sources for optimum inventory datasets, whilst Chapters 4, 5 and 6 provide case studies in which combinations of direct observation and remote sensing have been used. The cities of Vienna (Austria), Thessaloniki (Greece) and Messina (Italy) have been chosen to test the proposed approaches. JRC.G.5-European laboratory for structural assessment

    GeoAI-enhanced Techniques to Support Geographical Knowledge Discovery from Big Geospatial Data

    abstract: Big data that contain geo-referenced attributes have significantly reformed the way we process and analyze geospatial data. Compared with the expected benefits of the data-rich environment, more data have not always contributed to more accurate analysis. “Big but valueless” has become a critical concern for the community of GIScience and data-driven geography. As a highly utilized function of GeoAI, deep learning models designed for processing geospatial data integrate powerful computing hardware and deep neural networks into various dimensions of geography to effectively discover representations of data. However, limitations of these deep learning models have also been reported, as practitioners may have to spend considerable time preparing training data before a model can be implemented. The objective of this dissertation research is to promote state-of-the-art deep learning models in discovering the representation, value and hidden knowledge of GIS and remote sensing data, through three research approaches. The first methodological framework uses convolutional neural network (CNN)-powered shape classification to unify multifarious shadow shapes into a limited number of representative shadow patterns for efficient shadow-based building height estimation. The second research focus integrates semantic analysis into a framework of various state-of-the-art CNNs to support human-level understanding of map content. The final research approach focuses on normalizing geospatial domain knowledge to promote the transferability of a CNN model to land-use/land-cover classification. This research reports a method designed to discover detailed land-use/land-cover types that might be challenging for a state-of-the-art CNN model that previously performed well on land-cover classification only. Dissertation/Thesis: Doctoral Dissertation, Geography 201

    Very High Resolution (VHR) Satellite Imagery: Processing and Applications

    Recently, growing interest has appeared in the use of remote sensing imagery to provide synoptic maps of water quality parameters in coastal and inland water ecosystems; monitoring of complex land ecosystems for biodiversity conservation; precision agriculture for the management of soils, crops, and pests; urban planning; disaster monitoring; etc. However, for these maps to achieve their full potential, it is important to engage in periodic monitoring and analysis of multi-temporal changes. In this context, very high resolution (VHR) satellite-based optical, infrared, and radar imaging instruments provide reliable information to implement spatially-based conservation actions. Moreover, they enable observations of parameters of our environment at broader spatial and finer temporal scales than those allowed by field observation alone. In this sense, recent VHR satellite technologies and image processing algorithms present the opportunity to develop quantitative techniques that have the potential to improve upon traditional techniques in terms of cost, mapping fidelity, and objectivity. Typical applications include multi-temporal classification, recognition and tracking of specific patterns, multisensor data fusion, analysis of land/marine ecosystem processes, and environment monitoring. This book aims to collect new developments, methodologies, and applications of very high resolution satellite data for remote sensing. The works selected provide the research community with the most recent advances on all aspects of VHR satellite remote sensing.

    Shadow detection from very high resolution satellite image using GrabCut segmentation and ratio-band algorithms

    Very-High-Resolution (VHR) satellite imagery is a powerful source of data for detecting and extracting information about urban constructions. Shadow in VHR satellite imagery provides vital information on urban construction forms, illumination direction, and the spatial distribution of objects, which can further the understanding of the built environment. However, to exploit shadows, the automated detection of shadows from images must be accurate. This paper reviews current automatic approaches to shadow detection from VHR satellite images and comprises two main parts. In the first part, shadow concepts are presented in terms of shadow appearance in VHR satellite imagery, current shadow detection methods, and the usefulness of shadow detection in urban environments. In the second part, we adopt two approaches considered state-of-the-art in shadow detection and segmentation, applied to WorldView-3 and Quickbird images. In the first approach, the ratios between the NIR and visible bands are computed on a pixel-by-pixel basis, which allows for disambiguation between shadows and dark objects. To obtain an accurate shadow candidate map, we further refine the shadow map after applying the ratio algorithm to the Quickbird image. The second selected approach is GrabCut segmentation, examined for its performance in detecting the shadow regions of urban objects using the true-colour image from WorldView-3. Further refinement is applied to attain a segmented shadow map. Although detecting shadow regions is a very difficult task when they are derived from a VHR satellite image comprising only the visible spectrum range (RGB true colour), the results demonstrate that the GrabCut algorithm achieves a reasonable separation of shadow regions from other objects in the WorldView-3 image. In addition, the shadow map derived from the Quickbird image indicates strong performance of the ratio algorithm. The differences in the characteristics of the two satellite images in terms of spatial and spectral resolution can play an important role in the estimation and detection of the shadows of urban objects.
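    The ratio-band step described above can be sketched as follows. This is a hedged illustration, not the paper's calibrated method: the exact ratio formulation, the threshold values, and the brightness test are assumptions chosen for clarity.

```python
import numpy as np

def ratio_shadow_candidates(nir, visible, ratio_thresh=1.6):
    """Flag shadow candidate pixels from a pixel-wise NIR/visible ratio.

    Assumed rationale (one common formulation): shadowed surfaces retain
    proportionally more NIR signal than intrinsically dark objects such
    as water, so a high NIR-to-visible ratio combined with low visible
    brightness marks a likely shadow pixel. Both thresholds here are
    hypothetical and would be tuned per scene in practice.

    nir:     (H, W) NIR band
    visible: (H, W, 3) R, G, B bands
    """
    eps = 1e-6
    vis_mean = visible.mean(axis=-1)           # average of R, G, B
    ratio = nir / (vis_mean + eps)             # pixel-by-pixel band ratio
    dark = vis_mean < visible.max() * 0.3      # low visible brightness
    return (ratio > ratio_thresh) & dark
```

    The resulting boolean map corresponds to the "shadow candidate map" in the abstract; the paper then applies further refinement, which is not reproduced here.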

    Automated Building Information Extraction and Evaluation from High-resolution Remotely Sensed Data

    The two-dimensional (2D) footprints and three-dimensional (3D) structures of buildings are of great importance to city planning, natural disaster management, and virtual environmental simulation. As traditional manual methodologies for collecting 2D and 3D building information are often both time-consuming and costly, automated methods are required for efficient large-area mapping. It is challenging to extract building information from remotely sensed data, considering the complex nature of urban environments and their associated intricate building structures. Most 2D evaluation methods focus on classification accuracy, while other dimensions of extraction accuracy are ignored. To assess 2D building extraction methods, a multi-criteria evaluation system has been designed, consisting of matched rate, shape similarity, and positional accuracy. Experimentation with four methods demonstrates that the proposed multi-criteria system is more comprehensive and effective than traditional accuracy assessment metrics. Building height is critical for 3D structure extraction. As data sources for height estimation, digital surface models (DSMs) derived from stereo images using existing software typically provide low-accuracy results for rooftop elevations. Therefore, a new image matching method is proposed that adds building footprint maps as constraints. Validation demonstrates that the proposed matching method can estimate building rooftop elevation with one third of the error encountered when using current commercial software. With an ideal input DSM, building height can be estimated from the elevation contrast inside and outside a building footprint. However, occlusions and shadows cause indistinct building edges in the DSMs generated from stereo images. Therefore, a “building-ground elevation difference model” (EDM) has been designed, which describes the trend of the elevation difference between a building and its neighbours, in order to find elevation values on bare ground. Experiments with this novel approach estimate building height with a 1.5 m residual, out-performing conventional filtering methods. Finally, 3D buildings are digitally reconstructed and evaluated. Current 3D evaluation methods do not capture the difference between 2D and 3D evaluation well, and wall accuracy is traditionally ignored. To address these problems, this thesis designs an evaluation system with three components: volume, surface, and point. As such, the resultant multi-criteria system provides an improved evaluation method for building reconstruction.
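    The inside/outside elevation contrast described above can be sketched as a minimal baseline. This is not the thesis's EDM (which additionally models the elevation-difference trend around a building to locate reliable bare-ground samples); the function names and the use of medians are illustrative assumptions.

```python
import numpy as np

def height_from_footprint(dsm, footprint, ground_ring):
    """Estimate building height as the elevation contrast between the
    DSM inside a footprint and on surrounding bare ground.

    footprint and ground_ring are boolean masks on the DSM grid.
    Medians are used (an assumption) because they are robust to stereo
    matching blunders and to the indistinct edges that occlusion and
    shadow produce in image-derived DSMs.
    """
    roof = np.median(dsm[footprint])     # representative rooftop elevation
    ground = np.median(dsm[ground_ring]) # representative ground elevation
    return roof - ground
```

    Selecting the ground ring is the hard part in practice; the EDM's trend modelling addresses exactly the cases where the ring is contaminated by adjacent buildings or DSM edge blur.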

    ‘Looting marks’ in space-borne SAR imagery: measuring rates of archaeological looting in Apamea (Syria) with TerraSAR-X Staring Spotlight

    In archaeological remote sensing, space-borne Synthetic Aperture Radar (SAR) has not been used so far to monitor ‘looting’ (i.e. illegal excavations in heritage sites), mainly because the spatial resolution of SAR images is typically not comparable to the ground dimensions of looting features. This paper explores the potential of the new TerraSAR-X beam mode Staring Spotlight (ST) to investigate looting within a workflow of radar backscattering change detection. A bespoke time series of five single-polarisation, ascending-mode ST scenes with an unprecedented azimuth resolution of 0.24 m was acquired over the archaeological site of Apamea in western Syria, from October 2014 to June 2015, with a regular sampling of one image every two months. Formerly included in the Tentative List of UNESCO, the site had been heavily looted from at least early 2012 to May 2014, as confirmed by Google Earth Very High Resolution (VHR) optical imagery. Building upon the theory of SAR imaging, we develop a novel conceptual model of ‘looting marks’, identify marks due to the occurrence of new looting, and discriminate them from alteration (e.g. filling) of pre-existing looting holes. ‘Looting marks’ appear as distinctive patterns of shadow and layover, visible in the ground-range reprojected ST image and generated by the morphology of the holes. The recognition of looting marks within ratio maps of radar backscatter (σ0) between consecutive ST scenes allows quantification of the magnitude, spatial distribution and rates of looting activities. In agreement with the estimates based on Google Earth imagery, the ST scene acquired in October 2014 shows that ~ 45% of the site was looted. In the following eight months new looting happened locally, with holes mainly dug along the margins of the already looted areas. Texture values of ~ 0.31 clearly distinguish these holes from the unaltered, bare ground nearby.
    Hot spots of change are identified based on the temporal variability of σ0, and colour composites indicate where repeated looting and alteration of existing holes occurred. Most looting marks are observed north of the two main Roman decumani. Looting intensified almost steadily from December 2014, with over 1500 new marks in February–April 2015. The estimated rates of looting increased from 214 looting marks/month in October–December 2014 to over 780 marks/month in April–June 2015, numerically expressing the dynamic nature of the phenomenon to which Apamea is still exposed. The method of identifying looting marks in VHR radar images therefore offers archaeologists and image analysts a reliable means to measure the scale of looting remotely and to monitor its temporal evolution.
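    The backscatter-ratio change detection at the core of this workflow can be sketched as below. This is a generic illustration under stated assumptions: the dB formulation is standard SAR practice, but the 3 dB change threshold is hypothetical, not the paper's calibrated value, and the looting-mark recognition itself (paired layover/shadow patterns) is not reproduced.

```python
import numpy as np

def backscatter_ratio_db(sigma0_before, sigma0_after, eps=1e-10):
    """Ratio map of radar backscatter (sigma-0, linear power) between
    two consecutive scenes, expressed in dB.

    Positive values indicate brightening, negative values darkening.
    A new looting hole produces adjacent anomalies of opposite sign,
    corresponding to its layover and shadow sides.
    """
    return 10.0 * np.log10((sigma0_after + eps) / (sigma0_before + eps))

def change_mask(ratio_db, thresh_db=3.0):
    """Flag pixels whose backscatter changed by more than +/- thresh_db.

    The 3 dB default is an illustrative assumption; a real workflow
    would calibrate it against the radiometric stability of the stack.
    """
    return np.abs(ratio_db) > thresh_db
```

    Applied pairwise along the five-scene ST time series, masks like this are what allow the per-interval counts (e.g. marks/month) reported above to be tallied.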