565 research outputs found

    Deep learning in remote sensing: a review

    Standing at the paradigm shift towards data-intensive science, machine learning techniques are becoming increasingly important. In particular, as a major breakthrough in the field, deep learning has proven to be an extremely powerful tool in many fields. Shall we embrace deep learning as the key to everything? Or should we resist a 'black-box' solution? There are controversial opinions in the remote sensing community. In this article, we analyze the challenges of using deep learning for remote sensing data analysis, review the recent advances, and provide resources that make deep learning in remote sensing ridiculously simple to start with. More importantly, we advocate that remote sensing scientists bring their expertise into deep learning and use it as an implicit general model to tackle unprecedented, large-scale, influential challenges such as climate change and urbanization. Comment: Accepted for publication in IEEE Geoscience and Remote Sensing Magazine.

    A Comprehensive Survey of Deep Learning in Remote Sensing: Theories, Tools and Challenges for the Community

    In recent years, deep learning (DL), a re-branding of neural networks (NNs), has risen to the top in numerous areas, including computer vision (CV), speech recognition, and natural language processing. While remote sensing (RS) possesses a number of unique challenges, primarily related to sensors and applications, it inevitably draws on many of the same theories as CV, e.g., statistics, fusion, and machine learning, to name a few. This means that the RS community should be aware of, if not at the leading edge of, advancements like DL. Herein, we provide the most comprehensive survey of state-of-the-art RS DL research. We also review recent developments in the DL field that can be applied to DL for RS. Namely, we focus on theories, tools, and challenges for the RS community. Specifically, we focus on unsolved challenges and opportunities as they relate to (i) inadequate data sets, (ii) human-understandable solutions for modelling physical phenomena, (iii) Big Data, (iv) non-traditional heterogeneous data sources, (v) DL architectures and learning algorithms for spectral, spatial, and temporal data, (vi) transfer learning, (vii) an improved theoretical understanding of DL systems, (viii) high barriers to entry, and (ix) training and optimizing DL systems. Comment: 64 pages, 411 references. To appear in Journal of Applied Remote Sensing.

    Review on Active and Passive Remote Sensing Techniques for Road Extraction

    Digital maps of road networks are a vital part of digital cities and intelligent transportation. In this paper, we provide a comprehensive review of road extraction based on various remote sensing data sources, including high-resolution images, hyperspectral images, synthetic aperture radar images, and light detection and ranging (LiDAR) data. This review is divided into three parts. Part 1 provides an overview of the existing data acquisition techniques for road extraction, including data acquisition methods, typical sensors, application status, and prospects. Part 2 covers the main road extraction methods based on the four data sources; methods for each data source are described and analysed in detail. Part 3 presents the combined application of multisource data for road extraction. Evidently, different data acquisition techniques have unique advantages, and the combination of multiple sources can improve the accuracy of road extraction. The main aim of this review is to provide a comprehensive reference for research on existing road extraction technologies. Peer reviewed.

    Analysis of Polarimetric Synthetic Aperture Radar and Passive Visible Light Polarimetric Imaging Data Fusion for Remote Sensing Applications

    The recent launch of spaceborne (TerraSAR-X, RADARSAT-2, ALOS-PALSAR, RISAT) and airborne (SIRC, AIRSAR, UAVSAR, PISAR) polarimetric radar sensors, capable of imaging day and night in almost all weather conditions, has made polarimetric synthetic aperture radar (PolSAR) image interpretation and analysis an active area of research. PolSAR image classification is sensitive to object orientation and scattering properties. In recent years, significant work has been done in many areas, including agriculture, forestry, oceanography, geology, and terrain analysis. Visible light passive polarimetric imaging has also emerged as a powerful tool in remote sensing for enhanced information extraction. The intensity image provides information on materials in the scene, while polarization measurements capture surface features, roughness, and shading, often uncorrelated with the intensity image. Advantages of visible light polarimetric imaging include the high dynamic range of polarimetric signatures and the fact that such imagers are comparatively straightforward to build and calibrate. This research characterizes and analyzes the basic scattering mechanisms for information fusion between PolSAR and passive visible light polarimetric imaging. Relationships between these two modes of imaging are established using laboratory measurements and image simulations with the Digital Imaging and Remote Sensing Image Generation (DIRSIG) tool. A novel, low-cost, laboratory-based S-band (2.4 GHz) PolSAR instrument is developed that is capable of capturing four-channel, fully polarimetric SAR image data. Simple radar targets are formed, and system calibration is performed in terms of radar cross-section. Experimental measurements are made using a combination of the PolSAR instrument and a visible light polarimetric imager for scenes capturing the basic scattering mechanisms, for phenomenology studies. The three major scattering mechanisms studied in this research are single, double, and multiple bounce. Single bounce occurs from flat surfaces such as lakes, rivers, bare soil, and oceans. Double bounce can be observed from two adjacent surfaces where a horizontal flat surface lies next to a vertical surface, such as buildings and other vertical structures. Randomly oriented scatterers in homogeneous media produce a multiple-bounce scattering effect, which occurs in forest canopies and vegetated areas. Relationships between the Pauli color components from PolSAR and the Degree of Linear Polarization (DOLP) from passive visible light polarimetric imaging are established using real measurements. Results show that higher values of the red channel in the Pauli color image (|HH-VV|) correspond to high DOLP arising from the double-bounce effect. A novel information fusion technique is applied to combine information from the two modes. It is demonstrated that the DOLP from passive visible light polarimetric imaging can be used to separate classes in terms of scattering mechanisms in the PolSAR data. The separation of these three classes in terms of scattering mechanisms has applications in land cover classification and anomaly detection. The fusion of information from these two particular modes of imaging, i.e., PolSAR and passive visible light polarimetric imaging, is a largely unexplored area in remote sensing, and the main challenge in this research is to identify areas and scenarios where information fusion between the two modes is advantageous for separating classes by scattering mechanism relative to the separation achieved with PolSAR alone.
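
    The quantities linking the two modalities above are standard: the Pauli decomposition of the PolSAR scattering matrix (whose |HH-VV| component highlights double bounce) and the degree of linear polarization (DOLP) computed from the Stokes parameters of the passive imagery. The sketch below is a minimal NumPy illustration of both computations, not the instrument processing used in the thesis; the array names (S_hh, I0, etc.) are assumed inputs.

```python
import numpy as np

def pauli_rgb(S_hh, S_hv, S_vv):
    """Pauli decomposition of a complex scattering matrix (per pixel).

    Red   |HH - VV| : double bounce
    Green |2 HV|    : volume / multiple bounce
    Blue  |HH + VV| : single (odd) bounce
    """
    r = np.abs(S_hh - S_vv) / np.sqrt(2)
    g = np.sqrt(2) * np.abs(S_hv)
    b = np.abs(S_hh + S_vv) / np.sqrt(2)
    rgb = np.stack([r, g, b], axis=-1)
    return rgb / (rgb.max() + 1e-12)      # scale for display

def dolp(I0, I45, I90, I135):
    """Degree of linear polarization from four linear-analyzer intensities."""
    s0 = 0.5 * (I0 + I45 + I90 + I135)
    s1 = I0 - I90
    s2 = I45 - I135
    return np.sqrt(s1**2 + s2**2) / (s0 + 1e-12)
```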

    Hyperspectral Remote Sensing Benchmark Database for Oil Spill Detection with an Isolation Forest-Guided Unsupervised Detector

    Oil spill detection has attracted increasing attention in recent years because marine oil spill accidents severely affect the environment, natural resources, and the lives of coastal inhabitants. Hyperspectral remote sensing images provide rich spectral information that is beneficial for monitoring oil spills in complex ocean scenarios. However, most existing approaches detect oil spills from hyperspectral images (HSIs) within supervised or semi-supervised frameworks, which require substantial effort to annotate high-quality training sets. In this study, we make the first attempt to develop an unsupervised oil spill detection method for HSIs based on the isolation forest. First, considering that the noise level varies among bands, a noise variance estimation method is used to evaluate the noise level of each band, and bands corrupted by severe noise are removed. Second, kernel principal component analysis (KPCA) is employed to reduce the high dimensionality of the HSIs. Then, the probability of each pixel belonging to the seawater or oil spill class is estimated with the isolation forest, and a set of pseudo-labeled training samples is automatically produced by applying a clustering algorithm to the estimated probabilities. Finally, an initial detection map is obtained by applying a support vector machine (SVM) to the dimension-reduced data, and the initial result is further optimized with the extended random walker (ERW) model to improve the detection accuracy of oil spills. Experiments on an airborne hyperspectral oil spill dataset (HOSD) that we created demonstrate that the proposed method achieves superior detection performance compared with other state-of-the-art detection approaches.
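
    The pipeline above (band screening by noise level, KPCA, isolation forest scoring, clustering into pseudo-labels, SVM classification) can be sketched end to end with scikit-learn. The following is a rough, assumption-laden illustration rather than the authors' implementation: the per-band noise estimator is a simple neighbour-difference proxy, the ERW refinement step is omitted, and the input names are hypothetical.

```python
import numpy as np
from sklearn.decomposition import KernelPCA
from sklearn.ensemble import IsolationForest
from sklearn.cluster import KMeans
from sklearn.svm import SVC

def detect_oil_spill(cube, n_keep_bands=150, n_components=10, n_fit=2000):
    """Unsupervised seawater / oil-spill map for a (rows, cols, bands) HSI.

    Sketch of the pipeline described in the abstract; the extended random
    walker (ERW) refinement is not reproduced here.
    """
    rows, cols, bands = cube.shape

    # 1. Crude per-band noise proxy from horizontal pixel differences;
    #    keep the least noisy bands.
    noise = np.var(np.diff(cube, axis=1), axis=(0, 1))
    keep = np.argsort(noise)[:min(n_keep_bands, bands)]
    X = cube[:, :, keep].reshape(rows * cols, -1).astype(float)

    # Fit the heavier models on a random pixel subset to keep memory sane.
    rng = np.random.default_rng(0)
    idx = rng.choice(X.shape[0], size=min(n_fit, X.shape[0]), replace=False)

    # 2. Kernel PCA for dimensionality reduction.
    kpca = KernelPCA(n_components=n_components, kernel="rbf").fit(X[idx])
    X_red = kpca.transform(X)

    # 3. Isolation forest scores (oil pixels are assumed to be the anomaly).
    scores = IsolationForest(random_state=0).fit(X_red[idx]).score_samples(X_red)

    # 4. Cluster the scores into two groups to obtain pseudo-labels.
    pseudo = KMeans(n_clusters=2, n_init=10).fit_predict(scores.reshape(-1, 1))

    # 5. Train an SVM on pseudo-labeled pixels and predict the initial map.
    svm = SVC(kernel="rbf").fit(X_red[idx], pseudo[idx])
    return svm.predict(X_red).reshape(rows, cols)
```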

    More Diverse Means Better: Multimodal Deep Learning Meets Remote Sensing Imagery Classification

    Classification and identification of the materials lying over or beneath the Earth's surface have long been a fundamental but challenging research topic in geoscience and remote sensing (RS), and have attracted growing attention owing to recent advances in deep learning. Although deep networks have been successfully applied in single-modality-dominated classification tasks, their performance inevitably hits a bottleneck in complex scenes that need to be finely classified, due to the limited diversity of information. In this work, we provide a baseline solution to this difficulty by developing a general multimodal deep learning (MDL) framework. In particular, we also investigate a special case of multi-modality learning (MML), namely cross-modality learning (CML), which arises widely in RS image classification applications. By focusing on "what", "where", and "how" to fuse, we show different fusion strategies as well as how to train deep networks and build the network architecture. Specifically, five fusion architectures are introduced and developed, and further unified within our MDL framework. More significantly, our framework is not limited to pixel-wise classification tasks but is also applicable to spatial information modeling with convolutional neural networks (CNNs). To validate the effectiveness and superiority of the MDL framework, extensive experiments covering both MML and CML settings are conducted on two different multimodal RS datasets. Furthermore, the codes and datasets will be made available at https://github.com/danfenghong/IEEE_TGRS_MDL-RS, contributing to the RS community.
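
    As a concrete illustration of one common fusion strategy discussed above (feature-level fusion by concatenating modality-specific encoders), the PyTorch sketch below shows a minimal two-branch network for patch-wise classification. It is a generic example under assumed input shapes, not one of the five architectures from the paper; those are available in the linked repository.

```python
import torch
import torch.nn as nn

class TwoBranchFusion(nn.Module):
    """Minimal feature-level fusion of two co-registered modalities
    (e.g. hyperspectral and SAR patches) for patch-wise classification."""

    def __init__(self, bands_a, bands_b, n_classes, width=64):
        super().__init__()

        def branch(in_ch):
            return nn.Sequential(
                nn.Conv2d(in_ch, width, 3, padding=1), nn.ReLU(),
                nn.Conv2d(width, width, 3, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1),
            )

        self.enc_a = branch(bands_a)   # modality-specific encoders
        self.enc_b = branch(bands_b)
        self.head = nn.Sequential(     # shared classifier after fusion
            nn.Flatten(),
            nn.Linear(2 * width, width), nn.ReLU(),
            nn.Linear(width, n_classes),
        )

    def forward(self, x_a, x_b):
        # Encode each modality separately, then fuse by concatenation.
        fused = torch.cat([self.enc_a(x_a), self.enc_b(x_b)], dim=1)
        return self.head(fused)

# Example with assumed shapes: 7x7 patches, 144 hyperspectral bands, 2 SAR channels.
model = TwoBranchFusion(bands_a=144, bands_b=2, n_classes=10)
logits = model(torch.randn(8, 144, 7, 7), torch.randn(8, 2, 7, 7))  # (8, 10)
```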

    Remote Sensing for Non‐Technical Survey

    This chapter describes the research activities of the Royal Military Academy on remote sensing applied to mine action. Remote sensing can be used to detect specific features that could lead to the suspicion of the presence, or absence, of mines. Work on the automatic detection of trenches and craters is presented here. Land cover can be extracted and is quite useful for mine action; we present a classification method based on Gabor filters. The relief of a region helps analysts to understand where mines could have been laid, and methods to derive a digital terrain model from a digital surface model are explained. The special case of multi-spectral classification is also addressed in this chapter, together with a discussion of data fusion. Hyper-spectral data are addressed with a change detection method. Synthetic aperture radar data and their fusion with optical data have been studied, and radar interferometry and polarimetry are also addressed.
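
    The land cover classification mentioned above rests on Gabor filter responses as texture features. Below is a minimal scikit-image/scikit-learn sketch of that general idea, not the chapter's actual method: a small filter bank is applied to a grayscale band, and a random forest is trained on labelled pixels (the inputs `image` and `labels` are hypothetical).

```python
import numpy as np
from skimage.filters import gabor
from sklearn.ensemble import RandomForestClassifier

def gabor_features(image, frequencies=(0.1, 0.2, 0.4), n_orientations=4):
    """Per-pixel texture features from a small Gabor filter bank."""
    feats = []
    for f in frequencies:
        for k in range(n_orientations):
            theta = k * np.pi / n_orientations
            real, imag = gabor(image, frequency=f, theta=theta)
            feats.append(np.hypot(real, imag))       # magnitude response
    return np.stack(feats, axis=-1)                  # (rows, cols, n_filters)

def classify_land_cover(image, labels):
    """Train on pixels where labels > 0 and predict a full land-cover map."""
    F = gabor_features(image)
    X, y = F.reshape(-1, F.shape[-1]), labels.ravel()
    clf = RandomForestClassifier(n_estimators=100).fit(X[y > 0], y[y > 0])
    return clf.predict(X).reshape(image.shape)
```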

    Crop monitoring and yield estimation using polarimetric SAR and optical satellite data in southwestern Ontario

    Optical satellite data have proven to be an efficient source for extracting crop information and monitoring crop growth conditions over large areas. In local- to subfield-scale crop monitoring studies, both high spatial resolution and high temporal resolution of the image data are important. However, the acquisition of optical data is limited by persistent cloud contamination in cloudy areas. This thesis explores the potential of polarimetric Synthetic Aperture Radar (SAR) satellite data and a spatio-temporal data fusion approach for crop monitoring and yield estimation in southwestern Ontario. Firstly, the sensitivity of 16 parameters derived from C-band Radarsat-2 polarimetric SAR data to crop height and fractional vegetation cover (FVC) was investigated. The results show that SAR backscatter is affected by many factors unrelated to the crop canopy, such as the incidence angle and the soil background, and that the degree of sensitivity varies with crop type, growth stage, and the polarimetric SAR parameter. Secondly, the Minimum Noise Fraction (MNF) transformation was applied, for the first time, to multitemporal Radarsat-2 polarimetric SAR data for cropland mapping with a random forest classifier. An overall classification accuracy of 95.89% was achieved using the MNF transformation of the multitemporal coherency matrix acquired from July to November. Then, a spatio-temporal data fusion method was developed to generate Normalized Difference Vegetation Index (NDVI) time series with both high spatial and high temporal resolution in heterogeneous regions using Landsat and MODIS imagery; the proposed method outperforms two other widely used methods. Finally, an improved crop phenology detection method was proposed, and the phenology information was forced into the Simple Algorithm for Yield Estimation (SAFY) model to estimate crop biomass and yield. Compared with the SAFY model without the remotely sensed phenology and with a simple light use efficiency (LUE) model, the SAFY model incorporating the remotely sensed phenology improves the accuracy of biomass estimation by about 4% in relative Root Mean Square Error (RRMSE). The studies in this thesis improve the ability to monitor crop growth status and production at the subfield scale.
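
    The MNF transformation used in the classification experiment can be written as a generalized eigenproblem between the signal and noise covariance matrices, followed by projection onto the highest-SNR components. The sketch below is a minimal NumPy/scikit-learn illustration under simple assumptions (noise covariance estimated from horizontal neighbour differences, hypothetical input names), not the processing chain used in the thesis.

```python
import numpy as np
from scipy.linalg import eigh
from sklearn.ensemble import RandomForestClassifier

def mnf(cube, n_components=10):
    """Minimum Noise Fraction transform of a (rows, cols, bands) stack.

    The noise covariance is estimated from horizontal neighbour differences,
    a common simple estimator (an assumption, not necessarily the thesis's).
    """
    rows, cols, bands = cube.shape
    X = cube.reshape(-1, bands).astype(float)
    X -= X.mean(axis=0)

    diff = (cube[:, 1:, :] - cube[:, :-1, :]).reshape(-1, bands)
    sigma_noise = np.cov(diff, rowvar=False) / 2.0
    sigma_signal = np.cov(X, rowvar=False)

    # Generalized eigenproblem: maximise the signal-to-noise ratio.
    vals, vecs = eigh(sigma_signal, sigma_noise)
    order = np.argsort(vals)[::-1][:n_components]
    return (X @ vecs[:, order]).reshape(rows, cols, n_components)

def classify(stack, train_mask, train_labels, n_components=10):
    """Cropland map from a multitemporal PolSAR feature stack.

    `stack` is a (rows, cols, bands) array of coherency-matrix elements over
    several dates; `train_mask` is a boolean pixel mask and `train_labels`
    the classes of those pixels (all assumed inputs).
    """
    feats = mnf(stack, n_components).reshape(-1, n_components)
    rf = RandomForestClassifier(n_estimators=200)
    rf.fit(feats[train_mask.ravel()], train_labels)
    return rf.predict(feats).reshape(stack.shape[:2])
```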

    An Analysis of multimodal sensor fusion for target detection in an urban environment

    This work makes a compelling case for simulation as a tool for designing cutting-edge remote sensing systems, since it can generate the sheer volume of data required for a reasonable trade study. The generalized approach presented here allows multimodal system designers to tailor target and sensor parameters for their particular scenarios of interest via synthetic image generation tools, ensuring that resources are best allocated while sensors are still in the design phase. Additionally, sensor operators can use the customizable process showcased here to optimize image collection parameters for existing sensors. In the remote sensing community, polarimetric capabilities are often seen as a tool without a widely accepted mission. This study proposes incorporating a polarimetric and spectral sensor in a multimodal architecture to improve target detection performance in an urban environment. Two novel multimodal fusion algorithms are proposed: one for the pixel level and another for the decision level. A synthetic urban scene is rendered for 355 unique combinations of illumination condition and sensor viewing geometry with the Digital Imaging and Remote Sensing Image Generation (DIRSIG) model, and then validated to ensure the presence of sufficient background clutter. The utility of polarimetric information is shown to vary with the sun-target-sensor geometry, and the decision fusion algorithm generally outperforms the pixel fusion algorithm. The results suggest that polarimetric information can be leveraged to restore the capabilities of a spectral sensor forced to image under less than ideal circumstances.
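
    The distinction between the two fusion levels can be made concrete with a small sketch: pixel-level fusion stacks the polarimetric band with the spectral bands before a single detector is run, while decision-level fusion thresholds each detector's score map and then combines the binary decisions. The NumPy example below is a generic illustration with hypothetical inputs and thresholds, not the algorithms developed in this thesis.

```python
import numpy as np

def pixel_level_fusion(spectral_cube, dolp, weight=1.0):
    """Pixel-level fusion: append the polarimetric band (here, DOLP) to the
    spectral bands so a single detector sees the stacked data."""
    return np.concatenate([spectral_cube, weight * dolp[..., None]], axis=-1)

def decision_level_fusion(score_spectral, score_polar, t_spec, t_pol, rule="or"):
    """Decision-level fusion: threshold each detector's score map first,
    then combine the binary detection maps."""
    d_spec = score_spectral > t_spec
    d_pol = score_polar > t_pol
    return (d_spec | d_pol) if rule == "or" else (d_spec & d_pol)

# Example with random stand-in score maps.
rng = np.random.default_rng(0)
detections = decision_level_fusion(rng.random((64, 64)), rng.random((64, 64)),
                                   t_spec=0.95, t_pol=0.95)
```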