470 research outputs found

    Deep learning in remote sensing: a review

    Standing at the paradigm shift towards data-intensive science, machine learning techniques are becoming increasingly important. In particular, deep learning, as a major breakthrough in the field, has proven to be an extremely powerful tool in many areas. Shall we embrace deep learning as the key to everything? Or should we resist a 'black-box' solution? There are controversial opinions in the remote sensing community. In this article, we analyze the challenges of using deep learning for remote sensing data analysis, review recent advances, and provide resources that make it simple to get started with deep learning in remote sensing. More importantly, we encourage remote sensing scientists to bring their expertise into deep learning and to use it as an implicit general model to tackle unprecedented, large-scale, influential challenges such as climate change and urbanization. Comment: accepted for publication in IEEE Geoscience and Remote Sensing Magazine.

    A Comprehensive Survey of Deep Learning in Remote Sensing: Theories, Tools and Challenges for the Community

    In recent years, deep learning (DL), a re-branding of neural networks (NNs), has risen to the top in numerous areas, namely computer vision (CV), speech recognition, and natural language processing. Whereas remote sensing (RS) possesses a number of unique challenges, primarily related to sensors and applications, RS inevitably draws on many of the same theories as CV, e.g., statistics, fusion, and machine learning, to name a few. This means that the RS community should be aware of, if not at the leading edge of, advancements like DL. Herein, we provide the most comprehensive survey of state-of-the-art RS DL research. We also review recent developments in the DL field that can be used in DL for RS. Namely, we focus on theories, tools and challenges for the RS community. Specifically, we focus on unsolved challenges and opportunities as they relate to (i) inadequate data sets, (ii) human-understandable solutions for modelling physical phenomena, (iii) Big Data, (iv) non-traditional heterogeneous data sources, (v) DL architectures and learning algorithms for spectral, spatial and temporal data, (vi) transfer learning, (vii) an improved theoretical understanding of DL systems, (viii) high barriers to entry, and (ix) training and optimizing DL models. Comment: 64 pages, 411 references. To appear in Journal of Applied Remote Sensing.

    Assessment of multi-temporal, multi-sensor radar and ancillary spatial data for grasslands monitoring in Ireland using machine learning approaches

    Accurate inventories of grasslands are important for studies of carbon dynamics, biodiversity conservation and agricultural management. For regions with persistent cloud cover, the use of multi-temporal synthetic aperture radar (SAR) data provides an attractive solution for generating up-to-date inventories of grasslands. This is even more appealing considering the data that will be available from upcoming missions such as Sentinel-1 and ALOS-2. In this study, the performance of three machine learning algorithms, Random Forests (RF), Support Vector Machines (SVM) and the relatively underused Extremely Randomised Trees (ERT), is evaluated for discriminating between grassland types over two large heterogeneous areas of Ireland using multi-temporal, multi-sensor radar and ancillary spatial datasets. A detailed accuracy assessment shows the efficacy of the three algorithms in classifying different types of grasslands. Overall accuracies ≥ 88.7% (with a kappa coefficient of 0.87) were achieved for the single-frequency classifications, and a maximum accuracy of 97.9% (kappa coefficient of 0.98) for the combined-frequency classifications. For most datasets, the ERT classifier outperforms SVM and RF.
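    As a rough, hedged illustration of the kind of classifier comparison described above (not the study's actual feature set or tuning), the sketch below trains Random Forests, an SVM and Extremely Randomised Trees on a placeholder per-pixel feature table and reports overall accuracy and the kappa coefficient; the synthetic data generator and hyperparameters are assumptions.

```python
# Hypothetical sketch: compare RF, SVM and ERT on stacked multi-temporal SAR features.
# Feature/label arrays are synthetic stand-ins for the per-pixel training data.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, ExtraTreesClassifier
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, cohen_kappa_score

# Stand-in for per-pixel backscatter features (e.g. multi-date HH/HV intensities).
X, y = make_classification(n_samples=5000, n_features=12, n_informative=8,
                           n_classes=4, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

classifiers = {
    "RF": RandomForestClassifier(n_estimators=500, random_state=0),
    "SVM": SVC(kernel="rbf", C=10.0, gamma="scale"),
    "ERT": ExtraTreesClassifier(n_estimators=500, random_state=0),
}

for name, clf in classifiers.items():
    clf.fit(X_train, y_train)
    pred = clf.predict(X_test)
    print(f"{name}: overall accuracy = {accuracy_score(y_test, pred):.3f}, "
          f"kappa = {cohen_kappa_score(y_test, pred):.3f}")
```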

    Remote Sensing for Non‐Technical Survey

    This chapter describes the research activities of the Royal Military Academy on remote sensing applied to mine action. Remote sensing can be used to detect specific features that could lead to the suspicion of the presence, or absence, of mines. Work on the automatic detection of trenches and craters is presented here. Land cover can be extracted and is quite useful for mine action; we present a classification method based on Gabor filters. The relief of a region helps analysts to understand where mines could have been laid, and methods to derive a digital terrain model from a digital surface model are explained. The special case of multi-spectral classification is also addressed in this chapter, and a discussion of data fusion is given. Hyper-spectral data are addressed with a change detection method. Synthetic aperture radar data and its fusion with optical data have been studied. Radar interferometry and polarimetry are also addressed.
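    As a hedged illustration of the Gabor-filter-based land-cover classification mentioned in the chapter summary (the exact filter bank and classifier are not given there), the sketch below computes a small bank of Gabor magnitude responses with scikit-image and stacks them into per-pixel texture features; the test image, frequencies and orientations are assumptions.

```python
# Illustrative sketch: Gabor filter bank as texture features for land-cover mapping.
# The input image, frequencies and orientations are hypothetical choices.
import numpy as np
from skimage.filters import gabor
from skimage.data import camera  # stand-in for a single-band satellite image

image = camera().astype(float) / 255.0

features = []
for frequency in (0.1, 0.2, 0.4):               # cycles per pixel
    for theta in np.deg2rad((0, 45, 90, 135)):  # filter orientations
        real, imag = gabor(image, frequency=frequency, theta=theta)
        features.append(np.sqrt(real**2 + imag**2))  # magnitude response

# Per-pixel feature vectors (rows) ready for any supervised classifier.
feature_stack = np.stack(features, axis=-1).reshape(-1, len(features))
print(feature_stack.shape)
```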

    Mapping Mangrove Extent and Change: A Globally Applicable Approach

    This study demonstrates a globally applicable method for monitoring mangrove forest extent at high spatial resolution. A 2010 mangrove baseline was classified for 16 study areas using a combination of ALOS PALSAR and Landsat composite imagery within a random forests classifier. A novel map-to-image change method was used to detect annual and decadal changes in extent using ALOS PALSAR/JERS-1 imagery. The map-to-image method presented makes fewer assumptions about the data than existing methods, is less sensitive to variation between scenes due to environmental factors (e.g., tide or soil moisture) and is able to identify a change threshold automatically. Change maps were derived from the 2010 baseline back to 1996 using JERS-1 SAR and forward to 2007, 2008 and 2009 using ALOS PALSAR. This study demonstrated results for 16 known hotspots of mangrove change distributed globally, with a total mangrove area of 2,529,760 ha. The method was demonstrated to have accuracies consistently in excess of 90% (overall accuracy: 92.2–93.3%, kappa: 0.86) for mapping baseline extent. The accuracies of the change maps were more variable and depended upon the time period between images and the number of change features. Total change from 1996 to 2010 was 204,850 ha (127,990 ha gain, 76,860 ha loss), with the highest gains observed in French Guiana (15,570 ha) and the highest losses observed in East Kalimantan, Indonesia (23,003 ha). Changes in mangrove extent were the consequence of both natural and anthropogenic drivers, yielding net increases or decreases in extent depending upon the study site. These updated maps are of importance to the mangrove research community, particularly as the baseline can be continually updated with currently available and anticipated spaceborne sensors. It is recommended that mangrove baselines are updated at least every five years to suit the requirements of policy makers.
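    The abstract notes that the map-to-image method identifies a change threshold automatically. The paper's exact procedure is not reproduced here; the sketch below is only a stand-in showing automatic thresholding (Otsu's method) of a masked SAR difference image on synthetic data, to illustrate the general idea of unsupervised threshold selection for change mapping.

```python
# Illustrative sketch only: automatic threshold selection for SAR change detection.
# Otsu's method is used as a stand-in for the paper's map-to-image thresholding,
# and synthetic backscatter arrays replace calibrated ALOS PALSAR / JERS-1 data.
import numpy as np
from skimage.filters import threshold_otsu

rng = np.random.default_rng(0)
sigma0_baseline = rng.normal(-8.0, 1.0, (512, 512))   # dB, baseline-epoch backscatter
sigma0_later = sigma0_baseline + rng.normal(0.0, 0.5, (512, 512))
sigma0_later[200:260, 100:400] -= 6.0                  # simulated mangrove loss patch

mangrove_mask = np.ones((512, 512), dtype=bool)        # 2010 baseline extent (placeholder)

# Difference image restricted to the baseline mask, thresholded automatically.
diff = (sigma0_later - sigma0_baseline)[mangrove_mask]
thresh = threshold_otsu(diff)
loss = np.zeros((512, 512), dtype=bool)
loss[mangrove_mask] = diff < min(thresh, 0.0)          # flag only decreases in backscatter
print(f"threshold = {thresh:.2f} dB, flagged pixels = {loss.sum()}")
```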

    Fusion of VNIR Optical and C-Band Polarimetric SAR Satellite Data for Accurate Detection of Temporal Changes in Vegetated Areas

    In this paper, we propose a processing chain jointly employing Sentinel-1 and Sentinel-2 data, aiming to monitor changes in the status of the vegetation cover by integrating the four 10 m visible and near-infrared (VNIR) bands with the three red-edge (RE) bands of Sentinel-2. The latter approximately span the gap between the red and NIR bands (700 nm–800 nm), with bandwidths of 15/20 nm and 20 m pixel spacing. The RE bands are sharpened to 10 m following the hypersharpening protocol, which, unlike pansharpening, applies when the sharpening band is not unique. The resulting 10 m fusion product may be integrated with polarimetric features calculated from the Interferometric Wide (IW) Ground Range Detected (GRD) product of Sentinel-1, available at 10 m pixel spacing, before the fused data are analyzed for change detection. A key point of the proposed scheme is that the fusion of optical and synthetic aperture radar (SAR) data is accomplished at the level of change, through modulation of the optical change feature, namely the difference in the normalized area over (reflectance) curve (NAOC) calculated from the sharpened RE bands, by the polarimetric SAR change feature, obtained as the temporal ratio of a polarimetric feature, namely the pixel ratio between the co-polar and the cross-polar channels. Hypersharpening of the Sentinel-2 RE bands, calculation of the NAOC and modulation-based integration of the Sentinel-1 polarimetric change feature are applied to multitemporal datasets acquired before and after a fire event over Mount Serra, in Italy. The optical change feature captures variations in chlorophyll content. The polarimetric SAR temporal change feature describes depolarization effects and changes in the volumetric scattering of canopies. Their fusion shows an increased ability to highlight changes in vegetation status. In a performance comparison carried out by means of receiver operating characteristic (ROC) curves, the proposed change-feature-based fusion approach surpasses a traditional area-based approach and the normalized burned ratio (NBR) index, which is widespread in the detection of burnt vegetation.
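    A loose sketch of the change-level fusion idea follows: an NAOC difference computed from red-edge reflectance, modulated by the temporal ratio of a co-polar/cross-polar SAR feature. The band wavelengths, the particular NAOC formulation and the modulation rule are assumptions made for illustration, not the paper's exact recipe.

```python
# Loose sketch of the fusion idea: an optical change feature from red-edge bands
# modulated by a polarimetric SAR temporal ratio. Band wavelengths, the NAOC
# formulation and the modulation rule are assumptions, not the paper's exact recipe.
import numpy as np

def naoc(reflectance, wavelengths):
    """Normalized area over the reflectance curve between the first and last band
    (one common formulation: 1 - integral / (rho_last * span))."""
    area = np.trapz(reflectance, wavelengths, axis=-1)
    span = wavelengths[-1] - wavelengths[0]
    return 1.0 - area / (reflectance[..., -1] * span)

# Placeholder 10 m stacks: red + three sharpened red-edge bands + NIR, two dates.
wl = np.array([665.0, 705.0, 740.0, 783.0, 842.0])        # nm (Sentinel-2-like centres)
rng = np.random.default_rng(1)
refl_pre = rng.uniform(0.05, 0.5, (256, 256, wl.size))
refl_post = rng.uniform(0.05, 0.5, (256, 256, wl.size))

delta_naoc = naoc(refl_post, wl) - naoc(refl_pre, wl)      # optical change feature

# Polarimetric SAR change: temporal ratio of the co-pol / cross-pol ratio (linear units).
vv_pre, vh_pre = rng.gamma(2.0, 0.05, (2, 256, 256))
vv_post, vh_post = rng.gamma(2.0, 0.05, (2, 256, 256))
sar_change = (vv_post / vh_post) / (vv_pre / vh_pre)

# Fusion at the level of change: modulate the optical feature by the SAR feature.
fused_change = delta_naoc * sar_change
print(fused_change.shape)
```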

    Developing a Grassland Biomass Monitoring Tool Using a Time Series of Dual Polarimetric SAR and Optical Data

    Grasslands are the most important ecosystem to humanity, as they are responsible for feeding the majority of the human population. They are also very large ecosystems, covering approximately 40% of the surface of the earth (Loveland et al., 1998), which makes ground-based surveys for monitoring grassland health and productivity extremely time consuming. Remote sensing has the advantage of providing reliable and repeatable observations over large swaths of land; however, optical sensors exploiting the visible and near-infrared regions of the electromagnetic (EM) spectrum are unable to collect information from the ground if clouds are present (Wang et al., 2009). Imaging radar sensors, the most common being synthetic aperture radar (SAR), have the advantage of being able to image the ground even during cloudy conditions. The longer wavelengths of EM energy used by SAR sensors are able to penetrate clouds, while the shorter wavelengths used by optical sensors are scattered. A grassland monitoring tool based on SAR imagery would have many advantages over an optical imagery system, especially as SAR data become widely available. To demonstrate the feasibility of grassland monitoring using SAR, this study experimented with a set of dual-polarimetric SAR imagery to extract several grassland biophysical parameters, such as soil moisture, canopy moisture and green grass biomass, over the mixed grassland of southwestern Saskatchewan. Soil moisture was derived from these images using the simple Delta Index (Thoma et al., 2006), first developed for a sparsely vegetated landscape. The Delta Index was found to explain 80% of the variation in soil moisture in this vegetated landscape. Canopy moisture was modeled using the water cloud model (Attema and Ulaby, 1978). This model had similar explanatory power (R2 = 0.80). The study found that only the photosynthesizing green grass biomass had a significant relationship with the canopy moisture model; however, only about 40% of the variation in green grass biomass could be explained by canopy moisture alone. The cross-polarized ratio derived from the dual-polarimetric images was found to reflect the plant form diversity of the grassland. Biophysical parameters extracted from optical satellite imagery, Landsat-5 in the case of this study, were compared to those derived from the SAR images. This comparison revealed that the SAR images were superior in sensitivity to soil and canopy moisture, whereas optical imagery was found to be more sensitive to green canopy cover. An approach combining the results from both sensors showed an improvement in green grass biomass estimation (adjusted R2 = 0.71).
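    The two retrieval relations named above have simple closed forms. The sketch below states one common form of each, the Delta Index (after Thoma et al., 2006) and the water cloud model (after Attema and Ulaby, 1978), with placeholder coefficients and backscatter values rather than the study's fitted parameters.

```python
# Hedged sketch of the two relations mentioned in the abstract: the Delta Index for
# soil moisture and the water cloud model for canopy effects. Coefficients A, B and
# the backscatter values are placeholders, not the study's fitted values.
import numpy as np

def delta_index(sigma0_wet, sigma0_dry):
    """Delta Index (after Thoma et al., 2006): relative change of linear backscatter
    between a wet scene and a dry reference scene."""
    return np.abs(sigma0_wet - sigma0_dry) / sigma0_dry

def water_cloud_sigma0(V, soil_sigma0, theta_deg, A=0.12, B=0.50):
    """Water cloud model (after Attema and Ulaby, 1978): total backscatter as
    vegetation volume scattering plus two-way attenuated soil scattering.
    V is a canopy descriptor (e.g. canopy water content); linear power units."""
    cos_t = np.cos(np.deg2rad(theta_deg))
    tau2 = np.exp(-2.0 * B * V / cos_t)          # two-way canopy transmissivity
    sigma_veg = A * V * cos_t * (1.0 - tau2)     # direct canopy contribution
    return sigma_veg + tau2 * soil_sigma0

# Example with made-up numbers (linear power units, incidence angle 35 degrees).
print(delta_index(sigma0_wet=0.12, sigma0_dry=0.08))
print(water_cloud_sigma0(V=1.5, soil_sigma0=0.05, theta_deg=35.0))
```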

    Analysis of Polarimetric Synthetic Aperture Radar and Passive Visible Light Polarimetric Imaging Data Fusion for Remote Sensing Applications

    The recent launch of spaceborne (TerraSAR-X, RADARSAT-2, ALOS-PALSAR, RISAT) and airborne (SIR-C, AIRSAR, UAVSAR, PISAR) polarimetric radar sensors, with the capability of imaging day and night in almost all weather conditions, has made polarimetric synthetic aperture radar (PolSAR) image interpretation and analysis an active area of research. PolSAR image classification is sensitive to object orientation and scattering properties. In recent years, significant work has been done in many areas including agriculture, forestry, oceanography, geology, and terrain analysis. Visible light passive polarimetric imaging has also emerged as a powerful tool in remote sensing for enhanced information extraction. The intensity image provides information on materials in the scene, while polarization measurements capture surface features, roughness, and shading that are often uncorrelated with the intensity image. Advantages of visible light polarimetric imaging include the high dynamic range of polarimetric signatures and instruments that are comparatively straightforward to build and calibrate. This research concerns the characterization and analysis of the basic scattering mechanisms for information fusion between PolSAR and passive visible light polarimetric imaging. Relationships between these two modes of imaging are established using laboratory measurements and image simulations with the Digital Image and Remote Sensing Image Generation (DIRSIG) tool. A novel, low-cost, laboratory-based S-band (2.4 GHz) PolSAR instrument is developed that is capable of capturing four-channel, fully polarimetric SAR image data. Simple radar targets are formed and system calibration is performed in terms of radar cross-section. Experimental measurements are made using the PolSAR instrument in combination with a visible light polarimetric imager, for scenes capturing the basic scattering mechanisms, to support phenomenology studies. The three major scattering mechanisms studied in this research are single, double and multiple bounce. Single bounce occurs from flat surfaces such as lakes, rivers, bare soil, and oceans. Double bounce is observed from two adjacent surfaces where a horizontal flat surface meets a vertical surface, as with buildings and other vertical structures. Randomly oriented scatterers in a homogeneous medium produce multiple-bounce scattering, which occurs in forest canopies and vegetated areas. Relationships between the Pauli color components from PolSAR and the Degree of Linear Polarization (DOLP) from passive visible light polarimetric imaging are established using real measurements. Results show that higher values of the red channel in the Pauli color image (|HH-VV|) correspond to high DOLP from the double-bounce effect. A novel information fusion technique is applied to combine information from the two modes. It is demonstrated that DOLP from passive visible light polarimetric imaging can be used to separate classes in terms of scattering mechanisms in the PolSAR data. The separation of these three classes in terms of scattering mechanisms has applications in land cover classification and anomaly detection. The fusion of information from these two modes of imaging, i.e. PolSAR and passive visible light polarimetric imaging, is a largely unexplored area in remote sensing, and the main challenge in this research is to identify areas and scenarios where information fusion between the two modes is advantageous for separating the classes in terms of scattering mechanisms, relative to the separation achieved with PolSAR alone.
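    For readers unfamiliar with the two quantities being related in this work, the sketch below computes the standard Pauli colour components from a quad-pol scattering matrix and the degree of linear polarization from linear-polarizer intensities; the input arrays are synthetic placeholders, not calibrated measurements from the S-band instrument or DIRSIG simulations.

```python
# Hedged sketch: the two standard quantities compared in this work.
# Pauli components from a quad-pol scattering matrix, and DOLP from linear-polarizer
# intensity measurements. Inputs are synthetic placeholders, not calibrated data.
import numpy as np

rng = np.random.default_rng(2)
shape = (128, 128)

# Complex scattering-matrix channels (placeholder PolSAR data).
hh, hv, vv = (rng.normal(size=shape) + 1j * rng.normal(size=shape) for _ in range(3))

# Pauli decomposition: blue ~ |HH+VV| (single bounce), red ~ |HH-VV| (double bounce),
# green ~ |HV| (volume / multiple scattering). The 1/sqrt(2) factors preserve power.
pauli_blue = np.abs(hh + vv) / np.sqrt(2.0)
pauli_red = np.abs(hh - vv) / np.sqrt(2.0)
pauli_green = np.sqrt(2.0) * np.abs(hv)

# Passive polarimetric side: intensities behind linear polarizers at 0/45/90/135 deg.
i0, i45, i90, i135 = rng.uniform(0.1, 1.0, (4, *shape))
s0 = i0 + i90
s1 = i0 - i90
s2 = i45 - i135
dolp = np.sqrt(s1**2 + s2**2) / s0   # degree of linear polarization

print(pauli_red.mean(), dolp.mean())
```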