845 research outputs found

    Water Across Synthetic Aperture Radar Data (WASARD): SAR Water Body Classification for the Open Data Cube

    The detection of inland water bodies from Synthetic Aperture Radar (SAR) data provides a great advantage over water detection with optical data, since SAR imaging is not impeded by cloud cover. Traditional methods of detecting water from SAR data involve thresholding, which can be labor intensive and imprecise. This paper describes Water Across Synthetic Aperture Radar Data (WASARD): a method of water detection from SAR data which automates and simplifies the thresholding process using machine learning on training data created from Geoscience Australia's WOFS algorithm. Of the machine learning models tested, the Linear Support Vector Machine was determined to be optimal, with the option of training using solely the VH polarization or a combination of the VH and VV polarizations. WASARD was able to identify water in the target area with a correlation of 97% with WOFS.

    Keywords: Sentinel-1, Open Data Cube, Earth Observations, Machine Learning, Water Detection

    1. INTRODUCTION

    Water classification is an important function of Earth imaging satellites, as accurate remote classification of land and water can assist in land use analysis, flood prediction, climate change research, and a variety of agricultural applications [2]. Identifying bodies of water remotely via satellite is far cheaper than contracting surveys of the areas in question, so an application that can accurately use satellite data for this purpose can make valuable information available to nations that could not otherwise afford it. Highly reliable applications for the remote detection of water currently exist for use with optical satellite data such as that provided by Landsat. One such application, Geoscience Australia's Water Observations from Space (WOFS), has already been ported for use with the Open Data Cube [6].
    However, water detection using optical data from Landsat is constrained by its relatively long revisit cycle of 16 days [5], and water detection using any optical data is constrained by its inability to make accurate classifications through cloud cover [2]. The alternative that solves these problems is water detection using SAR data, which images the Earth with cloud-penetrating microwaves. Because of these advantages over optical data, much research has been done into water detection from SAR data. Traditionally, this has used the thresholding method: picking a polarization band and labeling as water all pixels whose value in that band falls below a certain threshold. Thresholding works because water tends to return a much lower backscatter value to the satellite than land [1]. However, this method can be flawed, since estimating the proper threshold is often imprecise, complicated, and labor intensive for the end user. Thresholding also tends to use data from only one SAR polarization, when a combination of polarizations can provide better insight into whether water is present [2]. To alleviate these problems, this paper presents an application for the Open Data Cube that detects water from SAR data using support vector machine (SVM) classification.

    2. PLATFORM

    WASARD is an application for the Open Data Cube, a mechanism which provides a simple yet efficient means of ingesting, storing, and retrieving remote sensing data. Data can be ingested and made analysis ready according to whatever specifications the researcher chooses, and easily resampled to artificially alter a scene's resolution. Currently WASARD supports water detection on scenes from ESA's Sentinel-1 and JAXA's ALOS. When testing WASARD, Sentinel-1 was most commonly used due to its relatively high spatial resolution and its rapid six-day revisit cycle [5].
    With minor alterations to the application's code, however, it could support data from other satellites.

    3. METHODOLOGY

    Using supervised classification, WASARD compares SAR data to a dataset pre-classified by WOFS in order to train an SVM classifier. This classifier is then used to detect water in other SAR scenes outside the training set. Accuracy was measured according to the following metrics:

    - Precision: the percentage of the points WASARD labels as water that are truly water.
    - Recall: the percentage of the total water cover that WASARD was able to identify.
    - F1 score: the harmonic mean of precision and recall.

    Both precision and recall are calculated at the end of the training phase, when the trained classifier is compared against a testing dataset. Because the WOFS algorithm's classifications are used as the truth values when training a WASARD classifier, precision and recall in this paper are always with respect to the values produced by WOFS on a similar scene of Landsat data, which themselves have a classification accuracy of 97% [6]. Visual representations of water identified by WASARD in this paper were produced using the function wasard_plot(), which is included in WASARD.

    3.1 Algorithm Selection

    The machine learning model used by WASARD is the Linear Support Vector Machine (SVM). This model uses a supervised learning algorithm to develop a classifier: a vector which can be multiplied by the vector formed by the relevant data bands to determine whether a pixel in a SAR scene contains water. The classifier is trained by comparing data points from selected bands in a SAR scene to their respective labels, which in this case are "water" or "not water" as given by the WOFS algorithm.
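    To make the decision rule concrete, here is a minimal sketch of a linear water classifier of the kind described: the sign of a dot product between band values and learned weights decides each pixel. The coefficients below are made-up illustrative values, not weights from the paper.

```python
import numpy as np

# Hypothetical coefficients for a trained linear classifier on (VH, VV)
# backscatter. Illustrative values only, not taken from the paper.
coef = np.array([-0.9, -0.4])   # one weight per polarization band
intercept = -18.0               # bias term

def classify_pixels(vh, vv):
    """Label pixels as water (True) where the decision function is positive.

    vh, vv: arrays of backscatter values in dB (lower over water).
    """
    features = np.stack([vh, vv], axis=-1)
    return features @ coef + intercept > 0

# Very negative backscatter yields a positive score, i.e. water.
vh = np.array([-25.0, -10.0])
vv = np.array([-20.0, -8.0])
print(classify_pixels(vh, vv))
```

    Once trained, applying the classifier is just this dot product per pixel, which is why classification cost is comparable to simple thresholding.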
    The SVM was selected over the Random Forest model, which outperformed the SVM in training speed but had a greater classification time and lower accuracy, and over the Multilayer Perceptron artificial neural network, which had a slightly higher average accuracy than the SVM but much greater training and classification times.

    Figure 1: Visual representation of the SVM Classifier. Each white point represents a pixel in a SAR scene.

    In Figure 1, the diagonal line separating pixels determined to be water from those determined not to be water represents the actual classification vector produced by the SVM. It is worth noting that once the model has been trained, classification of pixels proceeds in a similar manner to the thresholding method. This is especially true if only one band was used to train the model.

    3.2 Feature Selection

    Sentinel-1 collects data from two bands: the vertical/vertical polarization (VV) and the vertical/horizontal polarization (VH). When 100 SVM classifiers were created for each polarization individually, and for the combination of the two, the following results were achieved:

    Figure 2: Accuracy of classifiers trained using different polarization bands. Precision and recall were measured with respect to the values produced by WOFS.

    Figure 2 demonstrates that using both the VV and VH bands trades slightly lower recall for significantly greater precision when compared with the VH band alone, and that using the VV band alone is inferior in both metrics. WASARD therefore defaults to using both the VV and VH bands, and includes the option to use solely the VH band. The VV polarization's lower precision compared to the VH polarization is in contrast to results from previous research and may merit further analysis [4].

    3.3 Training a Classifier

    The steps in training a classifier with WASARD are:

    1. Select two scenes (one SAR, one optical) with the same spatial extents, acquired as close to each other in time as possible, preferably on the same day.
    2. Run the WOFS algorithm on the optical scene to produce an array of detected water, to be used as the labels during supervised learning.
    3. Bundle data points from the selected bands of the SAR acquisition into an array with the corresponding labels gathered from WOFS. A random sample with an equal number of points labeled "water" and "not water" is selected and partitioned into a training and a testing dataset.
    4. Use Scikit-Learn's LinearSVC object to produce a classifier from the training dataset, which is then tested against the testing dataset to determine its precision and recall.

    The result is a wasard_classifier object, which has the following attributes and methods:

    1. f1, recall, and precision: three metrics used to determine the classifier's accuracy.
    2. coefficient: the vector the SVM uses to make its predictions; the classifier detects water when the dot product of the coefficient and the vector formed by the SAR bands is positive.
    3. Save(): allows a user to save a classifier to disk in order to use it without retraining.
    4. wasard_classify(): classifies an entire xarray of SAR data using the SVM classifier.

    All of the above steps are performed automatically when the user creates a wasard_classifier object.

    3.4 Classifying a Dataset

    Once the classifier has been created, it can be used to detect water in an xarray of SAR data using wasard_classify(). By taking the dot product of the classifier's coefficients and the vector formed by the selected bands of SAR data, an array of predictions is constructed. A classifier can be used effectively on the same spatial extents as those where it was trained, or on any area with a similar landscape.
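    The training steps above can be sketched with Scikit-Learn's LinearSVC, the object the paper names. Real SAR scenes and WOFS labels are not reproduced here, so this sketch trains on synthetic (VH, VV) backscatter samples in which water is assigned lower values than land; all numbers are illustrative.

```python
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import precision_score, recall_score, f1_score

rng = np.random.default_rng(0)

# Synthetic stand-in for (VH, VV) backscatter in dB with WOFS-style labels:
# water pixels return much lower backscatter than land (illustrative values).
water = rng.normal([-24.0, -18.0], 2.0, size=(500, 2))
land = rng.normal([-12.0, -7.0], 2.0, size=(500, 2))
X = np.vstack([water, land])
y = np.array([1] * 500 + [0] * 500)   # 1 = water, 0 = not water

# Equal numbers of "water" and "not water" points, split for train/test.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

# Train the linear SVM, then score it against the held-out testing set.
clf = LinearSVC(dual=False).fit(X_train, y_train)
pred = clf.predict(X_test)

print(f"precision={precision_score(y_test, pred):.2f}",
      f"recall={recall_score(y_test, pred):.2f}",
      f"f1={f1_score(y_test, pred):.2f}")
```

    The learned weights live in clf.coef_ and clf.intercept_, which correspond to the "coefficient" attribute described above: a pixel is labeled water when the decision function is positive.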

    Guided patch-wise nonlocal SAR despeckling

    We propose a new method for SAR image despeckling which leverages information drawn from co-registered optical imagery. Filtering is performed by plain patch-wise nonlocal means, operating exclusively on SAR data. However, the filtering weights are computed by also taking into account the optical guide, which is much cleaner than the SAR data, and hence more discriminative. To avoid injecting optical-domain information into the filtered image, a SAR-domain statistical test is performed beforehand to reject any risky predictor outright. Experiments on two SAR-optical datasets show that the proposed method suppresses speckle very effectively, preserving structural details without introducing visible filtering artifacts. Overall, the proposed method compares favourably with all state-of-the-art despeckling filters, and also with our own previous optical-guided filter.
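    A toy version of the core idea (compute nonlocal-means weights on the clean optical guide, but average only SAR values) might look like the following sketch; the paper's SAR-domain statistical test, patch aggregation, and tuned parameters are omitted, and every parameter value here is an assumption.

```python
import numpy as np

def guided_nlm(sar, guide, search=5, patch=3, h=0.2):
    """Toy patch-wise nonlocal means guided by an optical image.

    Similarity weights are computed from patches of `guide` only, but the
    values being averaged come exclusively from `sar`, so no optical-domain
    intensity leaks into the filtered output.
    """
    pad = patch // 2
    H, W = sar.shape
    g = np.pad(guide, pad, mode="reflect")
    r = search // 2
    out = np.zeros_like(sar, dtype=float)
    for i in range(H):
        for j in range(W):
            ref = g[i:i + patch, j:j + patch]   # guide patch at (i, j)
            num = den = 0.0
            for di in range(-r, r + 1):
                for dj in range(-r, r + 1):
                    ii = min(max(i + di, 0), H - 1)
                    jj = min(max(j + dj, 0), W - 1)
                    cand = g[ii:ii + patch, jj:jj + patch]
                    # Weight from guide-patch similarity only.
                    w = np.exp(-np.sum((ref - cand) ** 2) / h ** 2)
                    num += w * sar[ii, jj]      # average SAR values only
                    den += w
            out[i, j] = num / den
    return out

# Demo: with a flat guide every weight is equal, so the filter reduces
# the variance of pure noise while roughly preserving the mean.
rng = np.random.default_rng(1)
noisy = 1.0 + 0.5 * rng.standard_normal((12, 12))
flat_guide = np.zeros((12, 12))
print(guided_nlm(noisy, flat_guide).std() < noisy.std())
```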

    deSpeckNet: Generalizing Deep Learning Based SAR Image Despeckling

    Deep learning (DL) has proven to be a suitable approach for despeckling synthetic aperture radar (SAR) images. So far, most DL models are trained to reduce speckle that follows a particular distribution, either using simulated noise or a specific set of real SAR images, limiting the applicability of these methods for real SAR images with unknown noise statistics. In this paper, we present a DL method, deSpeckNet, that estimates the speckle noise distribution and the despeckled image simultaneously. Since it does not depend on a specific noise model, deSpeckNet generalizes well across SAR acquisitions in a variety of land cover conditions. We evaluated the performance of deSpeckNet on single polarized Sentinel-1 images acquired in Indonesia, the Democratic Republic of the Congo and the Netherlands, a single polarized ALOS-2/PALSAR-2 image acquired in Japan and an Iceye X2 image acquired in Germany. In all cases, deSpeckNet was able to effectively reduce speckle and restor

    Application of Multifractal Analysis to Segmentation of Water Bodies in Optical and Synthetic Aperture Radar Satellite Images

    A method for segmenting water bodies in optical and synthetic aperture radar (SAR) satellite images is proposed. It makes use of the textural features of the different regions in the image for segmentation. The method consists of a multiscale analysis of the images, which allows us to study the images' regularity both locally and globally. The analysis yields coarse multifractal spectra of the studied images, along with images that associate each position (pixel) with its corresponding value of the local regularity (or singularity) spectrum. Thresholds are then applied to the multifractal spectra of the images for the classification. These thresholds are selected after studying the characteristics of the spectra, under the assumption that water bodies have larger local regularity than other soil types. Classifications obtained by the multifractal method are compared quantitatively with those obtained by neural networks trained to classify the pixels of the images as covered or not covered by water. In optical images, the classifications are also compared with those derived using the so-called Normalized Difference Water Index (NDWI).
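    For comparison, the NDWI baseline mentioned at the end of the abstract is a simple band ratio; a minimal sketch follows (band reflectance values purely illustrative).

```python
import numpy as np

def ndwi(green, nir, eps=1e-9):
    """Normalized Difference Water Index (McFeeters formulation).

    Positive values typically indicate open water, since water reflects
    more green light than near-infrared; eps guards against division by zero.
    """
    green = np.asarray(green, dtype=float)
    nir = np.asarray(nir, dtype=float)
    return (green - nir) / (green + nir + eps)

# First pixel water-like (green > NIR), second vegetation-like (NIR > green).
print(ndwi([0.30, 0.10], [0.05, 0.40]))
```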

    An Evaluation of Sentinel-1 and Sentinel-2 for Land Cover Classification

    This study evaluates Sentinel-1 and Sentinel-2 remotely sensed images for tropical land cover classification. The dual-polarized Sentinel-1 VV and VH backscatter images and four 10-meter multispectral bands of Sentinel-2 were used to create six land cover classification images across two study areas along the border of the Bolivian Pando Department and the Brazilian state of Acre. Results indicate that the Sentinel-2 multispectral bands possess a higher overall performance in delineating land cover types than the Sentinel-1 backscatter bands. The Sentinel-1 backscatter bands delineated land cover types based on their surficial properties but did not facilitate the separation of similarly textured classes. The combination of Sentinel-1 and -2 resulted in higher accuracy for delineating land cover by increasing the accuracy in separating secondary vegetation from exposed soil. While Sentinel-2 demonstrated the capability to consistently capture land cover in both case studies, there is potential for a single-date Sentinel-1 backscatter image to act as ancillary information in Sentinel-2 scenes affected by clouds, or to increase separability across classes with mixed multispectral qualities but distinct surficial roughness, such as bare ground versus sparsely vegetated areas.

    Deep Learning Methods for Synthetic Aperture Radar Image Despeckling: An Overview of Trends and Perspectives

    Synthetic aperture radar (SAR) images are affected by a spatially correlated and signal-dependent noise called speckle, which is very severe and may hinder image exploitation. Despeckling is an important task that aims to remove such noise so as to improve the accuracy of all downstream image processing tasks. The first despeckling methods date back to the 1970s, and several model-based algorithms have been developed in the years since. The field has received growing attention, sparked by the availability of powerful deep learning models that have yielded excellent performance for inverse problems in image processing. This article surveys the literature on deep learning methods applied to SAR despeckling, covering both supervised and the more recent self-supervised approaches. We provide a critical analysis of existing methods, with the objective of recognizing the most promising research lines; identify the factors that have limited the success of deep models; and propose ways forward in an attempt to fully exploit the potential of deep learning for SAR despeckling

    Towards a 20m global building map from Sentinel-1 SAR Data

    This study introduces a technique for automatically mapping built-up areas using synthetic aperture radar (SAR) backscattering intensity and interferometric multi-temporal coherence generated from Sentinel-1 data in the framework of the Copernicus program. The underlying hypothesis is that, in SAR images, built-up areas exhibit very high backscattering values that are coherent in time. Several particular characteristics of the Sentinel-1 satellite mission are put to good use, such as its high revisit time, the availability of dual-polarized data, and its small orbital tube. The newly developed algorithm is based on an adaptive parametric thresholding that first identifies pixels with high backscattering values in both VV and VH polarimetric channels. The interferometric SAR coherence is then used to reduce false alarms. These are caused by land cover classes (other than buildings) that are characterized by high backscattering values that are not coherent in time (e.g., certain types of vegetated areas). The algorithm was tested on Sentinel-1 Interferometric Wide Swath data from five different test sites located in semiarid and arid regions in the Mediterranean region and Northern Africa. The resulting building maps were compared with the Global Urban Footprint (GUF) derived from the TerraSAR-X mission data and, on average, a 92% agreement was obtained.
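    The two-stage idea (flag high backscatter in both channels, then use temporal coherence to suppress false alarms) can be sketched as follows; the percentile and coherence thresholds are placeholders, not the paper's calibrated adaptive values.

```python
import numpy as np

def building_candidates(vv, vh, coherence, backscatter_pct=90, coh_thresh=0.5):
    """Sketch of a two-stage building detector.

    Stage 1: keep pixels whose backscatter is high in BOTH the VV and VH
    channels (here via a simple percentile cut, standing in for the paper's
    adaptive parametric thresholding).
    Stage 2: keep only pixels that are also coherent in time, rejecting
    bright-but-incoherent land cover such as some vegetated areas.
    """
    high_vv = vv > np.percentile(vv, backscatter_pct)
    high_vh = vh > np.percentile(vh, backscatter_pct)
    stable = coherence > coh_thresh
    return high_vv & high_vh & stable

# One bright, temporally coherent pixel among dark or incoherent ones.
vv = np.array([-20.0, -20.0, -20.0, 0.0])
vh = np.array([-22.0, -22.0, -22.0, -2.0])
coh = np.array([0.2, 0.8, 0.8, 0.9])
print(building_candidates(vv, vh, coh))
```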

    Analyzing Explosive Volcanic Deposits From Satellite‐Based Radar Backscatter, Volcán de Fuego, 2018

    Satellite radar backscatter has the potential to provide useful information about the progression of volcanic eruptions when optical, ground-based, or radar phase-based measurements are limited. However, backscatter changes are complex and challenging to interpret: explosive deposits produce different signals depending on pre-existing ground cover, radar parameters and eruption characteristics. We use high temporal- and spatial-resolution backscatter imagery to examine the emplacement and alteration of pyroclastic density current (PDC), lahar and ash deposits from the June 2018 eruption of Volcán de Fuego, Guatemala, using observatory reports and rainfall gauge data to ground truth our observations. We use a temporally dense time series of backscatter data to reduce noise and extract deposit areas. We observe backscatter changes in six drainages; the largest deposit was 11.9 km long, altered an area of 6.3 km², and had a thickness of 10.5 ± 2 m in the lower sections as estimated from radar shadows. The 3 June eruption also produced a backscatter signal over an area of 40 km², consistent with reported ashfall. We use transient patterns in backscatter time series to identify nine periods of high lahar activity in a single drainage system between June and October 2018. We find that the subtle backscatter signals associated with explosive eruptions are best characterised with (1) radiometric terrain calibration, (2) speckle correction, and (3) consideration of pre-existing scattering properties. Our observations demonstrate that SAR backscatter can capture the emplacement and subsequent alteration of a range of explosive deposits, allowing the progression of an explosive eruption to be monitored.

    Spatio-temporal and structural analysis of vegetation dynamics of Lowveld Savanna in South Africa

    Savanna vegetation structure parameters are important for assessing the biome's status under various disturbance scenarios. Despite the free availability of remote sensing data, the use of optical remote sensing data for savanna vegetation structure mapping is limited by the sparse and heterogeneous distribution of vegetation canopy. Cloud and aerosol contamination lead to inconsistency in the availability of the time series data necessary for continuous vegetation monitoring, especially in the tropics. Long- and medium-wavelength microwave data such as synthetic aperture radar (SAR), with their low sensitivity to clouds and atmospheric aerosols and their high temporal and spatial resolution, solve these problems. Studies utilising remote sensing data for vegetation monitoring, on the other hand, lack quality reference data. This study explores the potential of high-resolution TLS-derived vegetation structure variables as reference for multi-temporal SAR datasets in savanna vegetation monitoring. The overall objectives of this study are: (i) to evaluate the potential of high-resolution TLS data for the extraction of savanna vegetation structure variables; (ii) to estimate landscape-wide aboveground biomass (AGB) and assess changes over four years using multi-temporal L-band SAR within a Lowveld savanna in Kruger National Park; and (iii) to assess interactions between C-band SAR and various savanna vegetation structure variables. Field inventories and the TLS campaign were carried out in the wet and dry seasons of 2015, respectively, and provided the reference data upon which AGB, CC and cover classes were modelled. L-band SAR-modelled AGB was used for change analysis over four years, while multi-temporal C-band SAR data was used to assess backscatter response to seasonal changes in CC and AGB abundance classes and cover classes. From the AGB change analysis, on average 36 ha of the study area (91 ha) experienced a loss in AGB above 5 t/ha over four years.
    A high backscatter intensity is observed for high-abundance AGB and CC classes and large trees, as opposed to low-abundance CC and AGB classes and small trees. There is a high response to all structure variables, with C-band VV proving the best polarization for savanna vegetation mapping. Moisture availability in the wet season increases backscatter response from both canopy and background classes.