
    Paired Vegetation and Soil Burn Severity Metrics and Associated Climate, Weather, Topographical, and Land Cover Attributes

    This dataset pairs the differenced Normalized Burn Ratio (dNBR) with soil burn severity (SBS) for 254 large (>400 ha) fires across the western US. The dataset also includes climate, weather, topography, physical and chemical soil characteristics, and land cover attributes of each burned pixel at the time of fire. This effort produced a table of 16.3 million burned pixels and their associated characteristics, including dNBR, SBS, and 94 biological and physical covariates. After removing correlated features, the final data includes 18 fire covariates, namely dNBR, elevation, slope, aspect, land cover type, wind speed, energy release component, vapor pressure deficit, annual precipitation, and annual average daily maximum temperature, as well as the clay, sand, and silt content of the soil, the volumetric fraction of coarse fragments, and the soil organic carbon content. We also included spatial coherence metrics for dNBR, including DVAR, SHADE, and SAVG. The data are provided as CSV files (Xtrain, Xvalidation, and Xtest, as well as Ytrain, Yvalidation, and Ytest), in which the X files (model input) provide all features except SBS and the Y files (model output) contain SBS. We also provide this data for an additional 16 large fires across the western US ("Extra Test" folder, including Dataset – X file – and Label – Y file). Finally, the trained XGBoost model that translates dNBR to SBS using the associated features is also provided in this folder.
    Data Sources:
    Burn severity: https://burnseverity.cr.usgs.gov/products/baer
    Weather and climate: http://thredds.northwestknowledge.net:8080/thredds/catalog/MET/climatologies/dailyClimatologies_1981_2010/catalog.html and https://developers.google.com/earth-engine/datasets/catalog/IDAHO_EPSCOR_GRIDMET
    Land cover: https://developers.google.com/earth-engine/datasets/catalog/USGS_NLCD_RELEASES_2020_REL_NALCMS
    Soil: https://gee-community-catalog.org/projects/isric/
    Topography: https://developers.google.com/earth-engine/datasets/catalog/USGS_3DEP_10m
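
    As a rough illustration of how the provided files might be used together, the sketch below (Python) loads one of the X/Y CSV splits and scores the packaged XGBoost model. The file names, the classifier (rather than regressor) interface, and the accuracy metric are assumptions for illustration, not part of the dataset documentation.

        # Hypothetical usage sketch; file names below are assumptions based on the
        # description above (X/Y splits as CSV, trained XGBoost model alongside them).
        import pandas as pd
        import xgboost as xgb
        from sklearn.metrics import accuracy_score

        X_test = pd.read_csv("Xtest.csv")                     # dNBR + covariates (model input)
        y_test = pd.read_csv("Ytest.csv").squeeze("columns")  # soil burn severity (model output)

        # Load the trained booster shipped with the dataset (file name assumed)
        model = xgb.XGBClassifier()
        model.load_model("dnbr_to_sbs_xgboost.json")

        y_pred = model.predict(X_test)
        print("SBS prediction accuracy:", accuracy_score(y_test, y_pred))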

    A New End-to-End Multi-Dimensional CNN Framework for Land Cover/Land Use Change Detection in Multi-Source Remote Sensing Datasets

    The diversity of change detection (CD) methods and the limitations in generalizing these techniques across different types of remote sensing datasets and study areas have been a challenge for CD applications. Additionally, most CD methods have been implemented in two intensive and time-consuming steps: (a) predicting change areas, and (b) making decisions on the predicted areas. In this study, a novel CD framework based on the convolutional neural network (CNN) is proposed not only to address the aforementioned problems but also to considerably improve the level of accuracy. The proposed CNN-based CD network contains three parallel channels: the first and second channels extract deep features from the original first- and second-time imagery, respectively, and the third channel focuses on extracting change deep features based on differencing and stacking deep features. Additionally, each channel includes three types of convolution kernels: 1D-, 2D-, and 3D-dilated convolution. The effectiveness and reliability of the proposed CD method are evaluated using three different types of remote sensing benchmark datasets (i.e., multispectral, hyperspectral, and Polarimetric Synthetic Aperture RADAR (PolSAR)). The results of the CD maps are also evaluated both visually and statistically by calculating nine different accuracy indices. Moreover, the results of CD using the proposed method are compared to those of several state-of-the-art CD algorithms. All the results show that the proposed method outperforms the other remote sensing CD techniques. For instance, considering different scenarios, the Overall Accuracies (OAs) and Kappa Coefficients (KCs) of the proposed CD method are better than 95.89% and 0.805, respectively, and the Miss Detection (MD) and False Alarm (FA) rates are lower than 12% and 3%, respectively.
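
    The sketch below (PyTorch) illustrates the three-parallel-channel idea: two branches of dilated convolutions extract deep features from the time-1 and time-2 images, and a third branch works on their difference before the features are stacked and classified. Layer widths, kernel choices, and the fusion head are illustrative assumptions, not the authors' exact architecture.

        import torch
        import torch.nn as nn

        class DilatedBranch(nn.Module):
            """One parallel channel built from 2-D dilated convolutions."""
            def __init__(self, in_bands, feats=32):
                super().__init__()
                self.net = nn.Sequential(
                    nn.Conv2d(in_bands, feats, kernel_size=3, padding=1, dilation=1),
                    nn.ReLU(inplace=True),
                    nn.Conv2d(feats, feats, kernel_size=3, padding=2, dilation=2),
                    nn.ReLU(inplace=True),
                )

            def forward(self, x):
                return self.net(x)

        class ThreeChannelCD(nn.Module):
            def __init__(self, in_bands, n_classes=2):
                super().__init__()
                self.branch_t1 = DilatedBranch(in_bands)
                self.branch_t2 = DilatedBranch(in_bands)
                self.branch_diff = DilatedBranch(in_bands)
                # Fuse the stacked deep features and classify change/no-change per pixel
                self.head = nn.Conv2d(3 * 32, n_classes, kernel_size=1)

            def forward(self, x1, x2):
                f1 = self.branch_t1(x1)
                f2 = self.branch_t2(x2)
                fd = self.branch_diff(x2 - x1)          # differencing the input imagery
                fused = torch.cat([f1, f2, fd], dim=1)  # stacking deep features
                return self.head(fused)

        # Example: a pair of 6-band 64 x 64 patches
        model = ThreeChannelCD(in_bands=6)
        out = model(torch.randn(1, 6, 64, 64), torch.randn(1, 6, 64, 64))
        print(out.shape)  # torch.Size([1, 2, 64, 64])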

    A new land-cover match-based change detection for hyperspectral imagery

    The presence of phenomena such as earthquakes, floods, and artificial human activities causes changes on the Earth’s surface. Change detection (CD) is an essential tool for monitoring and managing resources at local and global scales. Hyperspectral imagery can provide more detailed results for detecting changes in land-cover types. The main objective of this paper is to present a new, supervised CD method that combines similarity-based and distance-based methods to increase the efficiency of existing CD approaches. The proposed method is applied in two phases and uses three different algorithms: image differencing, modified Z-score analysis, and the spectral angle mapper. The efficiency of the presented method is evaluated using Hyperion multi-temporal hyperspectral imagery. The receiver operating characteristic curve index is used for assessing and comparing the results. The results clearly demonstrate the superiority of the proposed method for detecting changes and producing more accurate change maps. Furthermore, compared to other conventional CD techniques, the proposed method detects changes with an accuracy of more than 96%, a false alarm rate lower than 0.03, and an area under the curve of about 0.986. In addition, this method achieved an optimal threshold value with more rapid convergence.
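
    For concreteness, the sketch below (Python/NumPy) implements the three per-pixel operations named in the abstract, applied to two co-registered hyperspectral cubes of shape (rows, cols, bands). The thresholds and the way the scores are combined are illustrative assumptions.

        import numpy as np

        def image_difference(t1, t2):
            """Per-pixel magnitude of the band-wise difference."""
            return np.linalg.norm(t2.astype(float) - t1.astype(float), axis=-1)

        def modified_z_score(score):
            """Robust (median/MAD-based) modified Z-score of a change-score image."""
            med = np.median(score)
            mad = np.median(np.abs(score - med)) + 1e-12
            return 0.6745 * (score - med) / mad

        def spectral_angle(t1, t2):
            """Spectral angle mapper between corresponding pixel spectra (radians)."""
            dot = np.sum(t1 * t2, axis=-1)
            norms = np.linalg.norm(t1, axis=-1) * np.linalg.norm(t2, axis=-1) + 1e-12
            return np.arccos(np.clip(dot / norms, -1.0, 1.0))

        # Random stand-ins for bi-temporal Hyperion cubes (thresholds are assumptions)
        t1, t2 = np.random.rand(100, 100, 150), np.random.rand(100, 100, 150)
        change = (np.abs(modified_z_score(image_difference(t1, t2))) > 3.5) | \
                 (spectral_angle(t1, t2) > 0.3)
        print("changed fraction:", change.mean())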

    Improved Burned Area Mapping Using Monotemporal Landsat-9 Imagery and Convolutional Shift-Transformer

    Satellite imagery, specifically Landsat, has been widely used for mapping and monitoring wildfire burned areas. The new Landsat-9 satellite – with higher radiometric resolution than its predecessors and improved temporal resolution when combined with Landsat-8 (∼8 days) – enables a wide range of applications, particularly burned area mapping (BAM). We propose a novel deep learning BAM model that leverages the strengths of convolutional layers for deep feature generation from Landsat-9 imagery and a shift-transformer block for burned area classification. The performance of the model is evaluated on five large fire case studies across the globe. The BAM results are also compared with two state-of-the-art models, namely a residual convolutional neural network and a vision transformer. The proposed convolutional shift-transformer (CST) outperforms the other models with an F1-score greater than 96% across the case studies. Furthermore, CST requires only a single post-fire image, which reduces computational costs compared to traditional models that use bi-temporal images.
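
    The abstract does not detail the shift-transformer block, so the sketch below (PyTorch) only illustrates a generic channel-wise spatial shift operation of the kind used in shift-based transformer designs; it should not be read as the CST block itself.

        import torch

        def spatial_shift(x):
            """Shift four channel groups one pixel right/left/down/up (zero-padded)."""
            b, c, h, w = x.shape
            out = torch.zeros_like(x)
            g = c // 4
            out[:, 0*g:1*g, :, 1:] = x[:, 0*g:1*g, :, :-1]   # shift right
            out[:, 1*g:2*g, :, :-1] = x[:, 1*g:2*g, :, 1:]   # shift left
            out[:, 2*g:3*g, 1:, :] = x[:, 2*g:3*g, :-1, :]   # shift down
            out[:, 3*g:, :-1, :] = x[:, 3*g:, 1:, :]         # shift up
            return out

        x = torch.randn(1, 8, 5, 5)
        print(spatial_shift(x).shape)  # torch.Size([1, 8, 5, 5])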

    A Dual Attention Convolutional Neural Network for Crop Classification Using Time-Series Sentinel-2 Imagery

    Accurate and timely mapping of crop types and reliable information about cultivation patterns/areas play a key role in various applications, including food security and sustainable agriculture management. Remote sensing (RS) has been extensively employed for crop type classification. However, accurate mapping of crop types and extents is still a challenge, especially with traditional machine learning methods. Therefore, in this study, a novel framework based on a deep convolutional neural network (CNN) and a dual attention module (DAM), using Sentinel-2 time-series datasets, was proposed to classify crops. A new DAM was implemented to extract informative deep features by taking advantage of both the spectral and spatial characteristics of Sentinel-2 datasets. The spectral and spatial attention modules (AMs) were applied, respectively, to investigate the behavior of crops during the growing season and their neighborhood properties (e.g., textural characteristics and spatial relation to surrounding crops). The proposed network contained two streams: (1) convolution blocks for deep feature extraction and (2) several DAMs, which were employed after each convolution block. The first stream included three multi-scale residual convolution blocks, where the spectral attention blocks were mainly applied to extract deep spectral features. The second stream was built using four multi-scale convolution blocks with a spatial AM. In this study, over 200,000 samples from six different crop types (i.e., alfalfa, broad bean, wheat, barley, canola, and garden) and three non-crop classes (i.e., built-up, barren, and water) were collected to train and validate the proposed framework. The results demonstrated that the proposed method achieved an overall accuracy of 98.54% and a Kappa coefficient of 0.981. It also outperformed other state-of-the-art classification methods, including RF, XGBOOST, R-CNN, 2D-CNN, 3D-CNN, and CBAM, indicating its high potential to discriminate different crop types.
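
    The sketch below (PyTorch) shows one common way to realize spectral (channel) and spatial attention modules, in the spirit of CBAM-style blocks; the authors' exact DAM design is not specified in the abstract, so the details here are assumptions.

        import torch
        import torch.nn as nn

        class SpectralAttention(nn.Module):
            """Re-weights spectral/feature channels using global average pooling."""
            def __init__(self, channels, reduction=4):
                super().__init__()
                self.mlp = nn.Sequential(
                    nn.Linear(channels, channels // reduction), nn.ReLU(inplace=True),
                    nn.Linear(channels // reduction, channels), nn.Sigmoid(),
                )

            def forward(self, x):
                w = self.mlp(x.mean(dim=(2, 3)))   # (B, C) channel weights
                return x * w[:, :, None, None]

        class SpatialAttention(nn.Module):
            """Re-weights spatial locations using channel-pooled statistics."""
            def __init__(self):
                super().__init__()
                self.conv = nn.Conv2d(2, 1, kernel_size=7, padding=3)

            def forward(self, x):
                pooled = torch.cat([x.mean(dim=1, keepdim=True),
                                    x.max(dim=1, keepdim=True).values], dim=1)
                return x * torch.sigmoid(self.conv(pooled))

        x = torch.randn(2, 32, 16, 16)
        print(SpatialAttention()(SpectralAttention(32)(x)).shape)  # torch.Size([2, 32, 16, 16])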

    DSMNN-Net: A Deep Siamese Morphological Neural Network Model for Burned Area Mapping Using Multispectral Sentinel-2 and Hyperspectral PRISMA Images

    Wildfires are one of the most destructive natural disasters that can affect our environment, with significant effects also on wildlife. Recently, climate change and human activities have resulted in higher frequencies of wildfires throughout the world. Timely and accurate detection of burned areas can help to inform decisions for their management. Remote sensing satellite imagery can play a key role in mapping burned areas due to its wide coverage, high-resolution data collection, and short acquisition times. However, although many studies have reported on burned area mapping based on remote sensing imagery in recent decades, accurate burned area mapping remains a major challenge due to the complexity of the background and the diversity of the burned areas. This paper presents a novel framework for burned area mapping based on a Deep Siamese Morphological Neural Network (DSMNN-Net) and heterogeneous datasets. The DSMNN-Net framework is based on change detection, proposing a pre/post-fire method that is compatible with heterogeneous remote sensing datasets. The proposed network combines multiscale convolution layers and morphological layers (erosion and dilation) to generate deep features. To evaluate the performance of the proposed method, two case study areas in Australian forests were selected. The proposed framework detects burned areas better than other state-of-the-art burned area mapping procedures, with an overall accuracy of >98% and a kappa coefficient of >0.9, using multispectral Sentinel-2 and hyperspectral PRISMA image datasets. The analyses of the two datasets illustrate that the DSMNN-Net is sufficiently valid and robust for burned area mapping, especially for complex areas.
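
    As an illustration of morphological layers, the sketch below (PyTorch) expresses grayscale dilation and erosion with max pooling; DSMNN-Net's morphological layers may well be learnable, so this is only a minimal stand-in for the idea.

        import torch
        import torch.nn.functional as F

        def dilation(x, k=3):
            """Grayscale dilation with a flat k x k structuring element."""
            return F.max_pool2d(x, kernel_size=k, stride=1, padding=k // 2)

        def erosion(x, k=3):
            """Grayscale erosion = negated dilation of the negated image."""
            return -F.max_pool2d(-x, kernel_size=k, stride=1, padding=k // 2)

        x = torch.rand(1, 1, 32, 32)
        opened = dilation(erosion(x))   # morphological opening of a feature map
        print(opened.shape)             # torch.Size([1, 1, 32, 32])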

    A Sub-Pixel Multiple Change Detection Approach for Hyperspectral Imagery

    One of the most important applications of remote sensing is change detection (CD). The accurate detection of changes is of great significance for the optimal management of available resources. This article presents an unsupervised ‘multiple-change detection’ method for multi-temporal hyperspectral imagery based on the integration of an unmixing technique, multi-resolution segmentation, similarity measure methods, and the Otsu algorithm. The proposed method is presented in the context of two main scenarios: the first scenario is hyperspectral change detection (HSCD) at the sub-pixel level with no ancillary data, and the second is HSCD at the sub-pixel level based on ancillary data (high-resolution panchromatic (PAN) data). The main advantages of the proposed method are that it is unsupervised, easy to use, and able to provide ‘multiple-change’ maps at the sub-pixel level with high accuracy. To evaluate the performance of the proposed method, real bi-temporal hyperspectral Hyperion images and high-spatial-resolution PAN data from the Advanced Land Imager (ALI) sensor, covering a variety of land cover classes, were used. The results show that the overall accuracy improved by >5% and the kappa coefficient by >0.13 with respect to the results obtained at the original resolution.
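
    The sketch below (Python/SciPy) illustrates the sub-pixel aspect: per-pixel endmember abundances are estimated by non-negative least squares unmixing for each date and then differenced. The endmembers, the sum-to-one normalization, and the random stand-in data are assumptions; the abstract does not specify the authors' unmixing technique.

        import numpy as np
        from scipy.optimize import nnls

        def unmix(cube, endmembers):
            """Per-pixel non-negative abundances for a (rows, cols, bands) cube."""
            rows, cols, _ = cube.shape
            abund = np.zeros((rows, cols, endmembers.shape[0]))
            for i in range(rows):
                for j in range(cols):
                    a, _ = nnls(endmembers.T, cube[i, j])
                    abund[i, j] = a / (a.sum() + 1e-12)   # sum-to-one normalization
            return abund

        # Random stand-ins for bi-temporal cubes and three endmember spectra
        E = np.random.rand(3, 50)
        t1, t2 = np.random.rand(20, 20, 50), np.random.rand(20, 20, 50)
        delta_abundance = unmix(t2, E) - unmix(t1, E)   # sub-pixel change per endmember
        print(delta_abundance.shape)                    # (20, 20, 3)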

    A Multi-Dimensional Deep Siamese Network for Land Cover Change Detection in Bi-Temporal Hyperspectral Imagery

    In this study, an automatic Change Detection (CD) framework based on a multi-dimensional deep Siamese network was proposed for CD in bi-temporal hyperspectral imagery. The proposed method has two main steps: (1) automatic generation of training samples using the Otsu algorithm and the Dynamic Time Warping (DTW) predictor, and (2) binary CD using a multi-dimensional Convolutional Neural Network (CNN). Two bi-temporal hyperspectral datasets from the Hyperion sensor with a variety of land cover classes were used to evaluate the performance of the proposed method. The results were also compared to reference data and two state-of-the-art hyperspectral change detection (HCD) algorithms. It was observed that the proposed method had relatively higher accuracy and a lower False Alarm (FA) rate, where the average Overall Accuracy (OA) and Kappa Coefficient (KC) were more than 96% and 0.90, respectively, and the average FA rate was lower than 5%.
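
    A rough sketch (Python) of the pseudo-label generation step: each pixel is scored by a DTW distance between its time-1 and time-2 spectra, and the scores are split with Otsu's threshold into likely changed/unchanged training samples. The exact DTW formulation and its combination with the Otsu step are assumptions about the procedure summarized above.

        import numpy as np
        from skimage.filters import threshold_otsu

        def dtw_distance(a, b):
            """Classic dynamic-time-warping distance between two 1-D sequences."""
            n, m = len(a), len(b)
            D = np.full((n + 1, m + 1), np.inf)
            D[0, 0] = 0.0
            for i in range(1, n + 1):
                for j in range(1, m + 1):
                    cost = abs(a[i - 1] - b[j - 1])
                    D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
            return D[n, m]

        # Random stand-ins for two co-registered hyperspectral cubes
        t1, t2 = np.random.rand(30, 30, 60), np.random.rand(30, 30, 60)
        scores = np.array([[dtw_distance(t1[i, j], t2[i, j])
                            for j in range(t1.shape[1])] for i in range(t1.shape[0])])
        pseudo_labels = scores > threshold_otsu(scores)   # True = likely changed
        print("pseudo-changed fraction:", pseudo_labels.mean())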

    BDD-Net+: A Building Damage Detection Framework Based on Modified Coat-Net

    The accurate and fast assessment of damaged buildings following a disaster is critical for planning rescue and reconstruction efforts. Damage assessment by traditional methods is time-consuming and offers limited performance. In this article, we propose an end-to-end deep learning network named building damage detection network-plus (BDD-Net+). BDD-Net+ is based on a combination of convolution layers and transformer blocks and takes advantage of multiscale residual convolution blocks and self-attention layers. The proposed framework consists of four main steps: data preparation, model training, damage map generation and evaluation, and the use of an explainable artificial intelligence (XAI) framework for understanding and interpreting the model's operation. The experiments are conducted on two representative real-world benchmark datasets (i.e., the Haiti earthquake and the Bata explosion). The obtained results illustrate that BDD-Net+ achieves excellent efficacy in comparison with other state-of-the-art methods. Furthermore, the visualization of the results by XAI shows that BDD-Net+ provides more interpretable and explainable results for damage detection than the other studied methods.
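
    The sketch below (PyTorch) pairs a multiscale residual convolution block with a self-attention layer to illustrate the convolution-plus-transformer combination described above; the dimensions and block ordering are assumptions and do not reproduce BDD-Net+.

        import torch
        import torch.nn as nn

        class MultiScaleResidual(nn.Module):
            """Residual fusion of 3x3 and 5x5 convolutions at the same resolution."""
            def __init__(self, c):
                super().__init__()
                self.k3 = nn.Conv2d(c, c, 3, padding=1)
                self.k5 = nn.Conv2d(c, c, 5, padding=2)
                self.act = nn.ReLU(inplace=True)

            def forward(self, x):
                return self.act(x + self.k3(x) + self.k5(x))

        class HybridBlock(nn.Module):
            """Multiscale convolution followed by self-attention over pixel tokens."""
            def __init__(self, c, heads=4):
                super().__init__()
                self.conv = MultiScaleResidual(c)
                self.attn = nn.MultiheadAttention(embed_dim=c, num_heads=heads, batch_first=True)

            def forward(self, x):
                x = self.conv(x)
                b, c, h, w = x.shape
                tokens = x.flatten(2).transpose(1, 2)            # (B, HW, C) token sequence
                attended, _ = self.attn(tokens, tokens, tokens)  # self-attention
                return attended.transpose(1, 2).reshape(b, c, h, w)

        print(HybridBlock(32)(torch.randn(1, 32, 16, 16)).shape)  # torch.Size([1, 32, 16, 16])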

    TCD-Net: A Novel Deep Learning Framework for Fully Polarimetric Change Detection Using Transfer Learning

    Due to anthropogenic and natural activities, the land surface continuously changes over time. The accurate and timely detection of changes is of great importance for environmental monitoring, resource management, and planning activities. In this study, a novel deep learning-based change detection algorithm is proposed for bi-temporal polarimetric synthetic aperture radar (PolSAR) imagery using a transfer learning (TL) method. In particular, this method has been designed to automatically extract changes through three main steps: (1) pre-processing, (2) parallel pseudo-label training sample generation based on a pre-trained model and the fuzzy c-means (FCM) clustering algorithm, and (3) classification. Moreover, a new end-to-end three-channel deep neural network, called TCD-Net, is introduced in this study. TCD-Net can learn stronger and more abstract representations of the spatial information around a given pixel. In addition, by adding an adaptive multi-scale shallow block and an adaptive multi-scale residual block to the TCD-Net architecture, the model, with far fewer parameters, remains sensitive to objects of various sizes. Experimental results on two Uninhabited Aerial Vehicle Synthetic Aperture Radar (UAVSAR) bi-temporal datasets demonstrated the effectiveness of the proposed algorithm compared to other well-known methods, with an overall accuracy of 96.71% and a kappa coefficient of 0.82.
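
    To illustrate the FCM-based pseudo-label step, the sketch below (Python/NumPy) clusters a simple log-ratio change feature with a minimal fuzzy c-means implementation and keeps only high-membership pixels as reliable training samples; the feature choice and the membership threshold are assumptions standing in for the PolSAR-specific details.

        import numpy as np

        def fuzzy_cmeans(x, c=2, m=2.0, iters=100, seed=0):
            """Minimal fuzzy c-means on an (N, d) feature array; returns memberships."""
            rng = np.random.default_rng(seed)
            u = rng.dirichlet(np.ones(c), size=len(x))           # (N, c) memberships
            for _ in range(iters):
                w = u ** m
                centers = (w.T @ x) / w.sum(axis=0)[:, None]     # (c, d) cluster centers
                d = np.linalg.norm(x[:, None, :] - centers[None], axis=-1) + 1e-12
                u = 1.0 / (d ** (2 / (m - 1)))
                u /= u.sum(axis=1, keepdims=True)
            return u

        # Random stand-ins for bi-temporal backscatter intensities
        t1, t2 = np.random.rand(64, 64) + 0.1, np.random.rand(64, 64) + 0.1
        feature = np.abs(np.log(t2 / t1)).reshape(-1, 1)         # log-ratio change feature
        u = fuzzy_cmeans(feature)
        confident = u.max(axis=1) > 0.9                          # keep only confident pixels
        print("confident pseudo-labels:", confident.mean())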