
    Building change detection by W-shape ResUnet++ network with triple attention mechanism

    Building change detection in high-resolution remote sensing images is one of the most important applied topics in urban management and urban planning. Differing environmental illumination conditions and registration problems are the main sources of error in bitemporal images and cause pseudo-changes in the results. On the other hand, deep learning technologies, especially convolutional neural networks (CNNs), have been successful and widely adopted, but they usually lose shape and detail at the edges. Accordingly, we propose a W-shape ResUnet++ network in which images with different environmental conditions enter the network independently. ResUnet++ is a network with residual blocks, triple attention blocks, and Atrous Spatial Pyramid Pooling; it is used on both sides of the network to extract deeper and more discriminative features. This improves the channel and spatial inter-dependencies while reducing the computational cost. After that, the Euclidean distance between the features is computed and the deconvolution is performed. A dual loss function is also designed: the first part uses weighted binary cross-entropy to address the imbalance between changed and unchanged samples in the change detection training data, and the second part adds a mask-boundary consistency constraint that encourages the predicted edges to converge to the edges of the training data. We implemented the proposed method on two remote sensing datasets and compared the results with state-of-the-art methods. The F1 score improved by 1.52% and 4.22% with the proposed model on the first and second datasets, respectively.
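    The dual loss described above can be sketched as follows. This is a minimal PyTorch illustration, not the authors' implementation: the morphological-gradient edge extraction, the positive-class weight, and the loss weighting are assumptions made only to show how a weighted BCE term and a mask-boundary consistency term can be combined.

```python
import torch
import torch.nn.functional as F

def soft_edges(mask, kernel=3):
    """Approximate mask boundaries with a morphological gradient
    (dilation minus erosion, implemented via max-pooling).
    Expects a float tensor of shape (N, 1, H, W)."""
    pad = kernel // 2
    dilated = F.max_pool2d(mask, kernel, stride=1, padding=pad)
    eroded = -F.max_pool2d(-mask, kernel, stride=1, padding=pad)
    return dilated - eroded

def dual_loss(logits, target, pos_weight=10.0, edge_weight=1.0):
    """Weighted BCE for the changed/unchanged imbalance plus a
    boundary term that pulls predicted edges toward label edges."""
    pw = torch.tensor([pos_weight], device=logits.device)
    bce = F.binary_cross_entropy_with_logits(logits, target, pos_weight=pw)
    pred = torch.sigmoid(logits)
    boundary = F.l1_loss(soft_edges(pred), soft_edges(target))
    return bce + edge_weight * boundary
```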

    Performance analysis of change detection techniques for land use land cover

    Remotely sensed satellite images have become essential for observing the spatial and temporal changes on the earth's surface caused by natural phenomena or human activity. Real-time monitoring of these data provides useful information about changes in the extent of urbanization, environmental conditions, water bodies, and forests. Through remote sensing technology and geographic information system tools, it has become easier to monitor changes from past to present. In the present scenario, choosing a suitable change detection method plays a pivotal role in any remote sensing project. Previously, digital change detection was a tedious task; with the advent of machine learning techniques, it has become comparatively easier to detect changes in digital images. The study gives a brief account of the main change detection techniques related to land use land cover information. An effort is made to compare widely used change detection methods and to discuss the need for developing enhanced change detection methods.

    Optimized Deep Belief Neural Network for Semantic Change Detection in Multi-Temporal Image

    Nowadays, a massive quantity of remote sensing images is collected from numerous earth observation platforms, and processing this wide range of data requires extracting knowledge and information from it. Automated techniques are therefore needed to perform change detection on multi-spectral images. Multi-spectral images often contain corrupted data such as noise and illumination variations. Several techniques have been used to deal with such issues, but they are not robust to noise and may miss feature correlations. Several machine learning-based techniques have been introduced for change detection, but they are not effective at extracting the relevant features. On the other hand, only limited datasets are available on open-source platforms, which makes developing new models difficult. In this work, an optimized deep belief neural network model based on semantic change detection is introduced for multi-spectral images. Noise removal and contrast normalization are first applied to the input images. Then, to detect the semantic changes present in the images, the Semantic Change Detection Deep Belief Neural Network (SCD-DBN) is introduced. This research focuses on producing a change map that balances noise suppression with preserving region edges. The new change detection method can automatically create features for different images and improve the detection of changed regions. The proposed technique shows a lower missed-detection rate on the Semantic Change Detection dataset and performs better than other approaches.
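    As an illustration of the preprocessing stage (noise removal followed by contrast normalization), a generic OpenCV sketch is given below; the specific filters chosen here (non-local means denoising and CLAHE) are assumptions, since the abstract does not name the exact operators used.

```python
import cv2

def preprocess(band):
    """Noise removal followed by contrast normalization for one uint8
    band of a multi-spectral image (a generic stand-in for the paper's
    unspecified preprocessing steps)."""
    denoised = cv2.fastNlMeansDenoising(band, None, h=10,
                                        templateWindowSize=7,
                                        searchWindowSize=21)
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    return clahe.apply(denoised)
```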

    Improving a Deep Learning Model to Accurately Diagnose LVNC

    ©2023. This manuscript version is made available under the CC BY-NC-ND 4.0 license (http://creativecommons.org/licenses/by-nc-nd/4.0/). This document is the published manuscript version of a work that appeared in final form in the Journal of Clinical Medicine; to access the final edited and published work see https://doi.org/10.3390/jcm12247633. Accurate diagnosis of Left Ventricular Noncompaction Cardiomyopathy (LVNC) is critical for proper patient treatment but remains challenging. This work improves LVNC detection by improving left ventricle segmentation in cardiac MR images. A trabeculated left ventricle indicates LVNC, but automatic segmentation is difficult. We present techniques to improve segmentation and evaluate their impact on LVNC diagnosis. Three main methods are introduced: (1) using full 800 × 800 MR images rather than 512 × 512; (2) a clustering algorithm to eliminate neural network hallucinations; (3) advanced network architectures including Attention U-Net, MSA-UNet, and U-Net++. Experiments utilize cardiac MR datasets from three different hospitals. U-Net++ achieves the best segmentation performance using 800 × 800 images: it improves the mean segmentation Dice score by 0.02 over the baseline U-Net, while the clustering algorithm improves the mean Dice score by 0.06 on the images it affects. For LVNC diagnosis, U-Net++ achieves 0.896 accuracy, 0.907 precision, and 0.912 F1-score, outperforming the baseline U-Net. The proposed techniques enhance LVNC detection, but differences between hospitals reveal problems in improving generalization. This work provides validated methods for precise LVNC diagnosis.
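    The clustering step for removing network hallucinations is not specified in the abstract; the sketch below shows one plausible variant (keeping only the largest connected component of the predicted mask) together with the Dice score used for evaluation. Function names and parameters are illustrative only, not the authors' code.

```python
import numpy as np
from scipy import ndimage

def keep_largest_component(mask):
    """Drop spurious disconnected blobs from a binary segmentation,
    keeping only the largest connected region (one plausible way to
    suppress network 'hallucinations')."""
    labels, n = ndimage.label(mask)
    if n == 0:
        return mask
    sizes = ndimage.sum(mask, labels, index=range(1, n + 1))
    return (labels == (np.argmax(sizes) + 1)).astype(mask.dtype)

def dice(pred, target, eps=1e-7):
    """Dice score between two binary masks, used to compare segmentations."""
    inter = np.logical_and(pred, target).sum()
    return (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)
```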

    A Deep Learning Approach to Mapping Irrigation: U-Net IrrMapper

    Accurate maps of irrigation are essential for understanding and managing water resources in light of a warming climate. We present a new method for mapping irrigation and apply it to the state of Montana over the years 2000-2019. The method is based on an ensemble of convolutional neural networks that only rely on raw Landsat surface reflectance data. The ensemble of networks method learns to mask clouds and ignore Landsat 7 scan-line failures without supervision, reducing the need for preprocessing data or feature engineering. Unlike other approaches to mapping irrigation, the method doesn't use other mapping products like the Cropland Data Layer or the National Land Cover Dataset, removing the biases inherent in using those products. We evaluate our method and compare it to existing maps of irrigation on novel spatially explicit ground truth data, finding that our method outperforms other methods of mapping irrigation in Montana in terms of overall accuracy and precision. We find that our method agrees better statewide with the USDA National Agricultural Statistics Survey estimates of irrigated area compared to other methods, and has far fewer errors of commission in rainfed agriculture areas. In addition, our method produces uncertainties for predictions of irrigated land, and we find that the neural networks have large uncertainty in some misclassified areas. The methodology has the potential to be applied across the entire United States and for the complete Landsat record.
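    A minimal sketch of how an ensemble of CNNs can yield both an irrigation map and an uncertainty map is given below. The `models` objects with a `predict` method are placeholders, and the authors' exact aggregation and uncertainty definition may differ; the idea shown is simply that disagreement across ensemble members serves as a per-pixel uncertainty estimate.

```python
import numpy as np

def ensemble_predict(models, x):
    """Average irrigation probabilities over an ensemble of CNNs and
    report the between-member standard deviation as an uncertainty map."""
    probs = np.stack([m.predict(x) for m in models], axis=0)  # (M, H, W)
    mean = probs.mean(axis=0)          # ensemble irrigation probability
    uncertainty = probs.std(axis=0)    # high where members disagree
    return (mean > 0.5).astype(np.uint8), uncertainty
```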

    Cross-modal change detection flood extraction based on self-supervised contrastive pre-training

    Flood extraction is a critical issue in remote sensing analysis. Accurate flood extraction faces challenges such as complex scenes, image differences across modalities, and a shortage of labeled samples. Traditional supervised deep learning algorithms show promise for flood extraction but mostly rely on abundant labeled data. In practical applications, however, labeled samples of flood change regions are scarce and expensive to acquire, whereas unlabeled remote sensing images are plentiful. Self-supervised contrastive learning (SSCL) provides a solution, allowing learning from unlabeled data without explicit labels. Inspired by SSCL, we utilized the open-source CAU-Flood dataset and developed a framework for cross-modal change detection flood extraction (CMCDFE). We employed the Barlow Twins (BT) SSCL algorithm to learn effective visual feature representations of flood change regions from unlabeled cross-modal bi-temporal remote sensing data. These well-initialized weights were then transferred to the flood extraction task, achieving optimal accuracy. We introduced an improved CS-DeepLabV3+ network, incorporating the CBAM dual attention mechanism, for extracting flood change regions from cross-modal bi-temporal remote sensing data. Experiments on the CAU-Flood dataset show that fine-tuning with only a pre-trained encoder can surpass widely used ImageNet pre-training methods without additional data. This approach effectively addresses downstream cross-modal change detection flood extraction tasks.
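    For reference, a minimal PyTorch sketch of the standard Barlow Twins objective used in this kind of self-supervised pre-training is shown below; `z1` and `z2` stand for projector outputs of the two views (here, the two modalities or time images), and the off-diagonal weight is the commonly used default rather than a value taken from the paper.

```python
import torch

def barlow_twins_loss(z1, z2, lam=5e-3):
    """Barlow Twins objective: drive the cross-correlation matrix of
    the two views' (batch-normalized) embeddings toward the identity."""
    n, d = z1.shape
    z1 = (z1 - z1.mean(0)) / (z1.std(0) + 1e-6)
    z2 = (z2 - z2.mean(0)) / (z2.std(0) + 1e-6)
    c = (z1.T @ z2) / n                      # (d, d) cross-correlation
    on_diag = (torch.diagonal(c) - 1).pow(2).sum()
    off_diag = (c - torch.diag(torch.diagonal(c))).pow(2).sum()
    return on_diag + lam * off_diag
```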

    SCDNET: A novel convolutional network for semantic change detection in high resolution optical remote sensing imagery

    With the continuing improvement of remote-sensing (RS) sensors, it is crucial to monitor Earth surface changes at fine scale and in great detail. Thus, semantic change detection (SCD), which can locate and identify "from-to" change information simultaneously, is gaining growing attention in the RS community. However, due to the limited availability of large-scale SCD datasets, most existing SCD methods focus on scene-level changes, where semantic change maps are generated with only coarse boundaries or scarce category information. To address this issue, we propose a novel convolutional network for large-scale SCD (SCDNet). It is based on a Siamese UNet architecture consisting of two encoders and two decoders with shared weights. First, multi-temporal images are given as input to the encoders to extract multi-scale deep representations. A multi-scale atrous convolution (MAC) unit is inserted at the end of the encoders to enlarge the receptive field and capture multi-scale information. Then, difference feature maps are generated for each scale and combined with feature maps from the encoders to serve as inputs for the decoders. An attention mechanism and a deep supervision strategy are further introduced to improve network performance. Finally, a softmax layer produces a semantic change map for each temporal image. Extensive experiments on two large-scale high-resolution SCD datasets demonstrate the effectiveness and superiority of the proposed method.
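    A minimal sketch of how per-scale difference feature maps can be formed and combined with the Siamese encoder features is given below; the absolute-difference fusion and the concatenation order are assumptions for illustration, not the exact SCDNet design.

```python
import torch

def fuse_scales(feats_t1, feats_t2):
    """For each encoder scale, build a difference map |f1 - f2| and
    concatenate it with the original features as decoder input."""
    fused = []
    for f1, f2 in zip(feats_t1, feats_t2):
        diff = torch.abs(f1 - f2)                 # change evidence
        fused.append(torch.cat([f1, f2, diff], dim=1))
    return fused
```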