
    Deep learning-based change detection in remote sensing images: a review

    Images gathered from different satellites are widely available these days owing to the rapid development of remote sensing (RS) technology, and they significantly enrich the data sources for change detection (CD). CD is a technique for recognizing dissimilarities between images acquired at different times and is used in numerous applications, such as urban development monitoring, disaster management, and land-cover object identification. In recent years, deep learning (DL) techniques have been used extensively for change detection, where they have achieved great success in practical applications. Some researchers have even claimed that DL approaches outperform traditional approaches and enhance change detection accuracy. Therefore, this review focuses on deep learning techniques (supervised, unsupervised, and semi-supervised) for different change detection datasets, such as SAR, multispectral, hyperspectral, VHR, and heterogeneous images, and highlights their advantages and disadvantages. Finally, some significant challenges are discussed to provide context for improvements in change detection datasets and deep learning models. Overall, this review will be beneficial for the future development of CD methods.
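    For orientation, the traditional baseline that DL-based CD methods are usually compared against is simple image differencing with a statistical threshold. The sketch below is not taken from the review; it is a generic NumPy illustration, with an assumed k-sigma thresholding rule and toy data, and it presumes the two acquisitions are already co-registered.

```python
import numpy as np

def difference_change_map(img_t1, img_t2, k=2.0):
    """Binary change mask from two co-registered single-band acquisitions."""
    diff = img_t2.astype(np.float64) - img_t1.astype(np.float64)
    # Flag pixels whose difference deviates from the mean difference by more
    # than k standard deviations (a common unsupervised thresholding rule).
    return np.abs(diff - diff.mean()) > k * diff.std()

# Toy usage: random data standing in for two acquisitions of the same scene.
rng = np.random.default_rng(0)
t1 = rng.normal(100.0, 5.0, (256, 256))
t2 = t1 + rng.normal(0.0, 1.0, (256, 256))
t2[100:140, 100:140] += 40.0            # simulated change patch
print("changed pixels:", int(difference_change_map(t1, t2).sum()))
```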

    Advanced Pre-Processing and Change-Detection Techniques for the Analysis of Multitemporal VHR Remote Sensing Images

    Remote sensing images regularly acquired by satellites over the same geographical areas (multitemporal images) provide very important information on land-cover dynamics. In recent years, the ever-increasing availability of multitemporal very high geometrical resolution (VHR) remote sensing images (with sub-metric resolution) has resulted in new potentially relevant applications related to environmental monitoring and to land-cover control and management. Most of these applications are associated with the analysis of dynamic phenomena (both anthropic and non-anthropic) that occur at different scales and result in changes on the Earth's surface. In this context, in order to adequately exploit the huge amount of data acquired by remote sensing satellites, it is mandatory to develop unsupervised and automatic techniques for an efficient and effective analysis of such multitemporal data. In the literature, several techniques have been developed for the automatic analysis of multitemporal medium/high-resolution data. However, these techniques are not effective when dealing with VHR images, mainly because they can neither exploit the high geometrical detail content of VHR data nor model the multiscale nature of the scene (and therefore of the possible changes). In this framework it is important to develop unsupervised change-detection (CD) methods able to automatically manage the large amount of information in VHR data without the need for any prior information on the area under investigation. Even if these methods usually identify only the presence/absence of changes, without giving information about the kind of change that occurred, they are considered the most interesting from an operational perspective, as in most applications no multitemporal ground-truth information is available. Considering the above-mentioned limitations, this thesis studies the main problems related to multitemporal VHR images, with particular attention to registration noise (i.e., the noise related to a non-perfect alignment of the multitemporal images under investigation). Then, on the basis of the results of this analysis, robust unsupervised and automatic change-detection methods are developed. In particular, the following specific issues are addressed:
    1. Analysis of the effects of registration noise (RN) in multitemporal VHR images and definition of a method for estimating the distribution of such noise, useful for defining (a) change-detection techniques robust to RN, which significantly reduce the false-alarm rate that standard CD techniques raise when dealing with VHR images, and (b) effective registration methods, based on a multiscale analysis of the scene that allows one to extract accurate control points for the registration of VHR images.
    2. Detection and discrimination of multiple changes in multitemporal images; these techniques overcome the limitations of existing unsupervised techniques, as they are able to identify and separate different kinds of change without any prior information on the study areas.
    3. Pre-processing techniques for optimizing change detection on VHR images; in particular, the impact on the CD process of (a) image transformation techniques and (b) different strategies of pansharpening applied to the original multitemporal images is evaluated.
    For each of the above-mentioned topics, an analysis of the state of the art is carried out, the limitations of existing methods are pointed out, and the proposed solutions are described in detail. Finally, experimental results on both simulated and real data are reported in order to show and confirm the validity of all the proposed methods.
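    The unsupervised detection and separation of multiple changes mentioned above is commonly approached with change vector analysis, where the magnitude of the spectral change vector indicates whether a pixel changed and its direction helps discriminate the kind of change. The sketch below is a generic NumPy illustration of that idea, not the thesis' specific method; the two-band direction computation and the k-sigma threshold are assumptions.

```python
import numpy as np

def change_vector_analysis(x1, x2, k=2.0):
    """x1, x2: (bands, H, W) co-registered multitemporal images.

    Returns the per-pixel change magnitude, a binary change mask and, for the
    two-band case, the change direction (angle), which can later be clustered
    to separate different kinds of change."""
    d = x2.astype(np.float64) - x1.astype(np.float64)   # spectral change vectors
    magnitude = np.sqrt((d ** 2).sum(axis=0))           # vector length per pixel
    changed = magnitude > magnitude.mean() + k * magnitude.std()
    direction = np.arctan2(d[1], d[0])                  # angle between first two bands
    return magnitude, changed, direction
```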

    Fusion of VNIR Optical and C-Band Polarimetric SAR Satellite Data for Accurate Detection of Temporal Changes in Vegetated Areas

    In this paper, we propose a processing chain jointly employing Sentinel-1 and Sentinel-2 data, aiming to monitor changes in the status of the vegetation cover by integrating the four 10 m visible and near-infrared (VNIR) bands with the three red-edge (RE) bands of Sentinel-2. The latter approximately span the gap between the red and NIR bands (700 nm–800 nm), with bandwidths of 15/20 nm and 20 m pixel spacing. The RE bands are sharpened to 10 m following the hypersharpening protocol, which, unlike pansharpening, applies when the sharpening band is not unique. The resulting 10 m fusion product may be integrated with polarimetric features calculated from the Interferometric Wide (IW) Ground Range Detected (GRD) product of Sentinel-1, available at 10 m pixel spacing, before the fused data are analyzed for change detection. A key point of the proposed scheme is that the fusion of optical and synthetic aperture radar (SAR) data is accomplished at the level of change, through modulation of the optical change feature, namely the difference in normalized area over (reflectance) curve (NAOC) calculated from the sharpened RE bands, by the polarimetric SAR change feature, obtained as the temporal ratio of a polarimetric feature, namely the pixel-wise ratio between the co-polar and cross-polar channels. Hypersharpening of the Sentinel-2 RE bands, calculation of the NAOC, and modulation-based integration of the Sentinel-1 polarimetric change feature are applied to multitemporal datasets acquired before and after a fire event over Mount Serra, Italy. The optical change feature captures variations in the content of chlorophyll. The polarimetric SAR temporal change feature describes depolarization effects and changes in the volumetric scattering of canopies. Their fusion shows an increased ability to highlight changes in vegetation status. In a performance comparison carried out by means of receiver operating characteristic (ROC) curves, the proposed change feature-based fusion approach surpasses a traditional area-based approach and the normalized burn ratio (NBR) index, which is widely used for the detection of burnt vegetation.
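    To make the fusion-at-the-level-of-change idea concrete, the sketch below reproduces its general structure in NumPy: an NAOC-like optical change feature from red-edge reflectances, a co-/cross-polar SAR ratio whose temporal ratio gives the SAR change feature, and a simple multiplicative modulation. The band set, the trapezoidal NAOC approximation, and the modulation rule are simplifying assumptions; the paper's exact formulation may differ.

```python
import numpy as np

def naoc(refl, wavelengths):
    """Simplified normalized area over the reflectance curve, per pixel:
    1 - integral(rho d lambda) / (rho_max * spectral range).
    refl: (bands, H, W) reflectance of red / red-edge / NIR bands,
    wavelengths: matching band-centre wavelengths in nm (1-D array)."""
    dw = np.diff(np.asarray(wavelengths, dtype=np.float64))
    # Trapezoidal integration of the reflectance curve along the band axis.
    area = ((refl[:-1] + refl[1:]) * 0.5 * dw[:, None, None]).sum(axis=0)
    span = wavelengths[-1] - wavelengths[0]
    return 1.0 - area / (refl.max(axis=0) * span + 1e-6)

def fused_change_feature(refl_t1, refl_t2, wl, vv_t1, vh_t1, vv_t2, vh_t2):
    """Optical change (NAOC difference) modulated by the temporal ratio of the
    co-/cross-polar backscatter ratio, mirroring the general scheme above."""
    optical_change = naoc(refl_t2, wl) - naoc(refl_t1, wl)
    pol_t1 = vv_t1 / (vh_t1 + 1e-6)          # co-/cross-polar ratio at time 1
    pol_t2 = vv_t2 / (vh_t2 + 1e-6)          # co-/cross-polar ratio at time 2
    sar_change = pol_t2 / (pol_t1 + 1e-6)    # temporal ratio of the ratios
    return optical_change * sar_change       # modulation-based fusion
```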

    Multisource and Multitemporal Data Fusion in Remote Sensing

    The sharp and recent increase in the availability of data captured by different sensors, combined with their considerably heterogeneous natures, poses a serious challenge for the effective and efficient processing of remotely sensed data. Such an increase in remote sensing and ancillary datasets, however, opens up the possibility of utilizing multimodal datasets in a joint manner to further improve the performance of the processing approaches with respect to the application at hand. Multisource data fusion has, therefore, received enormous attention from researchers worldwide for a wide variety of applications. Moreover, thanks to the revisit capability of several spaceborne sensors, the integration of temporal information with the spatial and/or spectral/backscattering information of the remotely sensed data is possible and helps to move from a representation of 2D/3D data to 4D data structures, where the time variable adds new information as well as challenges for the information extraction algorithms. There is a huge number of research works dedicated to multisource and multitemporal data fusion, but the methods for the fusion of different modalities have evolved along different paths within each research community. This paper brings together the advances of multisource and multitemporal data fusion approaches across different research communities and provides a thorough, discipline-specific starting point for researchers at different levels (i.e., students, researchers, and senior researchers) willing to conduct novel investigations on this challenging topic, supplying sufficient detail and references.
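    As a purely conceptual illustration of the 2D/3D-to-4D point (not from the paper; shapes and values are arbitrary), stacking multitemporal, multiband acquisitions along a time axis yields a 4D cube on which per-pixel temporal features can be computed.

```python
import numpy as np

# Three acquisition dates of a 4-band sensor over a 128 x 128 tile, stacked
# into a (time, band, height, width) cube.
rng = np.random.default_rng(1)
dates = [rng.normal(0.2, 0.05, (4, 128, 128)) for _ in range(3)]
cube = np.stack(dates, axis=0)             # shape: (3, 4, 128, 128)

# The extra time axis enables per-pixel temporal features, e.g. the standard
# deviation of each band over time as a crude indicator of change.
temporal_std = cube.std(axis=0)            # shape: (4, 128, 128)
print(cube.shape, temporal_std.shape)
```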

    Multi-Fusion algorithms for Detecting Land Surface Pattern Changes Using Multi-High Spatial Resolution Images and Remote Sensing Analysis

    Producing accurate Land-Use and Land-Cover (LU/LC) maps from low-spatial-resolution images is a difficult task, and pan-sharpening is crucial for estimating LU/LC patterns. This study aimed to identify the most accurate procedure for estimating LU/LC by applying two fusion approaches, Color Normalized Brovey (BM) and Gram-Schmidt Spectral Sharpening (GS), to high-spatial-resolution multi-sensor and multi-spectral images, namely (1) an Unmanned Aerial Vehicle (UAV) system and (2) the WorldView-2 satellite system, together with (3) low-spatial-resolution Sentinel-2 satellite images, to generate six fused images alongside the three original multi-spectral images. The Maximum Likelihood method (ML) was used to classify all nine images, and a confusion matrix was used to evaluate the accuracy of each classified image. The results were statistically compared to determine the most reliable, accurate, and appropriate LU/LC map and procedure. Applying GS to the fused image that integrated WorldView-2 and Sentinel-2 satellite images, classified by the ML method, produced the most accurate results, with an overall accuracy of 88.47% and a kappa coefficient of 0.85. By comparison, the overall accuracies of the three classified multispectral images range from 76.49% to 86.84%, and the accuracies of the remaining fused images (produced by the Brovey method and the other GS combinations) classified by the ML method range from 76.68% to 85.75%. The proposed procedure shows considerable promise for mapping LU/LC. Previous researchers have mostly used satellite images or datasets with similar spatial and spectral resolutions, at least for tropical areas like the study area of this research, to detect land surface patterns; the use of datasets with different spectral and spatial resolutions, and their accuracy for mapping LU/LC, had not previously been investigated. This study successfully adopted datasets provided by different sensors with varying spectral and spatial resolutions to investigate this. Doi: 10.28991/ESJ-2023-07-04-013
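    For reference, two building blocks named in the abstract have well-known textbook forms: the colour-normalized (Brovey) transform scales each multispectral band by the ratio of the high-resolution band to the intensity (band sum), and overall accuracy and Cohen's kappa follow directly from the confusion matrix. The NumPy sketch below shows these standard formulas only; the study's resampling, Gram-Schmidt sharpening, and ML classification steps are not reproduced.

```python
import numpy as np

def brovey_pansharpen(ms, pan, eps=1e-6):
    """Colour-normalized (Brovey) sharpening.
    ms: (bands, H, W) multispectral bands already resampled to the
        high-resolution grid; pan: (H, W) high-resolution sharpening band."""
    intensity = ms.sum(axis=0) + eps
    return ms * (pan / intensity)        # each band rescaled by pan / intensity

def overall_accuracy_and_kappa(confusion):
    """Overall accuracy and Cohen's kappa from a square confusion matrix
    (rows: reference classes, columns: predicted classes, or vice versa)."""
    confusion = confusion.astype(np.float64)
    n = confusion.sum()
    po = np.trace(confusion) / n                                        # observed agreement
    pe = (confusion.sum(axis=0) * confusion.sum(axis=1)).sum() / n**2   # chance agreement
    return po, (po - pe) / (1.0 - pe)
```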

    Novel pattern recognition methods for classification and detection in remote sensing and power generation applications


    Multisource and multitemporal data fusion in remote sensing: A comprehensive review of the state of the art

    The recent, sharp increase in the availability of data captured by different sensors, combined with their considerable heterogeneity, poses a serious challenge for the effective and efficient processing of remotely sensed data. Such an increase in remote sensing and ancillary data sets, however, opens up the possibility of utilizing multimodal data sets in a joint manner to further improve the performance of the processing approaches with respect to the applications at hand. Multisource data fusion has, therefore, received enormous attention from researchers worldwide for a wide variety of applications. Moreover, thanks to the revisit capability of several spaceborne sensors, the integration of the temporal information with the spatial and/or spectral/backscattering information of the remotely sensed data becomes possible and helps to move from a representation of 2D/3D data to 4D data structures.

    Supervised / unsupervised change detection

    The aim of this deliverable is to provide an overview of the state of the art in change detection techniques and a critique of what could be programmed to derive SENSUM products. It is the product of the collaboration between UCAM and EUCENTRE. The document includes, as a necessary requirement, a discussion of a proposed technique for co-registration. Since change detection requires the assessment of a series of images, and the basic process involves comparing and contrasting their similarities and differences to spot changes, co-registration is the first step: it ensures that the user is comparing like for like. The developed programs would then be used on remotely sensed images for applications in vulnerability assessment and in post-disaster recovery assessment and monitoring. One key criterion is to develop semi-automated and automated techniques. A series of available techniques are presented, along with the advantages and disadvantages of each method. The descriptions of the implemented methods are included in deliverable D2.7 "Software Package SW2.3". In reviewing the available change detection techniques, the focus was on ways to exploit medium-resolution imagery such as Landsat, due to its free-to-use license and the rich historical coverage provided by this satellite series. Change detection techniques for high-resolution images were also examined, and a recovery-specific change detection index is discussed in the report.
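    Since co-registration is singled out as the first step of any comparison, the sketch below shows one generic way to estimate a residual integer shift between two images, via FFT-based phase correlation in plain NumPy. It is not the co-registration technique proposed in the deliverable; sub-pixel refinement, warping, and resampling are omitted.

```python
import numpy as np

def phase_correlation_shift(ref, mov):
    """Integer (dy, dx) displacement of `mov` relative to `ref`, estimated
    from the peak of the normalized cross-power spectrum."""
    cross = np.fft.fft2(mov) * np.conj(np.fft.fft2(ref))
    cross /= np.abs(cross) + 1e-12                      # keep phase information only
    corr = np.fft.ifft2(cross).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # Wrap shifts larger than half the image size to negative offsets.
    if dy > ref.shape[0] // 2:
        dy -= ref.shape[0]
    if dx > ref.shape[1] // 2:
        dx -= ref.shape[1]
    return int(dy), int(dx)

# Toy usage: a copy of a random image shifted by (5, -3) pixels.
rng = np.random.default_rng(2)
a = rng.random((128, 128))
b = np.roll(a, (5, -3), axis=(0, 1))
print(phase_correlation_shift(a, b))                    # expected: (5, -3)
```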