3 research outputs found

    Nonlocal Multiscale Single Image Statistics From Sentinel-1 SAR Data for High Resolution Bitemporal Forest Wind Damage Detection

    Change detection of synthetic aperture radar (SAR) data is a challenge for high-resolution applications. This study presents a new nonlocal averaging approach (STAl'SAR) to reduce the speckle of single Sentinel-1 SAR images and of statistical parameters derived from the image. The similarity of SAR pixels is based on the statistics of a 3 × 3 window, represented by its mean, standard deviation, median, minimum, and maximum. K-means clustering divides the SAR image into 30 similarity clusters, and the nonlocal averaging is carried out within each cluster separately, in magnitude order of the 3 × 3 window averages. The nonlocal filtering is applicable not only to the original pixel backscattering values but also to statistical parameters such as the standard deviation, and the parameters to be filtered can represent any window size, according to the needs of the application. Nonlocally averaged standard deviations derived at two spatial resolutions, 3 × 3 and 7 × 7 windows, are demonstrated here for improving the resolution at which forest damage can be detected from the change in spatial variation of the VH-polarized backscattering.
    Peer reviewed
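The clustered nonlocal averaging described in this abstract can be sketched roughly as follows. This is a simplified illustration, not the authors' implementation: it uses only the window mean and standard deviation as similarity features, a tiny hand-rolled k-means with 2 clusters instead of 30, and a small averaging window along the sorted order; all function names and parameter values are hypothetical.

```python
import numpy as np

def window_stats(img, w=3):
    """Per-pixel mean and std over a w x w neighborhood (edge-padded)."""
    pad = w // 2
    p = np.pad(img, pad, mode="edge")
    views = [p[i:i + img.shape[0], j:j + img.shape[1]]
             for i in range(w) for j in range(w)]
    stack = np.stack(views)
    return stack.mean(axis=0), stack.std(axis=0)

def kmeans(features, k, iters=20, seed=0):
    """Minimal k-means on per-pixel feature vectors."""
    rng = np.random.default_rng(seed)
    centers = features[rng.choice(len(features), k, replace=False)]
    for _ in range(iters):
        d = np.linalg.norm(features[:, None] - centers[None], axis=2)
        labels = d.argmin(axis=1)
        for c in range(k):
            if (labels == c).any():
                centers[c] = features[labels == c].mean(axis=0)
    return labels

def nonlocal_average(img, k=2, half=2):
    """Average each pixel with its neighbors in window-mean magnitude
    order, separately within each similarity cluster."""
    mean, std = window_stats(img)
    feats = np.stack([mean.ravel(), std.ravel()], axis=1)
    labels = kmeans(feats, k)
    flat = img.ravel().astype(float)
    out = flat.copy()
    key = mean.ravel()
    for c in range(k):
        idx = np.where(labels == c)[0]
        order = idx[np.argsort(key[idx])]       # sort by 3x3 window mean
        vals = flat[order]
        kern = np.ones(2 * half + 1) / (2 * half + 1)
        smooth = np.convolve(np.pad(vals, half, mode="edge"),
                             kern, mode="valid")  # running mean in sorted order
        out[order] = smooth
    return out.reshape(img.shape)
```

On a speckled image the cluster-wise running mean preserves the overall brightness structure while reducing pixel-to-pixel variance, which is the intent of the nonlocal averaging step.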

    AGSPNet: A framework for parcel-scale crop fine-grained semantic change detection from UAV high-resolution imagery with agricultural geographic scene constraints

    Real-time, accurate information on fine-grained changes in crop cultivation is of great significance for crop growth monitoring, yield prediction and agricultural structure adjustment. Existing semantic change detection (SCD) algorithms applied to visible high-resolution unmanned aerial vehicle (UAV) images of different phases suffer from serious spectral confusion, interference from large complex backgrounds and salt-and-pepper noise. To effectively extract deep image features of crops and meet the demands of practical agricultural engineering applications, this paper designs and proposes an agricultural geographic scene and parcel-scale constrained SCD framework for crops (AGSPNet). The AGSPNet framework contains three parts: an agricultural geographic scene (AGS) division module, a parcel edge extraction module and a crop SCD module. We also produce and introduce a UAV image SCD dataset (CSCD) dedicated to agricultural monitoring, encompassing multiple semantic change types of crops in complex geographical scenes. Comparative experiments and accuracy evaluations in two test areas of this dataset show that the crop SCD results of AGSPNet consistently outperform other deep learning SCD models in both quantity and quality, with the evaluation metrics F1-score, kappa, OA and mIoU improving on average by 0.038, 0.021, 0.011 and 0.062, respectively, over the sub-optimal method. The proposed method can clearly detect fine-grained change information of crop types in complex scenes, providing scientific and technical support for smart-agriculture monitoring and management, food policy formulation and food security assurance.
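The four evaluation metrics reported above (F1-score, kappa, OA, mIoU) can all be derived from a multi-class confusion matrix. A minimal sketch, assuming macro-averaged F1 and IoU over classes (the abstract does not state the exact averaging convention, and all function names here are hypothetical):

```python
import numpy as np

def confusion(pred, ref, n_classes):
    """Confusion matrix m[r, p]: reference class r, predicted class p."""
    m = np.zeros((n_classes, n_classes), dtype=int)
    for p, r in zip(pred.ravel(), ref.ravel()):
        m[r, p] += 1
    return m

def scd_metrics(pred, ref, n_classes):
    m = confusion(pred, ref, n_classes).astype(float)
    total = m.sum()
    oa = np.trace(m) / total                       # overall accuracy
    pe = (m.sum(0) * m.sum(1)).sum() / total ** 2  # chance agreement
    kappa = (oa - pe) / (1.0 - pe)
    tp = np.diag(m)
    prec = tp / np.maximum(m.sum(0), 1)            # per-class precision
    rec = tp / np.maximum(m.sum(1), 1)             # per-class recall
    f1 = np.mean(2 * prec * rec / np.maximum(prec + rec, 1e-12))
    iou = tp / np.maximum(m.sum(0) + m.sum(1) - tp, 1)
    return {"F1": f1, "kappa": kappa, "OA": oa, "mIoU": iou.mean()}
```

With metrics defined this way, the reported gains (e.g. +0.038 F1) are absolute differences between the per-method scores averaged over the two test areas.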

    Unsupervised Change Detection Analysis in Satellite Image Time Series using Deep Learning Combined with Graph-Based Approaches

    Nowadays, huge volumes of satellite images are constantly acquired via the different Earth Observation missions, and they constitute a valuable source of information for the analysis of spatio-temporal phenomena. However, it can be challenging to obtain reference data associated with such images for studying land use or land cover changes, as the nature of the phenomena under study is often not known a priori. Considering a real-world scenario in which reference data are not available, this paper presents a novel end-to-end unsupervised approach to change detection and clustering for satellite image time series (SITS). In the proposed framework, we first create bi-temporal change masks for every pair of consecutive images using neural-network autoencoders. We then associate the extracted changes with different spatial objects. Objects sharing the same geographical location are combined into spatio-temporal evolution graphs, which are finally clustered according to the type of change process with a gated recurrent unit (GRU) autoencoder-based model. The proposed approach was assessed on two real-world SITS datasets, yielding promising results.
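The first stage of the pipeline, deriving a bi-temporal change mask from a pair of consecutive images, can be illustrated with a deliberately simplified stand-in: instead of a neural autoencoder, a per-pixel linear mapping from the first image to the second is fitted by gradient descent (unchanged pixels dominate the fit), and pixels with large prediction error are flagged as changed. The function name, threshold and optimizer settings are all hypothetical; the paper itself uses neural-network autoencoders for this step.

```python
import numpy as np

def change_mask(img_t1, img_t2, epochs=200, lr=1e-2, thresh=2.0):
    """Flag pixels of img_t2 that a simple radiometric mapping fitted
    from img_t1 fails to predict (error beyond `thresh` std devs)."""
    x = img_t1.ravel().astype(float)
    y = img_t2.ravel().astype(float)
    a, b = 1.0, 0.0                      # y ~ a * x + b
    for _ in range(epochs):
        err = a * x + b - y
        a -= lr * (err * x).mean()       # gradient step on slope
        b -= lr * err.mean()             # gradient step on offset
    resid = np.abs(a * x + b - y)
    mask = resid > resid.mean() + thresh * resid.std()
    return mask.reshape(img_t1.shape)
```

Running this mask extraction over every pair of consecutive dates, then grouping the connected changed regions into objects and linking co-located objects across time, yields the spatio-temporal evolution graphs that the GRU autoencoder clusters in the final step.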