16,859 research outputs found

    Land Cover Change Detection Based on Adaptive Contextual Information Using Bi-Temporal Remote Sensing Images

    Land cover change detection (LCCD) based on bi-temporal remote sensing images plays an important role in the inventory of land cover change. Because pixels in remote sensing images exhibit spatial dependency, many contextual information-based change detection methods have been proposed over the past decades. However, there is still room for improvement in the accuracy and usability of LCCD. In this paper, an LCCD method based on adaptive contextual information is proposed. First, an adaptive region is constructed by progressively grouping pixels that are spectrally similar to a central pixel. Second, the Euclidean distance between the pairwise extended regions is calculated to measure the change magnitude between the pairwise central pixels of the bi-temporal images. The bi-temporal images are scanned pixel by pixel to generate the change magnitude image (CMI). Then, Otsu's method or a manual threshold is employed to acquire the binary change detection map (BCDM). The detection accuracy of the proposed approach is investigated on three land cover change cases using Landsat bi-temporal remote sensing images and aerial images with very high spatial resolution (0.5 m/pixel). In comparison to several widely used change detection methods, the proposed approach produces a land cover change inventory map with competitive accuracy. This work was supported by the National Science Foundation of China (61701396), the Natural Science Foundation of Shaanxi Province (2017JQ4006), the Engineering Research Center of Geospatial Information and Digital Technology, NASG (SIDT20171003), the National Key Research and Development Program of China (018YFF0215006), the Natural Science Foundation of Jiangsu Province, China (BK20150835), and the Tibet Natural Science Foundation project "The study of Tibet crop condition monitoring based on crop growth model and multi-source remote sensing data" (2016-ZR-15-18). Peer Reviewed
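The pipeline described above (region-based change magnitude, then Otsu thresholding) can be sketched as follows. This is a simplified illustration, not the paper's method: it uses a fixed square window around each pixel in place of the adaptively grown region, and all array shapes and test values are made up for the demonstration.

```python
import numpy as np

def change_magnitude(img1, img2, win=3):
    """Per-pixel change magnitude as the Euclidean distance between
    local windows of the two co-registered images. (The paper grows
    an adaptive, spectrally similar region around each central pixel;
    a fixed window is used here as a simplification.)"""
    pad = win // 2
    p1 = np.pad(img1.astype(float), pad, mode="edge")
    p2 = np.pad(img2.astype(float), pad, mode="edge")
    h, w = img1.shape
    cmi = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            r1 = p1[i:i + win, j:j + win]
            r2 = p2[i:i + win, j:j + win]
            cmi[i, j] = np.linalg.norm(r1 - r2)
    return cmi

def otsu_threshold(values, bins=256):
    """Otsu's method: pick the threshold maximizing the between-class
    variance of the gray-level histogram."""
    hist, edges = np.histogram(values.ravel(), bins=bins)
    hist = hist.astype(float) / hist.sum()
    centers = (edges[:-1] + edges[1:]) / 2
    w0 = np.cumsum(hist)
    w1 = 1.0 - w0
    cum_mean = np.cumsum(hist * centers)
    mu_total = cum_mean[-1]
    mu0 = cum_mean / np.where(w0 > 0, w0, 1)
    mu1 = (mu_total - cum_mean) / np.where(w1 > 0, w1, 1)
    between = w0 * w1 * (mu0 - mu1) ** 2
    return centers[np.argmax(between)]

# Synthetic bi-temporal pair with one changed patch (illustrative data)
t1 = np.zeros((20, 20))
t2 = np.zeros((20, 20))
t2[5:15, 5:15] = 1.0
cmi = change_magnitude(t1, t2)          # change magnitude image (CMI)
bcdm = cmi > otsu_threshold(cmi)        # binary change detection map (BCDM)
```

Pixels inside the changed patch accumulate a large window-to-window distance, so the Otsu threshold on the CMI separates them from the unchanged background.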

    An Adaptive Semi-Parametric and Context-Based Approach to Unsupervised Change Detection in Multitemporal Remote-Sensing Images

    In this paper, a novel automatic approach to the unsupervised identification of changes in multitemporal remote-sensing images is proposed. Unlike classical approaches, it formulates the unsupervised change-detection problem in terms of Bayesian decision theory. In this context, an adaptive semi-parametric technique for the unsupervised estimation of the statistical terms associated with the gray levels of changed and unchanged pixels in a difference image is presented. This technique exploits the effectiveness of two theoretically well-founded estimation procedures: the reduced Parzen estimate (RPE) procedure and the expectation-maximization (EM) algorithm. The resulting estimates, together with a Markov random field (MRF) model of the spatial-contextual information contained in the multitemporal images, are then used to generate a change detection map. The adaptive semi-parametric nature of the proposed technique allows its application to different kinds of remote-sensing images. Experimental results, obtained on two sets of multitemporal remote-sensing images acquired by two different sensors, confirm the validity of the proposed approach.
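The EM side of the estimation step can be illustrated with a minimal sketch: fitting a two-component Gaussian mixture to difference-image gray levels, one component for unchanged pixels and one for changed pixels. This omits the paper's reduced Parzen estimate and MRF stages and assumes fully Gaussian components; the synthetic data are invented for the example.

```python
import numpy as np

def em_two_gaussians(x, iters=50):
    """EM for a two-component Gaussian mixture over difference-image
    gray levels (changed vs. unchanged). A fully parametric stand-in
    for the paper's semi-parametric RPE + EM combination."""
    x = x.ravel().astype(float)
    m = np.median(x)                      # crude initialisation: split at the median
    mu = np.array([x[x <= m].mean(), x[x > m].mean()])
    var = np.array([x.var(), x.var()]) + 1e-6
    pi = np.array([0.5, 0.5])
    for _ in range(iters):
        # E-step: posterior responsibility of each component per pixel
        lik = np.stack([
            pi[k] / np.sqrt(2 * np.pi * var[k])
            * np.exp(-(x - mu[k]) ** 2 / (2 * var[k]))
            for k in range(2)
        ])
        resp = lik / lik.sum(axis=0, keepdims=True)
        # M-step: re-estimate mixing weights, means, and variances
        nk = resp.sum(axis=1)
        pi = nk / x.size
        mu = (resp * x).sum(axis=1) / nk
        var = (resp * (x - mu[:, None]) ** 2).sum(axis=1) / nk + 1e-6
    return pi, mu, var

# Synthetic difference image: 90% unchanged (near 0), 10% changed (near 1)
rng = np.random.default_rng(0)
diff = np.concatenate([rng.normal(0.0, 0.1, 900),
                       rng.normal(1.0, 0.1, 100)])
pi, mu, var = em_two_gaussians(diff)
```

The recovered component means land near 0 and 1, after which a Bayesian decision rule (or, in the paper, an MRF energy minimization) assigns each pixel to the changed or unchanged class.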

    A robust nonlinear scale space change detection approach for SAR images

    In this paper, we propose a change detection approach based on nonlinear scale space analysis of change images for robust detection of various changes incurred by natural phenomena and/or human activities in Synthetic Aperture Radar (SAR) images using Maximally Stable Extremal Regions (MSERs). To achieve this, a variant of the log-ratio image of the multitemporal images is calculated, followed by Feature Preserving Despeckling (FPD) to generate nonlinear scale space images exhibiting different trade-offs between speckle reduction and shape detail preservation. MSERs of each scale space image are found and then combined through a decision-level fusion strategy, namely "selective scale fusion" (SSF), in which the contrast and boundary curvature of each MSER are considered. The performance of the proposed method is evaluated using real multitemporal high-resolution TerraSAR-X images and synthetically generated multitemporal images composed of shapes with various orientations, sizes, and backscatter amplitude levels, representing a variety of possible signatures of change. One of the main outcomes of this approach is that objects with different sizes and levels of contrast with their surroundings appear as stable regions at different scale space levels; thus, fusing the results across scale space images yields good overall performance.
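The first step of the pipeline, the log-ratio change image, can be sketched in a few lines. This shows only why the log-ratio form is standard for SAR (the ratio cancels multiplicative speckle to first order, and the log makes increases and decreases symmetric around zero); the despeckling, scale space, and MSER fusion stages of the paper are not reproduced, and the intensity values are invented for the example.

```python
import numpy as np

def log_ratio(img1, img2, eps=1e-6):
    """Absolute log-ratio change image for bi-temporal SAR intensities.
    The ratio suppresses multiplicative speckle, and the log treats
    intensity increases and decreases symmetrically. (The paper then
    despeckles this image at several scales and extracts MSERs from
    each scale, e.g. via OpenCV's MSER detector.)"""
    return np.abs(np.log((img1 + eps) / (img2 + eps)))

# Illustrative bi-temporal SAR intensities with one new bright scatterer
t1 = np.full((8, 8), 100.0)
t2 = t1.copy()
t2[2:6, 2:6] = 400.0
lr = log_ratio(t1, t2)
```

Inside the changed patch the log-ratio magnitude is ln(4) ≈ 1.39, while unchanged pixels stay near zero regardless of their absolute backscatter level, which is what makes the image a stable input for the subsequent scale space analysis.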