
    Semantic Approach in Image Change Detection

    Change detection is a central problem in various domains, and especially for remote sensing purposes. Indeed, a plethora of geospatial images is available and can be used to update geographical databases. In this paper, we propose a classification-based method to detect changes between a database and a more recent image. It is based on both an efficient training-point selection and a hierarchical decision process. This allows the intrinsic heterogeneity of the objects and themes composing a database to be taken into account while limiting false detection rates. The reliability of the designed framework is first assessed on simulated data; the method is then successfully applied to very high resolution satellite images and two land-cover databases.
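The core idea above, comparing what a classifier trained from the database predicts against what the database records, can be sketched as follows. This is a minimal illustrative stand-in, not the authors' framework: the nearest-centroid classifier, the single distance margin, and all names are assumptions; the paper's actual hierarchical decision process is richer.

```python
import numpy as np

def detect_changes(image, db_labels, margin=0.5):
    """Flag pixels whose spectral signature no longer matches the class
    recorded in the (older) geographical database.

    image:     (H, W, B) array of band values from the recent image
    db_labels: (H, W) integer class map taken from the database
    margin:    minimum distance margin before declaring a change -- a crude
               stand-in for the paper's hierarchical decision process
    """
    H, W, B = image.shape
    pixels = image.reshape(-1, B)
    labels = db_labels.ravel()

    # 1. Training-point selection: per-class spectral centroids estimated
    #    from the pixels the database still assigns to each class.
    classes = np.unique(labels)
    centroids = np.stack([pixels[labels == c].mean(axis=0) for c in classes])

    # 2. Classify every pixel of the recent image by nearest centroid.
    dists = np.linalg.norm(pixels[:, None, :] - centroids[None, :, :], axis=2)
    pred = classes[np.argmin(dists, axis=1)]

    # 3. Conservative decision: flag a change only when the pixel is
    #    clearly closer to another class than to its database class,
    #    which limits false detections from within-class heterogeneity.
    db_idx = np.searchsorted(classes, labels)
    d_db = dists[np.arange(len(labels)), db_idx]
    d_best = dists.min(axis=1)
    changed = (pred != labels) & (d_db - d_best > margin)
    return changed.reshape(H, W)
```

A pixel whose database class no longer fits its current spectrum by a clear margin is returned as changed; ambiguous pixels are left alone.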

    An Unsupervised Algorithm for Change Detection in Hyperspectral Remote Sensing Data Using Synthetically Fused Images and Derivative Spectral Profiles

    Multitemporal hyperspectral remote sensing data have the potential to detect altered areas on the earth’s surface. However, dissimilar radiometric and geometric properties between the multitemporal acquisitions, caused by the acquisition time or the position of the sensors, must be resolved before hyperspectral imagery can be used to detect changes in natural and human-impacted areas. In addition, noise in the hyperspectral spectrum decreases change-detection accuracy when general change-detection algorithms are applied to hyperspectral images. To address these problems, we present an unsupervised change-detection algorithm based on statistical analyses of spectral profiles; the profiles are generated by a synthetic image-fusion method for multitemporal hyperspectral images. This method minimizes the noise between spectra at identical locations, thereby increasing the change-detection rate and decreasing the false-alarm rate, without reducing the dimensionality of the original hyperspectral data. Through a quantitative comparison on an actual dataset acquired by airborne hyperspectral sensors, we demonstrate that the proposed method provides superior change-detection results relative to state-of-the-art unsupervised change-detection algorithms.
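A simplified sketch of the kind of pipeline described above: compare per-pixel spectral profiles and their derivatives between two co-registered dates, then threshold the combined score statistically. The spectral-angle term, the equal weighting of the two scores, and the mean-plus-k-sigma threshold are all illustrative assumptions, not the paper's exact fusion-based statistics.

```python
import numpy as np

def unsupervised_change_map(t1, t2, k=2.0):
    """Unsupervised change map from two co-registered hyperspectral
    cubes of shape (H, W, B)."""
    # Spectral angle between corresponding pixel spectra: insensitive
    # to overall radiometric (brightness) differences between dates.
    dot = (t1 * t2).sum(axis=2)
    norm = np.linalg.norm(t1, axis=2) * np.linalg.norm(t2, axis=2)
    angle = np.arccos(np.clip(dot / np.maximum(norm, 1e-12), -1.0, 1.0))

    # Derivative spectral profiles emphasise changes in spectral shape
    # and damp band-correlated noise.
    d1, d2 = np.diff(t1, axis=2), np.diff(t2, axis=2)
    deriv = np.linalg.norm(d1 - d2, axis=2)

    # Combine the two normalised scores and threshold statistically
    # (mean + k * std), yielding a binary change map with no training.
    score = angle / (angle.std() + 1e-12) + deriv / (deriv.std() + 1e-12)
    return score > score.mean() + k * score.std()
```

Because the threshold adapts to the score distribution, no labelled samples are needed, which is the sense in which the method is unsupervised.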

    SCDNET: A novel convolutional network for semantic change detection in high resolution optical remote sensing imagery

    With the continuing improvement of remote-sensing (RS) sensors, it is crucial to monitor Earth surface changes at fine scale and in great detail. Thus, semantic change detection (SCD), which locates and identifies "from-to" change information simultaneously, is gaining growing attention in the RS community. However, owing to the scarcity of large-scale SCD datasets, most existing SCD methods focus on scene-level changes, producing semantic change maps with only coarse boundaries or scarce category information. To address this issue, we propose a novel convolutional network for large-scale SCD (SCDNet). It is based on a Siamese UNet architecture consisting of two encoders and two decoders with shared weights. First, multi-temporal images are given as input to the encoders to extract multi-scale deep representations. A multi-scale atrous convolution (MAC) unit is inserted at the end of the encoders to enlarge the receptive field and capture multi-scale information. Then, difference feature maps are generated at each scale and combined with feature maps from the encoders to serve as inputs for the decoders. An attention mechanism and a deep-supervision strategy are further introduced to improve network performance. Finally, a softmax layer produces a semantic change map for each acquisition date. Extensive experiments on two large-scale high-resolution SCD datasets demonstrate the effectiveness and superiority of the proposed method.
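The Siamese principle behind SCDNet, one set of encoder weights applied to both dates, with per-scale difference maps as change evidence, can be illustrated with a toy numpy encoder. This is a conceptual sketch only: the two-layer depth, the plain convolution-plus-ReLU encoder, and the function names are assumptions, not the SCDNet architecture.

```python
import numpy as np

def conv2d_relu(x, w):
    """Valid 2D cross-correlation of a single-channel image, followed by
    ReLU (a toy stand-in for one encoder stage)."""
    kh, kw = w.shape
    H, W = x.shape
    out = np.empty((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = (x[i:i + kh, j:j + kw] * w).sum()
    return np.maximum(out, 0.0)

def siamese_difference_features(img_t1, img_t2, weights):
    """Encode both dates with the SAME weights (weight sharing) and emit
    a difference feature map at every scale, the change evidence that a
    decoder would consume. `weights` is a list of kernels, one per scale.
    """
    f1, f2, diffs = img_t1, img_t2, []
    for w in weights:
        f1 = conv2d_relu(f1, w)          # shared encoder weights:
        f2 = conv2d_relu(f2, w)          # identical kernel for both dates
        diffs.append(np.abs(f1 - f2))    # per-scale difference feature map
    return diffs
```

Weight sharing guarantees that identical inputs produce identical features, so unchanged regions yield near-zero difference maps regardless of what the encoder learned.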

    Uncertainties of Human Perception in Visual Image Interpretation in Complex Urban Environments

    Today, satellite images are mostly exploited automatically thanks to advances in image classification methods. Manual visual image interpretation (MVII), however, still plays a significant role, e.g., to generate training data for machine-learning algorithms or for validation purposes. In certain urban environments, however, e.g., those of highest density and structural complexity, textural and spectral complications in overlapping roof structures still demand a human interpreter if one aims to capture individual building structures; cognitive perception and real-world experience remain indispensable. Against this background, this article aims at quantifying and interpreting the uncertainties of mapping rooftop footprints of such areas. We focus on the agreement among interpreters and on which aspects of perception and elements of image interpretation affect mapping. Ten test persons digitized six complex built-up areas. From this, we obtain quantitative information about spatial variables of buildings to systematically check the consistency and congruence of results; an additional questionnaire reveals qualitative information about obstacles. Generally, we find large differences among interpreters’ mapping results but high consistency of results for the same interpreter. Deviations increase with rising morphologic complexity. High degrees of individuality are expressed, e.g., in time consumption and in in-situ or geographic information system (GIS) prior knowledge, whereas the data source mainly influences the mapping procedure. With this study, we aim to fill a gap, as prior research using MVII often neither implements an uncertainty analysis nor quantifies mapping aberrations. We conclude that remote sensing studies should not rely unquestioningly on MVII for validation; furthermore, data and methods are needed to quantify and reduce this uncertainty.
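Inter-interpreter agreement of the kind measured above can be quantified in several ways; one common option, shown here purely as an illustration, is mean pairwise Intersection-over-Union between the interpreters' rasterised footprint masks. The metric choice is an assumption on our part; the study itself compares spatial variables of the digitized buildings.

```python
import numpy as np

def pairwise_iou(masks):
    """Mean pairwise Intersection-over-Union between boolean footprint
    masks, one per interpreter: 1.0 means every pair of interpreters
    digitized identical footprints, 0.0 means no overlap at all."""
    ious = []
    for i in range(len(masks)):
        for j in range(i + 1, len(masks)):
            inter = np.logical_and(masks[i], masks[j]).sum()
            union = np.logical_or(masks[i], masks[j]).sum()
            # Two empty masks count as perfect agreement.
            ious.append(inter / union if union else 1.0)
    return float(np.mean(ious))
```

Low mean IoU over a scene would flag exactly the kind of inter-interpreter disagreement the article warns validation studies about.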