
    Fusion of VNIR Optical and C-Band Polarimetric SAR Satellite Data for Accurate Detection of Temporal Changes in Vegetated Areas

    In this paper, we propose a processing chain jointly employing Sentinel-1 and Sentinel-2 data, aiming to monitor changes in the status of the vegetation cover by integrating the four 10 m visible and near-infrared (VNIR) bands with the three red-edge (RE) bands of Sentinel-2. The latter approximately span the gap between the red and NIR bands (700 nm–800 nm), with bandwidths of 15/20 nm and 20 m pixel spacing. The RE bands are sharpened to 10 m following the hypersharpening protocol, which, unlike pansharpening, holds when the sharpening band is not unique. The resulting 10 m fusion product may be integrated with polarimetric features calculated from the Interferometric Wide (IW) Ground Range Detected (GRD) product of Sentinel-1, available at 10 m pixel spacing, before the fused data are analyzed for change detection. A key point of the proposed scheme is that the fusion of optical and synthetic aperture radar (SAR) data is accomplished at the level of change, through modulation of the optical change feature, namely the difference in the normalized area over (reflectance) curve (NAOC) calculated from the sharpened RE bands, by the polarimetric SAR change feature, obtained as the temporal ratio of polarimetric features, where the latter is the pixel ratio between the co-polar and the cross-polar channels. Hypersharpening of the Sentinel-2 RE bands, calculation of the NAOC, and modulation-based integration of the Sentinel-1 polarimetric change features are applied to multitemporal datasets acquired before and after a fire event over Mount Serra, in Italy. The optical change feature captures variations in chlorophyll content. The polarimetric SAR temporal change feature describes depolarization effects and changes in the volumetric scattering of canopies. Their fusion shows an increased ability to highlight changes in vegetation status. In a performance comparison based on receiver operating characteristic (ROC) curves, the proposed change-feature-based fusion approach surpasses a traditional area-based approach and the normalized burned ratio (NBR) index, which is widely used for the detection of burnt vegetation.
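    As a rough illustration of the change-level fusion described above, the sketch below computes a simple trapezoidal approximation of the NAOC from the sharpened red-edge reflectances and modulates its temporal difference by the temporal ratio of the co-/cross-polar intensity ratio. The exact NAOC formulation, band set, and modulation used in the paper may differ; all array names are hypothetical.

```python
import numpy as np

def naoc(reflectance, wavelengths):
    """Approximate normalized area over the (reflectance) curve (NAOC).

    reflectance : (n_bands, H, W) sharpened red-edge reflectances
    wavelengths : (n_bands,) band-centre wavelengths in nm
    """
    area = np.trapz(reflectance, x=wavelengths, axis=0)        # per-pixel area under the curve
    span = wavelengths[-1] - wavelengths[0]
    rho_max = reflectance.max(axis=0)
    return 1.0 - area / (rho_max * span + 1e-12)

def fused_change_feature(re_t1, re_t2, wavelengths, co_t1, cross_t1, co_t2, cross_t2):
    """Optical NAOC difference modulated by the temporal ratio of the co-/cross-polar ratio."""
    d_naoc = naoc(re_t2, wavelengths) - naoc(re_t1, wavelengths)   # optical change feature
    pol_ratio_t1 = co_t1 / (cross_t1 + 1e-12)                      # co-/cross-polar ratio, date 1
    pol_ratio_t2 = co_t2 / (cross_t2 + 1e-12)                      # co-/cross-polar ratio, date 2
    sar_change = pol_ratio_t2 / (pol_ratio_t1 + 1e-12)             # SAR temporal change feature
    return d_naoc * sar_change                                     # modulation-based fusion
```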

    Improved POLSAR Image Classification by the Use of Multi-Feature Combination

    Polarimetric SAR (POLSAR) provides a rich set of information about objects on land surfaces. However, not all of this information is useful for land surface classification. This study proposes a new, integrated algorithm for optimal urban classification using POLSAR data. Both polarimetric decomposition and time-frequency (TF) decomposition were used to mine the hidden information of objects in POLSAR data, which was then applied in the C5.0 decision tree algorithm for optimal feature selection and classification. Using a NASA/JPL AIRSAR POLSAR scene as an example, the overall accuracy and kappa coefficient of the proposed method reached 91.17% and 0.90 in the L-band, much higher than the 45.65% and 0.41 achieved by the commonly applied Wishart supervised classification. Meanwhile, the proposed method also achieved high overall accuracy in both the C- and P-bands. Both polarimetric decomposition and TF decomposition proved useful in the process. TF information played a major role in delineating urban/built-up areas from vegetation. Three polarimetric features (entropy, Shannon entropy, the T11 coherency matrix element) and one TF feature (HH intensity of coherence) were found to be the most helpful for urban area classification. This study indicates that the integrated use of polarimetric decomposition and TF decomposition of POLSAR data may provide improved feature extraction in heterogeneous urban areas.
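    As a minimal sketch of the multi-feature combination idea, the snippet below stacks per-pixel polarimetric and time-frequency feature maps and trains a decision tree on labelled pixels. The paper uses the C5.0 algorithm; the scikit-learn CART tree here is a stand-in, and the feature-map and function names are hypothetical.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score, cohen_kappa_score

def classify_polsar(feature_maps, labels, train_mask):
    """Stack (H, W) feature maps, train a decision tree on labelled pixels, classify all pixels.

    feature_maps : list of (H, W) arrays, e.g. entropy, Shannon entropy, T11, TF coherence
    labels       : (H, W) integer class map (0 = unlabelled)
    train_mask   : (H, W) boolean mask of training pixels
    """
    X = np.stack([f.ravel() for f in feature_maps], axis=1)    # (H*W, n_features)
    y = labels.ravel()
    m = train_mask.ravel()

    tree = DecisionTreeClassifier(max_depth=10)                 # CART stand-in for C5.0
    tree.fit(X[m], y[m])
    pred = tree.predict(X)

    test = ~m & (y > 0)                                         # labelled pixels not used for training
    print("Overall accuracy:", accuracy_score(y[test], pred[test]))
    print("Kappa coefficient:", cohen_kappa_score(y[test], pred[test]))
    return pred.reshape(labels.shape)
```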

    Classification of Polarimetric SAR Images Using Compact Convolutional Neural Networks

    Classification of polarimetric synthetic aperture radar (PolSAR) images is an active research area with a major role in environmental applications. The traditional Machine Learning (ML) methods proposed in this domain generally focus on utilizing highly discriminative features to improve classification performance, but this task is complicated by the well-known "curse of dimensionality" phenomenon. Other approaches based on deep Convolutional Neural Networks (CNNs) have certain limitations and drawbacks, such as high computational complexity, the need for an infeasibly large training set with ground-truth labels, and special hardware requirements. In this work, to address the limitations of traditional ML and deep CNN based methods, a novel and systematic classification framework is proposed for the classification of PolSAR images, based on a compact and adaptive implementation of CNNs using a sliding-window classification approach. The proposed approach has three advantages. First, there is no requirement for an extensive feature extraction process. Second, it is computationally efficient due to the compact network configurations utilized. In particular, the proposed compact and adaptive CNN model is designed to achieve the maximum classification accuracy with minimum training and computational complexity. This is of considerable importance given the high cost of labelling in PolSAR classification. Finally, the proposed approach can perform classification using smaller window sizes than deep CNNs. Experimental evaluations have been performed over the four most commonly used benchmark PolSAR images: AIRSAR L-band and RADARSAT-2 C-band data of the San Francisco Bay and Flevoland areas. Accordingly, the best overall accuracies obtained range between 92.33% and 99.39% for these benchmark study sites.
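    The compact, sliding-window idea can be sketched as follows: a small patch centred on each pixel (here 5x5, with real-valued channels derived from the coherency matrix) is fed to a CNN with only two convolutional layers, which predicts the class of the centre pixel. The layer widths, window size, and channel count are illustrative assumptions, not the configuration reported in the paper.

```python
import torch
import torch.nn as nn

class CompactPolSARCNN(nn.Module):
    """Compact CNN that classifies the centre pixel of a small PolSAR window."""
    def __init__(self, in_channels: int, n_classes: int, window: int = 5):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
        )
        self.classifier = nn.Linear(32 * window * window, n_classes)

    def forward(self, x):                     # x: (batch, in_channels, window, window)
        x = self.features(x)
        return self.classifier(x.flatten(1))  # class scores for the centre pixel

# Example: 6 coherency-matrix channels, 5x5 sliding windows, 4 land-cover classes
model = CompactPolSARCNN(in_channels=6, n_classes=4)
windows = torch.randn(8, 6, 5, 5)             # a batch of 8 sliding windows
print(model(windows).shape)                   # torch.Size([8, 4])
```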

    A Content Based Region Separation and Analysis Approach for SAR Image Classification

    SAR images are captured by satellite or airborne radar to monitor a specific geographical area or to extract information about its structure. This information can be used to recognize land areas or regions with specific features, such as water bodies or flooded areas. However, the images captured from satellites cover large land regions comprising multiple scene types. To recognize a specific land area, all images must be processed under defined constraints so that the particular region can be identified. The images, or features extracted from them, can be used to train a classification method that categorizes the land regions. Various supervised and unsupervised classification methods exist for SAR images, but SAR images are high-resolution and contain multiple region types within the same scene. Because of this, existing methods are not fully capable of classifying the regions accurately, and a more effective classification approach that identifies land regions more adaptively is required.

    Integrating Incidence Angle Dependencies Into the Clustering-Based Segmentation of SAR Images

    Synthetic aperture radar systems perform signal acquisition under varying incidence angles and register an implicit intensity decay from near to far range. Owing to the geometrical interaction between microwaves and the imaged targets, the rates at which intensities decay depend on the nature of the targets, thus rendering single-rate image correction approaches only partially successful. The decay, also known as the incidence angle effect, impacts the segmentation of wide-swath images performed on absolute intensity values. We propose to integrate the target-specific intensity decay rates into a nonstationary statistical model, for use in a fully automatic and unsupervised segmentation algorithm. We demonstrate this concept by assuming Gaussian distributed log-intensities and linear decay rates, a fitting approximation for the smooth systematic decay observed for extended flat targets. The segmentation is performed on Sentinel-1, Radarsat-2, and UAVSAR wide-swath scenes containing open water, sea ice, and oil slicks. As a result, we obtain segments connected throughout the entire incidence angle range, thus overcoming the limitations of modeling that does not account for different per-target decays. The simplicity of the model also allows for short execution times and makes the segmentation approach a candidate operational algorithm. In addition, we estimate the log-linear decay rates and examine their potential for a physical interpretation of the segments.
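    The core modelling idea, Gaussian-distributed log-intensities whose mean decays linearly with incidence angle at a class-specific rate, can be sketched as a small alternating clustering loop: fit a per-class linear decay by least squares, then reassign pixels by Gaussian log-likelihood. This is an illustrative simplification under assumed variable names, not the authors' full segmentation algorithm.

```python
import numpy as np

def segment_with_angle_decay(log_intensity, incidence_angle, n_classes=3, n_iter=20, seed=0):
    """Cluster SAR pixels with a per-class linear incidence-angle decay of log-intensity.

    Each class k models log-intensity as Normal(a_k + b_k * angle, sigma_k^2),
    so the decay rate b_k is estimated per target class instead of being applied globally.
    """
    y = log_intensity.ravel()
    t = incidence_angle.ravel()
    labels = np.random.default_rng(seed).integers(n_classes, size=y.size)

    for _ in range(n_iter):
        params = []
        for k in range(n_classes):
            m = labels == k
            if m.sum() < 2:                                  # guard against empty classes
                params.append((y.mean(), 0.0, y.std() + 1e-6))
                continue
            b, a = np.polyfit(t[m], y[m], 1)                 # least-squares log-linear decay fit
            sigma = np.std(y[m] - (a + b * t[m])) + 1e-6
            params.append((a, b, sigma))
        # Reassign each pixel to the class with the highest Gaussian log-likelihood
        loglik = np.stack([-0.5 * ((y - (a + b * t)) / s) ** 2 - np.log(s)
                           for a, b, s in params])
        labels = loglik.argmax(axis=0)
    return labels.reshape(log_intensity.shape), params
```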

    A markovian approach to unsupervised change detection with multiresolution and multimodality SAR data

    In the framework of synthetic aperture radar (SAR) systems, current satellite missions make it possible to acquire images at very high and multiple spatial resolutions with short revisit times. This scenario conveys a remarkable potential in applications to, for instance, environmental monitoring and natural disaster recovery. In this context, data fusion and change detection methodologies play major roles. This paper proposes an unsupervised change detection algorithm for the challenging case of multimodal SAR data collected by sensors operating at multiple spatial resolutions. The method is based on Markovian probabilistic graphical models, graph cuts, linear mixtures, generalized Gaussian distributions, Gram-Charlier approximations, maximum likelihood, and minimum mean squared error estimation. It benefits from the SAR images acquired at multiple spatial resolutions, and with possibly different modalities, at the considered acquisition times to generate an output change map at the finest observed resolution. This is accomplished by modeling the statistics of the data at the various spatial scales through appropriate generalized Gaussian distributions and by iteratively estimating a set of virtual images that are defined on the pixel grid at the finest resolution and would be collected if all the sensors could work at that resolution. A Markov random field framework is adopted to address the detection problem by defining an appropriate multimodal energy function that is minimized using graph cuts.
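    The final step, minimizing a Markov random field energy over the change map, can be illustrated with the small sketch below: per-pixel data costs (e.g. negative log-likelihoods under generalized Gaussian models of the "change" and "no change" hypotheses) are combined with a Potts smoothness term. The paper minimizes this kind of energy with graph cuts; the dependency-free iterated conditional modes (ICM) style update used here is a simpler substitute for illustration only, and the function and argument names are hypothetical.

```python
import numpy as np

def mrf_change_map(unary_nochange, unary_change, beta=1.0, n_iter=10):
    """Binary change map from an MRF energy: per-pixel data costs plus a Potts pairwise term.

    unary_nochange, unary_change : (H, W) per-pixel data costs for each hypothesis
    beta : penalty for each 4-connected neighbour pair with differing labels
    """
    labels = (unary_change < unary_nochange).astype(np.uint8)   # initialize from the data term
    for _ in range(n_iter):
        padded = np.pad(labels, 1, mode='edge')
        # number of 4-connected neighbours currently labelled "change" (label 1)
        n_change = (padded[:-2, 1:-1].astype(float) + padded[2:, 1:-1] +
                    padded[1:-1, :-2] + padded[1:-1, 2:])
        cost0 = unary_nochange + beta * n_change                # disagreement cost if labelled 0
        cost1 = unary_change + beta * (4.0 - n_change)          # disagreement cost if labelled 1
        labels = (cost1 < cost0).astype(np.uint8)               # parallel ICM-style update
    return labels
```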