14 research outputs found

    Semi-supervised classification of polarimetric SAR images using Markov random field and two-level Wishart mixture model

    In this work, we propose a semi-supervised method for the classification of polarimetric synthetic aperture radar (PolSAR) images. In the proposed method, a two-level mixture model is constructed by associating each component density with its own Wishart mixture model (instead of a single Wishart distribution, as in the conventional Wishart mixture model). This modeling scheme facilitates an accurate description of the data for categories that each comprise multiple subcategories. The learning algorithm for the proposed model is developed based on variational inference, and all update equations are obtained in closed form. In the learning algorithm, spatial interdependencies are incorporated by imposing a Markov random field prior on the indicator variable to alleviate the effect of speckle on the classification results. The experimental results demonstrate the improved performance of the proposed method compared with the unsupervised and supervised versions of the proposed model, as well as with an existing method for semi-supervised classification.
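
    The abstract includes no implementation; purely as a generic illustration of the statistical building block it relies on, the Python sketch below (an assumption on our part, not the authors' code) evaluates the multilook complex Wishart log-density of a PolSAR sample covariance matrix and the per-pixel component responsibilities of a plain Wishart mixture. The two-level structure, the MRF prior, and the variational updates are not reproduced here.

    import numpy as np
    from scipy.special import gammaln, logsumexp

    def wishart_logpdf(C, Sigma, L):
        """Log-density of the scaled complex Wishart distribution for a d x d
        multilook PolSAR sample covariance matrix C with L looks and scale Sigma."""
        d = C.shape[0]
        log_norm = 0.5 * d * (d - 1) * np.log(np.pi) + gammaln(L - np.arange(d)).sum()
        logdet_C = np.linalg.slogdet(C)[1]
        logdet_S = np.linalg.slogdet(Sigma)[1]
        trace_term = np.real(np.trace(np.linalg.solve(Sigma, C)))
        return (L * d * np.log(L) + (L - d) * logdet_C
                - L * trace_term - L * logdet_S - log_norm)

    def responsibilities(C, weights, Sigmas, L):
        """Posterior probability of each mixture component for one pixel
        (the E-step-style quantity a mixture-model update would use)."""
        logp = np.array([np.log(w) + wishart_logpdf(C, S, L)
                         for w, S in zip(weights, Sigmas)])
        return np.exp(logp - logsumexp(logp))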

    Unsupervised classification of multilook polarimetric SAR data using spatially variant Wishart mixture model with double constraints

    This paper addresses the unsupervised classification problem for multilook polarimetric synthetic aperture radar (PolSAR) images by proposing a patch-level spatially variant Wishart mixture model (SVWMM) with double constraints. We construct this model by jointly modeling the pixels in a patch (rather than an individual pixel) so as to effectively capture the local correlation in PolSAR images. More importantly, a responsibility parameter is introduced into the proposed model, providing not only a way to represent the importance of different pixels within a patch but also additional flexibility for incorporating spatial information. Double constraints are then imposed by simultaneously exploiting the similarities of neighboring pixels defined on two different parameter spaces (i.e., the hyperparameter in the posterior distribution of the mixing coefficients and the responsibility parameter). Furthermore, a variational inference algorithm with closed-form updates is developed for effective learning of the proposed SVWMM, facilitating automatic determination of the number of clusters. Experimental results on several PolSAR data sets from both airborne and spaceborne sensors demonstrate that the proposed method is effective and achieves better unsupervised classification performance than conventional methods.
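
    For orientation only, here is a minimal sketch of one common way spatial information is injected into mixture-based PolSAR clustering: averaging per-pixel responsibilities over a local window and renormalizing. It is not the paper's patch-level model or its double-constraint formulation; the function name and the uniform window are assumptions.

    import numpy as np
    from scipy.ndimage import uniform_filter

    def spatially_smoothed_mixing(resp, size=3):
        """Average per-pixel responsibilities resp of shape (H, W, K) over a
        size x size neighborhood and renormalize so each pixel sums to one,
        yielding spatially variant mixing proportions."""
        smoothed = np.stack([uniform_filter(resp[..., k], size=size)
                             for k in range(resp.shape[-1])], axis=-1)
        return smoothed / smoothed.sum(axis=-1, keepdims=True)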

    Multi-frequency PolSAR Image Fusion Classification Based on Semantic Interactive Information and Topological Structure

    Compared with the rapid development of single-frequency multi-polarization SAR image classification, there has been relatively little research on land cover classification from multi-frequency polarimetric SAR (MF-PolSAR) images. In addition, current deep learning methods for MF-PolSAR classification are mainly based on convolutional neural networks (CNNs), which consider only local spatial information and ignore nonlocal relationships. Therefore, based on semantic interaction and nonlocal topological structure, this paper proposes the MF semantics and topology fusion network (MF-STFnet) to improve MF-PolSAR classification performance. In MF-STFnet, two kinds of classification are carried out for each band: semantic information-based classification (SIC) and topological property-based classification (TPC). They work collaboratively during MF-STFnet training, which not only fully leverages the complementarity of the bands but also combines local and nonlocal spatial information to improve discrimination between categories. For SIC, the designed cross-band interactive feature extraction module (CIFEM) is embedded to explicitly model the deep semantic correlation among bands, thereby exploiting their complementarity to make ground objects more separable. For TPC, the graph sample and aggregate network (GraphSAGE) is employed to dynamically capture the representation of nonlocal topological relations between land cover categories, which further improves the robustness of the classification. Finally, an adaptive weighting fusion (AWF) strategy is proposed to merge the inferences from different bands and make the joint MF classification decision from SIC and TPC. Comparative experiments show that MF-STFnet achieves more competitive classification performance than several state-of-the-art methods.
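
    As a hedged illustration of the decision-fusion idea only (not the paper's AWF module), the PyTorch sketch below fuses per-band class logits with learnable, softmax-normalized weights; the module name and tensor shapes are assumptions.

    import torch
    import torch.nn as nn

    class AdaptiveWeightingFusion(nn.Module):
        """Fuse per-band class logits with learnable, softmax-normalized weights."""
        def __init__(self, num_bands):
            super().__init__()
            self.band_scores = nn.Parameter(torch.zeros(num_bands))

        def forward(self, per_band_logits):
            # per_band_logits: (num_bands, batch, num_classes)
            w = torch.softmax(self.band_scores, dim=0)           # (num_bands,)
            return (w[:, None, None] * per_band_logits).sum(0)   # (batch, num_classes)

    # Example: fuse C-, L- and P-band predictions for 5 classes and a batch of 8 pixels
    fused = AdaptiveWeightingFusion(num_bands=3)(torch.randn(3, 8, 5))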

    Advanced machine learning algorithms for Canadian wetland mapping using polarimetric synthetic aperture radar (PolSAR) and optical imagery

    Wetlands are complex land cover ecosystems that represent a wide range of biophysical conditions. They are among the most productive ecosystems and provide several important environmental functions. As such, wetland mapping and monitoring using cost- and time-efficient approaches are of great interest for sustainable management and resource assessment. In this regard, satellite remote sensing data are highly beneficial, as they capture a synoptic and multi-temporal view of landscapes. The ability to extract useful information from satellite imagery greatly affects the accuracy and reliability of the final products. This is of particular concern for mapping complex land cover ecosystems, such as wetlands, where a complex, heterogeneous, and fragmented landscape results in similar backscatter/spectral signatures of land cover classes in satellite images. Accordingly, the overarching purpose of this thesis is to contribute to existing methodologies for wetland classification by proposing and developing several new techniques based on advanced remote sensing tools and optical and Synthetic Aperture Radar (SAR) imagery. Specifically, the importance of employing an efficient speckle reduction method for polarimetric SAR (PolSAR) image processing is discussed and a new speckle reduction technique is proposed. Two novel techniques are also introduced for improving the accuracy of wetland classification. In particular, a new hierarchical classification algorithm using multi-frequency SAR data is proposed that discriminates wetland classes in three steps depending on their complexity and similarity. The experimental results reveal that the proposed method is advantageous for mapping complex land cover ecosystems compared with single-stream classification approaches, which have been used extensively in the literature. Furthermore, a new feature weighting approach is proposed based on the statistical and physical characteristics of PolSAR data to improve the discrimination capability of input features prior to incorporating them into the classification scheme. This study also demonstrates the transferability of existing classification algorithms, developed for RADARSAT-2 imagery, to the compact polarimetry SAR data that will be collected by the upcoming RADARSAT Constellation Mission (RCM). This thesis also introduces, for the first time, the capability of several well-known deep Convolutional Neural Network (CNN) architectures currently employed in computer vision for the classification of wetland complexes using multispectral remote sensing data. Finally, this research produces the first provincial-scale wetland inventory maps of Newfoundland and Labrador using the Google Earth Engine (GEE) cloud computing platform and open-access Earth Observation (EO) data collected by the Copernicus Sentinel missions. Overall, the methodologies proposed in this thesis address fundamental limitations and challenges of wetland mapping using remote sensing data that have been overlooked in the literature. These challenges include the similar backscatter/spectral signatures of wetland classes, insufficient classification accuracy for wetland classes, and the limitations of large-scale wetland mapping. Beyond mapping wetland complexes, the use of the developed techniques for classifying other complex land cover types, such as sea ice and crop ecosystems, offers a potential avenue for further research.
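
    As a loose illustration of the stepwise idea behind hierarchical classification (not the thesis' three-step algorithm or its class scheme), the scikit-learn sketch below first separates wetland from non-wetland samples and then refines the wetland subclasses; the label coding and the random-forest choice are assumptions.

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    def fit_hierarchical(X, y):
        """Hypothetical labels: 0 = non-wetland, 1..4 = wetland subclasses.
        Stage 1 separates wetland from non-wetland; stage 2 refines subclasses."""
        coarse = RandomForestClassifier(n_estimators=200).fit(X, (y > 0).astype(int))
        wet = y > 0
        fine = RandomForestClassifier(n_estimators=200).fit(X[wet], y[wet])
        return coarse, fine

    def predict_hierarchical(coarse, fine, X):
        pred = coarse.predict(X)
        out = np.zeros(len(X), dtype=int)
        wet = pred == 1
        if wet.any():
            out[wet] = fine.predict(X[wet])
        return out

    # Synthetic example with 10 features per sample
    X = np.random.rand(500, 10); y = np.random.randint(0, 5, 500)
    coarse, fine = fit_hierarchical(X, y)
    labels = predict_hierarchical(coarse, fine, X)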

    Sparsity enhanced MRF algorithm for automatic object detection in GPR imagery

    This study addresses the problem of automated object detection in ground penetrating radar (GPR) imagery using the concept of sparse representation. The detection task is first formulated as a Markov random field (MRF) process. We then propose a novel detection algorithm by introducing a sparsity constraint into the standard MRF model. Specifically, the traditional approach has difficulty determining the central target owing to the influence of different neighbors in the imaging area; we therefore introduce a domain search algorithm to overcome this issue and increase the accuracy of target detection. Additionally, in the standard MRF model the Gibbs parameters are empirically predetermined and fixed during the detection process, even though these hyperparameters can have a significant effect on detection performance. Accordingly, in this paper the Gibbs parameters are made self-adaptive and are fine-tuned using an iterative updating strategy that follows the concept of sparse representation. The proposed algorithm is also proven theoretically to have a strong convergence property. Finally, we verify the proposed method on a real-world dataset acquired with ground penetrating radar antennas at three different transmission frequencies (50 MHz, 200 MHz, and 300 MHz). Experimental evaluations demonstrate the advantages of the proposed algorithm for detecting objects in ground penetrating radar imagery in comparison with four traditional detection algorithms.
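
    For context, the sketch below implements plain iterated conditional modes (ICM) for a binary MRF with a fixed Ising smoothness parameter, i.e. the standard baseline the paper starts from; its sparsity constraint, domain search, and adaptive Gibbs parameters are not reproduced, and the cost layout is an assumption.

    import numpy as np

    def icm_binary_mrf(unary, beta=1.0, n_iter=10):
        """unary: (H, W, 2) per-pixel costs (e.g. negative log-likelihoods) for
        labels {0, 1}; beta is the fixed Gibbs smoothness parameter."""
        labels = unary.argmin(axis=-1)
        H, W = labels.shape
        for _ in range(n_iter):
            for i in range(H):
                for j in range(W):
                    neigh = [labels[x, y] for x, y in
                             ((i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1))
                             if 0 <= x < H and 0 <= y < W]
                    costs = [unary[i, j, l] + beta * sum(n != l for n in neigh)
                             for l in (0, 1)]
                    labels[i, j] = int(np.argmin(costs))
        return labels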

    Statistical and Machine Learning Models for Remote Sensing Data Mining - Recent Advancements

    This book is a reprint of the Special Issue entitled "Statistical and Machine Learning Models for Remote Sensing Data Mining - Recent Advancements" that was published in Remote Sensing, MDPI. It provides insights into both core technical challenges and selected critical applications of satellite remote sensing image analytics.

    Processing of optical and radar images. Application to satellite remote sensing of snow, ice and glaciers

    This document presents a synthesis of my research activities since the defense of my thesis in 1999. The activity reported here is that of a research engineer and therefore took place in parallel with "technical" work comprising laboratory instrumentation, the instrumentation of mountain platforms, scientific traverses on the polar ice caps, the development of scientific projects, team organization, and administrative tasks. I have been a CNRS research engineer since 2004, assigned to Gipsa-lab, a joint research unit of the CNRS, Grenoble-INP, Université Joseph Fourier, and Université Stendhal. This laboratory (of around 400 people), which has agreements with INRIA, the Observatoire de Grenoble, and Université Pierre Mendès France, is multidisciplinary and conducts fundamental and applied research on signals and complex systems. While preparing my thesis (half-time, 1995-99) at LGGE, I worked on processing images of the microstructures of snow, firn, and ice. Quite naturally, I then joined the LIS laboratory, which became Gipsa-lab, to develop activities in Synthetic Aperture Radar (SAR) image processing applied to natural environments: snow, ice, and glaciers. Having been the first to generate a differential interferogram of Alpine glaciers, I continued to work on the interferometric phase to extract displacement information and to validate these methods on the Argentière glacier (Mont-Blanc massif), which has the enormous advantage of moving by a few centimeters per day. These activities led me to develop, in collaboration with the LISTIC, LTCI, and IETR laboratories, more general methods for extracting information from SAR images. My initial training in electronics, followed by a doctorate in physics, has led me to draw on my knowledge of image and signal processing, electromagnetism, numerical computation, computer science, and the physics of snow and ice to study SAR image processing problems applied to ice, glaciers, and snow.
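
    As a minimal reminder of the relation underlying the displacement work mentioned above (not code from the author), the textbook conversion from differential interferometric phase to line-of-sight displacement is d = -lambda * dphi / (4 * pi); sign conventions vary between processors.

    import numpy as np

    def phase_to_los_displacement(dphi, wavelength):
        """Convert a differential interferometric phase (radians) into
        line-of-sight displacement (same unit as the wavelength)."""
        return -wavelength * dphi / (4.0 * np.pi)

    # One fringe (2*pi) at C-band (~5.6 cm wavelength) is about 2.8 cm of line-of-sight motion
    print(phase_to_los_displacement(2 * np.pi, 0.056))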

    Multi-source Remote Sensing for Forest Characterization and Monitoring

    As the dominant terrestrial ecosystem on Earth, forests play profound roles in ecology, biodiversity, resource utilization, and management, which highlights the significance of forest characterization and monitoring. Several forest parameters help track climate change and quantify the global carbon cycle, and they therefore attract growing attention from various research communities. Compared with traditional in-situ methods, which involve expensive and time-consuming field work, airborne and spaceborne remote sensors collect cost-efficient and consistent observations at global or regional scales and have proven to be an effective means of forest monitoring. With the looming paradigm shift toward data-intensive science and the development of remote sensors, remote sensing data of higher resolution and diversity have become mainstream in data analysis and processing. However, significant heterogeneities in multi-source remote sensing data largely restrict its forest applications, urging the research community to develop effective synergistic strategies. The work presented in this thesis contributes to the field by exploring the potential of Synthetic Aperture Radar (SAR), SAR polarimetry (PolSAR), SAR interferometry (InSAR), polarimetric SAR interferometry (PolInSAR), Light Detection and Ranging (LiDAR), and multispectral remote sensing for forest characterization and monitoring in three main respects: forest height estimation, active fire detection, and burned area mapping. First, forest height inversion is demonstrated using airborne L-band dual-baseline repeat-pass PolInSAR data based on modified versions of the Random Motion over Ground (RMoG) model, in which the scattering attenuation and wind-induced random motion are described for homogeneous and heterogeneous volume layers, respectively. A boreal and a tropical forest test site are included in the experiment to explore the flexibility of the different models over different forest types, and, based on that, a leveraging strategy is proposed to boost the accuracy of forest height estimation. The accuracy of model-based forest height inversion is limited by the discrepancy between the theoretical models and actual scenarios and exhibits a strong dependency on the system and scenario parameters. Hence, high-vertical-accuracy LiDAR samples are employed to assist the PolInSAR-based forest height estimation. This multi-source forest height estimation is reformulated as a pan-sharpening task aiming to generate forest heights with high spatial resolution and vertical accuracy by combining the sparse LiDAR-derived heights with the information embedded in the PolInSAR data. This is realized by a specifically designed generative adversarial network (GAN), allowing high-accuracy forest height estimation that is less limited by theoretical models and system parameters. Related experiments are carried out over a boreal and a tropical forest to validate the flexibility of the method. Next, an automated active fire detection framework is proposed for medium-resolution multispectral remote sensing data. The core of this framework is a deep-learning-based semantic segmentation model specifically designed for active fire detection, and a dataset is constructed from open-access Sentinel-2 imagery for training and testing the deep-learning model. The developed framework allows automated Sentinel-2 data download, processing, and generation of active fire detection results from the time and location information provided by the user; its performance is evaluated in terms of detection accuracy and processing efficiency. The last part of this thesis explores whether coarse burned area products can be further improved through the synergy of multispectral, SAR, and InSAR features with higher spatial resolutions. A Siamese Self-Attention (SSA) classification approach is proposed for multi-sensor burned area mapping, and a multi-source dataset is constructed at the object level for training and testing. Results are analyzed by test site, feature source, and classification method to assess the improvements achieved by the proposed method. All developed methods are validated with extensive processing of multi-source data acquired by the Uninhabited Aerial Vehicle Synthetic Aperture Radar (UAVSAR), the Land, Vegetation, and Ice Sensor (LVIS), PolSARproSim+, Sentinel-1, and Sentinel-2. I hope these studies constitute a substantial contribution to the forest applications of multi-source remote sensing.
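
    Purely as an illustration of the kind of model the active fire detection part describes (not the thesis' architecture), the PyTorch sketch below defines a deliberately small encoder-decoder that maps multispectral patches to per-pixel fire logits; the channel count and layer sizes are assumptions.

    import torch
    import torch.nn as nn

    class TinySegNet(nn.Module):
        """Minimal encoder-decoder for per-pixel binary classification, e.g.
        active fire vs. background on multispectral patches."""
        def __init__(self, in_channels=13):  # 13 Sentinel-2 bands (an assumption)
            super().__init__()
            self.encoder = nn.Sequential(
                nn.Conv2d(in_channels, 32, 3, padding=1), nn.ReLU(),
                nn.MaxPool2d(2),
                nn.Conv2d(32, 64, 3, padding=1), nn.ReLU())
            self.decoder = nn.Sequential(
                nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False),
                nn.Conv2d(64, 32, 3, padding=1), nn.ReLU(),
                nn.Conv2d(32, 1, 1))  # per-pixel fire logit

        def forward(self, x):
            return self.decoder(self.encoder(x))

    # Example: a batch of 4 patches of 128 x 128 pixels -> logits of shape (4, 1, 128, 128)
    logits = TinySegNet()(torch.randn(4, 13, 128, 128))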

    Synthetic Aperture Radar (SAR) Meets Deep Learning

    This reprint focuses on applications that combine synthetic aperture radar and deep learning, with the aim of further promoting the development of intelligent SAR image interpretation technology. Synthetic aperture radar (SAR) is an important active microwave imaging sensor whose day-and-night, all-weather operating capability gives it an important place in the remote sensing community. Since the United States launched the first SAR satellite, SAR has received considerable attention in remote sensing, e.g., in geological exploration, topographic mapping, disaster forecasting, and traffic monitoring. It is therefore valuable and meaningful to study SAR-based remote sensing applications. In recent years, deep learning, represented by convolutional neural networks, has driven significant progress in computer vision, e.g., in face recognition, autonomous driving, and the Internet of Things (IoT). Deep learning enables computational models with multiple processing layers to learn data representations at multiple levels of abstraction, which can greatly improve the performance of various applications. This reprint provides a platform for researchers to address these challenges and present their innovative and cutting-edge results on applying deep learning to SAR in various manuscript types, e.g., articles, letters, reviews, and technical reports.