
    Fuzzy Superpixels based Semi-supervised Similarity-constrained CNN for PolSAR Image Classification

    Recently, deep learning has been highly successful in image classification. Labeling PolSAR data, however, is time-consuming and laborious, and in response semi-supervised deep learning has been increasingly investigated for PolSAR image classification. Semi-supervised deep learning methods for PolSAR image classification can be broadly divided into two categories, namely pixel-based methods and superpixel-based methods. Pixel-based semi-supervised methods are liable to be affected by speckle noise and have relatively high computational complexity. Superpixel-based methods focus on the superpixels and ignore the fine details represented by individual pixels. In this paper, a Fuzzy superpixels based Semi-supervised Similarity-constrained CNN (FS-SCNN) is proposed. To reduce the effect of speckle noise and preserve detail, FS-SCNN uses a fuzzy superpixels algorithm to segment an image into two parts: superpixels and undetermined pixels. Moreover, the fuzzy superpixels algorithm can also reduce the number of mixed superpixels and improve classification performance. To exploit unlabeled data effectively, we also propose a Similarity-constrained Convolutional Neural Network (SCNN) model to assign pseudo labels to unlabeled data. The final training set consists of the initial labeled data and the pseudo-labeled data. Three PolSAR images are used to demonstrate the excellent classification performance of the FS-SCNN method with limited labeled data.
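    As a rough illustration of the pseudo-labeling step described in this abstract (not the paper's SCNN model itself), the sketch below assigns pseudo labels to unlabeled samples whose features are sufficiently similar to a labeled class prototype; the feature source, cosine similarity, and 0.9 threshold are assumptions made only for the example.

```python
# Hedged sketch of similarity-based pseudo-labeling: unlabeled samples whose
# feature vectors lie close to a labeled class prototype receive that class's
# pseudo label and are added to the training set. This is a generic
# illustration, not the SCNN model from the paper.
import numpy as np

def pseudo_label(feat_labeled, y_labeled, feat_unlabeled, threshold=0.9):
    """feat_*: (N, D) feature vectors, e.g. from a CNN embedding layer."""
    classes = np.unique(y_labeled)
    # class prototypes = mean feature vector per class, L2-normalized
    protos = np.stack([feat_labeled[y_labeled == c].mean(axis=0) for c in classes])
    protos /= np.linalg.norm(protos, axis=1, keepdims=True)
    feats = feat_unlabeled / np.linalg.norm(feat_unlabeled, axis=1, keepdims=True)
    sims = feats @ protos.T                 # cosine similarity to each prototype
    best = sims.argmax(axis=1)
    keep = sims.max(axis=1) > threshold     # only confident assignments
    return feat_unlabeled[keep], classes[best[keep]]

# the returned (sample, pseudo label) pairs are appended to the initial
# labeled set before the final training round
```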

    Deep learning in remote sensing: a review

    Standing at the paradigm shift towards data-intensive science, machine learning techniques are becoming increasingly important. In particular, as a major breakthrough in the field, deep learning has proven to be an extremely powerful tool in many fields. Shall we embrace deep learning as the key to all? Or should we resist a 'black-box' solution? There are controversial opinions in the remote sensing community. In this article, we analyze the challenges of using deep learning for remote sensing data analysis, review the recent advances, and provide resources to make deep learning in remote sensing ridiculously simple to start with. More importantly, we advocate that remote sensing scientists bring their expertise into deep learning and use it as an implicit general model to tackle unprecedented large-scale influential challenges, such as climate change and urbanization. (Comment: accepted for publication in IEEE Geoscience and Remote Sensing Magazine.)

    Advanced techniques for classification of polarimetric synthetic aperture radar data

    With various remote sensing technologies aiding Earth observation, radar-based imaging is one gaining major interest due to advances in its imaging techniques in the form of synthetic aperture radar (SAR) and polarimetry. The majority of radar applications focus on monitoring, detecting, and classifying local or global areas of interest to support humans in decision-making, analysis, and interpretation of Earth's environment. This thesis focuses on improving the classification performance and process, particularly for land use and land cover classification of polarimetric SAR (PolSAR) data. To achieve this, three contributions are studied related to superior feature description and advanced machine-learning techniques including classifiers, principles, and data exploitation.
    First, this thesis investigates the application of color features within PolSAR image classification to provide additional discrimination on top of the conventional scattering information and texture features. The color features are extracted from the visual presentation of fully and partially polarimetric SAR data by generating pseudo-color images. Within the experiments, the obtained results demonstrated that with the addition of the considered color features, the achieved classification performance outperformed results with common PolSAR features alone, and also achieved higher classification accuracies than the traditional combination of PolSAR and texture features.
    Second, to address the large-scale learning challenge in PolSAR image classification with the utmost efficiency, this thesis introduces the application of an adaptive and data-driven supervised classification topology called the Collective Network of Binary Classifiers (CNBC). This topology incorporates active learning to support human users with the analysis and interpretation of PolSAR data, focusing on collections of images where changes or updates to the existing classifier might be required frequently due to surface, terrain, and object changes as well as variations in capturing time and position. Evaluations demonstrated the capabilities of CNBC over an extensive set of experimental results regarding the adaptive and data-driven classification of single PolSAR images as well as collections of them. The experimental results verified that the evolutionary classification topology, CNBC, provides an efficient solution for the problems of scalability and dynamic adaptability, allowing both the feature space dimensions and the number of terrain classes in PolSAR image collections to vary dynamically.
    Third, most PolSAR classification problems are undertaken by supervised machine learning, which requires manually labeled ground truth data to be available. To reduce the manual labeling effort, supervised and unsupervised learning approaches are combined into semi-supervised learning to utilize the huge amount of unlabeled data. The application of semi-supervised learning in this thesis is motivated by ill-posed classification tasks related to the small training size problem. Therefore, this thesis investigates how much ground truth is actually necessary for certain classification problems to achieve satisfactory results in supervised and semi-supervised learning scenarios. To address this, two semi-supervised approaches are proposed: unsupervised extension of the training data and ensemble-based self-training. The evaluations showed that significant speed-ups and improvements in classification performance are achieved. In particular, for a remote sensing application such as PolSAR image classification, it is advantageous to exploit the location-based information from the labeled training data.
    Each of the developed techniques provides its stand-alone contribution from a different viewpoint to improve land use and land cover classification. The introduction of a new feature for better discrimination is independent of the underlying classification algorithms used. The CNBC topology is applicable to various classification problems no matter how the underlying data have been acquired, for example in the case of remote sensing data. Moreover, the semi-supervised learning approach tackles the challenge of utilizing unlabeled data. By combining these techniques for superior feature description with advanced machine-learning techniques exploiting classifier topologies and data, further contributions to polarimetric SAR image classification are made. According to the performance evaluations conducted, including visual and numerical assessments, the proposed and investigated techniques showed valuable improvements and are able to aid the analysis and interpretation of PolSAR image data. Due to the generic nature of the developed techniques, their application to other remote sensing data will require only minor adjustments.
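    The ensemble-based self-training mentioned above can be pictured with a minimal sketch like the following, assuming generic scikit-learn base classifiers, a fixed agreement threshold, and synthetic features; it is not the thesis' exact procedure.

```python
# Hedged sketch: ensemble-based self-training for semi-supervised classification.
# The base learners, the 0.9 confidence threshold, and the synthetic features
# are illustrative assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X_labeled = rng.normal(size=(200, 8))          # e.g. per-pixel PolSAR features
y_labeled = (X_labeled[:, 0] > 0).astype(int)  # toy ground truth
X_unlabeled = rng.normal(size=(2000, 8))

ensemble = [
    RandomForestClassifier(n_estimators=100, random_state=0),
    LogisticRegression(max_iter=1000),
    SVC(probability=True, random_state=0),
]

X_train, y_train = X_labeled.copy(), y_labeled.copy()
for _ in range(3):                              # a few self-training rounds
    if len(X_unlabeled) == 0:
        break
    for clf in ensemble:
        clf.fit(X_train, y_train)
    # average the class probabilities over the ensemble
    probs = np.mean([clf.predict_proba(X_unlabeled) for clf in ensemble], axis=0)
    pseudo = probs.argmax(axis=1)
    confident = probs.max(axis=1) > 0.9         # keep only high-agreement samples
    if not confident.any():
        break
    X_train = np.vstack([X_train, X_unlabeled[confident]])
    y_train = np.concatenate([y_train, pseudo[confident]])
    X_unlabeled = X_unlabeled[~confident]       # remove pseudo-labeled samples
```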

    Classification of Compact Polarimetric Synthetic Aperture Radar Images

    The RADARSAT Constellation Mission (RCM) was launched in June 2019. RCM, in addition to dual-polarization (DP) and fully quad-polarimetric (QP) imaging modes, provides compact polarimetric (CP) mode data. A CP synthetic aperture radar (SAR) is a coherent DP system in which a single circular polarization is transmitted, followed by reception in two orthogonal linear polarizations. A CP SAR fully characterizes the backscattered field using the Stokes parameters, or equivalently, the complex coherence matrix. This is the main advantage of a CP SAR over the traditional (non-coherent) DP SAR. Therefore, designing scene segmentation and classification methods using CP complex coherence matrix data is advocated in this thesis. Scene classification of remotely captured images is an important task in monitoring the Earth's surface. The high-resolution RCM CP SAR data can be used for land cover classification as well as sea-ice mapping. Mapping sea ice formed in ocean bodies is important for ship navigation and climate change modeling. The Canadian Ice Service (CIS) has expert ice analysts who manually generate sea-ice maps of Arctic areas on a daily basis. An automated sea-ice mapping process that can provide detailed yet reliable maps of ice types and water is desirable for CIS. In addition to linear DP SAR data in ScanSAR mode (500 km), RCM wide-swath CP data (350 km) can also be used in operational sea-ice mapping of the vast expanses of the Arctic. The smaller swath coverage of QP SAR data (50 km) is the reason its use is limited for sea-ice mapping. This thesis involves the design and development of CP classification methods that consist of two steps: an unsupervised segmentation of CP data to identify homogeneous regions (superpixels) and a labeling step in which a ground truth label is assigned to each superpixel. An unsupervised segmentation algorithm is developed based on the existing Iterative Region Growing using Semantics (IRGS) for CP data and is called CP-IRGS. The feature model and spatial context model energy terms in CP-IRGS are developed based on the statistical properties of CP complex coherence matrix data. The superpixels generated by CP-IRGS are then used in a graph-based labeling method that incorporates the global spatial correlation among superpixels in CP data. The classifications of sea-ice and land cover types using test scenes indicate that (a) CP scenes provide improved sea-ice classification compared to the linear DP scenes, (b) CP-IRGS performs more accurate segmentation than that using only CP channel intensity images, and (c) using global spatial information (provided by a graph-based labeling approach) improves classification accuracy over methods that do not exploit global spatial correlation.
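    For readers unfamiliar with the CP representation mentioned here, the following minimal sketch forms the 2x2 complex coherence matrix and the Stokes vector from the two received channels of a compact-pol acquisition; the 5x5 multilook window and the sign convention of the fourth Stokes parameter are assumptions that vary between references.

```python
# Hedged sketch: coherence matrix and Stokes vector of a compact-pol
# (single circular transmit, dual linear receive) SAR acquisition.
import numpy as np
from scipy.ndimage import uniform_filter

def multilook(img, win=5):
    # spatial averaging (multilooking) applied to real and imaginary parts
    return uniform_filter(img.real, size=win) + 1j * uniform_filter(img.imag, size=win)

def coherence_and_stokes(e_rh, e_rv, win=5):
    """e_rh, e_rv: complex single-look images of the two received channels."""
    c11 = multilook(e_rh * np.conj(e_rh), win).real
    c22 = multilook(e_rv * np.conj(e_rv), win).real
    c12 = multilook(e_rh * np.conj(e_rv), win)
    s0 = c11 + c22                 # total power
    s1 = c11 - c22
    s2 = 2.0 * c12.real
    s3 = -2.0 * c12.imag           # sign depends on convention and transmit sense
    return (c11, c12, c22), (s0, s1, s2, s3)

# toy usage on random speckle-like channels
rng = np.random.default_rng(1)
shape = (64, 64)
e_rh = rng.normal(size=shape) + 1j * rng.normal(size=shape)
e_rv = rng.normal(size=shape) + 1j * rng.normal(size=shape)
(c11, c12, c22), (s0, s1, s2, s3) = coherence_and_stokes(e_rh, e_rv)
```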

    Adaptive Fuzzy Learning Superpixel Representation for PolSAR Image Classification

    The increasing applications of polarimetric synthetic aperture radar (PolSAR) image classification demand effective superpixel algorithms. Fuzzy superpixel algorithms reduce the misclassification rate by dividing pixels into superpixels, which are groups of pixels of homogeneous appearance, and undetermined pixels. However, two key issues remain to be addressed in designing a fuzzy superpixel algorithm for PolSAR image classification. First, the polarimetric scattering information, which is unique to PolSAR images, is not effectively used. Such information can be utilized to generate superpixels more suitable for PolSAR images. Second, the ratio of undetermined pixels is fixed for each image in the existing techniques, ignoring the fact that the difficulty of classifying different objects varies within an image. To address these two issues, we propose a polarimetric scattering information-based adaptive fuzzy superpixel (AFS) algorithm for PolSAR image classification. In AFS, the correlation between pixels' polarimetric scattering information is, for the first time, considered through fuzzy rough set theory to generate superpixels. This correlation is further used to dynamically and adaptively update the ratio of undetermined pixels. AFS is evaluated extensively against different evaluation metrics and compared with state-of-the-art superpixel algorithms on three PolSAR images. The experimental results demonstrate the superiority of AFS on PolSAR image classification problems.
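    The basic fuzzy-superpixel idea of leaving ambiguous pixels undetermined can be sketched as follows, using a plain fuzzy c-means membership and a fixed dominance margin; this is a generic illustration, not the AFS algorithm with its fuzzy rough set correlation and adaptive ratio.

```python
# Hedged sketch: pixels whose fuzzy membership to their best cluster is not
# clearly dominant are marked "undetermined" (-1) instead of being forced into
# a superpixel. Membership model and margin are illustrative assumptions.
import numpy as np

def fuzzy_assign(features, centers, m=2.0, margin=0.2):
    """features: (N, D) pixel features; centers: (K, D) cluster centers, K >= 2."""
    d = np.linalg.norm(features[:, None, :] - centers[None, :, :], axis=2) + 1e-12
    # standard fuzzy c-means membership: u[i, k] = 1 / sum_j (d_ik / d_ij)^(2/(m-1))
    u = 1.0 / np.sum((d[:, :, None] / d[:, None, :]) ** (2.0 / (m - 1.0)), axis=2)
    best = u.argmax(axis=1)
    sorted_u = np.sort(u, axis=1)
    dominant = (sorted_u[:, -1] - sorted_u[:, -2]) > margin   # clear winner?
    labels = np.where(dominant, best, -1)                     # -1 = undetermined
    return labels, u

# toy usage: 1000 pixels with 3 features, 4 cluster centers
rng = np.random.default_rng(2)
labels, u = fuzzy_assign(rng.normal(size=(1000, 3)), rng.normal(size=(4, 3)))
```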

    Classification of Polarimetric SAR Images Using Compact Convolutional Neural Networks

    Classification of polarimetric synthetic aperture radar (PolSAR) images is an active research area with a major role in environmental applications. The traditional Machine Learning (ML) methods proposed in this domain generally focus on utilizing highly discriminative features to improve the classification performance, but this task is complicated by the well-known "curse of dimensionality" phenomenon. Other approaches based on deep Convolutional Neural Networks (CNNs) have certain limitations and drawbacks, such as high computational complexity, an unfeasibly large training set with ground-truth labels, and special hardware requirements. In this work, to address the limitations of traditional ML and deep CNN based methods, a novel and systematic classification framework is proposed for the classification of PolSAR images, based on a compact and adaptive implementation of CNNs using a sliding-window classification approach. The proposed approach has three advantages. First, there is no requirement for an extensive feature extraction process. Second, it is computationally efficient due to the compact configurations utilized. In particular, the proposed compact and adaptive CNN model is designed to achieve the maximum classification accuracy with minimum training and computational complexity. This is of considerable importance considering the high costs involved in labelling for PolSAR classification. Finally, the proposed approach can perform classification using smaller window sizes than deep CNNs. Experimental evaluations have been performed over the four most commonly used benchmark PolSAR images: AIRSAR L-Band and RADARSAT-2 C-Band data of the San Francisco Bay and Flevoland areas. Accordingly, the best obtained overall accuracies range between 92.33% and 99.39% for these benchmark study sites.
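    A minimal sketch of a compact, patch-based CNN applied in a sliding-window fashion is given below; the 7x7 window, six input channels, and layer widths are illustrative assumptions rather than the configuration reported in the paper.

```python
# Hedged sketch: a compact patch-based CNN for sliding-window PolSAR pixel
# classification. Window size, channel count, and layer widths are assumptions.
import torch
import torch.nn as nn

class CompactCNN(nn.Module):
    def __init__(self, in_channels=6, n_classes=5):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, 20, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),                       # 7x7 -> 3x3
            nn.Conv2d(20, 40, kernel_size=3, padding=1),
            nn.ReLU(),
        )
        self.classifier = nn.Linear(40 * 3 * 3, n_classes)

    def forward(self, x):                          # x: (B, C, 7, 7)
        h = self.features(x)
        return self.classifier(h.flatten(1))

# classify pixels by sliding a 7x7 window over a (padded) multi-channel image
model = CompactCNN()
img = torch.randn(6, 128, 128)                     # toy PolSAR feature channels
pad = nn.functional.pad(img, (3, 3, 3, 3))
patches = pad.unfold(1, 7, 1).unfold(2, 7, 1)      # (6, 128, 128, 7, 7)
patches = patches.permute(1, 2, 0, 3, 4).reshape(-1, 6, 7, 7)
with torch.no_grad():
    labels = model(patches[:1024]).argmax(dim=1)   # classify a batch of patches
```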

    Deep Learning in Remote Sensing: A Comprehensive Review and List of Resources

    Central to the looming paradigm shift toward data-intensive science, machine-learning techniques are becoming increasingly important. In particular, deep learning has proven to be both a major breakthrough and an extremely powerful tool in many fields. Shall we embrace deep learning as the key to everything? Or should we resist a black-box solution? These are controversial issues within the remote-sensing community. In this article, we analyze the challenges of using deep learning for remote-sensing data analysis, review recent advances, and provide resources we hope will make deep learning in remote sensing seem ridiculously simple. More importantly, we encourage remote-sensing scientists to bring their expertise into deep learning and use it as an implicit general model to tackle unprecedented, large-scale, influential challenges, such as climate change and urbanization.

    Segmentation and Classification of Multimodal Imagery

    Segmentation and classification are two important computer vision tasks that transform input data into a compact representation that allows fast and efficient analysis. Several challenges exist in generating accurate segmentation or classification results. In a video, for example, objects often change their appearance and are partially occluded, making it difficult to delineate an object from its surroundings. This thesis proposes video segmentation and aerial image classification algorithms to address some of these problems and provide accurate results. We developed a gradient-driven three-dimensional segmentation technique that partitions a video into spatiotemporal objects. The algorithm utilizes the local gradient computed at each pixel location together with a global boundary map acquired through deep learning methods to generate initial pixel groups by traversing from low to high gradient regions. A local clustering method is then employed to refine these initial pixel groups. The refined sub-volumes in the homogeneous regions of the video are selected as initial seeds and iteratively combined with adjacent groups based on intensity similarities. The volume growth is terminated at the color boundaries of the video. The over-segments obtained from the above steps are then merged hierarchically by a multivariate approach, yielding a final segmentation map for each frame. In addition, we implemented a streaming version of the above algorithm that requires less computational memory. The results illustrate that our proposed methodology compares favorably, both qualitatively and quantitatively, in segmentation quality and computational efficiency with the latest state-of-the-art techniques. We also developed a convolutional neural network (CNN)-based method to efficiently combine information from multisensor remotely sensed images for pixel-wise semantic classification. The CNN features obtained from multiple spectral bands are fused at the initial layers of the deep neural networks, as opposed to the final layers. The early-fusion architecture has fewer parameters and thereby reduces computational time and GPU memory during training and inference. We also introduce a composite architecture that fuses features throughout the network. The methods were validated on four different datasets: ISPRS Potsdam, Vaihingen, IEEE Zeebruges, and a combined Sentinel-1/Sentinel-2 dataset. For the Sentinel-1/-2 dataset, we obtain the ground truth labels for three classes from OpenStreetMap. Results on all the images show that early fusion, specifically after layer three of the network, achieves results similar to or better than a decision-level fusion mechanism. The performance of the proposed architecture is also on par with the state-of-the-art results.
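    The difference between the early-fusion and decision-level-fusion architectures discussed above can be sketched as follows; the band counts and the tiny fully convolutional backbone are assumptions chosen only to make the contrast concrete, not the architecture evaluated in the thesis.

```python
# Hedged sketch contrasting early (input-level) fusion with decision-level
# fusion of multisensor bands for per-pixel classification.
import torch
import torch.nn as nn

def small_backbone(in_ch, n_classes):
    # tiny fully convolutional network producing per-pixel class scores
    return nn.Sequential(
        nn.Conv2d(in_ch, 32, 3, padding=1), nn.ReLU(),
        nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
        nn.Conv2d(32, n_classes, 1),
    )

optical = torch.randn(1, 4, 256, 256)   # e.g. 4 optical bands
sar     = torch.randn(1, 2, 256, 256)   # e.g. 2 SAR channels
n_classes = 3

# Early fusion: stack all bands at the input and use a single network.
early_net = small_backbone(4 + 2, n_classes)
early_scores = early_net(torch.cat([optical, sar], dim=1))

# Decision-level fusion: one network per sensor, then average the class scores.
opt_net, sar_net = small_backbone(4, n_classes), small_backbone(2, n_classes)
late_scores = 0.5 * (opt_net(optical) + sar_net(sar))
```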

    General model-based decomposition framework for polarimetric SAR images

    Polarimetric synthetic aperture radars emit a signal and measure the magnitude, phase, and polarization of the return. Polarimetric decompositions are used to extract physically meaningful attributes of the scatterers. Of these, model-based decompositions aim to model the measured data with canonical scatter types. Many advances have been made in the field of model-based decomposition, and this work is surveyed in the first portion of this dissertation. A general model-based decomposition framework (GMBDF) is established that can decompose polarimetric data with different scatter types and evaluate how well those scatter types model the data by comparing a residual term. The GMBDF solves simultaneously for all the scatter-type parameters within a given decomposition by minimizing the residual term. A decomposition with a lower residual term contains better scatter-type models for the given data. An example is worked through that compares two decompositions with different surface scatter-type models. As an application of the polarimetric decomposition analysis, a novel terrain classification algorithm for PolSAR images is proposed. In the algorithm, the results of state-of-the-art polarimetric decompositions are processed for an image. Pixels are then selected to represent different terrain classes. Distributions of the parameters of these selected pixels are determined for each class. Each pixel in the image is given a score according to how well its parameters fit the parameter distributions of each class. Based on this score, the pixel is either assigned to a predefined terrain class or labeled unclassified.
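    A minimal sketch of the model-based decomposition idea, expressing a measured coherency matrix as a nonnegative combination of canonical scatter-type matrices and scoring the fit by the leftover residual, is given below; the simplified canonical matrices and the nonnegative least-squares solver are deliberate simplifications, not the GMBDF itself.

```python
# Hedged sketch: fit a measured coherency matrix with canonical scatter-type
# matrices and use the remaining Frobenius-norm residual to judge the fit.
# The canonical matrices below are simplified placeholders.
import numpy as np
from scipy.optimize import nnls

# Simplified canonical coherency matrices (surface, double-bounce, volume).
T_surface = np.diag([1.0, 0.0, 0.0])
T_double  = np.diag([0.0, 1.0, 0.0])
T_volume  = np.diag([2.0, 1.0, 1.0]) / 4.0
models = [T_surface, T_double, T_volume]

def decompose(T_measured):
    # Stack the (real-valued here) model matrices as columns and solve a
    # nonnegative least-squares problem for the scatter-type powers.
    A = np.stack([m.ravel() for m in models], axis=1)
    powers, _ = nnls(A, T_measured.real.ravel())
    fit = sum(p * m for p, m in zip(powers, models))
    residual = np.linalg.norm(T_measured.real - fit)   # lower = better model set
    return powers, residual

# toy measured matrix dominated by surface scattering plus some volume
T_meas = 0.7 * T_surface + 0.2 * T_volume + 0.01 * np.eye(3)
powers, residual = decompose(T_meas)
```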