40 research outputs found

    Unsupervised Classification of Polarimetric SAR Images via Riemannian Sparse Coding

    Unsupervised classification plays an important role in understanding polarimetric synthetic aperture radar (PolSAR) images. One of the typical representations of PolSAR data is in the form of Hermitian positive definite (HPD) covariance matrices. Most algorithms for unsupervised classification using this representation either use statistical distribution models or adopt polarimetric target decompositions. In this paper, we propose an unsupervised classification method by introducing a sparsity-based similarity measure on HPD matrices. Specifically, we first use a novel Riemannian sparse coding scheme to represent each HPD covariance matrix as a sparse linear combination of other HPD matrices, where the sparse reconstruction loss is defined by the Riemannian geodesic distance between HPD matrices. The coefficient vectors generated by this step reflect the neighborhood structure of the HPD matrices embedded in Euclidean space and hence can be used to define a similarity measure. We apply the scheme to PolSAR data: we first oversegment the images into superpixels and represent each superpixel by an HPD matrix. These HPD matrices are then sparse coded, and the resulting sparse coefficient vectors are clustered by spectral clustering using the neighborhood matrix generated by our similarity measure. Experimental results on different fully polarimetric SAR images demonstrate the superior performance of the proposed classification approach against state-of-the-art approaches.
    This work was supported in part by the National Natural Science Foundation of China under Grant 61331016 and Grant 61271401 and in part by the National Key Basic Research and Development Program of China under Contract 2013CB733404. The work of A. Cherian was supported by the Australian Research Council Centre of Excellence for Robotic Vision under Project CE140100016.
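    The pipeline above rests on the Riemannian geodesic (affine-invariant) distance between HPD matrices. A minimal sketch of that distance and of building a similarity matrix for spectral clustering follows; note it uses the geodesic distance directly as the similarity kernel rather than the paper's sparse-coding coefficients, and all function and parameter names are illustrative assumptions:

    ```python
    import numpy as np
    from scipy.linalg import fractional_matrix_power, logm

    def airm_distance(A, B):
        """Affine-invariant Riemannian (geodesic) distance between HPD matrices:
        || log(A^{-1/2} B A^{-1/2}) ||_F."""
        A_inv_sqrt = fractional_matrix_power(A, -0.5)
        M = A_inv_sqrt @ B @ A_inv_sqrt
        return np.linalg.norm(logm(M), 'fro')

    def similarity_matrix(mats, sigma=1.0):
        """Gaussian-kernel similarity from pairwise geodesic distances,
        suitable as input to a spectral clustering routine."""
        n = len(mats)
        W = np.zeros((n, n))
        for i in range(n):
            for j in range(i + 1, n):
                d = airm_distance(mats[i], mats[j])
                W[i, j] = W[j, i] = np.exp(-d**2 / (2 * sigma**2))
        np.fill_diagonal(W, 1.0)
        return W
    ```

    The resulting matrix `W` could then be passed to any spectral clustering implementation in place of the sparsity-based neighborhood matrix described in the abstract.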

    Classification of Polarimetric SAR Images Using Compact Convolutional Neural Networks

    Classification of polarimetric synthetic aperture radar (PolSAR) images is an active research area with a major role in environmental applications. The traditional Machine Learning (ML) methods proposed in this domain generally focus on utilizing highly discriminative features to improve classification performance, but this task is complicated by the well-known "curse of dimensionality" phenomenon. Other approaches based on deep Convolutional Neural Networks (CNNs) have certain limitations and drawbacks, such as high computational complexity, an unfeasibly large training set with ground-truth labels, and special hardware requirements. In this work, to address the limitations of traditional ML and deep CNN based methods, a novel and systematic classification framework is proposed for the classification of PolSAR images, based on a compact and adaptive implementation of CNNs using a sliding-window classification approach. The proposed approach has three advantages. First, there is no requirement for an extensive feature extraction process. Second, it is computationally efficient due to its compact configuration. In particular, the proposed compact and adaptive CNN model is designed to achieve maximum classification accuracy with minimum training and computational complexity. This is of considerable importance considering the high costs involved in labelling for PolSAR classification. Finally, the proposed approach can perform classification using smaller window sizes than deep CNNs. Experimental evaluations have been performed over the four most commonly used benchmark PolSAR images: AIRSAR L-Band and RADARSAT-2 C-Band data of the San Francisco Bay and Flevoland areas. The best obtained overall accuracies range from 92.33% to 99.39% for these benchmark study sites.
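    The sliding-window scheme classifies each pixel from the small window centered on it. A minimal sketch of the patch-extraction step that feeds such a classifier (the window size and reflect padding are illustrative assumptions, not details from the paper):

    ```python
    import numpy as np

    def extract_patches(image, window=5):
        """Extract one (window x window x channels) patch per pixel,
        reflect-padding the borders so every pixel gets a full window."""
        pad = window // 2
        padded = np.pad(image, ((pad, pad), (pad, pad), (0, 0)), mode='reflect')
        H, W, C = image.shape
        patches = np.empty((H * W, window, window, C), dtype=image.dtype)
        k = 0
        for i in range(H):
            for j in range(W):
                patches[k] = padded[i:i + window, j:j + window, :]
                k += 1
        return patches
    ```

    Each patch would then be classified by the compact CNN, and the per-pixel labels reassembled into a classification map of the original image size.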

    A Comprehensive Survey of Deep Learning in Remote Sensing: Theories, Tools and Challenges for the Community

    In recent years, deep learning (DL), a re-branding of neural networks (NNs), has risen to the top in numerous areas, namely computer vision (CV), speech recognition, natural language processing, etc. Although remote sensing (RS) possesses a number of unique challenges, primarily related to sensors and applications, it inevitably draws from many of the same theories as CV, e.g., statistics, fusion, and machine learning, to name a few. This means that the RS community should be aware of, if not at the leading edge of, advancements like DL. Herein, we provide the most comprehensive survey of state-of-the-art RS DL research. We also review recent new developments in the DL field that can be used in DL for RS. Namely, we focus on theories, tools and challenges for the RS community. Specifically, we focus on unsolved challenges and opportunities as they relate to (i) inadequate data sets, (ii) human-understandable solutions for modelling physical phenomena, (iii) Big Data, (iv) non-traditional heterogeneous data sources, (v) DL architectures and learning algorithms for spectral, spatial and temporal data, (vi) transfer learning, (vii) an improved theoretical understanding of DL systems, (viii) high barriers to entry, and (ix) training and optimizing the DL.
    Comment: 64 pages, 411 references. To appear in Journal of Applied Remote Sensing.

    Statistical modeling of polarimetric SAR data: a survey and challenges

    Knowledge of the exact statistical properties of the signal plays an important role in applications of Polarimetric Synthetic Aperture Radar (PolSAR) data. In the last three decades, considerable research effort has been devoted to finding accurate statistical models for PolSAR data, and a number of distributions have been proposed. This paper provides a survey of these models in order to highlight their differences and enable comparison among them. The focus is on texture models, which capture the non-Gaussian behavior observed in high-resolution data while retaining a compact mathematical form. Probability density functions for single-look and multilook data are reviewed, along with the advantages and applicable contexts of those models. Finally, open challenges in the statistical analysis of PolSAR data are discussed.
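    As one concrete example of the texture (product) models surveyed: for a single channel, K-distributed multilook intensity arises as the product of a Gamma-distributed texture variable and L-look Gamma speckle. A minimal simulation sketch under that product model (parameter names are illustrative):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def k_distributed_intensity(n, L=4, nu=8.0, mean=1.0):
        """Sample n multilook intensities under the product model:
        texture ~ Gamma(shape=nu, scale=mean/nu)  (unit-mean times `mean`),
        speckle ~ Gamma(shape=L,  scale=1/L)      (unit-mean L-look speckle).
        Their product is K-distributed."""
        texture = rng.gamma(shape=nu, scale=mean / nu, size=n)
        speckle = rng.gamma(shape=L, scale=1.0 / L, size=n)
        return texture * speckle
    ```

    Smaller values of the texture parameter `nu` produce heavier tails (stronger non-Gaussianity), while `nu` → ∞ recovers pure Gamma-distributed speckle.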

    Radar satellite imagery for humanitarian response. Bridging the gap between technology and application

    This work deals with radar satellite imagery and its potential to assist humanitarian operations. As the number of displaced people increases annually, both hosting countries and relief organizations face new challenges, often related to unclear situations and a lack of information on the number and location of people in need, as well as on their environments. Numerous studies have demonstrated that methods of earth observation can deliver this important information for the management of crises, the organization of refugee camps, and the mapping of environmental resources and natural hazards. However, most of these studies make use of high-resolution optical imagery, while the role of radar satellites is widely neglected. At the same time, radar sensors have characteristics that make them highly suitable for humanitarian response, first and foremost their ability to capture images through cloud cover and at night. Consequently, they potentially allow a quicker response in emergencies than optical imagery. This work demonstrates the currently unused potential of radar imagery for assisting humanitarian operations through case studies that address the information needs of specific emergency situations. They are thematically grouped into topics related to population, natural hazards, and the environment. Furthermore, the case studies address different levels of scientific objectives. The main intention is the development of innovative techniques of digital image processing and geospatial analysis in answer to identified research gaps. For this reason, novel approaches are presented for the mapping of refugee camps and urban areas, the estimation of biomass, and environmental impact assessment. Secondly, existing methods developed for radar imagery are applied, refined, or adapted to specifically demonstrate their benefit in a humanitarian context.
    This is done for the monitoring of camp growth, the assessment of damage in cities affected by civil war, and the derivation of areas vulnerable to flooding or sea-surface changes. Lastly, to foster the integration of radar images into existing operational workflows of humanitarian data analysis, technically simple and easily adaptable approaches are suggested for the mapping of rural areas for vaccination campaigns, the identification of changes within and around refugee camps, and the assessment of suitable locations for groundwater drilling. While the studies differ in technical complexity and novelty, they all show that radar imagery can contribute substantially to the provision of the variety of information required to make solid decisions and to provide help effectively in humanitarian operations. This work furthermore demonstrates that radar images are more than just an alternative image source for areas heavily affected by cloud cover. What makes them valuable is their information content regarding the characteristics of surfaces, such as shape, orientation, roughness, size, height, moisture, or conductivity. All of these give decisive insights into man-made and natural environments in emergency situations and cannot be provided by optical images. Finally, the findings of the case studies are put into a larger context by discussing the observed potential and limitations of the presented approaches. The major challenges that need to be addressed to make radar imagery more useful in humanitarian operations are summarized in light of upcoming technical developments. New radar satellites and technological progress in the fields of machine learning and cloud computing will bring new opportunities. At the same time, this work demonstrates the large need for further research, as well as for collaboration and the transfer of knowledge and experience between scientists, users, and relief workers in the field.
    It is the first extensive scientific compilation on this topic and a first step toward a sustainable integration of radar imagery into operational frameworks to assist humanitarian work and to contribute to a more efficient provision of help to those in need.

    Crop monitoring and yield estimation using polarimetric SAR and optical satellite data in southwestern Ontario

    Optical satellite data have proven to be an efficient source for extracting crop information and monitoring crop growth conditions over large areas. In local- to subfield-scale crop monitoring studies, both high spatial resolution and high temporal resolution of the image data are important. However, the acquisition of optical data is limited by frequent cloud contamination. This thesis explores the potential of polarimetric Synthetic Aperture Radar (SAR) satellite data and of a spatio-temporal data fusion approach for crop monitoring and yield estimation in southwestern Ontario. First, the sensitivity of 16 parameters derived from C-band Radarsat-2 polarimetric SAR data to crop height and fractional vegetation cover (FVC) was investigated. The results show that SAR backscatter is affected by many factors unrelated to the crop canopy, such as the incidence angle and the soil background, and that the degree of sensitivity varies with crop type, growth stage, and the polarimetric SAR parameter. Second, the Minimum Noise Fraction (MNF) transformation was, for the first time, applied to multi-temporal Radarsat-2 polarimetric SAR data for cropland mapping with a random forest classifier. An overall classification accuracy of 95.89% was achieved using the MNF transformation of the multi-temporal coherency matrices acquired from July to November. Then, a spatio-temporal data fusion method was developed to generate Normalized Difference Vegetation Index (NDVI) time series with both high spatial and high temporal resolution in heterogeneous regions using Landsat and MODIS imagery; the proposed method outperforms two other widely used methods. Finally, an improved crop phenology detection method was proposed, and the phenology information was forced into the Simple Algorithm for Yield Estimation (SAFY) model to estimate crop biomass and yield.
    Compared with the SAFY model without the remotely sensed phenology and with a simple light use efficiency (LUE) model, the SAFY model incorporating the remotely sensed phenology improves the accuracy of biomass estimation by about 4% in relative Root Mean Square Error (RRMSE). The studies in this thesis improve the ability to monitor crop growth status and production at the subfield scale.
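    The NDVI time series at the heart of the fusion step is computed per pixel from the red and near-infrared reflectance bands. A minimal sketch (the small epsilon guarding against division by zero is an implementation assumption):

    ```python
    import numpy as np

    def ndvi(nir, red, eps=1e-9):
        """Normalized Difference Vegetation Index: (NIR - Red) / (NIR + Red),
        computed element-wise over reflectance arrays; values lie in [-1, 1]."""
        nir = np.asarray(nir, dtype=np.float64)
        red = np.asarray(red, dtype=np.float64)
        return (nir - red) / (nir + red + eps)
    ```

    Dense vegetation gives values near 1 (high NIR, low red reflectance), while bare soil and water give values near zero or below.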

    Convolutional Neural Networks - Generalizability and Interpretations


    Dual and Single Polarized SAR Image Classification Using Compact Convolutional Neural Networks

    Accurate land use/land cover classification of synthetic aperture radar (SAR) images plays an important role in environmental, economic, and nature-related research areas and applications. When fully polarimetric SAR data are not available, single- or dual-polarization SAR data can also be used, albeit with certain difficulties. For instance, traditional Machine Learning (ML) methods generally focus on finding more discriminative features to overcome the lack of information due to single- or dual-polarimetry. Besides conventional ML approaches, studies proposing deep convolutional neural networks (CNNs) come with limitations and drawbacks, such as the requirement of massive amounts of training data and of special hardware for implementing complex deep networks. In this study, we propose a systematic approach based on sliding-window classification with compact and adaptive CNNs that overcomes these drawbacks while achieving state-of-the-art performance for land use/land cover classification. The proposed approach avoids the need for feature extraction and selection entirely and performs classification directly over SAR intensity data. Furthermore, unlike deep CNNs, it requires neither dedicated hardware nor a large amount of data with ground-truth labels. The proposed systematic approach is designed to achieve maximum classification accuracy on single- and dual-polarized intensity data with minimum human interaction. Moreover, due to its compact configuration, the proposed approach can process small patches, which is not possible with deep learning solutions; this ability significantly improves the detail in segmentation masks. An extensive set of experiments over two benchmark SAR datasets confirms the superior classification performance and efficient computational complexity of the proposed approach compared to competing methods.
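    A common preprocessing step before feeding linear SAR backscatter intensity to a classifier (assumed here for illustration; the abstract does not specify the normalization used) is conversion to the decibel scale, which compresses the heavy-tailed intensity distribution:

    ```python
    import numpy as np

    def intensity_to_db(intensity, floor=1e-6):
        """Convert linear SAR backscatter intensity to decibels,
        clipping near-zero values to `floor` to avoid log of zero."""
        intensity = np.asarray(intensity, dtype=np.float64)
        return 10.0 * np.log10(np.maximum(intensity, floor))
    ```

    The dB image (optionally rescaled to a fixed range) is then tiled into per-pixel windows for the sliding-window classifier.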