A Comprehensive Survey of Deep Learning in Remote Sensing: Theories, Tools and Challenges for the Community
In recent years, deep learning (DL), a re-branding of neural networks (NNs),
has risen to the top in numerous areas, such as computer vision (CV), speech
recognition, and natural language processing. While remote sensing (RS)
possesses a number of unique challenges, primarily related to sensors and
applications, it inevitably draws on many of the same theories as CV; e.g.,
statistics, fusion, and machine learning, to name a few. This means that the RS
community should be aware of, if not at the leading edge of, advancements
like DL. Herein, we provide the most comprehensive survey of state-of-the-art
RS DL research. We also review recent new developments in the DL field that can
be used in DL for RS. Namely, we focus on theories, tools and challenges for
the RS community. Specifically, we focus on unsolved challenges and
opportunities as they relate to (i) inadequate data sets, (ii)
human-understandable solutions for modelling physical phenomena, (iii) Big
Data, (iv) non-traditional heterogeneous data sources, (v) DL architectures and
learning algorithms for spectral, spatial and temporal data, (vi) transfer
learning, (vii) an improved theoretical understanding of DL systems, (viii)
high barriers to entry, and (ix) training and optimizing DL systems.
Comment: 64 pages, 411 references. To appear in Journal of Applied Remote Sensing
Sea Ice Extraction via Remote Sensed Imagery: Algorithms, Datasets, Applications and Challenges
Deep learning, a dominant technique in artificial intelligence, has
fundamentally changed image understanding over the past decade. As a
consequence, the sea ice extraction (SIE) problem has entered a new era. We
present a comprehensive review of four important aspects of SIE: algorithms,
datasets, applications, and future trends. Our review covers research
published from 2016 to the present, with a specific focus on deep
learning-based approaches in the last five years. We divide the related
algorithms into three categories: classical image segmentation approaches,
machine learning-based approaches, and deep learning-based methods. We review
the accessible ice datasets, including SAR-based datasets, optical-based
datasets, and others. The applications are presented in four aspects:
climate research, navigation, geographic information systems (GIS)
production, and others. The review also provides insightful observations and
inspiring future research directions.
Comment: 24 pages, 6 figures
Unsupervised Classification of Polarimetric SAR Images via Riemannian Sparse Coding
Unsupervised classification plays an important role in understanding polarimetric synthetic aperture radar (PolSAR) images. One of the typical representations of PolSAR data is in the form of Hermitian positive definite (HPD) covariance matrices. Most algorithms for unsupervised classification using this representation either use statistical distribution models or adopt polarimetric target decompositions. In this paper, we propose an unsupervised classification method by introducing a sparsity-based similarity measure on HPD matrices. Specifically, we first use a novel Riemannian sparse coding scheme for representing each HPD covariance matrix as a sparse linear combination of other HPD matrices, where the sparse reconstruction loss is defined by the Riemannian geodesic distance between HPD matrices. The coefficient vectors generated by this step reflect the neighborhood structure of the HPD matrices embedded in the Euclidean space and hence can be used to define a similarity measure. We apply the scheme to PolSAR data, in which we first oversegment the images into superpixels, followed by representing each superpixel by an HPD matrix. These HPD matrices are then sparse coded, and the resulting sparse coefficient vectors are clustered by spectral clustering using the neighborhood matrix generated by our similarity measure. The experimental results on different fully polarimetric SAR images demonstrate the superior performance of the proposed classification approach against the state-of-the-art approaches.
This work was supported in part
by the National Natural Science Foundation of China under Grant 61331016
and Grant 61271401 and in part by the National Key Basic Research and
Development Program of China under Contract 2013CB733404. The work
of A. Cherian was supported by the Australian Research Council Centre of
Excellence for Robotic Vision under Project CE140100016.
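The clustering pipeline the abstract describes hinges on the Riemannian (geodesic) distance between HPD covariance matrices. As a minimal sketch - not the authors' sparse-coding implementation - the following uses the affine-invariant geodesic distance directly to build an affinity matrix for a spectral bipartition of synthetic HPD matrices; the synthetic data, the 4x scaling between the two classes, and the bandwidth choice are all illustrative assumptions.

```python
import numpy as np
from scipy.linalg import logm, sqrtm, inv

def airm_distance(A, B):
    """Affine-invariant Riemannian (geodesic) distance between HPD matrices:
    d(A, B) = || logm(A^{-1/2} B A^{-1/2}) ||_F."""
    As = inv(sqrtm(A))
    return np.linalg.norm(logm(As @ B @ As), "fro")

def random_hpd(n, rng):
    """Random n x n Hermitian positive definite matrix."""
    X = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    return X @ X.conj().T + n * np.eye(n)

rng = np.random.default_rng(0)
# Two synthetic "classes" of 3x3 covariances, stand-ins for the
# per-superpixel PolSAR covariance matrices in the abstract.
base = random_hpd(3, rng)
mats = [base + 0.05 * random_hpd(3, rng) for _ in range(10)] \
     + [4.0 * base + 0.05 * random_hpd(3, rng) for _ in range(10)]

# Pairwise geodesic distances -> Gaussian affinity matrix.
m = len(mats)
D = np.zeros((m, m))
for i in range(m):
    for j in range(i + 1, m):
        D[i, j] = D[j, i] = airm_distance(mats[i], mats[j])
W = np.exp(-D**2 / (2 * np.median(D[D > 0]) ** 2))

# Spectral bipartition via the Fiedler vector of the symmetric
# normalized Laplacian  I - Dg^{-1/2} W Dg^{-1/2}.
dinv = 1.0 / np.sqrt(W.sum(axis=1))
Lap = np.eye(m) - (W * dinv[:, None]) * dinv[None, :]
_, eigvecs = np.linalg.eigh(Lap)
labels = (eigvecs[:, 1] > 0).astype(int)
```

The paper's method replaces this plain geodesic affinity with one derived from Riemannian sparse-coding coefficients, but the geodesic distance and the spectral clustering step are the shared backbone.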
Image Restoration for Remote Sensing: Overview and Toolbox
Remote sensing provides valuable information about objects or areas from a
distance in either active (e.g., RADAR and LiDAR) or passive (e.g.,
multispectral and hyperspectral) modes. The quality of data acquired by
remotely sensed imaging sensors (both active and passive) is often degraded by
a variety of noise types and artifacts. Image restoration, which is a vibrant
field of research in the remote sensing community, is the task of recovering
the true unknown image from the degraded observed image. Each imaging sensor
induces unique noise types and artifacts into the observed image. This fact has
led to the expansion of restoration techniques along different paths according to
each sensor type. This review paper brings together the advances of image
restoration techniques, with a particular focus on synthetic aperture radar and
hyperspectral images as the most active sub-fields of image restoration in the
remote sensing community. We, therefore, provide a comprehensive,
discipline-specific starting point for researchers at different levels (i.e.,
students, researchers, and senior researchers) who wish to investigate the
vibrant topic of data restoration by supplying sufficient detail and
references. Additionally, this review paper is accompanied by a toolbox that
provides a platform to encourage interested students and researchers in the
field to further explore the restoration techniques and fast-forward the
community. The toolbox is provided at https://github.com/ImageRestorationToolbox.
Comment: This paper is under review in GRS
A Tutorial on Speckle Reduction in Synthetic Aperture Radar Images
Speckle is a granular disturbance, usually modeled as multiplicative noise, that affects synthetic aperture radar (SAR) images, as well as all coherent images. Over the last three decades, several methods have been proposed for the reduction of speckle, or despeckling, in SAR images. The goal of this paper is to make a comprehensive review of despeckling methods since their birth, over thirty years ago, highlighting trends and changing approaches over the years. The concept of fully developed speckle is explained. Drawbacks of homomorphic filtering are pointed out. Assets of multiresolution despeckling, as opposed to spatial-domain despeckling, are highlighted. Advantages of undecimated, or stationary, wavelet transforms over decimated ones are also discussed. Bayesian estimators and probability density function (pdf) models in both spatial and multiresolution domains are reviewed. Scale-space varying pdf models, as opposed to scale varying models, are promoted. Promising methods following non-Bayesian approaches, like nonlocal (NL) filtering and total variation (TV) regularization, are reviewed and compared to spatial- and wavelet-domain Bayesian filters. Both established and new trends for the assessment of despeckling are presented. A few experiments on simulated data and real COSMO-SkyMed SAR images highlight, on one side, the cost-performance tradeoff of the different methods and, on the other side, the effectiveness of solutions purposely designed for SAR heterogeneity and not fully developed speckle. Eventually, upcoming methods based on new concepts of signal processing, like compressive sensing, are foreseen as a new generation of despeckling, after spatial-domain and multiresolution-domain methods.
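The tutorial's starting point - speckle as multiplicative noise, and the homomorphic filtering whose drawbacks it discusses - can be illustrated with a toy example. This sketch is not any method from the paper: a crude moving-average filter stands in for a real despeckling filter, and the scene, number of looks, and window size are invented for illustration. It shows the log transform turning multiplicative gamma speckle into additive noise, plus the mean-bias correction digamma(L) - log(L) that the log of gamma speckle requires.

```python
import numpy as np
from scipy.ndimage import uniform_filter
from scipy.special import digamma

rng = np.random.default_rng(1)

# Toy piecewise-constant reflectivity scene: two regions, R = 1 and R = 4.
scene = np.ones((64, 64))
scene[:, 32:] = 4.0

# Fully developed speckle in an L-look intensity image is commonly modeled
# as unit-mean multiplicative gamma noise: I = R * n, n ~ Gamma(L, 1/L).
L = 4
speckle = rng.gamma(shape=L, scale=1.0 / L, size=scene.shape)
noisy = scene * speckle

# Homomorphic approach: the log turns multiplicative noise additive, an
# additive-noise filter is applied (here a 7x7 moving average as a crude
# stand-in), and exp maps back. The log of gamma speckle has mean
# digamma(L) - log(L) (negative), which must be compensated to avoid a
# radiometric bias -- one of the drawbacks of homomorphic filtering that
# the tutorial points out.
log_img = np.log(noisy)
smoothed = uniform_filter(log_img, size=7)
despeckled = np.exp(smoothed - (digamma(L) - np.log(L)))
```

With the bias term included, the two regions recover their original mean reflectivities; without it, the homomorphic output is systematically too dark.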
Adaptive Fuzzy Learning Superpixel Representation for PolSAR Image Classification
The increasing applications of polarimetric synthetic aperture radar (PolSAR) image classification demand effective superpixel algorithms. Fuzzy superpixel algorithms reduce the misclassification rate by dividing pixels into superpixels, which are groups of pixels of homogeneous appearance, and undetermined pixels. However, two key issues remain to be addressed in designing a fuzzy superpixel algorithm for PolSAR image classification. First, the polarimetric scattering information, which is unique to PolSAR images, is not effectively used. Such information can be utilized to generate superpixels more suitable for PolSAR images. Second, the ratio of undetermined pixels is fixed for each image in the existing techniques, ignoring the fact that the difficulty of classifying different objects varies within an image. To address these two issues, we propose a polarimetric scattering information-based adaptive fuzzy superpixel (AFS) algorithm for PolSAR image classification. In AFS, the correlation between pixels' polarimetric scattering information is, for the first time, considered through fuzzy rough set theory to generate superpixels. This correlation is further used to dynamically and adaptively update the ratio of undetermined pixels. AFS is evaluated extensively against different evaluation metrics and compared with state-of-the-art superpixel algorithms on three PolSAR images. The experimental results demonstrate the superiority of AFS on PolSAR image classification problems.
Breaking new ground in mapping human settlements from space - The Global Urban Footprint
Today 7.2 billion people inhabit the Earth and by 2050 this number will have
risen to around nine billion, of which about 70 percent will be living in
cities. Hence, it is essential to understand the drivers, dynamics, and impacts
of human settlement development. A key component in this context is the
availability of an up-to-date and spatially consistent map of the location and
distribution of human settlements. It is here that the Global Urban Footprint
(GUF) raster map can make a valuable contribution. The new global GUF binary
settlement mask features an unprecedented spatial resolution of 0.4 arcsec,
providing - for the first time - a complete picture of the
entirety of urban and rural settlements. The GUF has been derived by means of a
fully automated processing framework - the Urban Footprint Processor (UFP) -
that was used to analyze a global coverage of more than 180,000 TanDEM-X and
TerraSAR-X radar images with 3m ground resolution collected in 2011-2012.
Various quality assessment studies to determine the absolute GUF accuracy based
on ground truth data on the one hand and the relative accuracies compared to
established settlements maps on the other hand, clearly indicate the added
value of the new global GUF layer, in particular with respect to the
representation of rural settlement patterns. Generally, the GUF layer achieves
an overall absolute accuracy of about 85%, with observed minima around 65%
and maxima around 98%. The GUF will be provided openly and free of charge for
any scientific use at full resolution, and for any non-profit (but also
non-scientific) use in a generalized version of 2.8 arcsec.
The new GUF layer can thereby be expected to break new ground with respect
to the analysis of global urbanization and peri-urbanization patterns,
population estimation, and vulnerability assessment.
Radar satellite imagery for humanitarian response: Bridging the gap between technology and application
This work deals with radar satellite imagery and its potential to assist humanitarian operations. As the number of displaced people increases annually, both hosting countries and relief organizations face new challenges, which are often related to unclear situations and a lack of information on the number and location of people in need, as well as their environments. Numerous studies have demonstrated that methods of earth observation can deliver this important information for the management of crises, the organization of refugee camps, and the mapping of environmental resources and natural hazards. However, most of these studies make use of high-resolution optical imagery, while the role of radar satellites is widely neglected. At the same time, radar sensors have characteristics which make them highly suitable for humanitarian response, first and foremost their ability to capture images through cloud cover and at night. Consequently, they potentially allow a quicker response in cases of emergency than optical imagery.
This work demonstrates the currently unused potential of radar imagery for the assistance of humanitarian operations through case studies which cover the information needs of specific emergency situations. They are thematically grouped into topics related to population, natural hazards, and the environment. Furthermore, the case studies address different levels of scientific objectives: the main intention is the development of innovative techniques of digital image processing and geospatial analysis in response to identified research gaps. For this reason, novel approaches are presented for the mapping of refugee camps and urban areas, the estimation of biomass, and environmental impact assessment. Secondly, existing methods developed for radar imagery are applied, refined, or adapted to specifically demonstrate their benefit in a humanitarian context. This is done for the monitoring of camp growth, the assessment of damage in cities affected by civil war, and the derivation of areas vulnerable to flooding or sea-surface changes. Lastly, to foster the integration of radar images into existing operational workflows of humanitarian data analysis, technically simple and easily adaptable approaches are suggested for the mapping of rural areas for vaccination campaigns, the identification of changes within and around refugee camps, and the assessment of suitable locations for groundwater drilling.
While the studies feature different levels of technical complexity and novelty, they all show that radar imagery can contribute substantially to the provision of the variety of information required to make solid decisions and to provide help effectively in humanitarian operations. This work furthermore demonstrates that radar images are more than just an alternative image source for areas heavily affected by cloud cover. In fact, what makes them valuable is their information content regarding the characteristics of surfaces, such as shape, orientation, roughness, size, height, moisture, or conductivity. All of these give decisive insights into man-made and natural environments in emergency situations that cannot be provided by optical images.
Finally, the findings of the case studies are put into a larger context, discussing the observed potential and limitations of the presented approaches. The major challenges are summarized which need to be addressed, in the context of upcoming technical developments, to make radar imagery more useful in humanitarian operations. New radar satellites and technological progress in the fields of machine learning and cloud computing will bring new opportunities. At the same time, this work demonstrates the large need for further research, as well as for collaboration and the transfer of knowledge and experience between scientists, users, and relief workers in the field. It is the first extensive scientific compilation of this topic and the first step toward a sustainable integration of radar imagery into operational frameworks to assist humanitarian work and to contribute to a more efficient provision of help to those in need.