
    Change Diffusion: Change Detection Map Generation Based on Difference-Feature Guided DDPM

    Full text link
    Deep learning (DL) approaches based on purely CNN-based or Transformer-based networks have demonstrated promising results in bitemporal change detection (CD). However, their performance is limited by insufficient contextual information aggregation, as they struggle to fully capture the implicit contextual dependency relationships among feature maps at different levels. In addition, researchers have used pre-trained denoising diffusion probabilistic models (DDPMs) to train lightweight CD classifiers. Nevertheless, training a DDPM to generate intricately detailed, multi-channel remote sensing images requires months of training time and a substantial volume of unlabeled remote sensing data, making it significantly more complex than generating a single-channel change map. To overcome these challenges, we propose a novel end-to-end DDPM-based architecture called the change-aware diffusion model (CADM), which can be trained quickly on a limited annotated dataset. Furthermore, we introduce dynamic difference conditional encoding to enhance step-wise regional attention in the DDPM for bitemporal images in CD datasets. This method establishes state-adaptive conditions for each sampling step, emphasizing the two main innovations of our model: 1) its end-to-end nature and 2) difference conditional encoding. We evaluate CADM on four remote sensing CD tasks with different ground scenarios, including CDD, WHU, LEVIR, and GVLM. Experimental results demonstrate that CADM significantly outperforms state-of-the-art methods, indicating the generalization ability and effectiveness of the proposed model.
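    The abstract above does not include implementation details, but the core idea of conditioning a DDPM's denoiser on a bitemporal difference signal can be illustrated with a small sketch. The tiny network, the concatenation-based conditioning, and the static difference map below are illustrative assumptions (the paper's dynamic, state-adaptive encoding is more involved), not the authors' code.

```python
# Minimal sketch (not the authors' code) of a single DDPM reverse step whose
# denoiser is conditioned on a bitemporal difference feature, in the spirit of
# the difference conditional encoding described above.
import torch
import torch.nn as nn

class TinyConditionalDenoiser(nn.Module):
    """Predicts noise for a 1-channel change map, conditioned on a difference map."""
    def __init__(self, hidden=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1 + 1, hidden, 3, padding=1), nn.ReLU(),   # noisy map + |I1 - I2| condition
            nn.Conv2d(hidden, hidden, 3, padding=1), nn.ReLU(),
            nn.Conv2d(hidden, 1, 3, padding=1),
        )

    def forward(self, x_t, diff_cond):
        return self.net(torch.cat([x_t, diff_cond], dim=1))

@torch.no_grad()
def ddpm_reverse_step(eps_model, x_t, t, diff_cond, betas):
    """One ancestral sampling step x_t -> x_{t-1} with difference conditioning."""
    beta_t = betas[t]
    alpha_t = 1.0 - beta_t
    alpha_bar_t = torch.prod(1.0 - betas[: t + 1])
    eps = eps_model(x_t, diff_cond)
    mean = (x_t - beta_t / torch.sqrt(1.0 - alpha_bar_t) * eps) / torch.sqrt(alpha_t)
    noise = torch.randn_like(x_t) if t > 0 else torch.zeros_like(x_t)
    return mean + torch.sqrt(beta_t) * noise

# Toy usage: bitemporal grayscale patches produce the conditioning difference map.
betas = torch.linspace(1e-4, 0.02, 100)
img_t1, img_t2 = torch.rand(1, 1, 64, 64), torch.rand(1, 1, 64, 64)
diff_cond = (img_t1 - img_t2).abs()
model = TinyConditionalDenoiser()
x = torch.randn(1, 1, 64, 64)
for t in reversed(range(len(betas))):
    x = ddpm_reverse_step(model, x, t, diff_cond, betas)
```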

    HED-UNet: Combined Segmentation and Edge Detection for Monitoring the Antarctic Coastline

    Full text link
    Deep learning-based coastline detection algorithms have begun to outshine traditional statistical methods in recent years. However, they are usually trained only as single-purpose models, either to segment land and water or to delineate the coastline. In contrast, a human annotator will usually keep a mental map of both segmentation and delineation when performing manual coastline detection. To account for this task duality, we devise a new deep learning model that unites the two approaches. Taking inspiration from the main building blocks of a semantic segmentation framework (UNet) and an edge detection framework (HED), we combine both tasks in a natural way. Training is made efficient by employing deep supervision on side predictions at multiple resolutions. Finally, a hierarchical attention mechanism is introduced to adaptively merge these multiscale predictions into the final model output. The advantages of this approach over other traditional and deep learning-based methods for coastline detection are demonstrated on a dataset of Sentinel-1 imagery covering parts of the Antarctic coast, where coastline detection is notoriously difficult. An implementation of our method is available at \url{https://github.com/khdlr/HED-UNet}. Comment: This work has been accepted by IEEE TGRS for publication. Copyright may be transferred without notice, after which this version may no longer be accessible.
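    As a rough illustration of the hierarchical attention idea mentioned above (adaptively merging multiscale side predictions into one output), the following sketch shows a pixel-wise attention over scales. The module name, the 1x1-convolution attention head, and the toy tensors are assumptions for illustration, not the published HED-UNet implementation (see the linked repository for that).

```python
# Minimal sketch of merging multi-resolution side predictions with a learned,
# pixel-wise attention over scales. Names and shapes are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F

class HierarchicalAttentionMerge(nn.Module):
    """Fuses side predictions from several decoder levels into one output map."""
    def __init__(self, num_scales):
        super().__init__()
        # One attention logit per scale, predicted from the stacked side outputs.
        self.attn = nn.Conv2d(num_scales, num_scales, kernel_size=1)

    def forward(self, side_preds, out_size):
        # side_preds: list of (B, 1, h_i, w_i) logits at decreasing resolutions.
        upsampled = [F.interpolate(p, size=out_size, mode="bilinear",
                                   align_corners=False) for p in side_preds]
        stacked = torch.cat(upsampled, dim=1)           # (B, S, H, W)
        weights = torch.softmax(self.attn(stacked), 1)  # pixel-wise weights over scales
        return (weights * stacked).sum(dim=1, keepdim=True)

# Toy usage with three side predictions (e.g. segmentation logits).
sides = [torch.randn(2, 1, 64, 64), torch.randn(2, 1, 32, 32), torch.randn(2, 1, 16, 16)]
merge = HierarchicalAttentionMerge(num_scales=3)
fused = merge(sides, out_size=(64, 64))   # (2, 1, 64, 64)
```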

    Predicting forest cover in distinct ecosystems: the potential of multi-source sentinel-1 and -2 data fusion

    Get PDF
    The fusion of microwave and optical data sets is expected to provide great potential for the derivation of forest cover around the globe. As Sentinel-1 and Sentinel-2 are now both operating in twin mode, they can provide an unprecedented data source for building dense spatial and temporal high-resolution time series across a variety of wavelengths. This study investigates (i) the ability of the individual sensors and (ii) their joint potential to delineate forest cover for study sites in two highly varied landscapes located in Germany (temperate dense mixed forests) and South Africa (open savanna woody vegetation and forest plantations). We used multi-temporal Sentinel-1 and single time steps of Sentinel-2 data in combination to derive accurate forest/non-forest (FNF) information via machine-learning classifiers. The forest classification accuracies for the fused data set were 90.9% and 93.2% for South Africa and Thuringia, respectively, estimated using autocorrelation-corrected spatial cross-validation (CV). Sentinel-1-only classifications provided the lowest overall accuracy of 87.5%, while Sentinel-2-based classifications led to higher accuracies of 91.9%. Sentinel-2 short-wave infrared (SWIR) channels, biophysical parameters (Leaf Area Index (LAI) and Fraction of Absorbed Photosynthetically Active Radiation (FAPAR)), and the lower spectrum of the Sentinel-1 synthetic aperture radar (SAR) time series were found to be most distinctive for the detection of forest cover. In contrast to homogeneous forest sites, Sentinel-1 time series information improved forest cover predictions in open savanna-like environments with heterogeneous regional features. The presented approach proved to be robust and demonstrated the benefit of fusing optical and SAR data at high spatial resolution.
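    The study's exact workflow is not reproduced here, but the general pattern of classifying a fused Sentinel-1/Sentinel-2 feature stack with a machine-learning classifier under a spatially blocked cross-validation can be sketched as follows. The random forest, the synthetic features, and the use of GroupKFold as a stand-in for autocorrelation-corrected spatial CV are assumptions for illustration.

```python
# Minimal sketch (not the study's code) of forest/non-forest classification on
# fused Sentinel-1/Sentinel-2 features with spatially blocked cross-validation.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GroupKFold, cross_val_score

rng = np.random.default_rng(0)
n = 2000
# Fused feature stack per pixel: e.g. multi-temporal S1 backscatter + S2 SWIR/LAI/FAPAR.
X_s1 = rng.normal(size=(n, 6))      # Sentinel-1 time-series features (VV/VH statistics)
X_s2 = rng.normal(size=(n, 5))      # Sentinel-2 features (SWIR bands, LAI, FAPAR, ...)
X = np.hstack([X_s1, X_s2])
y = rng.integers(0, 2, size=n)      # forest (1) / non-forest (0) labels

# Spatial blocks: pixels from the same tile share a group so folds are spatially disjoint.
groups = rng.integers(0, 10, size=n)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
scores = cross_val_score(clf, X, y, cv=GroupKFold(n_splits=5), groups=groups)
print("Spatially blocked CV accuracy: %.3f +/- %.3f" % (scores.mean(), scores.std()))
```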

    Deep Vision in Optical Imagery: From Perception to Reasoning

    Get PDF
    Deep learning has achieved extraordinary success in a wide range of computer vision tasks over the past years. Remote sensing data present different properties compared to natural images and videos due to their unique imaging technique, shooting angle, etc. For instance, hyperspectral images usually have hundreds of spectral bands, offering additional information, and the size of objects (e.g., vehicles) in remote sensing images is quite limited, which brings challenges for detection or segmentation tasks. This thesis focuses on two kinds of remote sensing data, namely hyper/multi-spectral and high-resolution images, and explores several methods to answer the following questions: - In comparison with natural images or videos in computer vision, the unique asset of hyper/multi-spectral data is their rich spectral information. But what does this “additional” information bring for learning a network? And how do we take full advantage of these spectral bands? - Remote sensing images at high resolution have quite different characteristics, bringing challenges for several tasks, for example, small object segmentation. Can we devise tailored networks for such tasks? - Deep networks have produced stunning results in a variety of perception tasks, e.g., image classification, object detection, and semantic segmentation, while the capacity to reason about relations over space is vital for intelligent species. Can a network or module with the capacity of reasoning benefit the parsing of remote sensing data? To this end, a couple of networks are devised to figure out what a network learns from hyperspectral images and how to efficiently use spectral bands. In addition, a multi-task learning network is investigated for the instance segmentation of vehicles from aerial images and videos. Finally, relational reasoning modules are designed to improve the semantic segmentation of aerial images.
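    One common form of spatial relational reasoning in segmentation networks is a non-local (self-attention) block over feature maps. The sketch below illustrates that general idea only; the module, its shapes, and the residual design are assumptions and not the thesis' actual reasoning modules.

```python
# Minimal sketch of a non-local/self-attention block that lets every spatial
# position aggregate information from all others, weighted by pairwise affinity.
import torch
import torch.nn as nn

class SpatialRelationBlock(nn.Module):
    """Aggregates features from all positions, weighted by pairwise affinity."""
    def __init__(self, channels, reduced=None):
        super().__init__()
        reduced = reduced or channels // 2
        self.query = nn.Conv2d(channels, reduced, 1)
        self.key = nn.Conv2d(channels, reduced, 1)
        self.value = nn.Conv2d(channels, channels, 1)

    def forward(self, x):
        b, c, h, w = x.shape
        q = self.query(x).flatten(2).transpose(1, 2)   # (B, HW, C')
        k = self.key(x).flatten(2)                     # (B, C', HW)
        v = self.value(x).flatten(2).transpose(1, 2)   # (B, HW, C)
        affinity = torch.softmax(q @ k / (q.shape[-1] ** 0.5), dim=-1)  # (B, HW, HW)
        out = (affinity @ v).transpose(1, 2).reshape(b, c, h, w)
        return x + out   # residual connection keeps the original features

feat = torch.randn(1, 64, 32, 32)         # decoder features of an aerial image
refined = SpatialRelationBlock(64)(feat)  # same shape, context-enriched
```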

    Deep Learning in Remote Sensing: A Comprehensive Review and List of Resources

    Get PDF
    Central to the looming paradigm shift toward data-intensive science, machine-learning techniques are becoming increasingly important. In particular, deep learning has proven to be both a major breakthrough and an extremely powerful tool in many fields. Shall we embrace deep learning as the key to everything? Or should we resist a black-box solution? These are controversial issues within the remote-sensing community. In this article, we analyze the challenges of using deep learning for remote-sensing data analysis, review recent advances, and provide resources we hope will make deep learning in remote sensing seem ridiculously simple. More importantly, we encourage remote-sensing scientists to bring their expertise into deep learning and use it as an implicit general model to tackle unprecedented, large-scale, influential challenges, such as climate change and urbanization.

    Automatic near real-time flood detection in high resolution X-band synthetic aperture radar satellite data using context-based classification on irregular graphs

    Get PDF
    This thesis is an outcome of the project “Flood and damage assessment using very high resolution SAR data” (SAR-HQ), which is embedded in the interdisciplinary RIMAX (Risk Management of Extreme Flood Events) programme, funded by the Federal Ministry of Education and Research (BMBF). It comprises the results of three scientific papers on automatic near real-time flood detection in high resolution X-band synthetic aperture radar (SAR) satellite data for operational rapid mapping activities in support of disaster and crisis management. Floods appear to be becoming more frequent and destructive in many regions of the world. A rising awareness of the availability of satellite-based cartographic information has led to an increase in requests to the corresponding mapping services to support civil-protection and relief organizations with disaster-related mapping and analysis activities. Due to the rising number of satellite systems with high revisit frequencies, a growing pool of SAR data is available during operational flood mapping activities. This offers the possibility to observe the whole extent of even large-scale flood events and their spatio-temporal evolution, but also calls for computationally efficient and automatic flood detection methods, which should drastically reduce the input required from an active image interpreter. This thesis provides solutions for the near real-time derivation of detailed flood parameters such as flood extent, flood-related backscatter changes, and flood classification probabilities from the new generation of high resolution X-band SAR satellite imagery in a completely unsupervised way. These data are, in comparison to images from conventional medium-resolution SAR sensors, characterized by increased intra-class and decreased inter-class variability due to the reduced mixed-pixel phenomenon. This problem is addressed by utilizing multi-contextual models on irregular hierarchical graphs, which account for the fact that semantic image information resides less in single pixels than in homogeneous image objects and their mutual relations. A hybrid Markov random field (MRF) model is developed, which integrates scale-dependent as well as spatio-temporal contextual information into the classification process by combining hierarchical causal Markov image modeling on automatically generated irregular hierarchical graphs with noncausal Markov modeling related to planar MRFs. This model is initialized in an unsupervised manner by an automatic tile-based thresholding approach, which solves the flood detection problem in large-size SAR data with small a priori class probabilities by statistical parameterization of local bi-modal class-conditional density functions in a time-efficient manner. Experiments performed on TerraSAR-X StripMap data of Southwest England and ScanSAR data of north-eastern Namibia during large-scale flooding show the effectiveness of the proposed methods in terms of classification accuracy, computational performance, and transferability. It is further demonstrated that hierarchical causal Markov models such as hierarchical maximum a posteriori (HMAP) and hierarchical marginal posterior mode (HMPM) estimation can be effectively used for modeling the inter-spatial context of X-band SAR data for flood and change detection purposes. Although the HMPM estimator is computationally more demanding than the HMAP estimator, it is found to be more suitable in terms of classification accuracy. Further, it offers the possibility to compute marginal posterior entropy-based confidence maps, which are used for the generation of flood possibility maps that express the uncertainty in the labeling of each image element. The supplementary integration of intra-spatial and, optionally, temporal contextual information into the Markov model results in a reduction of classification errors. It is observed that applying the hybrid multi-contextual Markov model on irregular graphs enhances classification results in comparison to modeling on the regular structures of quadtrees, which are the hierarchical image representation usually used in MRF-based image analysis. X-band SAR systems are generally not suited for detecting flooding under dense vegetation canopies such as forests due to the low capability of the X-band signal to penetrate such media. Within this thesis a method is proposed for the automatic derivation of flood areas beneath shrubs and grasses from TerraSAR-X data. Furthermore, an approach is developed which combines high resolution topographic information with multi-scale image segmentation to enhance the mapping accuracy in areas consisting of flooded vegetation and anthropogenic objects, as well as to remove non-water look-alike areas.
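    The tile-based initialization described above can be illustrated, in a strongly simplified form, by the following sketch: tiles whose backscatter histograms split into two well-separated modes contribute a local threshold, and these are aggregated into a global water/non-water threshold. The per-tile Otsu threshold and the mean-separation test stand in for the thesis' statistical parameterization of bimodal class-conditional densities; all names and values are illustrative assumptions.

```python
# Simplified stand-in (not the thesis' implementation) for tile-based
# unsupervised flood thresholding on a SAR backscatter image.
import numpy as np

def otsu_threshold(values, bins=256):
    """Classic Otsu threshold on a 1-D array of backscatter values."""
    hist, edges = np.histogram(values, bins=bins)
    centers = 0.5 * (edges[:-1] + edges[1:])
    w0 = np.cumsum(hist).astype(float)
    w1 = w0[-1] - w0
    m0 = np.cumsum(hist * centers) / np.maximum(w0, 1e-12)
    m1 = (np.sum(hist * centers) - np.cumsum(hist * centers)) / np.maximum(w1, 1e-12)
    between = w0 * w1 * (m0 - m1) ** 2   # between-class variance per candidate
    return centers[np.argmax(between)]

def tile_based_threshold(image, tile=128, min_separation=3.0):
    """Aggregate thresholds from tiles with a clear dark (water) / bright split."""
    thresholds = []
    for i in range(0, image.shape[0] - tile + 1, tile):
        for j in range(0, image.shape[1] - tile + 1, tile):
            patch = image[i:i + tile, j:j + tile].ravel()
            t = otsu_threshold(patch)
            low, high = patch[patch <= t], patch[patch > t]
            # Keep only tiles where the two modes are well separated (proxy for bimodality).
            if low.size and high.size and (high.mean() - low.mean()) > min_separation:
                thresholds.append(t)
    return float(np.median(thresholds)) if thresholds else otsu_threshold(image.ravel())

# Toy usage: synthetic dB image with a dark "flooded" region.
img = np.random.normal(-8.0, 1.5, size=(512, 512))
img[:300, :300] = np.random.normal(-17.0, 1.5, size=(300, 300))
thr = tile_based_threshold(img)
flood_mask = img < thr
```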

    Domain Adaptation in remote sensing: increasing the portability of land-cover classifiers

    Get PDF
    Among the types of remote sensing acquisitions, optical images are certainly one of the most widely relied upon data sources for Earth observation. They provide detailed measurements of the electromagnetic radiation reflected or emitted by each pixel in the scene. Through a process termed supervised land-cover classification, this makes it possible to automatically yet accurately distinguish objects at the surface of our planet. In this respect, when producing a land-cover map of the surveyed area, the availability of training examples representative of each thematic class is crucial for the success of the classification procedure. However, in real applications, due to several constraints on the sample collection process, labeled pixels are usually scarce. When analyzing an image for which those key samples are unavailable, a viable solution consists in resorting to the ground truth data of other previously acquired images. This option is attractive, but several factors such as atmospheric, ground, and acquisition conditions can cause radiometric differences between the images, hindering the transfer of knowledge from one image to another. The goal of this thesis is to supply remote sensing image analysts with suitable processing techniques to ensure a robust portability of classification models across different images. The ultimate purpose is to map the land-cover classes over large spatial and temporal extents with minimal ground information. To overcome, or simply quantify, the observed shifts in the statistical distribution of the spectra of the materials, we study four approaches drawn from the field of machine learning. First, we propose a strategy to intelligently sample the image of interest so as to collect labels only for the most useful pixels. This iterative routine is based on continually evaluating how pertinent the initial training data, which actually belong to a different image, are to the new image. Second, an approach to reduce the radiometric differences among the images by projecting the respective pixels into a common new data space is presented. We analyze a kernel-based feature extraction framework suited for such problems, showing that, after this relative normalization, the cross-image generalization abilities of a classifier are greatly increased. Third, we test a new data-driven measure of distance between probability distributions to assess the distortions caused by differences in the acquisition geometry affecting series of multi-angle images. We also gauge the portability of classification models through the sequences. In both exercises, the efficacy of classic physically and statistically based normalization methods is discussed. Finally, we explore a new family of approaches based on sparse representations of the samples to reciprocally convert the data space of two images. The projection function bridging the images allows a synthesis of new pixels with more similar characteristics, ultimately facilitating land-cover mapping across images.
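    The first of the four approaches (intelligently sampling the new image for the most useful labels) belongs to the family of active learning strategies. The sketch below shows a generic uncertainty-driven sampling loop as a stand-in; the thesis' criterion additionally re-evaluates the pertinence of the source-image training data, which is omitted here, and all data, models, and batch sizes are illustrative assumptions.

```python
# Generic uncertainty-driven active sampling loop on a target image (illustrative only).
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)
X_source = rng.normal(size=(500, 8))                 # labeled pixels from a source image
y_source = rng.integers(0, 4, 500)
X_target = rng.normal(loc=0.5, size=(5000, 8))       # unlabeled pixels of the new image
oracle = lambda idx: rng.integers(0, 4, len(idx))    # stands in for a human annotator

X_train, y_train = X_source.copy(), y_source.copy()
labeled = np.zeros(len(X_target), dtype=bool)
for _ in range(5):                                   # five annotation rounds
    clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)
    proba = clf.predict_proba(X_target)
    margin = np.sort(proba, axis=1)[:, -1] - np.sort(proba, axis=1)[:, -2]
    margin[labeled] = np.inf                         # never re-query labeled pixels
    query = np.argsort(margin)[:20]                  # 20 most ambiguous target pixels
    labeled[query] = True
    X_train = np.vstack([X_train, X_target[query]])
    y_train = np.concatenate([y_train, oracle(query)])
```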

    The Impact of Sensor Characteristics and Data Availability on Remote Sensing Based Change Detection

    Get PDF
    Land cover and land use change are among the major drivers of global change. In a time of mounting challenges for sustainable living on our planet, any research benefits from interdisciplinary collaboration to gain an improved understanding of the human-environment system and to develop new, and improve existing, measures of natural resource management. This includes a comprehensive understanding of land cover and land use changes, which is fundamental to mitigating global change. Remote sensing technology is essential for analyses of the land surface (and hence related changes) because it offers cost-effective ways of collecting data simultaneously over large areas. With an increasing variety of sensors and better data availability, the application of remote sensing as a means to assist in modeling, to support monitoring, and to detect changes at various spatial and temporal scales becomes more and more feasible. The relationship between the nature of the changes on the land surface, the sensor properties, and the conditions at the time of acquisition influences the potential and quality of land cover and land use change detection. Despite the wealth of existing change detection research, there is a need for new methodologies to efficiently explore the huge amount of data acquired by remote sensing systems with different sensor characteristics. The research of this thesis provides solutions to two main challenges of remote sensing based change detection. First, geometric effects and distortions occur when using data taken under different sun-target-sensor geometries. These effects mainly occur if the sun position and/or viewing angles differ between images. This challenge was met by developing a theoretical framework of bi-temporal change detection scenarios. The concept includes the quantification of distortions that can occur in unfavorable situations. A new method, the Robust Change Vector Analysis (RCVA), was introduced and applied to reduce the detection of false changes caused by these distortions. The quality and robustness of the RCVA were demonstrated in an example of bi-temporal cross-sensor change detection in an urban environment in Cologne, Germany. Comparison with a state-of-the-art method showed better performance of the RCVA and its robustness with respect to thresholding. Second, this thesis provides new insights into how to optimize the use of dense time series for forest cover change detection. A collection of spectral indices was reviewed for its suitability to describe forest structure, development, and condition at a study site on Vancouver Island, British Columbia, Canada. The spatio-temporal variability of the indices was analyzed to identify those indices considered most suitable for forest monitoring based on dense time series. Among the indices, the Disturbance Index (DI) was found to be sensitive to the state of the forest (i.e., forest structure). The Normalized Difference Moisture Index (NDMI) was found to be spatio-temporally stable and the most sensitive index for changes in forest condition. Both indices were successfully applied to detect abrupt forest cover changes. Further, this thesis demonstrated that relative radiometric normalization can obscure actual seasonal variation and long-term trends of spectral signals and is therefore not recommended for the time-series pre-processing of remotely sensed data.
The main outcome of this part of the presented research is a new method for detecting discontinuities in time series of spectral indices. The method takes advantage of all available information in terms of cloud-free pixels and hence increases the number of observations compared to most existing methods. Also, the first derivative of the time series was identified (together with the discontinuity measure) as a suitable variable to display and quantify the dynamics of dense Landsat time series, which cannot be observed with less dense time series. Given that these discontinuities are predominantly related to abrupt changes, the method was successfully applied to clearcut harvest detection. It detected major events of forest change at unprecedented temporal resolution and with high accuracy (93% overall accuracy). This thesis contributes to an improved understanding of bi-temporal change detection, addressing image artifacts that result from the flexible acquisition features of modern satellites (e.g., off-nadir capabilities). The demonstrated ability to efficiently analyze cross-sensor data and data taken under unfavorable conditions is increasingly important for the detection of many rapid changes, e.g., to assist in emergency response. This thesis further contributes to the optimized use of remotely sensed time series for improving the understanding, accuracy, and reliability of forest cover change detection. Additionally, the thesis demonstrates the usability of, and the necessity for continuity in, medium spatial resolution satellite imagery, such as Landsat data, for forest management. Constellations of recently launched (e.g., Landsat 8 OLI) and upcoming sensors (e.g., Sentinel-2) will deliver new opportunities to apply and extend the presented methodologies.
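    The abstract above does not spell out how the RCVA suppresses false changes, so the following is only a simplified, neighbourhood-tolerant change magnitude in the same spirit (comparing each pixel against a small window in the other image to reduce the effect of geometric misalignment); it is an illustrative stand-in, not the published RCVA formulation.

```python
# Simplified sketch of a neighbourhood-tolerant change vector magnitude,
# intended only to illustrate reducing false change from small misalignments.
import numpy as np

def tolerant_change_magnitude(img1, img2, win=1):
    """Per-pixel change magnitude: each pixel of img1 is compared against the
    best-matching pixel inside a (2*win+1)^2 neighbourhood of img2, and vice versa.
    img1, img2: (bands, rows, cols) arrays co-registered to within a few pixels."""
    b, r, c = img1.shape
    pad = ((0, 0), (win, win), (win, win))
    p1 = np.pad(img1, pad, mode="edge")
    p2 = np.pad(img2, pad, mode="edge")
    best12 = np.full((r, c), np.inf)
    best21 = np.full((r, c), np.inf)
    for dy in range(-win, win + 1):
        for dx in range(-win, win + 1):
            shifted2 = p2[:, win + dy: win + dy + r, win + dx: win + dx + c]
            shifted1 = p1[:, win + dy: win + dy + r, win + dx: win + dx + c]
            best12 = np.minimum(best12, np.linalg.norm(img1 - shifted2, axis=0))
            best21 = np.minimum(best21, np.linalg.norm(img2 - shifted1, axis=0))
    # Taking the larger of the two directions keeps genuine changes visible.
    return np.maximum(best12, best21)

# Toy usage on two 4-band images with a simulated one-pixel shift.
t1 = np.random.rand(4, 100, 100)
t2 = np.roll(t1, shift=1, axis=2) + 0.01 * np.random.rand(4, 100, 100)
magnitude = tolerant_change_magnitude(t1, t2, win=1)
change_mask = magnitude > np.percentile(magnitude, 95)
```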

    Building change detection from remotely sensed data using machine learning techniques

    Full text link
    As remote sensing data play an increasingly important role in many fields, many countries have established geographic information systems. However, such systems usually suffer from obsolete scene details, making the development of change detection technology critical. Building changes are particularly important in practice, as they are valuable for urban planning and disaster rescue. This thesis focuses on building change detection from remotely sensed data using machine learning techniques. Supervised classification is a traditional method for pixel-level change detection and relies on a suitable training dataset. Since different training datasets may affect learning performance differently, the effects of dataset characteristics on pixel-level building change detection are studied first. The research is conducted from two angles, namely the imbalance and noise in the training dataset, and multiple correlations among different features. The robustness of some supervised learning algorithms to unbalanced and noisy training datasets is examined, and the results are interpreted from a theoretical perspective. A solution for handling multiple correlations is introduced, and its performance on and applicability to building change detection are investigated. Finally, an object-based post-processing technique is proposed that uses prior knowledge to further suppress false alarms. A novel corner-based Markov random field (MRF) method is then proposed for exploring spatial information and contextual relations in changed building outline detection. Corners are treated as vertices in the graph, and a new method is proposed for determining neighbourhood relations. Energy terms in the proposed method are constructed using spatial features to describe building characteristics. An optimal solution indicates the spatial features belonging to changed buildings, and changed areas are revealed based on novel linking processes. Considering the individual advantages of pixel-level, contextual, and spatial features, an MRF-based combinational method is proposed that exploits spectral, spatial, and contextual features in building change detection. It consists of pixel-level detection and corner-based refinement. Pixel-level detection is conducted first, providing an initial indication of changed areas. Corner-based refinement is then implemented to further refine the detection results. Experimental results and quantitative analysis demonstrate the capacity and effectiveness of the proposed methods.
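    The imbalance issue mentioned above (changed-building pixels are typically far rarer than unchanged ones) can be made concrete with a small, generic example comparing an unweighted and a class-weighted classifier on synthetic pixel features; this illustrates the problem only and is not the thesis' experimental setup.

```python
# Illustration of class imbalance in pixel-level change classification:
# the rare "changed" class suffers without some form of re-weighting.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score

rng = np.random.default_rng(2)
n_unchanged, n_changed = 9500, 500            # heavily imbalanced training set
X = np.vstack([rng.normal(0.0, 1.0, (n_unchanged, 6)),
               rng.normal(1.0, 1.0, (n_changed, 6))])   # 6 per-pixel change features
y = np.concatenate([np.zeros(n_unchanged), np.ones(n_changed)])

for weighting in (None, "balanced"):
    clf = LogisticRegression(class_weight=weighting, max_iter=1000).fit(X, y)
    pred = clf.predict(X)
    print(f"class_weight={weighting}: F1 (changed class) = {f1_score(y, pred):.3f}")
```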

    Synthetic Aperture Radar (SAR) Meets Deep Learning

    Get PDF
    This reprint focuses on the combination of synthetic aperture radar (SAR) and deep learning technology. It aims to further promote the development of intelligent SAR image interpretation technology. SAR is an important active microwave imaging sensor whose all-day, all-weather imaging capability gives it an important place in the remote sensing community. Since the United States launched the first SAR satellite, SAR has received much attention, e.g., in geological exploration, topographic mapping, disaster forecasting, and traffic monitoring. It is therefore valuable and meaningful to study SAR-based remote sensing applications. In recent years, deep learning, represented by convolutional neural networks, has driven significant progress in the computer vision community, e.g., in face recognition, autonomous driving, and the Internet of Things (IoT). Deep learning enables computational models with multiple processing layers to learn data representations at multiple levels of abstraction, which can greatly improve the performance of various applications. This reprint provides a platform for researchers to address the above significant challenges and present their innovative and cutting-edge research results when applying deep learning to SAR, in various manuscript types, e.g., articles, letters, reviews, and technical reports.