11 research outputs found

    Image Restoration for Remote Sensing: Overview and Toolbox

    Remote sensing provides valuable information about objects or areas from a distance in either active (e.g., RADAR and LiDAR) or passive (e.g., multispectral and hyperspectral) modes. The quality of data acquired by remotely sensed imaging sensors (both active and passive) is often degraded by a variety of noise types and artifacts. Image restoration, a vibrant field of research in the remote sensing community, is the task of recovering the true unknown image from the degraded observed image. Each imaging sensor induces unique noise types and artifacts into the observed image, which has led restoration techniques to develop along different paths according to each sensor type. This review paper brings together advances in image restoration techniques, with particular focus on synthetic aperture radar and hyperspectral images as the most active sub-fields of image restoration in the remote sensing community. We therefore provide a comprehensive, discipline-specific starting point, with sufficient detail and references, for readers at different levels (students, researchers, and senior researchers) wishing to investigate the vibrant topic of data restoration. Additionally, this review paper is accompanied by a toolbox that provides a platform encouraging interested students and researchers to further explore restoration techniques and advance the field. The toolboxes are provided at https://github.com/ImageRestorationToolbox. (Comment: this paper is under review in GRS.)
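    As a minimal illustration of the restoration setting described above (a degraded observation from which the true image must be estimated), the Python sketch below assumes a simple additive-Gaussian degradation model and uses a naive mean filter as the restoration operator; neither is a specific method from the paper or its toolbox.

```python
import numpy as np

rng = np.random.default_rng(0)

# "True" image: a smooth synthetic scene (stand-in for a remote sensing band).
x = np.fromfunction(lambda i, j: np.sin(i / 8.0) + np.cos(j / 11.0), (64, 64))

# Degraded observation: additive Gaussian noise, one common degradation model.
y = x + rng.normal(scale=0.3, size=x.shape)

def mean_filter(img, k=3):
    """Naive k x k box filter: a baseline restoration operator."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.empty_like(img)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = padded[i:i + k, j:j + k].mean()
    return out

x_hat = mean_filter(y)

mse_noisy = np.mean((y - x) ** 2)
mse_restored = np.mean((x_hat - x) ** 2)
print(mse_restored < mse_noisy)  # restoration should reduce the error
```

    Real restoration methods replace the box filter with models tailored to each sensor's noise statistics, which is exactly the sensor-specific expansion the review describes.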

    Synthetic Aperture Radar (SAR) Meets Deep Learning

    This reprint focuses on the combination of synthetic aperture radar and deep learning technology, aiming to further promote the development of intelligent SAR image interpretation. A synthetic aperture radar (SAR) is an important active microwave imaging sensor whose all-day, all-weather imaging capability gives it an important place in the remote sensing community. Since the United States launched the first SAR satellite, SAR has received much attention in the remote sensing community, e.g., in geological exploration, topographic mapping, disaster forecasting, and traffic monitoring. It is therefore valuable and meaningful to study SAR-based remote sensing applications. In recent years, deep learning, represented by convolutional neural networks, has driven significant progress in the computer vision community, e.g., in face recognition, autonomous driving, and the Internet of Things (IoT). Deep learning enables computational models with multiple processing layers to learn data representations at multiple levels of abstraction, which can greatly improve the performance of various applications. This reprint provides a platform for researchers to address the above challenges and present their innovative and cutting-edge research results when applying deep learning to SAR, in various manuscript types, e.g., articles, letters, reviews, and technical reports.

    Deep Learning based data-fusion methods for remote sensing applications

    In recent years, an increasing number of remote sensing sensors have been launched into orbit around the Earth, with a continuously growing production of massive data that are useful for a wide range of monitoring applications. Although modern optical sensors provide rich spectral information about the Earth's surface at very high resolution, they are weather-sensitive. SAR images, on the other hand, are available even in the presence of clouds, are almost weather-insensitive, and can be acquired day and night, but they do not provide rich spectral information and are severely affected by speckle "noise", which makes information extraction difficult. For these reasons it is worthwhile, though challenging, to fuse data provided by different sources and/or acquired at different times, in order to leverage their diversity and complementarity to retrieve the target information. Motivated by the success of Deep Learning methods in many image processing tasks, this thesis addresses several typical remote sensing data-fusion problems by means of suitably designed Convolutional Neural Networks.
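    At its simplest, such fusion can be illustrated as early fusion: co-registered SAR and optical bands are normalized per band and stacked into a single multi-channel input for a CNN. The sketch below is a generic illustration with placeholder random data, not a method from the thesis; the band counts and the normalization scheme are assumptions.

```python
import numpy as np

# Toy co-registered patches: 2-band SAR (e.g., VV/VH) and 4-band optical.
# Shapes are (bands, height, width); the values are placeholders.
sar = np.random.rand(2, 32, 32)
optical = np.random.rand(4, 32, 32)

def per_band_standardize(x):
    """Zero-mean, unit-variance scaling per band, so that sensors with very
    different dynamic ranges (SAR backscatter vs. optical reflectance) can
    share one input tensor."""
    mean = x.mean(axis=(1, 2), keepdims=True)
    std = x.std(axis=(1, 2), keepdims=True)
    return (x - mean) / (std + 1e-12)

# Early fusion: stack along the channel axis for a multi-channel CNN input.
fused = np.concatenate([per_band_standardize(sar),
                        per_band_standardize(optical)], axis=0)
print(fused.shape)  # (6, 32, 32)
```

    Late-fusion alternatives instead process each sensor in its own CNN branch and merge feature maps deeper in the network.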

    A Sparsity-Based InSAR Phase Denoising Algorithm Using Nonlocal Wavelet Shrinkage

    An interferometric synthetic aperture radar (InSAR) phase denoising algorithm using the local sparsity of wavelet coefficients and the nonlocal similarity of grouped blocks was developed. From the Bayesian perspective, a double-l1 norm regularization model that enforces both local and nonlocal sparsity constraints was used. By exploiting the nonlocal similarity between grouped blocks for the wavelet shrinkage, the proposed algorithm effectively filtered the phase noise. Applying the method to simulated and acquired InSAR data, we obtained satisfactory results. In comparison, the algorithm outperformed several widely used InSAR phase denoising approaches in terms of the number of residues, root-mean-square error, and edge-preservation indexes.
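    The building block underlying such methods is wavelet shrinkage: transform, soft-threshold the detail coefficients (the shrinkage rule that an l1 penalty induces), and invert. The sketch below shows only this basic step, using a one-level Haar transform on a toy phase-like image; it is not the paper's double-l1 nonlocal algorithm, and the threshold value is an assumption.

```python
import numpy as np

rng = np.random.default_rng(1)

def haar_1level(a):
    """One level of a 2-D Haar transform: approximation + 3 detail subbands."""
    lo = (a[:, 0::2] + a[:, 1::2]) / np.sqrt(2)   # rows
    hi = (a[:, 0::2] - a[:, 1::2]) / np.sqrt(2)
    ll = (lo[0::2, :] + lo[1::2, :]) / np.sqrt(2)  # columns
    lh = (lo[0::2, :] - lo[1::2, :]) / np.sqrt(2)
    hl = (hi[0::2, :] + hi[1::2, :]) / np.sqrt(2)
    hh = (hi[0::2, :] - hi[1::2, :]) / np.sqrt(2)
    return ll, lh, hl, hh

def haar_1level_inv(ll, lh, hl, hh):
    """Exact inverse of haar_1level (undo columns, then rows)."""
    lo = np.empty((ll.shape[0] * 2, ll.shape[1]))
    hi = np.empty_like(lo)
    lo[0::2, :] = (ll + lh) / np.sqrt(2)
    lo[1::2, :] = (ll - lh) / np.sqrt(2)
    hi[0::2, :] = (hl + hh) / np.sqrt(2)
    hi[1::2, :] = (hl - hh) / np.sqrt(2)
    a = np.empty((lo.shape[0], lo.shape[1] * 2))
    a[:, 0::2] = (lo + hi) / np.sqrt(2)
    a[:, 1::2] = (lo - hi) / np.sqrt(2)
    return a

def soft(c, t):
    """Soft threshold: the l1-regularized shrinkage rule."""
    return np.sign(c) * np.maximum(np.abs(c) - t, 0.0)

# Noisy "phase-like" image: a clean ramp plus Gaussian noise.
clean = np.fromfunction(lambda i, j: 0.05 * (i + j), (64, 64))
noisy = clean + rng.normal(scale=0.2, size=clean.shape)

ll, lh, hl, hh = haar_1level(noisy)
t = 0.4  # threshold; in practice tied to the estimated noise level
denoised = haar_1level_inv(ll, soft(lh, t), soft(hl, t), soft(hh, t))

print(np.mean((denoised - clean) ** 2) < np.mean((noisy - clean) ** 2))
```

    The paper's contribution is, roughly, choosing what to shrink and by how much using both local sparsity and nonlocal block similarity, rather than a single fixed threshold as here.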

    Advanced machine learning algorithms for Canadian wetland mapping using polarimetric synthetic aperture radar (PolSAR) and optical imagery

    Wetlands are complex land cover ecosystems that represent a wide range of biophysical conditions. They are one of the most productive ecosystems and provide several important environmental functions. As such, wetland mapping and monitoring using cost- and time-efficient approaches are of great interest for sustainable management and resource assessment. In this regard, satellite remote sensing data are greatly beneficial, as they capture a synoptic and multi-temporal view of landscapes. The ability to extract useful information from satellite imagery greatly affects the accuracy and reliability of the final products. This is of particular concern for mapping complex land cover ecosystems, such as wetlands, where complex, heterogeneous, and fragmented landscapes result in similar backscatter/spectral signatures of land cover classes in satellite images. Accordingly, the overarching purpose of this thesis is to contribute to existing methodologies of wetland classification by proposing and developing several new techniques based on advanced remote sensing tools and optical and Synthetic Aperture Radar (SAR) imagery. Specifically, the importance of employing an efficient speckle reduction method for polarimetric SAR (PolSAR) image processing is discussed and a new speckle reduction technique is proposed. Two novel techniques are also introduced for improving the accuracy of wetland classification. In particular, a new hierarchical classification algorithm using multi-frequency SAR data is proposed that discriminates wetland classes in three steps according to their complexity and similarity. The experimental results reveal that the proposed method is advantageous for mapping complex land cover ecosystems compared to the single-stream classification approaches that have been used extensively in the literature.
Furthermore, a new feature weighting approach is proposed based on the statistical and physical characteristics of PolSAR data to improve the discrimination capability of input features prior to incorporating them into the classification scheme. This study also demonstrates the transferability of existing classification algorithms, developed based on RADARSAT-2 imagery, to the compact polarimetry SAR data that will be collected by the upcoming RADARSAT Constellation Mission (RCM). This thesis is also the first to examine the capability of several well-known deep Convolutional Neural Network (CNN) architectures currently employed in computer vision for the classification of wetland complexes using multispectral remote sensing data. Finally, this research produces the first provincial-scale wetland inventory maps of Newfoundland and Labrador, using Google Earth Engine (GEE) cloud computing resources and open-access Earth Observation (EO) data collected by the Copernicus Sentinel missions. Overall, the methodologies proposed in this thesis address fundamental limitations and challenges of wetland mapping with remote sensing data that have been largely overlooked in the literature: the similar backscattering/spectral signatures of wetland classes, insufficient classification accuracy, and the difficulty of wetland mapping at large scales. Beyond wetland complexes, applying these techniques to other complex land cover types, such as sea ice and crop ecosystems, offers a potential avenue for further research.
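    For context on the speckle reduction step mentioned above, the sketch below implements the classic Lee filter on simulated multi-look SAR intensity data. It is a standard baseline, not the new technique proposed in the thesis, and the window size and number of looks are assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

def lee_filter(img, win=5, looks=4):
    """Classic Lee filter for multiplicative speckle.

    looks: number of looks L; fully developed speckle has variance ~ 1/L.
    """
    pad = win // 2
    padded = np.pad(img, pad, mode="reflect")
    mean = np.empty_like(img)
    var = np.empty_like(img)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            w = padded[i:i + win, j:j + win]
            mean[i, j] = w.mean()
            var[i, j] = w.var()
    cu2 = 1.0 / looks                              # speckle variation (squared)
    ci2 = var / np.maximum(mean, 1e-12) ** 2       # local image variation
    # Adaptive weight: ~0 in homogeneous areas (smooth), ~1 on edges (keep).
    k = np.clip(1.0 - cu2 / np.maximum(ci2, 1e-12), 0.0, 1.0)
    return mean + k * (img - mean)

# Simulated intensity image with 4-look multiplicative speckle.
truth = np.ones((64, 64))
truth[:, 32:] = 4.0                                # two homogeneous regions
speckle = rng.gamma(shape=4, scale=1.0 / 4, size=truth.shape)
observed = truth * speckle
filtered = lee_filter(observed)

print(np.mean((filtered - truth) ** 2) < np.mean((observed - truth) ** 2))
```

    The edge-adaptive weight is what distinguishes such filters from plain averaging, which is also why speckle reduction quality matters so much for preserving fragmented wetland boundaries.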

    Wetland mapping and monitoring using polarimetric and interferometric synthetic aperture radar (SAR) data and tools

    Wetlands are home to a great variety of flora and fauna and provide several unique environmental functions, such as controlling floods, improving water quality, supporting wildlife habitat, and stabilizing shorelines. Detailed information on the spatial distribution of wetland classes is crucial for sustainable management and resource assessment, and hydrological monitoring of wetlands is likewise important for maintaining and preserving the habitat of various plant and animal species. This thesis investigates the existing knowledge and technological challenges associated with wetland mapping and monitoring, evaluates the limitations of the methodologies developed to date, and proposes new methods to improve the characterization of these productive ecosystems using advanced remote sensing (RS) tools and data. Specifically, a comprehensive literature review of wetland monitoring using Synthetic Aperture Radar (SAR) and Interferometric SAR (InSAR) techniques is provided. Applying the InSAR technique to wetland mapping offers two advantages: (i) the high sensitivity of interferometric coherence to land cover changes is taken into account, and (ii) exploiting interferometric coherence for wetland classification further enhances the discrimination between similar wetland classes. A statistical analysis of the interferometric coherence and SAR backscattering variation of Canadian wetlands, which have been overlooked in the literature, is carried out using multi-temporal, multi-frequency, and multi-polarization SAR data. The study also examines the capability of compact polarimetry (CP) SAR data, which will be collected by the upcoming RADARSAT Constellation Mission (RCM) and will constitute the main source of SAR observations in Canada, for wetland mapping.
The research in this dissertation proposes a methodology for wetland classification based on the synergistic use of intensity, polarimetric, and interferometric features within a novel classification framework. Finally, this work introduces a novel deep convolutional neural network (CNN) model for wetland classification that can be trained in an end-to-end scheme and is specifically designed for classifying wetland complexes using polarimetric SAR (PolSAR) imagery. The results of the proposed methods are promising and will contribute significantly to ongoing wetland conservation and change-monitoring efforts. The approaches presented in this thesis serve as frameworks progressing towards an operational methodology for mapping wetland complexes in Canada, as well as other wetlands worldwide with similar ecological characteristics.

    A Deep Learning Framework in Selected Remote Sensing Applications

    The main research topic is designing and implementing a deep learning framework applied to remote sensing. Remote sensing techniques and applications play a crucial role in observing the Earth's evolution, especially nowadays, when the effects of climate change on our lives are more and more evident. A considerable amount of data is acquired daily all over the Earth, and effective exploitation of this information requires the robustness, speed, and accuracy of deep learning. This emerging need inspired the choice of this topic. The studies mainly focus on two European Space Agency (ESA) missions: Sentinel-1 and Sentinel-2. Images provided by the ESA Sentinel-2 mission are rapidly becoming the main source of information for the entire remote sensing community, thanks to their unprecedented combination of spatial, spectral, and temporal resolution, as well as their open access policy. The increasing interest these satellites have gained in research and applicative scenarios pushed us to use them in the considered framework. The combined use of Sentinel-1 and Sentinel-2 is crucial in many kinds of monitoring, particularly when the growth (or change) dynamics are very rapid. Starting from this general framework, two specific research activities were identified and investigated, leading to the results presented in this dissertation; both can be placed in the context of data fusion. The first activity deals with a super-resolution framework to improve the Sentinel-2 bands supplied at 20 m up to 10 m. Increasing the spatial resolution of these bands is of great interest in many remote sensing applications, particularly in monitoring vegetation, rivers, and forests. The second applies the deep learning framework to multispectral Normalized Difference Vegetation Index (NDVI) extraction and to the semantic segmentation obtained by fusing Sentinel-1 and Sentinel-2 data.
Sentinel-1 SAR data are a rich source of information for monitoring wetlands, rivers, forests, and many other contexts. In both cases, the problem was addressed with deep learning techniques, and in both cases very lean architectures were used, demonstrating that high-level results can be obtained even without large computing resources. The core of this framework is a Convolutional Neural Network (CNN). CNNs have been successfully applied to many image processing problems, such as super-resolution, pansharpening, and classification, because of several advantages: (i) the capability to approximate complex non-linear functions, (ii) the ease of training, which avoids time-consuming handcrafted filter design, and (iii) the parallel computational architecture. Even though a large amount of labelled data is required for training, the performance of CNNs motivated this architectural choice. In our Sentinel-1 and Sentinel-2 integration task, we faced and overcame the problem of manually labelled data with an approach based on integrating these two different sensors. Therefore, apart from the investigation of Sentinel-1 and Sentinel-2 integration, the main contribution of both works is the design of a CNN-based solution distinguished by its computational lightness and the consequent substantial time savings compared to more complex state-of-the-art deep learning solutions.
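    The NDVI mentioned above has a standard closed form, NDVI = (NIR - Red) / (NIR + Red), computed per pixel; for Sentinel-2 this is typically computed from bands B8 (NIR) and B4 (Red). A minimal sketch with toy reflectance values (the numbers are illustrative, not from the thesis):

```python
import numpy as np

def ndvi(nir, red, eps=1e-12):
    """NDVI = (NIR - Red) / (NIR + Red), per pixel; eps guards against
    division by zero on dark pixels."""
    nir = nir.astype(np.float64)
    red = red.astype(np.float64)
    return (nir - red) / (nir + red + eps)

# Toy reflectance patches standing in for Sentinel-2 B8 (NIR) and B4 (Red).
nir = np.array([[0.45, 0.50], [0.40, 0.10]])
red = np.array([[0.05, 0.08], [0.06, 0.09]])
v = ndvi(nir, red)
print(v.round(2))  # dense vegetation -> values near 0.8; the last pixel lower
```

    NDVI is bounded in [-1, 1] by construction, which is one reason it is a convenient regression target for CNN-based extraction from SAR inputs.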