6 research outputs found

    deSpeckNet: Generalizing Deep Learning Based SAR Image Despeckling

    Deep learning (DL) has proven to be a suitable approach for despeckling synthetic aperture radar (SAR) images. So far, most DL models are trained to reduce speckle that follows a particular distribution, either using simulated noise or a specific set of real SAR images, limiting the applicability of these methods to real SAR images with unknown noise statistics. In this paper, we present a DL method, deSpeckNet, that estimates the speckle noise distribution and the despeckled image simultaneously. Since it does not depend on a specific noise model, deSpeckNet generalizes well across SAR acquisitions in a variety of land-cover conditions. We evaluated the performance of deSpeckNet on single-polarized Sentinel-1 images acquired in Indonesia, the Democratic Republic of Congo and the Netherlands, a single-polarized ALOS-2/PALSAR-2 image acquired in Japan and an Iceye X2 image acquired in Germany. In all cases, deSpeckNet was able to effectively reduce speckle and restore …
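
    The abstract describes a model that estimates the despeckled image and the speckle distribution jointly instead of assuming a fixed noise model. Below is a minimal PyTorch sketch of that general idea, not the authors' published deSpeckNet architecture: a shared convolutional body feeds two heads, one predicting the clean image and one the speckle map, tied together by a multiplicative reconstruction penalty. All layer sizes, weights and names (TwoHeadDespeckler, despeckling_loss) are illustrative assumptions.

    # Illustrative sketch only, not the published deSpeckNet architecture.
    import torch
    import torch.nn as nn

    class TwoHeadDespeckler(nn.Module):
        def __init__(self, channels=32, depth=4):
            super().__init__()
            body = [nn.Conv2d(1, channels, 3, padding=1), nn.ReLU(inplace=True)]
            for _ in range(depth - 1):
                body += [nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True)]
            self.body = nn.Sequential(*body)
            self.clean_head = nn.Conv2d(channels, 1, 3, padding=1)    # despeckled intensity
            self.speckle_head = nn.Conv2d(channels, 1, 3, padding=1)  # estimated speckle map

        def forward(self, noisy):
            feats = self.body(noisy)
            return self.clean_head(feats), self.speckle_head(feats)

    def despeckling_loss(noisy, clean_pred, speckle_pred, clean_ref=None, w_rec=1.0, w_sup=1.0):
        # Multiplicative speckle model: clean * speckle should reproduce the observation.
        loss = w_rec * torch.mean((clean_pred * speckle_pred - noisy) ** 2)
        if clean_ref is not None:  # optional supervision, e.g. a temporally multilooked image
            loss = loss + w_sup * torch.mean((clean_pred - clean_ref) ** 2)
        return loss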

    Multi-Objective CNN Based Algorithm for SAR Despeckling

    Deep learning (DL) in remote sensing has nowadays become an effective operative tool: it is widely used in applications such as change detection, image restoration, segmentation, detection and classification. In the synthetic aperture radar (SAR) domain, the application of DL techniques is not straightforward because SAR images are difficult to interpret, mainly due to the presence of speckle. Several deep learning solutions for SAR despeckling have been proposed in the last few years. Most of these solutions focus on the definition of different network architectures with similar cost functions that do not involve SAR image properties. In this paper, a convolutional neural network (CNN) with a multi-objective cost function accounting for the spatial and statistical properties of the SAR image is proposed. This is achieved through a peculiar loss function obtained as the weighted combination of three terms, each dedicated mainly to one of the following SAR image characteristics: spatial details, speckle statistical properties and strong scatterer identification. Their combination allows these effects to be balanced. Moreover, a specifically designed architecture is proposed to effectively extract distinctive features within the considered framework. Experiments on simulated and real SAR images show the accuracy of the proposed method compared to state-of-the-art despeckling algorithms, from both quantitative and qualitative points of view. Considering such SAR properties in the cost function is crucial for correct noise rejection and detail preservation in different scenarios, such as homogeneous, heterogeneous and extremely heterogeneous …
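
    The weighted three-term cost function described above can be illustrated with a small PyTorch sketch. The exact terms and weights in the paper differ; here the detail term is a pixel-wise fidelity to a reference image, the statistics term encourages the ratio image noisy/clean to behave like unit-mean speckle, and the scatterer term up-weights the brightest reference pixels. The function name, default weights and quantile threshold are assumptions.

    # Hedged sketch of a multi-objective despeckling loss, not the paper's exact cost function.
    import torch

    def multi_objective_loss(noisy, clean_pred, clean_ref,
                             w_detail=1.0, w_stat=0.1, w_scatter=0.1,
                             scatter_quantile=0.99):
        # (i) spatial details: fidelity to a reference (e.g. multilooked) image
        detail = torch.mean((clean_pred - clean_ref) ** 2)

        # (ii) speckle statistics: the ratio noisy / clean should resemble
        # pure speckle, i.e. have mean close to 1 for intensity data
        ratio = noisy / clean_pred.clamp_min(1e-6)
        stat = (ratio.mean() - 1.0) ** 2

        # (iii) strong scatterers: extra weight on the brightest reference pixels
        thresh = torch.quantile(clean_ref, scatter_quantile)
        mask = (clean_ref >= thresh).float()
        scatter = torch.sum(mask * (clean_pred - clean_ref) ** 2) / mask.sum().clamp_min(1.0)

        return w_detail * detail + w_stat * stat + w_scatter * scatter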

    MuLoG, or How to apply Gaussian denoisers to multi-channel SAR speckle reduction?

    Speckle reduction is a longstanding topic in synthetic aperture radar (SAR) imaging. Since most current and planned SAR imaging satellites operate in polarimetric, interferometric or tomographic modes, SAR images are multi-channel, and speckle reduction techniques must jointly process all channels to recover polarimetric and interferometric information. The distinctive nature of the SAR signal (complex-valued, corrupted by multiplicative fluctuations) calls for the development of specialized methods for speckle reduction. Image denoising is a very active topic in image processing, with a wide variety of approaches and many denoising algorithms available, almost always designed for additive Gaussian noise suppression. This paper proposes a general scheme, called MuLoG (MUlti-channel LOgarithm with Gaussian denoising), to include such Gaussian denoisers within a multi-channel SAR speckle reduction technique. A new family of speckle reduction algorithms can thus be obtained, benefiting from ongoing progress in Gaussian denoising and offering several speckle reduction results whose method-specific artifacts can be dismissed by comparison between results.
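
    The core idea of making additive-Gaussian denoisers usable on multiplicative speckle can be illustrated with a minimal single-channel homomorphic sketch: take the logarithm of the intensity, denoise, correct the log-domain bias and exponentiate back. This is not the actual MuLoG algorithm, which is an ADMM scheme operating on multi-channel covariance data; the Gaussian smoothing below merely stands in for any off-the-shelf Gaussian denoiser.

    # Single-channel homomorphic illustration, not the full MuLoG scheme.
    import numpy as np
    from scipy.ndimage import gaussian_filter   # stand-in for a stronger Gaussian denoiser
    from scipy.special import psi               # digamma function

    def homomorphic_despeckle(intensity, looks=1, sigma=1.5):
        log_img = np.log(np.maximum(intensity, 1e-10))
        # For L-look gamma-distributed speckle, E[log speckle] = psi(L) - log(L),
        # so the log image is biased by that amount.
        bias = psi(looks) - np.log(looks)
        denoised_log = gaussian_filter(log_img, sigma=sigma)  # any Gaussian denoiser fits here
        return np.exp(denoised_log - bias)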

    Image Restoration for Remote Sensing: Overview and Toolbox

    Remote sensing provides valuable information about objects or areas from a distance in either active (e.g., RADAR and LiDAR) or passive (e.g., multispectral and hyperspectral) modes. The quality of data acquired by remotely sensed imaging sensors (both active and passive) is often degraded by a variety of noise types and artifacts. Image restoration, a vibrant field of research in the remote sensing community, is the task of recovering the true unknown image from the degraded observed image. Each imaging sensor induces unique noise types and artifacts into the observed image, which has led restoration techniques to develop along different paths according to sensor type. This review paper brings together the advances in image restoration techniques, with a particular focus on synthetic aperture radar and hyperspectral images as the most active sub-fields of image restoration in the remote sensing community. We therefore provide a comprehensive, discipline-specific starting point for researchers at different levels (i.e., students, researchers, and senior researchers) wishing to investigate the vibrant topic of data restoration, by supplying sufficient detail and references. Additionally, this review paper is accompanied by a toolbox that provides a platform to encourage interested students and researchers in the field to further explore restoration techniques and fast-forward the community. The toolboxes are provided at https://github.com/ImageRestorationToolbox.

    InSAR Deformation Analysis with Distributed Scatterers: A Review Complemented by New Advances

    Interferometric Synthetic Aperture Radar (InSAR) is a powerful remote sensing technique able to measure deformation of the Earth's surface over large areas. InSAR deformation analysis relies on two main categories of backscatterers: Persistent Scatterers (PS) and Distributed Scatterers (DS). While PS are characterized by a high signal-to-noise ratio and predominantly occur as single pixels, DS possess a medium or low signal-to-noise ratio and can only be exploited if they form homogeneous groups of pixels that are large enough to allow for statistical analysis. Although DS have been used by InSAR since its beginnings for different purposes, new methods developed during the last decade have advanced the field significantly. Preprocessing DS with spatio-temporal filtering today allows them to be used in PS algorithms as if they were PS, thereby enlarging spatial coverage and stabilizing the algorithms. This review explores the relations between different lines of research and discusses open questions regarding DS preprocessing for deformation analysis. The review is complemented by an experiment demonstrating that significantly improved results can be achieved for preprocessed DS during parameter estimation if their statistical properties are used.
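
    The kind of DS preprocessing discussed above, grouping statistically homogeneous pixels and averaging over them before phase analysis, can be sketched as follows. This brute-force Python example selects neighbours with a two-sample Kolmogorov-Smirnov test on the amplitude time series and adaptively multilooks a complex interferogram; the window size, the test and the significance level are illustrative choices rather than the method of any specific paper.

    # Illustrative adaptive multilooking over statistically homogeneous pixels.
    import numpy as np
    from scipy.stats import ks_2samp

    def adaptive_multilook(ifg, amp_stack, half_win=5, alpha=0.05):
        """ifg: (rows, cols) complex interferogram; amp_stack: (time, rows, cols) amplitudes."""
        rows, cols = ifg.shape
        out = np.zeros_like(ifg)
        for r in range(rows):
            for c in range(cols):
                r0, r1 = max(0, r - half_win), min(rows, r + half_win + 1)
                c0, c1 = max(0, c - half_win), min(cols, c + half_win + 1)
                ref = amp_stack[:, r, c]
                acc, n = 0.0 + 0.0j, 0
                for i in range(r0, r1):
                    for j in range(c0, c1):
                        # keep a neighbour only if its amplitude history is statistically
                        # indistinguishable from the centre pixel's
                        if ks_2samp(ref, amp_stack[:, i, j]).pvalue > alpha:
                            acc += ifg[i, j]
                            n += 1
                out[r, c] = acc / max(n, 1)
        return out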