
    A General Destriping Framework for Remote Sensing Images Using Flatness Constraint

    This paper proposes a general destriping framework using flatness constraints, in which various regularization functions can be handled in a unified manner. Removing stripe noise, i.e., destriping, from remote sensing images is an essential task in terms of visual quality and subsequent processing. Most existing methods are designed by combining a particular image regularization with a stripe noise characterization tailored to that regularization, which precludes us from examining different regularizations to adapt to various target images. To resolve this, we formulate the destriping problem as a convex optimization problem involving a general form of image regularization and the flatness constraints, a newly introduced stripe noise characterization. This strong characterization enables us to consistently capture the nature of stripe noise, regardless of the choice of image regularization. For solving the optimization problem, we also develop an efficient algorithm based on a diagonally preconditioned primal-dual splitting algorithm (DP-PDS), which can automatically adjust the stepsizes. The effectiveness of our framework is demonstrated through destriping experiments, where we comprehensively compare combinations of image regularizations and stripe noise characterizations using hyperspectral images (HSI) and infrared (IR) videos.
    Comment: submitted to IEEE Transactions on Geoscience and Remote Sensing
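    As a rough illustration of the flatness idea described above (not the authors' exact formulation), stripe noise that varies only across columns is constant along each column, so its vertical finite differences vanish. The numpy sketch below, with all variable names hypothetical, demonstrates this property and a naive column-offset estimate built on it.

```python
import numpy as np

# Hypothetical illustration: column-wise stripe noise is "flat" along the
# vertical direction, so its vertical finite differences are (near) zero.
rng = np.random.default_rng(0)
H, W = 64, 64
clean = rng.random((H, W))                          # stand-in for a clean band
stripes = np.tile(rng.normal(0, 0.1, W), (H, 1))    # one offset per column
noisy = clean + stripes

vertical_diff = np.diff(stripes, axis=0)            # differences down each column
print(np.abs(vertical_diff).max())                  # ~0: the flatness property

# A naive destriping guess under this assumption (illustrative only): estimate
# each column's offset as its mean deviation from the global image mean.
est_stripes = noisy.mean(axis=0) - noisy.mean()
destriped = noisy - est_stripes
```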

    Adaptive Regularized Low-Rank Tensor Decomposition for Hyperspectral Image Denoising and Destriping

    Hyperspectral images (HSIs) are inevitably degraded by a mixture of various types of noise, such as Gaussian noise, impulse noise, stripe noise, and dead pixels, which greatly limits subsequent applications. Although various denoising methods have already been developed, accurately recovering the spatial-spectral structure of HSIs remains a challenging problem. Furthermore, serious stripe noise, which is common in real HSIs, is still not fully separated by previous models. In this paper, we propose an adaptive hyper-Laplacian regularized low-rank tensor decomposition (LRTDAHL) method for HSI denoising and destriping. On the one hand, the stripe noise is separately modeled by the tensor decomposition, which can effectively encode the spatial-spectral correlation of the stripe noise. On the other hand, adaptive hyper-Laplacian spatial-spectral regularization is introduced to represent the distribution structure of different HSI gradient data by adaptively estimating the optimal hyper-Laplacian parameter, which can reduce the spatial information loss and over-smoothing caused by previous total variation regularization. The proposed model is solved using the alternating direction method of multipliers (ADMM) algorithm. Extensive simulation and real-data experiments demonstrate the effectiveness and superiority of the proposed method.
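    The abstract mentions solving the model with ADMM; as a minimal, generic sketch of how such alternating solvers are structured (a toy l1-regularized denoising split, not the LRTDAHL updates themselves), consider:

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t * ||.||_1 (elementwise soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def admm_l1_denoise(y, lam=0.1, rho=1.0, n_iter=100):
    """Generic ADMM skeleton for min_x 0.5||x - y||^2 + lam*||x||_1
    via the splitting x = z (a toy stand-in, not the paper's model)."""
    x = np.zeros_like(y)
    z = np.zeros_like(y)
    u = np.zeros_like(y)                             # scaled dual variable
    for _ in range(n_iter):
        x = (y + rho * (z - u)) / (1.0 + rho)        # quadratic subproblem
        z = soft_threshold(x + u, lam / rho)         # prox of the l1 term
        u = u + x - z                                # dual update
    return x
```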

    Image Restoration for Remote Sensing: Overview and Toolbox

    Remote sensing provides valuable information about objects or areas from a distance in either active (e.g., RADAR and LiDAR) or passive (e.g., multispectral and hyperspectral) modes. The quality of data acquired by remotely sensed imaging sensors (both active and passive) is often degraded by a variety of noise types and artifacts. Image restoration, a vibrant field of research in the remote sensing community, is the task of recovering the true unknown image from the degraded observed image. Each imaging sensor induces unique noise types and artifacts into the observed image, which has led restoration techniques to develop along different paths according to sensor type. This review paper brings together the advances of image restoration techniques, with a particular focus on synthetic aperture radar and hyperspectral images as the most active sub-fields of image restoration in the remote sensing community. We therefore provide a comprehensive, discipline-specific starting point for researchers at different levels (i.e., students, researchers, and senior researchers) who wish to investigate the vibrant topic of data restoration, supplying sufficient detail and references. Additionally, this review paper is accompanied by a toolbox that provides a platform for interested students and researchers to further explore restoration techniques and fast-forward the community. The toolboxes are provided at https://github.com/ImageRestorationToolbox.
    Comment: This paper is under review in GRS

    Robust Constrained Hyperspectral Unmixing Using Reconstructed-Image Regularization

    Hyperspectral (HS) unmixing is the process of decomposing an HS image into material-specific spectra (endmembers) and their spatial distributions (abundance maps). Existing unmixing methods have two limitations with respect to noise robustness. First, if the input HS image is highly noisy, noise may remain in the estimated abundance maps or undesirable artifacts may appear, even if the balance between sparse and piecewise-smooth regularizations for the abundance maps is carefully adjusted. Second, existing methods do not explicitly account for the effects of stripe noise, which is common in HS measurements, in their formulations, resulting in significant degradation of unmixing performance when such noise is present in the input HS image. To overcome these limitations, we propose a new robust hyperspectral unmixing method based on constrained convex optimization. In addition to the two regularizations for the abundance maps, our method employs regularizations for the HS image reconstructed by mixing the estimated abundance maps and endmembers. This strategy makes the unmixing process much more robust in highly noisy scenarios, under the assumption that abundance maps yielding a reconstructed HS image with desirable spatio-spectral structure are also expected to have desirable properties. Furthermore, our method is designed to accommodate a wider variety of noise, including stripe noise. To solve the formulated optimization problem, we develop an efficient algorithm based on a preconditioned primal-dual splitting method, which can automatically determine appropriate stepsizes based on the problem structure. Experiments on synthetic and real HS images demonstrate the advantages of our method over existing methods.
    Comment: Submitted to IEEE Transactions on Geoscience and Remote Sensing
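    For background, unmixing methods of this kind typically build on the standard linear mixing model, in which the HS image is reconstructed as the product of an endmember matrix and an abundance matrix subject to nonnegativity and sum-to-one constraints; a small numpy sketch of that model (background only, not the proposed method) is given below.

```python
import numpy as np

# Standard linear mixing model: each pixel spectrum is a nonnegative,
# sum-to-one combination of endmember spectra.
rng = np.random.default_rng(0)
bands, n_end, n_pix = 100, 4, 50 * 50

E = rng.random((bands, n_end))            # endmember spectra (bands x endmembers)
A = rng.random((n_end, n_pix))
A /= A.sum(axis=0, keepdims=True)         # abundances: nonnegative, sum to one

Y_clean = E @ A                           # reconstructed HS image (bands x pixels)
Y = Y_clean + 0.01 * rng.standard_normal(Y_clean.shape)  # noisy observation

# Regularizing the reconstruction E @ A (rather than only A) is what the
# abstract refers to as reconstructed-image regularization.
```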

    An inexact proximal majorization-minimization Algorithm for remote sensing image stripe noise removal

    Stripe noise in remote sensing images badly degrades visual quality and restricts the precision of data analysis. Therefore, many destriping models have been proposed in recent years. In contrast to these existing models, in this paper we propose a nonconvex model with a DC (difference of convex functions) structure to remove the stripe noise. To solve this model, we exploit the DC structure and apply an inexact proximal majorization-minimization algorithm, with each inner subproblem solved by the alternating direction method of multipliers. It is worth mentioning that we design an implementable stopping criterion for the inner subproblem while convergence can still be guaranteed. Numerical experiments demonstrate the superiority of the proposed model and algorithm.
    Comment: 19 pages, 3 figures
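    To make the DC structure concrete, the sketch below runs a generic majorization-minimization loop on a toy l1-minus-l2 penalized least-squares problem, linearizing the concave part at each iterate; it illustrates the MM idea for DC objectives only and is not the paper's destriping model.

```python
import numpy as np

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def dc_mm(y, lam=0.2, n_iter=50):
    """Toy majorization-minimization loop for the DC objective
       f(x) = 0.5||x - y||^2 + lam*(||x||_1 - ||x||_2).
    The concave part -lam*||x||_2 is linearized at the current iterate,
    giving a convex subproblem with a closed-form soft-thresholding solution."""
    x = np.zeros_like(y)
    for _ in range(n_iter):
        norm = np.linalg.norm(x)
        w = lam * x / norm if norm > 0 else np.zeros_like(x)  # gradient of lam*||x||_2
        x = soft_threshold(y + w, lam)   # exact minimizer of the majorized objective
    return x
```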

    Column-Spatial Correction Network for Remote Sensing Image Destriping

    The stripe noise in multispectral remote sensing images, possibly resulting from instrument instability, slit contamination, and light interference, significantly degrades imaging quality and impairs high-level visual tasks. The local consistency of homogeneous regions in striped images is damaged because adjacent sensors apply different gains and offsets to the same ground object, which produces the structural characteristics of stripe noise and shows up as increased differences between columns in the remote sensing image. Destriping can therefore be viewed as a process of improving the local consistency of homogeneous regions and the global uniformity of the whole image. In recent years, convolutional neural network (CNN)-based models have been introduced to destriping tasks and have achieved advanced results, relying on their powerful representation ability. To effectively leverage both CNNs and the structural characteristics of stripe noise, we propose a multi-scale column-spatial correction network (CSCNet) for remote sensing image destriping, in which the local structural characteristic of stripe noise and the global contextual information of the image are both explored at multiple feature scales. More specifically, a column-based correction module (CCM) and a spatial-based correction module (SCM) are designed to improve the local consistency and global uniformity from the perspectives of column correction and full-image correction, respectively. Moreover, a feature fusion module based on the channel attention mechanism is designed to obtain discriminative features from different modules and scales. We compare the proposed model against both traditional and deep learning methods on simulated and real remote sensing images. The promising results indicate that CSCNet effectively removes image stripes and outperforms state-of-the-art methods in terms of qualitative and quantitative assessments.
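    As a toy illustration of the structural characteristic mentioned above, the sketch below computes a simple column-difference statistic; striped images, where adjacent columns carry different gains and offsets, tend to score higher. The function is hypothetical and unrelated to CSCNet's implementation.

```python
import numpy as np

def column_difference_score(img):
    """Mean absolute difference between adjacent column means.
    A hypothetical stripe indicator (not CSCNet): striped images score higher
    because gains/offsets differ between adjacent sensor columns."""
    col_means = img.mean(axis=0)
    return np.abs(np.diff(col_means)).mean()

rng = np.random.default_rng(0)
clean = rng.random((128, 128))
striped = clean + np.tile(rng.normal(0, 0.2, 128), (128, 1))
print(column_difference_score(clean), column_difference_score(striped))
```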

    Multi-scale Adaptive Fusion Network for Hyperspectral Image Denoising

    Removing noise and improving the visual quality of hyperspectral images (HSIs) is a challenging problem in academia and industry. Great efforts have been made to leverage local, global, or spectral context information for HSI denoising. However, existing methods still have limitations in exploiting feature interactions across multiple scales and preserving rich spectral structure. In view of this, we propose a Multi-scale Adaptive Fusion Network (MAFNet) for HSI denoising, which can learn the complex nonlinear mapping between clean and noisy HSIs. Two key components contribute to improving the denoising: a progressive multiscale information aggregation network and a co-attention fusion module. Specifically, we first generate a set of multiscale images and feed them into a coarse-fusion network to exploit contextual texture correlation. Thereafter, a fine-fusion network exchanges information across the parallel multiscale subnetworks. Furthermore, we design a co-attention fusion module to adaptively emphasize informative features from different scales and thereby enhance the discriminative learning capability for denoising. Extensive experiments on synthetic and real HSI datasets demonstrate that the proposed MAFNet achieves better denoising performance than other state-of-the-art techniques. Our code is available at https://github.com/summitgao/MAFNet.
    Comment: IEEE JSTARS 2023, code at https://github.com/summitgao/MAFNet
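    As a rough analogue of the channel-attention-based fusion described above (not the authors' module), the PyTorch sketch below fuses two same-sized feature maps with squeeze-and-excitation-style channel weights; all names are hypothetical.

```python
import torch
import torch.nn as nn

class ChannelAttentionFusion(nn.Module):
    """Minimal squeeze-and-excitation-style fusion of two feature maps.
    A rough analogue of the co-attention fusion in the abstract, not MAFNet itself."""
    def __init__(self, channels, reduction=4):
        super().__init__()
        self.fc = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),                       # global spatial pooling
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid(),                                  # per-channel weights in (0, 1)
        )

    def forward(self, feat_a, feat_b):
        w = self.fc(feat_a + feat_b)       # channel-wise attention weights
        return w * feat_a + (1 - w) * feat_b

# Example: fuse two 32-channel feature maps from different scales.
x1, x2 = torch.randn(1, 32, 64, 64), torch.randn(1, 32, 64, 64)
out = ChannelAttentionFusion(32)(x1, x2)
```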