Image Restoration for Remote Sensing: Overview and Toolbox
Remote sensing provides valuable information about objects or areas from a
distance in either active (e.g., RADAR and LiDAR) or passive (e.g.,
multispectral and hyperspectral) modes. The quality of data acquired by
remotely sensed imaging sensors (both active and passive) is often degraded by
a variety of noise types and artifacts. Image restoration, which is a vibrant
field of research in the remote sensing community, is the task of recovering
the true unknown image from the degraded observed image. Each imaging sensor
induces unique noise types and artifacts into the observed image, which has
led restoration techniques to develop along different paths according to
sensor type. This review brings together advances in image restoration
techniques, with particular focus on synthetic aperture radar and
hyperspectral images as the most active sub-fields of image restoration in the
remote sensing community. We therefore provide a comprehensive,
discipline-specific starting point for readers at different levels (i.e.,
students, researchers, and senior researchers) who wish to investigate the
vibrant topic of data restoration, supplying sufficient detail and
references. Additionally, this review paper is accompanied by a toolbox that
provides a platform encouraging interested students and researchers in the
field to further explore restoration techniques and fast-forward the
community. The toolboxes are provided at
https://github.com/ImageRestorationToolbox.
Comment: This paper is under review in GRS
Editorial to Special Issue “Remote Sensing Image Denoising, Restoration and Reconstruction”
Published version. Non peer reviewed.
Exponential fitting for stripe noise reduction from dental x-ray images
The applied mathematical field of inverse problems studies how to recover an unknown function from a set of possibly incomplete and noisy observations. One example of a real-life inverse problem is image destriping, the process of removing stripes from images. Stripe noise is a very common phenomenon in various fields, such as satellite remote sensing and dental x-ray imaging.
In this thesis we study methods to remove stripe noise from dental x-ray images. The stripes in the images are a consequence of the geometry of our measurement and the sensor. In x-ray imaging, x-rays are sent at a certain intensity through the object being measured, and the remaining intensity is recorded by an x-ray detector. The detectors used in this thesis convert the remaining x-rays directly into electrical signals, which are then measured and finally processed into an image. We observe that the measured values follow an exponential model and use this knowledge to recast destriping as a nonlinear fitting problem. We study two linearization methods and three iterative methods, and examine the performance of the correction algorithms on both simulated and real stripe images.
The results of the experiments show that although some of the fitting methods give better results in the least-squares sense, the exponential prior leaves some visible line artefacts. This suggests that the methods can be further improved by applying a suitable regularization method. We believe this study provides a good baseline for a better correction method.
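The linearization idea described above can be sketched in a few lines. This is an illustrative toy, not the thesis implementation: it assumes a simple per-column detector-gain model I = g · exp(-mu · d), where the column-wise gain variation produces vertical stripes, and taking logarithms turns the exponential fit into an ordinary least-squares problem.

```python
import numpy as np

# Toy model (assumed for illustration): each detector column has its own
# gain g, so the observation is I = g * exp(-mu * d) and the gain
# variation appears as vertical stripes.
rng = np.random.default_rng(0)
rows, cols = 64, 32
d = np.linspace(0.5, 2.0, rows)[:, None]        # path lengths through the object
mu = 0.8                                        # attenuation coefficient
gain = 1.0 + 0.2 * rng.standard_normal(cols)    # per-column detector gain
image = gain[None, :] * np.exp(-mu * d)         # striped observation

# Linearize: log I = log g - mu * d, then solve least squares per column.
logI = np.log(image)
A = np.column_stack([np.ones(rows), -d.ravel()])  # unknowns: [log g, mu]
coef, *_ = np.linalg.lstsq(A, logI, rcond=None)   # one solve, all columns at once
g_est = np.exp(coef[0])                           # estimated column gains

corrected = image / g_est[None, :]                # flat-field-style destriping
```

In this noise-free demo the recovered gains match exactly; with real detector noise the same least-squares fit gives the maximum-likelihood estimate under the log-linearized model, which is where the iterative refinements studied in the thesis come in.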
Multi-scale Adaptive Fusion Network for Hyperspectral Image Denoising
Removing the noise and improving the visual quality of hyperspectral images
(HSIs) is challenging in academia and industry. Great efforts have been made to
leverage local, global or spectral context information for HSI denoising.
However, existing methods still have limitations in feature interaction
exploitation among multiple scales and rich spectral structure preservation. In
view of this, we propose a novel solution to investigate the HSI denoising
using a Multi-scale Adaptive Fusion Network (MAFNet), which can learn the
complex nonlinear mapping between clean and noisy HSIs. Two key components
contribute to improved hyperspectral image denoising: a progressive
multiscale information aggregation network and a co-attention fusion module.
Specifically, we first generate a set of multiscale images and feed them into a
coarse-fusion network to exploit the contextual texture correlation.
Thereafter, a fine-fusion network follows to exchange information across the
parallel multiscale subnetworks. Furthermore, we design a
co-attention fusion module to adaptively emphasize informative features from
different scales, and thereby enhance the discriminative learning capability
for denoising. Extensive experiments on synthetic and real HSI datasets
demonstrate that the proposed MAFNet has achieved better denoising performance
than other state-of-the-art techniques. Our codes are available at
https://github.com/summitgao/MAFNet.
Comment: IEEE JSTARS 2023, code at: https://github.com/summitgao/MAFNet
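The multiscale-generation and adaptive-fusion idea from the abstract can be illustrated with a minimal NumPy sketch. This is not the MAFNet architecture: the per-scale feature (global variance) and the softmax weighting are stand-ins for the learned coarse/fine fusion networks and the co-attention module.

```python
import numpy as np

def downsample(x):
    # 2x average pooling
    return x.reshape(x.shape[0] // 2, 2, x.shape[1] // 2, 2).mean(axis=(1, 3))

def upsample(x):
    # nearest-neighbor 2x upsampling
    return np.repeat(np.repeat(x, 2, axis=0), 2, axis=1)

rng = np.random.default_rng(2)
clean = np.outer(np.sin(np.linspace(0, 3, 32)), np.cos(np.linspace(0, 3, 32)))
noisy = clean + 0.3 * rng.standard_normal(clean.shape)

# Multiscale set: full, 1/2, and 1/4 resolution, all brought back to full size
scales = [noisy,
          upsample(downsample(noisy)),
          upsample(upsample(downsample(downsample(noisy))))]

# Adaptive fusion: smoother (lower-variance) scales get larger weights.
# A hand-crafted stand-in for learned co-attention weights.
feats = np.array([s.var() for s in scales])
w = np.exp(-feats) / np.exp(-feats).sum()        # softmax over scales
fused = sum(wi * si for wi, si in zip(w, scales))
```

Even this crude fusion reduces mean-squared error versus the noisy input, because the coarser scales average out noise; a learned network replaces the fixed pooling and the hand-crafted weights with trainable, spatially varying ones.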
Interpretable Hyperspectral AI: When Non-Convex Modeling meets Hyperspectral Remote Sensing
Hyperspectral imaging, also known as image spectrometry, is a landmark
technique in geoscience and remote sensing (RS). In the past decade, enormous
efforts have been made to process and analyze these hyperspectral (HS) products
mainly by means of seasoned experts. However, with the ever-growing volume of
data, the cost in manpower and material resources poses new challenges for
reducing the burden of manual labor and improving efficiency. It is therefore
urgent to develop more intelligent and automatic
approaches for various HS RS applications. Machine learning (ML) tools with
convex optimization have successfully undertaken the tasks of numerous
artificial intelligence (AI)-related applications. However, their ability in
handling complex practical problems remains limited, particularly for HS data,
due to the effects of various spectral variabilities in the process of HS
imaging and the complexity and redundancy of higher dimensional HS signals.
Compared to convex models, non-convex modeling can characterize more complex
real scenes and provide model interpretability, both technically and
theoretically. It has proven to be a feasible way to close the gap between
challenging HS vision tasks and currently advanced intelligent
data-processing models.
Non-local tensor completion for multitemporal remotely sensed images inpainting
Remotely sensed images may contain some missing areas because of poor weather
conditions and sensor failure. Information of those areas may play an important
role in the interpretation of multitemporal remotely sensed data. The paper
aims at reconstructing the missing information by a non-local low-rank tensor
completion method (NL-LRTC). First, nonlocal correlations in the spatial domain
are taken into account by searching and grouping similar image patches in a
large search window. Then low-rankness of the identified 4-order tensor groups
is promoted to consider their correlations in spatial, spectral, and temporal
domains, while reconstructing the underlying patterns. Experimental results on
simulated and real data demonstrate that the proposed method is effective both
qualitatively and quantitatively. In addition, the proposed method is
computationally efficient compared to other patch-based methods, such as the
recently proposed PM-MTGSR method.
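The core low-rank completion step that methods like NL-LRTC build on can be sketched for a single matrix (think of a group of similar patches flattened into rows) rather than a 4-order tensor. This is an illustrative sketch only, using iterative singular value soft-thresholding with the observed entries held fixed; the threshold value and iteration count are arbitrary choices, not taken from the paper.

```python
import numpy as np

# Synthetic rank-3 "patch group" with 40% of the entries missing
rng = np.random.default_rng(1)
U, V = rng.standard_normal((40, 3)), rng.standard_normal((3, 30))
truth = U @ V
mask = rng.random(truth.shape) > 0.4     # True where the entry was observed
obs = np.where(mask, truth, 0.0)

# Iterative singular value thresholding: promote low-rankness while
# keeping the observed entries consistent with the data.
X = obs.copy()
for _ in range(200):
    u, s, vt = np.linalg.svd(X, full_matrices=False)
    s = np.maximum(s - 0.5, 0.0)         # soft-threshold the singular values
    X = (u * s) @ vt
    X[mask] = truth[mask]                # re-impose the observed entries

rel_err = np.linalg.norm(X - truth) / np.linalg.norm(truth)
```

NL-LRTC extends this idea by grouping non-local similar patches into 4-order tensors, so the low-rank prior simultaneously exploits spatial, spectral, and temporal correlations instead of a single matrix structure.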