DEEP LEARNING-BASED APPROACHES FOR IMAGE RESTORATION
Image restoration is the task of taking a corrupted or degraded low-quality image and estimating a high-quality clean image free of degradations. The most common degradations that affect image quality are blur, atmospheric turbulence, adverse weather conditions (such as rain, haze, and snow), and noise. Images captured under these corruptions can significantly degrade the performance of subsequent computer vision algorithms such as segmentation, recognition, object detection, and tracking. With such algorithms becoming vital components of applications such as autonomous navigation and video surveillance, it is increasingly important to develop sophisticated algorithms that remove these degradations and recover high-quality clean images. These reasons have motivated a plethora of research on single image restoration methods.
Recently, following the success of deep learning-based convolutional neural networks, many approaches have been proposed to remove degradations from corrupted images. We study the following single image restoration problems: (i) atmospheric turbulence removal, (ii) deblurring, (iii) removing distortions introduced by adverse weather conditions such as rain, haze, and snow, and (iv) removing noise. However, existing single image restoration techniques suffer from the following major limitations: (i) they construct global priors without accounting for the fact that degradations can affect different local regions of the image differently; (ii) they train on synthetic datasets, which often results in sub-optimal performance on real-world images, typically because of the distributional shift between synthetic and real-world degraded images; (iii) existing semi-supervised approaches do not account for the effect of unlabeled, real-world degraded images on semi-supervised performance.
To address the first limitation, we propose supervised image restoration techniques that use uncertainty to improve restoration performance. To overcome the second limitation, we propose a Gaussian process-based pseudo-labeling approach that leverages real-world rain information to train the deraining network in a semi-supervised fashion. Furthermore, to address the third limitation, we theoretically study the effect of unlabeled images on semi-supervised performance and propose an adaptive rejection technique to boost it.
Finally, we recognize that existing supervised and semi-supervised methods need some kind of paired labeled data to train the network, and training on any kind of synthetic paired clean-degraded images may not completely solve the domain gap between synthetic and real-world degraded image distributions.
Thus, we propose a self-supervised transformer-based approach for image denoising: given a noisy image, we generate multiple down-sampled images and learn the joint relation between these down-sampled images using a Gaussian process to denoise the image.
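The intuition behind the down-sampling step can be illustrated with a minimal NumPy sketch. Strided sampling splits a noisy image into sub-images that share the underlying clean signal but carry approximately independent noise realizations; any joint estimate over them (here, a plain average, purely for illustration) then suppresses noise. This is a simplified stand-in for the method described above, which learns the joint relation with a transformer and a Gaussian process rather than averaging.

```python
import numpy as np

def strided_subimages(img, factor=2):
    """Split an image into factor**2 down-sampled sub-images by strided sampling.

    Adjacent sub-images see (nearly) the same clean content but independent
    noise, which is the property self-supervised denoising schemes exploit.
    """
    return [img[i::factor, j::factor] for i in range(factor) for j in range(factor)]

rng = np.random.default_rng(0)
clean = np.tile(np.linspace(0, 1, 64), (64, 1))    # smooth synthetic "clean" image
noisy = clean + rng.normal(0.0, 0.2, clean.shape)  # additive Gaussian noise

subs = strided_subimages(noisy, factor=2)          # four 32x32 sub-images
fused = np.mean(subs, axis=0)                      # naive joint estimate (illustrative only)

clean_sub = strided_subimages(clean, factor=2)[0]  # clean reference at sub-image scale
err_noisy = np.abs(strided_subimages(noisy, factor=2)[0] - clean_sub).mean()
err_fused = np.abs(fused - clean_sub).mean()       # lower: averaging independent noise helps
```

A learned model replaces the averaging step with a network trained so that its predictions on the sub-images are mutually consistent, which needs no paired clean-noisy data.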
Latent Degradation Representation Constraint for Single Image Deraining
Since rain streaks show a variety of shapes and directions, learning the
degradation representation is extremely challenging for single image deraining.
Existing methods are mainly targeted at designing complicated modules to
implicitly learn latent degradation representation from coupled rainy images.
Without an explicit constraint, it is hard to decouple the content-independent
degradation representation, resulting in over- or under-enhancement problems.
To tackle this issue, we propose a novel Latent
Degradation Representation Constraint Network (LDRCNet) that consists of
Direction-Aware Encoder (DAEncoder), UNet Deraining Network, and Multi-Scale
Interaction Block (MSIBlock). Specifically, the DAEncoder is proposed to
adaptively extract latent degradation representation by using the deformable
convolutions to exploit the direction consistency of rain streaks. Next, a
constraint loss is introduced to explicitly constrain the degradation
representation learning during training. Last, we propose an MSIBlock to fuse
with the learned degradation representation and decoder features of the
deraining network for adaptive information interaction, which enables the
deraining network to remove various complicated rainy patterns and reconstruct
image details. Experimental results on synthetic and real datasets demonstrate
that our method achieves new state-of-the-art performance.
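One plausible form of an explicit degradation-representation constraint is sketched below; this is an illustrative assumption, not the paper's exact loss. The idea: since the rain layer of a paired sample is the residual between the rainy and clean images, a representation decoded back to image space can be penalized for deviating from that residual, which ties the latent code to the degradation rather than the content.

```python
import numpy as np

def constraint_loss(decoded_rep, rainy, clean):
    """Hypothetical L1 constraint tying a decoded degradation representation
    to the rain layer (rainy - clean). Illustrative sketch only; the actual
    LDRCNet constraint may differ in form."""
    rain_layer = rainy - clean
    return float(np.abs(decoded_rep - rain_layer).mean())
```

During training, such a term would be added to the usual reconstruction loss so the encoder cannot collapse to encoding image content.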
Restoring Vision in Hazy Weather with Hierarchical Contrastive Learning
Image restoration under hazy weather condition, which is called single image
dehazing, has been of significant interest for various computer vision
applications. In recent years, deep learning-based methods have achieved
success. However, existing image dehazing methods typically neglect the
hierarchy of features in the neural network and fail to exploit their
relationships fully. To this end, we propose an effective image dehazing method
named Hierarchical Contrastive Dehazing (HCD), which is based on feature fusion
and contrastive learning strategies. HCD consists of a hierarchical dehazing
network (HDN) and a novel hierarchical contrastive loss (HCL). Specifically,
the core design in the HDN is a hierarchical interaction module, which utilizes
multi-scale activation to revise the feature responses hierarchically. To
cooperate with the training of HDN, we propose HCL which performs contrastive
learning on hierarchically paired exemplars, facilitating haze removal.
Extensive experiments on public datasets, RESIDE, HazeRD, and DENSE-HAZE,
demonstrate that HCD quantitatively outperforms the state-of-the-art methods in
terms of PSNR and SSIM, and achieves better visual quality.
Comment: 30 pages, 10 figures
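The hierarchical contrastive idea can be sketched as an InfoNCE-style loss applied per hierarchy level: restored features are pulled toward clean features (positives) and pushed away from hazy features (negatives), with one term per scale. The function names, weighting, and single-negative setup below are illustrative assumptions, not HCL's exact formulation.

```python
import numpy as np

def info_nce(anchor, positive, negatives, tau=0.1):
    """InfoNCE-style contrastive loss for one feature vector."""
    def cos(a, b):
        return float(a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))
    logits = np.array([cos(anchor, positive)] + [cos(anchor, n) for n in negatives]) / tau
    logits -= logits.max()                              # numerical stability
    probs = np.exp(logits) / np.exp(logits).sum()
    return -np.log(probs[0])                            # low when anchor ~ positive

def hierarchical_contrastive_loss(feats_pred, feats_clean, feats_hazy, weights=None):
    """Sum contrastive terms over hierarchy levels (one feature vector per level):
    restored features act as anchors, clean features as positives, hazy as negatives."""
    weights = weights or [1.0] * len(feats_pred)
    return sum(w * info_nce(p, c, [h])
               for w, p, c, h in zip(weights, feats_pred, feats_clean, feats_hazy))
```

In a real network the per-level features would come from intermediate activations of the dehazing backbone rather than raw vectors.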