Removal of visual disruption caused by rain using cycle-consistent generative adversarial networks
This paper addresses the problem of removing rain disruption from images without blurring scene content, thereby preserving the visual quality of the image. This is particularly important for maintaining the performance of outdoor vision systems, which deteriorates as rain increasingly disrupts or degrades the visual quality of the image. The Cycle-Consistent Generative Adversarial Network (CycleGAN) is proposed as a more promising rain removal algorithm than the state-of-the-art Image De-raining Conditional Generative Adversarial Network (ID-CGAN). One of the main advantages of the CycleGAN is its ability to learn the underlying relationship between
the rain and rain-free domains without the need for paired examples, which is essential for rain removal because a rain-free counterpart of an image cannot be obtained under dynamic outdoor conditions. Based on the physical properties of rain and the various types of rain phenomena [10], five broad categories of real rain distortions are proposed, which cover the majority of outdoor rain conditions. For a fair comparison, both the ID-CGAN and the CycleGAN were trained on the same set of 700 synthesized rain and ground-truth image pairs. Both networks were then tested on real rain images falling broadly under these five categories. The comparison demonstrated that the CycleGAN is superior to the ID-CGAN in removing real rain distortions.
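The unpaired training that the abstract highlights rests on CycleGAN's cycle-consistency objective. The sketch below illustrates that objective only; the per-pixel "generators" are hypothetical stand-ins for the paper's networks, chosen so the toy cycle is exactly invertible.

```python
# Minimal sketch of the cycle-consistency loss that lets CycleGAN train
# without paired rain / rain-free examples. G and F here are illustrative
# per-pixel callables, not the paper's actual networks.

def l1_loss(a, b):
    """Mean absolute error between two equal-length pixel lists."""
    return sum(abs(x - y) for x, y in zip(a, b)) / len(a)

def cycle_consistency_loss(G, F, rainy, clear):
    """L_cyc = ||F(G(x)) - x||_1 + ||G(F(y)) - y||_1.

    G maps rainy -> rain-free and F maps rain-free -> rainy; the two
    batches need not depict the same scenes (unpaired training).
    """
    forward = l1_loss([F(G(p)) for p in rainy], rainy)
    backward = l1_loss([G(F(p)) for p in clear], clear)
    return forward + backward

# Toy example: pretend rain brightens every pixel by a constant 0.2.
derain = lambda p: p - 0.2    # hypothetical G: rainy -> rain-free
add_rain = lambda p: p + 0.2  # hypothetical F: rain-free -> rainy

loss = cycle_consistency_loss(derain, add_rain, [0.5, 0.7, 0.9], [0.1, 0.3])
# Perfect inverse mappings drive the cycle loss to (numerically) zero.
```

Because the loss only asks that translating to the other domain and back reproduces the input, no rainy image ever needs a pixel-aligned rain-free counterpart.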
Heavy Rain Face Image Restoration: Integrating Physical Degradation Model and Facial Component Guided Adversarial Learning
With the recent increase in intelligent CCTVs for visual surveillance, a new
image degradation that integrates resolution conversion and synthetic rain
models is required. For example, in heavy rain, face images captured by CCTV
from a distance have significant deterioration in both visibility and
resolution. Unlike traditional image degradation models (IDM), such as rain
removal and super-resolution, this study addresses a new IDM referred to as a
scale-aware heavy rain model and proposes a method for restoring
high-resolution face images (HR-FIs) from low-resolution heavy rain face images
(LRHR-FI). To this end, a two-stage network is presented. The first stage
removes the heavy rain from the LRHR-FIs to produce low-resolution face
images (LR-FIs) with improved visibility. To realize this, an
interpretable IDM-based network is constructed to predict physical parameters,
such as rain streaks, transmission maps, and atmospheric light. In addition,
the image reconstruction loss is evaluated to enhance the estimates of the
physical parameters. For the second stage, which aims to reconstruct the HR-FIs
from the LR-FIs output by the first stage, facial component guided
adversarial learning (FCGAL) is applied to boost facial structure expressions.
To focus on informative facial features and reinforce the authenticity of
facial components, such as the eyes and nose, a face-parsing-guided generator
and facial local discriminators are designed for FCGAL. The experimental
results verify that the proposed approach based on physics-based network
design and FCGAL can remove heavy rain and increase the resolution and
visibility simultaneously. Moreover, the proposed heavy-rain face image
restoration outperforms state-of-the-art models for heavy rain removal,
image-to-image translation, and super-resolution.
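The interpretable first stage predicts rain streaks, a transmission map, and atmospheric light. A common physical formulation of rain with haze, often written O = T·(B + S) + (1 − T)·A, makes the abstract's pipeline concrete; the paper's scale-aware variant may differ, so the sketch below is an illustration of inverting that assumed model, not the paper's exact method.

```python
# Hedged sketch of a standard rain-plus-haze degradation model:
#   O = T * (B + S) + (1 - T) * A
# where B is the clean image, S the rain-streak layer, T the per-pixel
# transmission map, and A the (scalar) atmospheric light. Given estimates
# of S, T, and A, the clean image is recovered by algebraic inversion.

def degrade(B, S, T, A):
    """Synthesize an observed rainy pixel list from physical parameters."""
    return [t * (b + s) + (1 - t) * A for b, s, t in zip(B, S, T)]

def restore(O, S, T, A, eps=1e-6):
    """Invert the model: B = (O - (1 - T) * A) / T - S."""
    return [(o - (1 - t) * A) / max(t, eps) - s for o, s, t in zip(O, S, T)]

clean = [0.2, 0.5, 0.8]     # toy 3-pixel "image"
streaks = [0.1, 0.0, 0.3]   # hypothetical rain-streak layer
trans = [0.9, 0.7, 0.5]     # hypothetical transmission map
air = 0.8                   # hypothetical atmospheric light

observed = degrade(clean, streaks, trans, air)
recovered = restore(observed, streaks, trans, air)
# With perfectly estimated parameters, recovery is exact up to float error.
```

In the paper's setting the parameters are predicted by a network rather than known, which is why the image reconstruction loss is used to sharpen those estimates before the second, super-resolution stage.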