Multi-Scale Fusion of Enhanced Hazy Images Using Particle Swarm Optimization and Fuzzy Intensification Operators
Dehazing from a single image remains a challenging task, since the thickness of the haze depends on scene depth. Most researchers address it with restoration techniques based on the haze image model, in which haze is removed by estimating the atmospheric light, the transmission, and the depth. A few researchers have instead focused on enhancement-based methods for eliminating haze. Enhancement-based dehazing algorithms can lead to saturated pixels in the enhanced image because fixed values are assigned to the enhancement parameters, so these methods fail to tune the parameters properly. This can be overcome by optimizing the parameters used to enhance the images. This paper describes work carried out to derive two enhanced images from a single input hazy image using particle swarm optimization and fuzzy intensification operators. The two derived images are then fused using a multi-scale fusion technique. The objective evaluation shows that the entropy of the dehazed images is better than that of state-of-the-art algorithms. The fog density is also measured with the fog aware density evaluator (FADE), which uses statistical features to differentiate a hazy image from a highly visible natural image; by this measure, the fog density of our proposed method is lower than that of other enhancement-based dehazing algorithms.
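The fuzzy intensification step can be pictured with the classic fuzzy INT operator. The sketch below assumes the textbook form of the operator with a fixed crossover point of 0.5; in the paper the enhancement parameters are instead tuned by particle swarm optimization, so the fixed values here are illustrative only.

```python
import numpy as np

def fuzzy_intensification(gray, passes=1):
    """Classic fuzzy INT operator on a grayscale image with values in [0, 255].

    Pixels are fuzzified to membership values in [0, 1]; each pass of the
    INT operator pushes memberships below 0.5 down and those above 0.5 up,
    which increases contrast; the result is mapped back to [0, 255].
    """
    mu = gray.astype(np.float64) / 255.0                  # fuzzification
    for _ in range(passes):
        mu = np.where(mu <= 0.5,
                      2.0 * mu ** 2,
                      1.0 - 2.0 * (1.0 - mu) ** 2)        # INT operator
    return np.round(mu * 255.0).astype(np.uint8)          # defuzzification
```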
Learning to Dehaze from Realistic Scene with A Fast Physics-based Dehazing Network
Dehazing has long been a popular computer vision topic. A real-time dehazing
method with reliable performance is highly desired for many applications such
as autonomous driving. While recent learning-based methods require datasets
containing pairs of hazy images and clean ground truth references, it is
generally impossible to capture accurate ground truth in real scenes. Many
existing works sidestep this difficulty by generating hazy images, rendering
haze from depth on common RGBD datasets using the haze imaging model.
However, there is still a gap between the synthetic datasets and real hazy
images, as large datasets with high-quality depth are mostly indoor and depth
maps for outdoor scenes are imprecise. In this paper, we complement the existing
datasets with a new, large, and diverse dehazing dataset containing real
outdoor scenes from High-Definition (HD) 3D movies. We select a large number of
high-quality frames of real outdoor scenes and render haze on them using depth
from stereo. Our dataset is more realistic than existing ones and we
demonstrate that using this dataset greatly improves the dehazing performance
on real scenes. In addition to the dataset, we also propose a lightweight and
reliable dehazing network inspired by the physics model. Our approach
outperforms other methods by a large margin and becomes the new
state-of-the-art method. Moreover, the lightweight design of the network
enables our method to run at real-time speed, much faster than other
baseline methods.
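For context, rendering haze from depth in this line of work follows the standard haze imaging model I(x) = J(x) t(x) + A (1 - t(x)) with transmission t(x) = exp(-beta * d(x)). The minimal sketch below assumes illustrative values for the scattering coefficient beta and the atmospheric light A, not the paper's actual rendering parameters.

```python
import numpy as np

def render_haze(clean, depth, beta=1.0, airlight=0.9):
    """Render synthetic haze on a clean image from its depth map using the
    haze imaging model I = J * t + A * (1 - t), with t = exp(-beta * depth).

    clean    : float array in [0, 1], shape (H, W, 3)
    depth    : float array of scene depth, shape (H, W)
    beta     : scattering coefficient (illustrative value)
    airlight : global atmospheric light A (illustrative value)
    """
    t = np.exp(-beta * depth)[..., None]       # per-pixel transmission
    hazy = clean * t + airlight * (1.0 - t)    # haze imaging model
    return np.clip(hazy, 0.0, 1.0)
```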
Enhancing Visibility in Nighttime Haze Images Using Guided APSF and Gradient Adaptive Convolution
Visibility in hazy nighttime scenes is frequently reduced by multiple
factors, including low light, intense glow, light scattering, and the presence
of multicolored light sources. Existing nighttime dehazing methods often
struggle with handling glow or low-light conditions, resulting in either
excessively dark visuals or unsuppressed glow outputs. In this paper, we
enhance the visibility from a single nighttime haze image by suppressing glow
and enhancing low-light regions. To handle glow effects, our framework learns
from the rendered glow pairs. Specifically, a light source aware network is
proposed to detect light sources of night images, followed by the APSF (Angular
Point Spread Function)-guided glow rendering. Our framework is then trained on
the rendered images, resulting in glow suppression. Moreover, we utilize
gradient-adaptive convolution to capture edges and textures in hazy scenes. By
leveraging extracted edges and textures, we enhance the contrast of the scene
without losing important structural details. To boost low-light intensity, our
network learns an attention map, which is then adjusted by gamma correction. This
attention has high values on low-light regions and low values on haze and glow
regions. Extensive evaluation on real nighttime haze images demonstrates the
effectiveness of our method. Our experiments show that our method achieves a
PSNR of 30.38 dB, outperforming state-of-the-art methods by 13% on the
GTA5 nighttime haze dataset. Our data and code are available at:
\url{https://github.com/jinyeying/nighttime_dehaze}.
Comment: Accepted to ACM MM 2023.
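The low-light boosting described in this abstract pairs a learned attention map with gamma correction. The following is a minimal sketch under the assumption that the attention map is already given (in the paper it is predicted by the network) and that a fixed gamma of 2.2 with simple linear blending stands in for the paper's learned adjustment.

```python
import numpy as np

def boost_low_light(image, attention, gamma=2.2):
    """Brighten low-light regions of an image guided by an attention map.

    image     : float array in [0, 1], shape (H, W, 3)
    attention : float array in [0, 1], shape (H, W); high in low-light
                regions, low in haze and glow regions (assumed given here;
                in the paper it is predicted by the network)
    gamma     : correction exponent (illustrative value)

    Gamma-corrected pixels are blended in only where attention is high,
    so haze and glow regions are left mostly unchanged.
    """
    corrected = np.power(image, 1.0 / gamma)   # brightens dark pixels
    att = attention[..., None]                 # broadcast over RGB channels
    return att * corrected + (1.0 - att) * image
```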