Fully Point-wise Convolutional Neural Network for Modeling Statistical Regularities in Natural Images
Modeling statistical regularity plays an essential role in ill-posed image
processing problems. Recently, deep learning based methods have been presented
to implicitly learn statistical representation of pixel distributions in
natural images and leverage it as a constraint to facilitate subsequent tasks,
such as color constancy and image dehazing. However, existing CNN
architectures are sensitive to the variability and diversity of pixel intensity
within and between local regions, which may result in inaccurate statistical
representation. To address this problem, this paper presents a novel fully
point-wise CNN architecture for modeling statistical regularities in natural
images. Specifically, we propose to randomly shuffle the pixels in the original
images and use the shuffled image as input, making the CNN focus on
the statistical properties. Moreover, since the pixels in the shuffled image
are independent and identically distributed, we can replace all the large
convolution kernels in the CNN with point-wise (1×1) convolution kernels while
maintaining the representation ability. Experimental results on two
applications: color constancy and image dehazing, demonstrate the superiority
of our proposed network over existing architectures, i.e., using
1/10 to 1/100 of the network parameters and computational cost while achieving
comparable performance.
Comment: 9 pages, 7 figures. To appear in ACM MM 201
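The shuffle-then-point-wise idea above can be sketched in a few lines of numpy (a toy illustration, not the authors' code): random shuffling destroys spatial structure but preserves the per-channel intensity distribution exactly, after which a point-wise (1×1) convolution reduces to an independent linear map at every pixel. All array sizes and weights below are made up for the demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "image": H x W x C with C = 3 channels.
img = rng.random((8, 8, 3))

# Randomly shuffle pixel positions: spatial structure is destroyed,
# but the per-channel intensity distribution is preserved exactly.
flat = img.reshape(-1, 3)
shuffled = flat[rng.permutation(flat.shape[0])].reshape(img.shape)

# A point-wise (1x1) convolution is just a linear map over channels,
# applied independently at each pixel: (H*W, C_in) @ (C_in, C_out).
W1 = rng.standard_normal((3, 16))  # hypothetical layer weights
b1 = np.zeros(16)
features = np.maximum(shuffled.reshape(-1, 3) @ W1 + b1, 0.0)  # ReLU
out = features.reshape(8, 8, 16)
```

Sorting each channel of the original and shuffled images yields identical values, which is the sense in which only the statistical regularities survive the shuffle.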
Enhancing Visibility in Nighttime Haze Images Using Guided APSF and Gradient Adaptive Convolution
Visibility in hazy nighttime scenes is frequently reduced by multiple
factors, including low light, intense glow, light scattering, and the presence
of multicolored light sources. Existing nighttime dehazing methods often
struggle with handling glow or low-light conditions, resulting in either
excessively dark results or outputs with unsuppressed glow. In this paper, we
enhance the visibility from a single nighttime haze image by suppressing glow
and enhancing low-light regions. To handle glow effects, our framework learns
from the rendered glow pairs. Specifically, a light source aware network is
proposed to detect light sources of night images, followed by the APSF (Angular
Point Spread Function)-guided glow rendering. Our framework is then trained on
the rendered images, resulting in glow suppression. Moreover, we utilize
gradient-adaptive convolution to capture edges and textures in hazy scenes. By
leveraging extracted edges and textures, we enhance the contrast of the scene
without losing important structural details. To boost low-light intensity, our
network learns an attention map, which is then adjusted by gamma correction.
This attention map has high values in low-light regions and low values in haze
and glow regions. Extensive evaluation on real nighttime haze images demonstrates the
effectiveness of our method: it achieves a PSNR of 30.38 dB, outperforming
state-of-the-art methods by 13% on the GTA5 nighttime haze dataset. Our data
and code are available at:
\url{https://github.com/jinyeying/nighttime_dehaze}.
Comment: Accepted to ACM MM 2023, https://github.com/jinyeying/nighttime_dehaz
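The attention-plus-gamma step described above can be illustrated with a minimal numpy sketch. The attention map, blend rule, and gamma value here are hypothetical stand-ins for the learned quantities in the paper; the point is only that gamma correction with gamma < 1 brightens dark pixels, and the attention map gates where that brightening applies.

```python
import numpy as np

rng = np.random.default_rng(1)
gray = rng.random((8, 8))  # toy luminance values in [0, 1]

# Hypothetical attention map: high where the scene is dark, low where
# it is bright (haze/glow regions), as the abstract describes.
attention = 1.0 - gray

gamma = 0.6  # assumed value; any gamma < 1 brightens on [0, 1]

# Blend: dark pixels receive strong gamma brightening, while bright
# (haze/glow) pixels are left mostly unchanged.
enhanced = attention * gray ** gamma + (1.0 - attention) * gray
```

Because x ** gamma >= x on [0, 1] for gamma < 1 and the blend weights sum to 1, the result never darkens a pixel and never exceeds the valid range.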
Rich Feature Distillation with Feature Affinity Module for Efficient Image Dehazing
Single-image haze removal is a long-standing hurdle for computer vision
applications. Several works have focused on transferring advances from
image classification, detection, and segmentation to the niche of image
dehazing, primarily through contrastive learning and knowledge
distillation. However, these approaches prove computationally expensive,
raising concern regarding their applicability to on-the-edge use-cases. This
work introduces a simple, lightweight, and efficient framework for single-image
haze removal, exploiting rich "dark-knowledge" information from a lightweight
pre-trained super-resolution model via the notion of heterogeneous knowledge
distillation. We designed a feature affinity module to maximize the flow of
rich feature semantics from the super-resolution teacher to the student
dehazing network. To evaluate the efficacy of our proposed framework,
we examine its performance as a plug-and-play addition to a baseline model. Our
experiments are carried out on the RESIDE-Standard dataset to demonstrate the
robustness of our framework across synthetic and real-world domains. The
extensive qualitative and quantitative results provided establish the
effectiveness of the framework, achieving gains of up to 15% (PSNR) while
reducing the model size by 20 times.
Comment: Preprint version. Accepted at Opti
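One plausible form of a feature-affinity distillation objective (a sketch under assumptions, not the paper's actual module) is to match pairwise cosine-similarity matrices over spatial positions. Matching affinities rather than raw features conveniently sidesteps the channel-count mismatch between the super-resolution teacher and the dehazing student; the names and shapes below are illustrative.

```python
import numpy as np

def affinity(feat):
    """Pairwise affinity (cosine-similarity) matrix over positions.

    feat: (N, C) array of per-position features (N = H * W flattened).
    Returns an (N, N) matrix independent of the channel count C.
    """
    f = feat / (np.linalg.norm(feat, axis=1, keepdims=True) + 1e-8)
    return f @ f.T

def affinity_distillation_loss(student, teacher):
    """Mean squared distance between student and teacher affinities.

    A hypothetical form of a feature-affinity objective: the student
    mimics the teacher's pairwise feature relationships, not its raw
    feature values, so their channel counts may differ.
    """
    a_s, a_t = affinity(student), affinity(teacher)
    return float(np.mean((a_s - a_t) ** 2))

rng = np.random.default_rng(0)
s = rng.standard_normal((16, 32))  # student: 16 positions, 32 channels
t = rng.standard_normal((16, 64))  # teacher: 16 positions, 64 channels
loss = affinity_distillation_loss(s, t)
```

When the student's pairwise similarities exactly match the teacher's, the loss is zero; otherwise it is a non-negative penalty that can be added to the dehazing training objective.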
A hybrid of fuzzy theory and quadratic function for estimating and refining transmission map
© TÜBİTAK
In photographs captured in outdoor environments, particles in the air cause light attenuation and degrade image quality. This effect is especially obvious in hazy environments. In this study, a fuzzy-theory-based method is proposed to estimate the transmission map of a single image. To overcome the problem of oversaturation in dehazed images, a quadratic-function-based method is proposed to refine the transmission map. In addition, the color vector of the atmospheric light is estimated using the top 1% of the brightest light area. Finally, the dehazed image is reconstructed using the transmission map and the estimated atmospheric light. Experimental results demonstrate that the proposed hybrid method performs better than other existing methods in terms of color oversaturation, visibility, and quantitative evaluation.
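The reconstruction step in this abstract follows the standard atmospheric scattering model I = J·t + A·(1 − t), inverted for the scene radiance J. Below is a minimal numpy sketch: the atmospheric light A is taken from the top 1% brightest pixels as the abstract describes, while the transmission map t is a clamped placeholder standing in for the paper's fuzzy estimation and quadratic refinement.

```python
import numpy as np

rng = np.random.default_rng(0)
I = rng.random((16, 16, 3))  # hazy input image, values in [0, 1]

# Atmospheric light A: mean color of the top 1% brightest pixels.
lum = I.mean(axis=2).ravel()
k = max(1, int(0.01 * lum.size))
idx = np.argsort(lum)[-k:]
A = I.reshape(-1, 3)[idx].mean(axis=0)

# Transmission map t: a toy placeholder -- the paper estimates it with
# fuzzy rules and refines it with a quadratic function to avoid
# oversaturation; here we just clamp random values away from zero.
t = np.clip(rng.random((16, 16)), 0.1, 1.0)[..., None]

# Invert the scattering model I = J*t + A*(1 - t) to recover J.
J = (I - A * (1.0 - t)) / t
```

Clamping t away from zero is the usual guard against division blow-up in thick-haze regions; re-applying the forward model to J reproduces the input exactly.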