Low-Light Enhancement in the Frequency Domain
Decreased visibility, intense noise, and biased color are common problems in low-light images. These visual disturbances further degrade the performance of high-level vision tasks such as object detection and tracking. To address this issue, image enhancement methods have been proposed to increase image contrast. However, most of them operate only in the spatial domain, where the enhancement process can be severely affected by noise. Hence, in this work, we propose a novel residual recurrent multi-wavelet convolutional neural network, R2-MWCNN, learned in the frequency domain, which can simultaneously increase image contrast and suppress noise. This end-to-end trainable network uses a multi-level discrete wavelet transform to divide input feature maps into distinct frequency bands, yielding a better denoising effect. A channel-wise loss function is proposed to correct color distortion for more realistic results. Extensive experiments demonstrate that the proposed R2-MWCNN outperforms state-of-the-art methods quantitatively and qualitatively.
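The frequency split the abstract describes can be illustrated with a minimal single-level, single-channel 2-D Haar transform. The paper's actual wavelet basis, level count, and network integration are not given in the abstract, so this is only an assumed sketch of how a feature map separates into one low-frequency and three high-frequency sub-bands:

```python
import numpy as np

def haar_dwt2(x):
    """One level of a 2-D Haar wavelet transform.

    Splits a 2-D map into four half-resolution sub-bands: LL (the
    low-frequency approximation) and LH/HL/HH (high-frequency
    details), so noise concentrated in the high frequencies can be
    processed separately from image content.
    """
    a = x[0::2, 0::2]  # top-left sample of each 2x2 block
    b = x[0::2, 1::2]  # top-right
    c = x[1::2, 0::2]  # bottom-left
    d = x[1::2, 1::2]  # bottom-right
    ll = (a + b + c + d) / 2.0
    lh = (a + b - c - d) / 2.0
    hl = (a - b + c - d) / 2.0
    hh = (a - b - c + d) / 2.0
    return ll, lh, hl, hh

def haar_idwt2(ll, lh, hl, hh):
    """Inverse transform: perfectly reconstructs the input map."""
    h, w = ll.shape
    x = np.empty((2 * h, 2 * w), dtype=ll.dtype)
    x[0::2, 0::2] = (ll + lh + hl + hh) / 2.0
    x[0::2, 1::2] = (ll + lh - hl - hh) / 2.0
    x[1::2, 0::2] = (ll - lh + hl - hh) / 2.0
    x[1::2, 1::2] = (ll - lh - hl + hh) / 2.0
    return x
```

Because the transform is perfectly invertible, a network can filter the detail sub-bands (where noise dominates) and still reconstruct the enhanced image without information loss; stacking `haar_dwt2` on the LL band gives the multi-level decomposition.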
Extremely Low-light Image Enhancement with Scene Text Restoration
Deep learning-based methods have made impressive progress in enhancing extremely low-light images: the quality of the reconstructed images has generally improved. However, we found that most of these methods could not sufficiently recover image details, for instance, the texts in the scene. In this paper, a novel image enhancement framework is proposed to precisely restore scene texts as well as the overall image quality under extremely low-light conditions. Specifically, we employ a self-regularised attention map, an edge map, and a novel text detection loss. In addition, we show that leveraging synthetic low-light images benefits text detection on genuine ones. Quantitative and qualitative experimental results show that the proposed model outperforms state-of-the-art methods in image restoration, text detection, and text spotting on the See In the Dark and ICDAR15 datasets.
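One ingredient named above is an edge map used as guidance for restoring fine structures such as text strokes. The abstract does not specify how the edge map is computed, so the following is a hypothetical sketch using plain Sobel gradients, with the correlation written out explicitly to avoid extra dependencies:

```python
import numpy as np

def sobel_edge_map(img):
    """Hypothetical edge map for a single-channel image.

    Approximates horizontal and vertical gradients with 3x3 Sobel
    kernels via an explicit sliding correlation (edge-padded), and
    returns the gradient magnitude. High values mark strong
    transitions such as text strokes.
    """
    kx = np.array([[-1.0, 0.0, 1.0],
                   [-2.0, 0.0, 2.0],
                   [-1.0, 0.0, 1.0]])
    ky = kx.T
    h, w = img.shape
    p = np.pad(img, 1, mode="edge")
    gx = np.zeros((h, w))
    gy = np.zeros((h, w))
    for i in range(3):
        for j in range(3):
            patch = p[i:i + h, j:j + w]
            gx += kx[i, j] * patch
            gy += ky[i, j] * patch
    return np.hypot(gx, gy)
```

In a framework like the one described, such a map could weight a reconstruction loss or gate a restoration branch so that sharp structures receive extra attention; the exact usage here is an assumption.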
DPFNet: A Dual-branch Dilated Network with Phase-aware Fourier Convolution for Low-light Image Enhancement
Low-light image enhancement is a classical computer vision problem that aims to recover normal-exposure images from low-light inputs. However, the convolutional neural networks commonly used in this field are good at modeling low-frequency local structural features in the spatial domain, which leads to unclear texture details in the reconstructed images. To alleviate this problem, we propose a novel module using Fourier coefficients, which recovers high-quality texture details under the semantic constraint of the frequency-domain phase and complements the spatial domain. In addition, we design a simple and efficient module for the spatial domain that uses dilated convolutions with different receptive fields to alleviate the loss of detail caused by frequent downsampling. We integrate these parts into an end-to-end dual-branch network and design a novel loss committee and an adaptive fusion module that guide the network to flexibly combine spatial- and frequency-domain features, producing more pleasing visual results. Finally, we evaluate the proposed network on public benchmarks. Extensive experimental results show that our method outperforms many state-of-the-art approaches, demonstrating outstanding performance and potential.
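The phase-aware idea above rests on a standard property of the 2-D Fourier transform: the phase spectrum carries most of the structural layout, while the amplitude spectrum carries global contrast and brightness. A minimal sketch (not DPFNet's actual module, whose design the abstract does not detail) keeps the low-light image's phase while borrowing a reference amplitude:

```python
import numpy as np

def swap_amplitude_keep_phase(low, ref):
    """Recombine the phase of `low` with the amplitude of `ref`.

    Illustrates how a frequency-domain branch can adjust global
    brightness/contrast (amplitude) while constraining structure
    and semantics through the preserved phase spectrum.
    """
    F_low = np.fft.fft2(low)
    F_ref = np.fft.fft2(ref)
    phase = np.angle(F_low)       # structural layout of the low-light input
    amp = np.abs(F_ref)           # brightness/contrast statistics of the reference
    combined = amp * np.exp(1j * phase)
    return np.real(np.fft.ifft2(combined))
```

When `ref` is a brighter image, the output inherits its overall intensity (the DC amplitude) while the edges and object layout still follow `low`; using `low` itself as the reference reconstructs the input exactly.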