91 research outputs found
Retinex theory for color image enhancement: A systematic review
This paper presents a short but comprehensive review of Retinex. Retinex theory aims to explain human color perception, and its derivations that modify the reflectance component have yielded effective approaches to image contrast enhancement. This review covers the classical Retinex theory and addresses the advanced and improved Retinex techniques proposed in the literature, discussing and comparing the strengths and weaknesses of each. An optimal parameter still needs to be determined to define the image degradation level; such a parameter would help quantify the amount of adjustment applied in Retinex-based methods. With it, a robust framework for modifying the reflectance component of the Retinex decomposition can be developed to enhance the overall quality of color images.
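As a minimal illustration of the idea surveyed above (not the review's specific framework), single-scale Retinex models an image as I = R * L (reflectance times illumination) and estimates the log-reflectance by subtracting a smoothed illumination estimate from the log image. The sketch below assumes a simple box filter as the smoothing kernel; real implementations typically use a Gaussian surround.

```python
import numpy as np

def box_blur(img, radius=2):
    """Crude illumination estimate: separable box filter with edge padding."""
    k = 2 * radius + 1
    padded = np.pad(img, radius, mode="edge")
    # horizontal pass, then vertical pass
    rows = np.stack([padded[:, i:i + img.shape[1]] for i in range(k)]).mean(0)
    return np.stack([rows[i:i + img.shape[0], :] for i in range(k)]).mean(0)

def single_scale_retinex(image, eps=1e-6):
    """Return the log-reflectance estimate for one channel."""
    img = image.astype(np.float64) + eps        # avoid log(0)
    illumination = box_blur(img) + eps          # smooth estimate of L
    return np.log(img) - np.log(illumination)   # log R = log I - log L

# A perfectly flat patch carries no reflectance detail, so its
# log-reflectance estimate is (numerically) zero everywhere.
patch = np.full((16, 16), 128.0)
r = single_scale_retinex(patch)
```

Contrast enhancement then amounts to stretching or gamma-adjusting `r` before mapping it back to display range; the "amount of adjustment" parameter the review calls for would control exactly that step.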
Low-Light Enhancement in the Frequency Domain
Decreased visibility, intensive noise, and biased color are the common
problems existing in low-light images. These visual disturbances further reduce
the performance of high-level vision tasks, such as object detection and
tracking. To address this issue, some image enhancement methods have been
proposed to increase the image contrast. However, most of them are implemented
only in the spatial domain, where the enhancement can be severely
influenced by noise. Hence, in this work, we propose a novel residual recurrent
multi-wavelet convolutional neural network (R2-MWCNN) learned in the frequency
domain that can simultaneously increase image contrast and suppress noise.
This end-to-end trainable network utilizes a multi-level discrete
wavelet transform to divide input feature maps into distinct frequencies,
resulting in a better denoising effect. A channel-wise loss function is proposed
to correct the color distortion for more realistic results. Extensive
experiments demonstrate that our proposed R2-MWCNN outperforms the
state-of-the-art methods quantitatively and qualitatively.
Comment: 8 page
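The core frequency-domain idea in this abstract can be illustrated without the network itself: a one-level Haar discrete wavelet transform splits a signal into a low-frequency (approximation) band and a high-frequency (detail) band, the same mechanism the paper's multi-level DWT uses to separate noise-dominated high frequencies from image content. This is a generic sketch, not the R2-MWCNN architecture.

```python
import numpy as np

def haar_dwt_1d(x):
    """One-level Haar DWT: returns (approximation, detail) coefficients."""
    x = np.asarray(x, dtype=np.float64)
    even, odd = x[0::2], x[1::2]
    approx = (even + odd) / np.sqrt(2)   # low-pass band (image content)
    detail = (even - odd) / np.sqrt(2)   # high-pass band (edges + noise)
    return approx, detail

def haar_idwt_1d(approx, detail):
    """Inverse transform: reconstructs the input exactly."""
    even = (approx + detail) / np.sqrt(2)
    odd = (approx - detail) / np.sqrt(2)
    out = np.empty(even.size + odd.size)
    out[0::2], out[1::2] = even, odd
    return out

x = np.array([4.0, 6.0, 10.0, 12.0, 8.0, 6.0, 5.0, 5.0])
a, d = haar_dwt_1d(x)
rec = haar_idwt_1d(a, d)
```

Because the transform is invertible and energy-preserving, a network can denoise the detail band in isolation and still reconstruct a full-resolution output, which is the motivation for learning in the frequency domain.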
Empowering Low-Light Image Enhancer through Customized Learnable Priors
Deep neural networks have achieved remarkable progress in enhancing low-light
images by improving their brightness and eliminating noise. However, most
existing methods construct end-to-end mapping networks heuristically,
neglecting the intrinsic priors of the image enhancement task and lacking
transparency and interpretability. Although some unfolding solutions have been
proposed to relieve these issues, they rely on proximal operator networks that
deliver ambiguous and implicit priors. In this work, we propose a paradigm for
low-light image enhancement that explores the potential of customized learnable
priors to improve the transparency of the deep unfolding paradigm. Motivated by
the powerful feature representation capability of Masked Autoencoder (MAE), we
customize MAE-based illumination and noise priors and redevelop them from two
perspectives: 1) \textbf{structure flow}: we train the MAE from a normal-light
image to its illumination properties and then embed it into the proximal
operator design of the unfolding architecture; and 2) \textbf{optimization
flow}: we train MAE from a normal-light image to its gradient representation
and then employ it as a regularization term to constrain noise in the model
output. These designs improve the interpretability and representation
capability of the model. Extensive experiments on multiple low-light image
enhancement datasets demonstrate the superiority of our proposed paradigm over
state-of-the-art methods. Code is available at
https://github.com/zheng980629/CUE.
Comment: Accepted by ICCV 202
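The deep-unfolding pattern this abstract builds on can be sketched generically (this is not the paper's CUE model): each unfolded iteration alternates a gradient step on the data-fidelity term with a proximal step supplied by a prior. Below, a plain soft-threshold (an L1 sparsity prior) stands in for the learned MAE-based prior; in the paper that proximal operator would be a trained network.

```python
import numpy as np

def soft_threshold(x, lam):
    """Proximal operator of the L1 prior (stand-in for a learned prior)."""
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def unfolding_step(x, y, step=0.5, lam=0.1):
    """One unfolded iteration on min_x ||x - y||^2 + prior(x)."""
    x = x - step * 2.0 * (x - y)     # gradient step on the data term
    return soft_threshold(x, lam)    # proximal step supplied by the prior

# Toy usage: small (noise-like) entries are driven to zero, large
# entries survive with a fixed shrinkage.
y = np.array([0.05, 1.0, -0.8, 0.02])
x = np.zeros_like(y)
for _ in range(10):
    x = unfolding_step(x, y)
```

Replacing the hand-crafted `soft_threshold` with a learnable module is exactly what makes the unfolded iterations both trainable and more interpretable than an end-to-end black box.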