281 research outputs found
Low-Light Enhancement in the Frequency Domain
Decreased visibility, intense noise, and biased color are common problems
in low-light images. These visual disturbances further reduce the performance
of high-level vision tasks such as object detection and tracking. To address
this issue, several image enhancement methods have been proposed to increase
image contrast. However, most of them operate only in the spatial domain,
where the enhancement process can be severely influenced by noise. Hence, in
this work, we propose a novel residual recurrent multi-wavelet convolutional
neural network (R2-MWCNN) learned in the frequency domain that can
simultaneously increase image contrast and suppress noise. This end-to-end
trainable network utilizes a multi-level discrete wavelet transform to divide
input feature maps into distinct frequency bands, resulting in better
denoising. A channel-wise loss function is proposed to correct color
distortion for more realistic results. Extensive experiments demonstrate that
our proposed R2-MWCNN outperforms state-of-the-art methods quantitatively and
qualitatively.
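The frequency split at the heart of such networks can be illustrated with a single level of the 2-D Haar wavelet transform; this is a minimal sketch of the general idea, not the paper's multi-level, learned variant:

```python
import numpy as np

def haar_dwt2(x):
    """One level of a 2-D Haar wavelet transform.

    Splits an even-sized array into four half-resolution sub-bands:
    LL (low-frequency approximation) and LH/HL/HH (high-frequency
    detail bands, where most of the noise energy concentrates).
    """
    a = x[0::2, 0::2]  # top-left pixel of each 2x2 block
    b = x[0::2, 1::2]  # top-right
    c = x[1::2, 0::2]  # bottom-left
    d = x[1::2, 1::2]  # bottom-right
    ll = (a + b + c + d) / 2.0
    lh = (a + b - c - d) / 2.0
    hl = (a - b + c - d) / 2.0
    hh = (a - b - c + d) / 2.0
    return ll, lh, hl, hh

img = np.arange(16, dtype=float).reshape(4, 4)
ll, lh, hl, hh = haar_dwt2(img)
# Each sub-band is 2x2; repeating the transform on LL yields the
# multi-level decomposition described in the abstract.
```

Processing the high-frequency bands separately is what lets a frequency-domain network denoise without flattening the low-frequency illumination content.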
Unsupervised Low Light Image Enhancement Using SNR-Aware Swin Transformer
Images captured under low-light conditions present unpleasant artifacts,
which degrade the performance of feature extraction for many downstream visual
tasks. Low-light image enhancement aims at improving brightness and contrast,
and further reducing the noise that corrupts visual quality. Recently, many
image restoration methods based on the Swin Transformer have been proposed and
achieve impressive performance. However, on one hand, naively employing the
Swin Transformer for low-light image enhancement exposes artifacts such as
over-exposure, brightness imbalance, and noise corruption. On the other hand,
it is impractical to capture pairs of low-light images and corresponding
ground-truth (i.e., well-exposed) images of the same scene. In this paper, we
propose a dual-branch network based on the Swin Transformer, guided by a
signal-to-noise-ratio prior map that provides spatially varying information
for low-light image enhancement. Moreover, we leverage unsupervised learning
to construct an optimization objective based on the Retinex model to guide the
training of the proposed network. Experimental results demonstrate that the
proposed model is competitive with the baseline models.
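One common way to build such an SNR prior map, sketched below under the assumption that a local average serves as the denoised signal and the residual as noise (not necessarily the exact construction used in the paper):

```python
import numpy as np

def snr_prior_map(img, k=5, eps=1e-6):
    """A rough signal-to-noise-ratio prior map.

    The denoised estimate is a local box-filter average; the noise is
    the residual between the image and that estimate. Bright, clean
    regions get high SNR values; dark, noisy regions get low ones.
    """
    pad = k // 2
    padded = np.pad(img.astype(float), pad, mode="edge")
    denoised = np.empty(img.shape, dtype=float)
    for i in range(img.shape[0]):          # simple box filter
        for j in range(img.shape[1]):
            denoised[i, j] = padded[i:i + k, j:j + k].mean()
    noise = np.abs(img - denoised)
    return denoised / (noise + eps)

m = snr_prior_map(np.ones((8, 8)))
# A perfectly flat patch has near-zero residual, so its SNR is huge.
```

The map can then be fed to the network as a spatially varying weight: high-SNR regions can rely on local attention, while low-SNR (dark, noisy) regions need longer-range context.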
Low-Light Image Enhancement with Illumination-Aware Gamma Correction and Complete Image Modelling Network
This paper presents a novel network structure with illumination-aware gamma
correction and complete image modelling to solve the low-light image
enhancement problem. Low-light environments usually produce large, barely
informative dark areas, so directly learning deep representations from
low-light images is ill-suited to recovering normal illumination. We propose
to integrate the effectiveness of gamma correction with the strong modelling
capacity of deep networks, which enables the correction factor gamma to be
learned in a coarse-to-fine manner by adaptively perceiving the deviated
illumination. Because the exponential operation introduces high computational
complexity, we propose to approximate gamma correction with a Taylor series,
accelerating both training and inference. Since dark areas usually occupy
large regions of low-light images, common local modelling structures (e.g.,
CNNs, SwinIR) are insufficient to recover accurate illumination across the
whole image. We propose a novel Transformer block that models the dependencies
of all pixels across the image via a local-to-global hierarchical attention
mechanism, so that dark areas can be inferred by borrowing information from
distant informative regions in a highly effective manner. Extensive
experiments on several benchmark datasets demonstrate that our approach
outperforms state-of-the-art methods.
Comment: Accepted by ICCV 202
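The Taylor-series idea can be sketched as follows: since x^gamma = exp(gamma * ln x), a truncated expansion of the exponential replaces the costly power operation. The expansion point and order used by the paper are not specified here; this is only an illustration of the approximation.

```python
import math

def gamma_taylor(x, gamma, terms=8):
    """Approximate x ** gamma via a truncated Taylor expansion of
    exp(gamma * ln(x)), avoiding the exponential/power op itself.

    Each loop iteration multiplies in the next term t**n / n! of
    exp(t), where t = gamma * ln(x).
    """
    t = gamma * math.log(x)
    result, term = 1.0, 1.0
    for n in range(1, terms):
        term *= t / n        # t**n / n! built incrementally
        result += term
    return result

# For pixel intensities in (0, 1] and moderate gamma, a handful of
# terms already tracks x ** gamma closely.
```

On hardware where `exp`/`pow` are expensive relative to fused multiply-adds, this trades one transcendental call for a few polynomial terms, which is the speed-up the abstract alludes to.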
Enlighten-Anything: When the Segment Anything Model Meets Low-Light Image Enhancement
Image restoration is a low-level visual task, and most CNN methods are
designed as black boxes, lacking transparency and intrinsic aesthetics. Many
unsupervised approaches ignore the degradation of visible information in
low-light scenes, which seriously affects the aggregation of complementary
information and prevents fusion algorithms from producing satisfactory results
under extreme conditions. In this paper, we propose Enlighten-Anything, which
enhances low-light images and fuses them with the semantic intent of SAM
segmentation to obtain fused images with good visual perception. The
generalization ability of unsupervised learning is greatly improved, and
experiments on the LOL dataset show that our method improves PSNR by 3 dB and
SSIM by 8 over the baseline. The zero-shot capability of SAM provides a
powerful aid for unsupervised low-light enhancement. The source code of
Rethink-Diffusion can be obtained from
https://github.com/zhangbaijin/enlighten-anythin
Joint Correcting and Refinement for Balanced Low-Light Image Enhancement
Low-light image enhancement demands an appropriate balance among brightness,
color, and illumination. Existing methods often focus on one aspect of the
image without maintaining this balance, which causes problems such as color
distortion and overexposure. This seriously affects both human visual
perception and the performance of high-level visual models. In this work, a
novel synergistic structure is proposed that balances brightness, color, and
illumination more effectively. Specifically, the proposed Joint Correcting and
Refinement Network (JCRNet) consists of three stages. Stage 1: we utilize a
basic encoder-decoder and a local supervision mechanism to extract local
information and more comprehensive details for enhancement. Stage 2:
cross-stage feature transmission and spatial feature transformation further
facilitate color correction and feature refinement. Stage 3: we employ a
dynamic illumination adjustment approach that embeds the residuals between
predicted and ground-truth images into the model, adaptively adjusting the
illumination balance. Extensive experiments demonstrate that the proposed
method exhibits comprehensive performance advantages over 21 state-of-the-art
methods on 9 benchmark datasets. Furthermore, an additional experiment
validates the effectiveness of our approach in downstream visual tasks (e.g.,
saliency detection): compared to several enhancement models, the proposed
method effectively improves the segmentation results and quantitative metrics
of saliency detection. The source code will be available at
https://github.com/woshiyll/JCRNet
Extremely Low-light Image Enhancement with Scene Text Restoration
Deep learning-based methods have made impressive progress in enhancing
extremely low-light images: the quality of the reconstructed images has
generally improved. However, we found that most of these methods cannot
sufficiently recover image details, for instance, the text in the scene. In
this paper, a novel image enhancement framework is proposed to precisely
restore scene text as well as the overall quality of the image under extremely
low-light conditions. To this end, we employ a self-regularised attention map,
an edge map, and a novel text detection loss. In addition, we show that
leveraging synthetic low-light images benefits enhancement of genuine ones in
terms of text detection. Quantitative and qualitative experimental results
show that the proposed model outperforms state-of-the-art methods in image
restoration, text detection, and text spotting on the See In the Dark and
ICDAR15 datasets.
Low-Light Hyperspectral Image Enhancement
Due to inadequate energy captured by the hyperspectral camera sensor in poor
illumination conditions, low-light hyperspectral images (HSIs) usually suffer
from low visibility, spectral distortion, and various kinds of noise. A range of HSI
restoration methods have been developed, yet their effectiveness in enhancing
low-light HSIs is constrained. This work focuses on the low-light HSI
enhancement task, which aims to reveal the spatial-spectral information hidden
in darkened areas. To facilitate the development of low-light HSI processing,
we collect a low-light HSI (LHSI) dataset of both indoor and outdoor scenes.
Based on Laplacian pyramid decomposition and reconstruction, we develop an
end-to-end data-driven low-light HSI enhancement (HSIE) approach trained on the
LHSI dataset. With the observation that illumination is related to the
low-frequency component of HSI, while textural details are closely correlated
to the high-frequency component, the proposed HSIE is designed to have two
branches. The illumination enhancement branch is adopted to enlighten the
low-frequency component with reduced resolution. The high-frequency refinement
branch is utilized for refining the high-frequency component via a predicted
mask. In addition, to improve information flow and boost performance, we
introduce an effective channel attention block (CAB) with residual dense
connections, which serves as the basic block of the illumination enhancement
branch. Experimental results on the LHSI dataset demonstrate the effectiveness
and efficiency of HSIE in both quantitative assessment measures and visual
quality. Classification performance on the remote-sensing Indian Pines dataset
shows that downstream tasks benefit from the enhanced HSI.
Datasets and codes are available:
\href{https://github.com/guanguanboy/HSIE}{https://github.com/guanguanboy/HSIE}
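The two-branch low/high-frequency split can be illustrated with a toy Laplacian pyramid. This sketch uses block averaging in place of the usual Gaussian filtering, and the level count is illustrative, not taken from the paper:

```python
import numpy as np

def downsample(x):
    """2x downsample by averaging 2x2 blocks (a crude stand-in for
    the Gaussian blur + subsample step of a real pyramid)."""
    return (x[0::2, 0::2] + x[0::2, 1::2]
            + x[1::2, 0::2] + x[1::2, 1::2]) / 4.0

def upsample(x):
    """2x nearest-neighbour upsample."""
    return np.repeat(np.repeat(x, 2, axis=0), 2, axis=1)

def laplacian_pyramid(img, levels=3):
    """Split an image into a low-frequency base (illumination) and
    per-level high-frequency residuals (texture), mirroring the
    two-branch design described in the abstract."""
    highs = []
    cur = img.astype(float)
    for _ in range(levels):
        low = downsample(cur)
        highs.append(cur - upsample(low))  # detail lost by downsampling
        cur = low
    return cur, highs

def reconstruct(low, highs):
    """Invert the decomposition; with these operators it is lossless."""
    cur = low
    for high in reversed(highs):
        cur = upsample(cur) + high
    return cur

img = np.arange(256, dtype=float).reshape(16, 16)
low, highs = laplacian_pyramid(img)
rec = reconstruct(low, highs)
# rec equals img exactly: the pyramid is a lossless split
```

Because the split is invertible, each branch can enhance its own band (brightening the low-frequency base, refining the high-frequency residuals) and the results recombine without information loss.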