DPFNet: A Dual-branch Dilated Network with Phase-aware Fourier Convolution for Low-light Image Enhancement
Low-light image enhancement is a classical computer vision problem aiming to
recover normal-exposure images from low-light images. However, convolutional
neural networks commonly used in this field are good at sampling low-frequency
local structural features in the spatial domain, which leads to unclear texture
details of the reconstructed images. To alleviate this problem, we propose a
novel module based on Fourier coefficients, which recovers high-quality
texture details under the semantic constraint of the frequency-domain phase
and complements the spatial domain. In addition, we design a simple and efficient
module for the image spatial domain using dilated convolutions with different
receptive fields to alleviate the loss of detail caused by frequent
downsampling. We integrate the above parts into an end-to-end dual branch
network and design a novel loss committee and an adaptive fusion module to
guide the network to flexibly combine spatial and frequency domain features to
generate more pleasing visual effects. Finally, we evaluate the proposed
network on public benchmarks. Extensive experimental results show that our
method outperforms many existing state-of-the-art approaches, demonstrating
strong performance and potential.
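The phase-aware Fourier idea rests on the standard amplitude/phase decomposition of the 2-D FFT, in which the phase spectrum carries most of an image's structural (semantic) layout. The sketch below is not the authors' DPFNet module, only the lossless decomposition it builds on, shown with NumPy:

```python
import numpy as np

def amplitude_phase_split(img):
    # 2-D FFT of a single-channel image, split into
    # amplitude (magnitude) and phase spectra.
    F = np.fft.fft2(img)
    return np.abs(F), np.angle(F)

def recombine(amplitude, phase):
    # Rebuild the spatial image from amplitude and phase;
    # the round trip is lossless up to floating-point error.
    F = amplitude * np.exp(1j * phase)
    return np.real(np.fft.ifft2(F))

rng = np.random.default_rng(0)
img = rng.random((8, 8))
amp, ph = amplitude_phase_split(img)
rec = recombine(amp, ph)
assert np.allclose(rec, img)
```

A frequency-domain branch can then process the amplitude (where illumination and texture energy live) while constraining or preserving the phase, which is the intuition behind keeping "semantics in the frequency phase."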
DeepFuse: A Deep Unsupervised Approach for Exposure Fusion with Extreme Exposure Image Pairs
We present a novel deep learning architecture for fusing static
multi-exposure images. Current multi-exposure fusion (MEF) approaches use
hand-crafted features to fuse the input sequence. However, these weak
hand-crafted representations are not robust to varying input conditions. Moreover, they
perform poorly for extreme exposure image pairs. Thus, it is highly desirable
to have a method that is robust to varying input conditions and capable of
handling extreme exposure without artifacts. Deep representations are known to
be robust to input conditions and have shown phenomenal performance in a
supervised setting. However, the stumbling block in using deep learning for MEF
was the lack of sufficient training data and an oracle to provide the
ground-truth for supervision. To address these issues, we have gathered a
large dataset of multi-exposure image stacks for training, and to circumvent
the need for ground-truth images, we propose an unsupervised deep learning
framework for MEF that uses a no-reference quality metric as the loss function. The
proposed approach uses a novel CNN architecture trained to learn the fusion
operation without a reference ground-truth image. The model fuses a set of common
low level features extracted from each image to generate artifact-free
perceptually pleasing results. We perform extensive quantitative and
qualitative evaluation and show that the proposed technique outperforms
existing state-of-the-art approaches for a variety of natural images.
Comment: ICCV 201
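For contrast with the learned approach, the hand-crafted fusion rules the abstract criticizes can be sketched in a few lines: each exposure gets a per-pixel weight favoring well-exposed (mid-intensity) pixels, and the pair is blended by normalized weighted averaging. This is a toy baseline in NumPy, not DeepFuse's CNN or its no-reference loss:

```python
import numpy as np

def well_exposedness(img, sigma=0.2):
    # Weight pixels near mid-intensity (0.5) higher; pixels that
    # are crushed (near 0) or blown out (near 1) get low weight.
    return np.exp(-((img - 0.5) ** 2) / (2.0 * sigma ** 2))

def fuse_pair(under, over):
    # Normalized weighted average of an under/over exposure pair.
    # Weights are strictly positive, so no division-by-zero guard
    # is needed.
    w_u = well_exposedness(under)
    w_o = well_exposedness(over)
    return (w_u * under + w_o * over) / (w_u + w_o)

rng = np.random.default_rng(1)
under = rng.random((4, 4)) * 0.3        # dark exposure, values in [0, 0.3)
over = 0.7 + rng.random((4, 4)) * 0.3   # bright exposure, values in [0.7, 1.0)
fused = fuse_pair(under, over)
```

Because the result is a per-pixel convex combination, the fused values always lie between the two inputs; the fixed weighting rule is exactly the kind of static heuristic that fails on extreme exposure pairs, which motivates learning the fusion operation instead.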