Convolutional Neural Networks Considering Local and Global Features for Image Enhancement
In this paper, we propose a novel convolutional neural network (CNN)
architecture considering both local and global features for image enhancement.
Most conventional image enhancement methods, including Retinex-based ones,
cannot restore pixel values lost to clipping and quantization. CNN-based
methods have recently been proposed to solve this problem, but their
performance is still limited because their network architectures do not
handle global features.
To handle both local and global features, the proposed architecture consists of
three networks: a local encoder, a global encoder, and a decoder. In addition,
high dynamic range (HDR) images are used for generating training data for our
networks. The use of HDR images makes it possible to train CNNs with
better-quality images than images directly captured with cameras. Experimental
results show that the proposed method can produce higher-quality images than
conventional image enhancement methods including CNN-based methods, in terms of
various objective quality metrics: TMQI, entropy, NIQE, and BRISQUE.
Comment: To appear in Proc. ICIP2019. arXiv admin note: text overlap with arXiv:1901.0568
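The three-network design described above (a local encoder, a global encoder, and a decoder that fuses their outputs) can be illustrated with a minimal sketch. This is not the authors' implementation; the box filter, global average pooling, and weighted fusion below are simple stand-ins for the learned networks, and all function names are hypothetical.

```python
import numpy as np

def local_encoder(img, k=3):
    # Stand-in for the local encoder: a fixed 3x3 box filter,
    # i.e. each output pixel depends only on a small neighborhood.
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    h, w = img.shape
    out = np.zeros((h, w), dtype=float)
    for i in range(h):
        for j in range(w):
            out[i, j] = padded[i:i + k, j:j + k].mean()
    return out

def global_encoder(img):
    # Stand-in for the global encoder: global average pooling,
    # broadcast back to the full spatial resolution so every pixel
    # sees a scene-level statistic.
    return np.full(img.shape, img.mean(), dtype=float)

def decoder(local_feat, global_feat, alpha=0.5):
    # Stand-in for the decoder: fuse local and global feature maps.
    return alpha * local_feat + (1.0 - alpha) * global_feat

img = np.linspace(0.0, 1.0, 16).reshape(4, 4)
enhanced = decoder(local_encoder(img), global_encoder(img))
```

The point of the sketch is the data flow: local details and a global summary are computed in parallel and combined by the decoder, which is what lets the architecture correct scene-level brightness while preserving texture.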
Scene Segmentation-Based Luminance Adjustment for Multi-Exposure Image Fusion
We propose a novel method for adjusting luminance for multi-exposure image
fusion, together with two scene segmentation approaches based on
luminance distribution. Multi-exposure image fusion is a
method for producing images that are expected to be more informative and
perceptually appealing than any of the input ones, by directly fusing photos
taken with different exposures. However, existing fusion methods often produce
unclear fused images when input images do not have a sufficient number of
different exposure levels. In this paper, we point out that adjusting the
luminance of input images makes it possible to improve the quality of the final
fused images. This insight is the basis of the proposed method. The proposed
method enables us to produce high-quality images, even when undesirable inputs
are given. Visual comparison results show that the proposed method can produce
images that clearly represent a whole scene. In addition, multi-exposure image
fusion with the proposed method outperforms state-of-the-art fusion methods in
terms of MEF-SSIM, discrete entropy, tone-mapped image quality index, and
statistical naturalness.
Comment: Will be published in IEEE Transactions on Image Processing
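The core insight of the abstract (adjust the luminance of the inputs before fusing them) can be sketched as follows. This is only an illustration under simplified assumptions: the scaling rule, the target means, and the well-exposedness weights are hypothetical stand-ins, not the paper's actual adjustment or fusion algorithm.

```python
import numpy as np

def adjust_luminance(img, target_mean):
    # Hypothetical adjustment: scale the image so its mean luminance
    # reaches target_mean, then clip back to the valid [0, 1] range.
    scale = target_mean / max(img.mean(), 1e-8)
    return np.clip(img * scale, 0.0, 1.0)

def fuse(images, sigma=0.2):
    # Simple weighted fusion: pixels near mid-gray (0.5) are treated as
    # well exposed and contribute most to the fused result.
    weights = [np.exp(-((im - 0.5) ** 2) / (2.0 * sigma ** 2)) for im in images]
    total = np.sum(weights, axis=0) + 1e-8
    return np.sum([w * im for w, im in zip(weights, images)], axis=0) / total

under = np.full((4, 4), 0.1)  # underexposed input
over = np.full((4, 4), 0.9)   # overexposed input
# Spread the adjusted exposures apart before fusing, as the abstract suggests.
adjusted = [adjust_luminance(under, 0.3), adjust_luminance(over, 0.7)]
fused = fuse(adjusted)
```

Without the adjustment step, two inputs with nearly identical exposure levels would give the fusion weights little to discriminate on; rescaling the inputs first restores a useful spread of exposures, which is the effect the paper exploits.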