GlowGAN: Unsupervised Learning of HDR Images from LDR Images in the Wild
Most in-the-wild images are stored in Low Dynamic Range (LDR) form, serving
as a partial observation of the High Dynamic Range (HDR) visual world. Despite
limited dynamic range, these LDR images are often captured with different
exposures, implicitly containing information about the underlying HDR image
distribution. Inspired by this intuition, in this work we present, to the best
of our knowledge, the first method for learning a generative model of HDR
images from in-the-wild LDR image collections in a fully unsupervised manner.
The key idea is to train a generative adversarial network (GAN) to generate HDR
images which, when projected to LDR under various exposures, are
indistinguishable from real LDR images. The projection from HDR to LDR is
achieved via a camera model that captures the stochasticity in exposure and
camera response function. Experiments show that our method GlowGAN can
synthesize photorealistic HDR images in many challenging cases such as
landscapes, lightning, or windows, where previous supervised generative models
produce overexposed images. We further demonstrate the new application of
unsupervised inverse tone mapping (ITM) enabled by GlowGAN. Our ITM method does
not need HDR images or paired multi-exposure images for training, yet it
reconstructs more plausible information for overexposed regions than
state-of-the-art supervised learning models trained on such data.
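The HDR-to-LDR projection described above (a randomly sampled exposure followed by a camera response function and dynamic-range clipping) can be sketched as follows. This is an illustrative stand-in, not the paper's exact camera model: the gamma CRF, exposure range, and function names are assumptions.

```python
import numpy as np

def project_hdr_to_ldr(hdr, log_exposure_range=(-3.0, 3.0), gamma=2.2, rng=None):
    """Project a linear HDR image to LDR under a random exposure.

    Samples an exposure in stops, scales the scene radiance, clips to the
    displayable range, and applies a simple gamma camera response function.
    """
    rng = np.random.default_rng() if rng is None else rng
    exposure = 2.0 ** rng.uniform(*log_exposure_range)  # random exposure in stops
    exposed = hdr * exposure                            # scale scene radiance
    ldr = np.clip(exposed, 0.0, 1.0) ** (1.0 / gamma)   # clip, then gamma CRF
    return ldr
```

In the GAN setup, the discriminator only ever sees such projected LDR images, so the generator is pushed to produce HDR radiance maps that remain plausible across many exposures.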
Learning a Practical SDR-to-HDRTV Up-conversion using New Dataset and Degradation Models
In the media industry, demand for SDR-to-HDRTV up-conversion arises when users
own HDR-WCG (high dynamic range, wide color gamut) TVs while most
off-the-shelf footage is still in SDR (standard dynamic range). The research
community has begun tackling this low-level vision task with learning-based
approaches. Yet, when applied to real SDR, current methods tend to produce dim
and desaturated results, yielding almost no improvement in viewing experience.
Unlike other network-oriented methods, we attribute this deficiency to the
training set (the HDR-SDR pairs). Consequently, we propose a new HDRTV dataset
(dubbed HDRTV4K) and new HDR-to-SDR degradation models, which are used to train
a luminance-segmented network (LSN) consisting of a global mapping trunk and
two Transformer branches operating on the bright and dark luminance ranges. We
also update the assessment criteria with tailored metrics and a subjective
experiment. Finally, ablation studies are conducted to prove the effectiveness
of each component. Our work is available at: https://github.com/AndreGuo/HDRTVDM.
Comment: Accepted by CVPR202
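To illustrate what an HDR-to-SDR degradation model does when synthesizing training pairs, here is a minimal sketch assuming absolute luminance in nits and a simple clip-and-gamma re-encoding; the HDRTV4K degradation models are considerably more elaborate, so treat the operator and parameter choices below as assumptions.

```python
import numpy as np

def degrade_hdr_to_sdr(hdr_nits, sdr_peak_nits=100.0, gamma=2.4):
    """Toy HDR-to-SDR degradation for generating training pairs.

    Normalizes absolute luminance by the SDR peak, clips highlights that
    exceed it, and re-encodes with a display gamma.
    """
    sdr_linear = np.clip(hdr_nits / sdr_peak_nits, 0.0, 1.0)  # clip above SDR peak
    return sdr_linear ** (1.0 / gamma)                        # gamma-encode
```

The choice of degradation matters: if it darkens or desaturates too aggressively, a network trained to invert it will produce the dim, desaturated results the abstract criticizes.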
End-to-End Differentiable Learning to HDR Image Synthesis for Multi-exposure Images
Recently, high dynamic range (HDR) image reconstruction based on a
multiple-exposure stack predicted from a given single exposure has utilized
deep learning frameworks to generate high-quality HDR images. These
conventional networks focus on the exposure transfer task of reconstructing
the multi-exposure stack; consequently, they often fail to fuse the stack into
a perceptually pleasing HDR image because inversion artifacts occur. We tackle
this problem in stack-reconstruction-based methods by proposing a novel
framework with a fully differentiable high dynamic range imaging (HDRI)
process. By explicitly using a loss that compares the network's output with
the ground-truth HDR image, our framework enables the neural network that
generates the multiple-exposure stack for HDRI to train stably. In other
words, our differentiable HDR synthesis layer helps the deep neural network
learn to create multi-exposure stacks while reflecting the precise
correlations between multi-exposure images in the HDRI process. In addition,
our network uses image decomposition and a recursive process to facilitate
the exposure transfer task and to respond adaptively to the recursion
frequency. Experimental results show that the proposed network outperforms
the state of the art quantitatively and qualitatively on both the exposure
transfer task and the whole HDRI process.
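A differentiable HDR synthesis step of the kind described above can be sketched with a Debevec-style weighted merge: every operation is smooth or piecewise-linear, so gradients flow from an HDR-domain loss back into the network that produced the stack. The hat weighting and gamma CRF here are assumptions for illustration, not the paper's exact layer.

```python
import numpy as np

def merge_exposure_stack(ldr_stack, exposure_times, gamma=2.2, eps=1e-6):
    """Differentiably merge a multi-exposure LDR stack into linear HDR.

    Each LDR frame is CRF-inverted and divided by its exposure time to
    recover radiance, then frames are averaged with a hat weight that
    trusts well-exposed (mid-gray) pixels most.
    """
    hdr_num = 0.0
    hdr_den = 0.0
    for ldr, t in zip(ldr_stack, exposure_times):
        w = 1.0 - np.abs(2.0 * ldr - 1.0)   # hat weight, peaks at mid-gray
        radiance = (ldr ** gamma) / t       # invert CRF and exposure
        hdr_num = hdr_num + w * radiance
        hdr_den = hdr_den + w
    return hdr_num / (hdr_den + eps)
```

With consistent inputs, the merge recovers the underlying radiance exactly (up to `eps`), which is what lets an HDR-domain loss supervise the stack generator indirectly.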
High Dynamic Range Image Generation via Feature Decomposition of Multi-Exposure Inputs
Thesis (Master's) -- Graduate School of Seoul National University: Interdisciplinary Program in Artificial Intelligence, College of Engineering, August 2022. Advisor: Nam Ik Cho.
Multi-exposure high dynamic range (HDR) imaging aims to generate an HDR image from multiple differently exposed low dynamic range (LDR) images. It is a challenging task due to two major problems. One is misalignment among the input LDR images, which can cause ghosting artifacts in the resulting HDR image; the other is missing information in the LDR images due to under-/over-exposed regions. Although previous methods have tried to align the input LDR images with traditional methods (e.g., homography, optical flow), they still suffer from undesired artifacts in the resulting HDR image due to estimation errors that occur in the alignment step.
In this dissertation, a disentangled feature-guided HDR network (DFGNet) is proposed to alleviate the above problems. Specifically, exposure features and spatial features are first extracted from the input LDR images and disentangled from each other. These features are then processed by the proposed DFG modules, which produce a high-quality HDR image. The proposed DFGNet shows outstanding performance compared to previous methods, achieving a PSNR-ℓ of 41.89 dB and a PSNR-μ of 44.19 dB.
1 Introduction
2 Related Works
2.1 Single-frame HDR imaging
2.2 Multi-frame HDR imaging with dynamic scenes
3 Proposed Method
3.1 Disentangle Network for Feature Extraction
3.2 Disentangle Features Guided Network
4 Experimental Results
4.1 Implementation and Details
4.2 Comparison with State-of-the-art Methods
5 Ablation Study
5.1 Impact of Proposed Modules
6 Conclusion
Abstract (In Korean)
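For context on the reported metrics: PSNR-μ is PSNR computed after μ-law tonemapping of the HDR images, while PSNR-ℓ is PSNR in the linear domain. A minimal sketch, assuming the μ = 5000 value common in this literature and HDR values normalized to [0, 1]:

```python
import numpy as np

def mu_law_tonemap(x, mu=5000.0):
    """mu-law tonemapping applied before computing PSNR-mu on HDR images."""
    return np.log(1.0 + mu * x) / np.log(1.0 + mu)

def psnr(pred, target, peak=1.0):
    """Linear-domain PSNR in dB (PSNR-l when inputs are linear HDR)."""
    mse = np.mean((pred - target) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

def psnr_mu(pred_hdr, target_hdr, mu=5000.0):
    """PSNR computed between mu-law tonemapped HDR images."""
    return psnr(mu_law_tonemap(pred_hdr, mu), mu_law_tonemap(target_hdr, mu))
```

The μ-law compresses highlights, so PSNR-μ emphasizes errors in the perceptually important mid and dark ranges rather than in extreme highlights.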
Redistributing the Precision and Content in 3D-LUT-based Inverse Tone-mapping for HDR/WCG Display
Inverse tone-mapping (ITM) converts SDR (standard dynamic range) footage to
HDR/WCG (high dynamic range / wide color gamut) for media production. It is
needed not only when remastering legacy SDR footage at the front-end content
provider, but also when adapting on-the-air SDR services on user-end HDR
displays. The latter requires more efficiency, so the pre-calculated LUT
(look-up table) has become a popular solution. Yet a conventional fixed LUT
lacks adaptability, so we follow the research community and combine it with
AI. Meanwhile, higher-bit-depth HDR/WCG requires a larger LUT than SDR, so we
draw on traditional ITM for an efficiency-performance trade-off: we use 3
smaller LUTs, each with a non-uniform packing (precision) that is denser in
the dark, middle, and bright luma ranges, respectively. In this case, each
LUT's result has less error only in its own range, so we use a contribution
map to combine their best parts into the final result. With the guidance of
this map, the elements (content) of the 3 LUTs are also redistributed during
training. We conduct ablation studies to verify the method's effectiveness,
and subjective and objective experiments to show its practicability. Code is
available at: https://github.com/AndreGuo/ITMLUT.
Comment: Accepted in CVMP2023 (the 20th ACM SIGGRAPH European Conference on
Visual Media Production).
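The contribution-map idea can be illustrated with 1D piecewise-linear LUTs standing in for the learned 3D LUTs; the knot layouts, blending scheme, and names below are assumptions for illustration only.

```python
import numpy as np

def apply_lut(x, knots, values):
    """Piecewise-linear 1D LUT lookup (stand-in for a learned 3D LUT)."""
    return np.interp(x, knots, values)

def combine_luts(x, luts, contribution):
    """Blend the outputs of several LUTs with a per-pixel contribution map.

    `luts` is a list of (knots, values) pairs whose knots would be denser in
    the dark, middle, and bright ranges respectively; `contribution` has
    shape (len(luts), *x.shape) and sums to 1 over its first axis, so each
    pixel takes most of its value from the LUT that is accurate there.
    """
    outs = np.stack([apply_lut(x, k, v) for k, v in luts])
    return np.sum(contribution * outs, axis=0)
```

Because the contribution map gates the gradients during training, each LUT only receives learning signal from the luma range it is responsible for, which is what redistributes its content.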
Improving Dynamic HDR Imaging with Fusion Transformer
Reconstructing a High Dynamic Range (HDR) image from several Low Dynamic Range (LDR) images with different exposures is a challenging task, especially in the presence of camera and object motion. Though existing models using convolutional neural networks (CNNs) have made great progress, challenges such as ghosting artifacts remain. Transformers, originating from the field of natural language processing, have shown success in computer vision tasks due to their ability to capture a large receptive field even within a single layer. In this paper, we propose a transformer model for HDR imaging. Our pipeline includes three steps: alignment, fusion, and reconstruction. The key component is the HDR transformer module. Through experiments and ablation studies, we demonstrate that our model outperforms the state of the art by large margins on several popular public datasets.
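The large receptive field that motivates the transformer here comes from attention: every position can weigh all candidate features directly, rather than only a local neighbourhood. A toy scaled dot-product fusion over reference-frame features (not the paper's HDR transformer module) might look like:

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax along the given axis."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention_fuse(query, refs):
    """Fuse reference features into the query frame with attention.

    query: (N, d) features of the reference-exposure frame.
    refs:  (F, N, d) features from F candidate frames.
    Each of the N positions attends over its F candidates.
    """
    d = query.shape[-1]
    scores = np.einsum('nd,fnd->nf', query, refs) / np.sqrt(d)  # similarity
    w = softmax(scores, axis=-1)                                # per-position weights
    return np.einsum('nf,fnd->nd', w, refs)                     # weighted fusion
```

Misaligned or saturated candidates receive low similarity scores and are down-weighted, which is one intuition for why attention-based fusion suppresses ghosting.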
DeepHS-HDRVideo: Deep High Speed High Dynamic Range Video Reconstruction
Due to hardware constraints, standard off-the-shelf digital cameras suffer
from low dynamic range (LDR) and low frames-per-second (FPS) outputs. Previous
works in high dynamic range (HDR) video reconstruction use a sequence of
alternating-exposure LDR frames as input and align the neighbouring frames
using optical-flow-based networks. However, these methods often produce
motion artifacts in challenging situations. This is because the
alternating-exposure frames must be exposure-matched before alignment with
optical flow, so over-saturation and noise in the LDR frames result in
inaccurate alignment. To this end, we propose to align the input LDR frames
using a pre-trained video frame interpolation network. This results in better
alignment of the LDR frames, since we circumvent the error-prone
exposure-matching step and directly generate the intermediate missing frames
from same-exposure inputs. Furthermore, it allows us to generate high-FPS HDR
videos by recursively interpolating the intermediate frames. Through this
work, we propose the use of video frame interpolation for HDR video
reconstruction and present the first method to generate high-FPS HDR videos.
Experimental results demonstrate the efficacy of the proposed framework
against optical-flow-based alignment methods, with an absolute improvement of
2.4 dB in PSNR on standard HDR video datasets [1], [2], and further benchmark
our method for high-FPS HDR video generation.
Comment: ICPR 202
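The recursive-interpolation idea for raising FPS can be sketched independently of any particular interpolation network; `interpolate` below is a stand-in callable for the pre-trained network, and the doubling-per-level scheme is an assumption for illustration.

```python
def upsample_fps(frames, interpolate, levels=1):
    """Recursively double the frame rate of a frame sequence.

    Each level inserts interpolate(a, b) between every neighbouring pair
    (a, b), so `levels` rounds multiply the frame count between the original
    endpoints by 2**levels.
    """
    for _ in range(levels):
        out = []
        for a, b in zip(frames, frames[1:]):
            out.extend([a, interpolate(a, b)])
        out.append(frames[-1])  # keep the final frame of the sequence
        frames = out
    return frames
```

For example, with scalar "frames" and midpoint interpolation, two levels turn `[0.0, 1.0]` into five evenly spaced frames; in the real pipeline the same recursion runs on images with a learned interpolation network.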