200 research outputs found

    GlowGAN: Unsupervised Learning of HDR Images from LDR Images in the Wild

    Most in-the-wild images are stored in Low Dynamic Range (LDR) form, serving as a partial observation of the High Dynamic Range (HDR) visual world. Despite their limited dynamic range, these LDR images are often captured with different exposures, implicitly containing information about the underlying HDR image distribution. Inspired by this intuition, in this work we present, to the best of our knowledge, the first method for learning a generative model of HDR images from in-the-wild LDR image collections in a fully unsupervised manner. The key idea is to train a generative adversarial network (GAN) to generate HDR images which, when projected to LDR under various exposures, are indistinguishable from real LDR images. The projection from HDR to LDR is achieved via a camera model that captures the stochasticity in exposure and camera response function. Experiments show that our method, GlowGAN, can synthesize photorealistic HDR images in many challenging cases such as landscapes, lightning, or windows, where previous supervised generative models produce overexposed images. We further demonstrate unsupervised inverse tone mapping (ITM), a new application enabled by GlowGAN. Our ITM method does not need HDR images or paired multi-exposure images for training, yet it reconstructs more plausible information for overexposed regions than state-of-the-art supervised learning models trained on such data.
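The projection step described above (HDR to LDR through a stochastic camera model) can be sketched as follows; the exposure range, the gamma-style response curve, and the 8-bit quantization are illustrative assumptions, not GlowGAN's actual camera model:

```python
import numpy as np

def project_hdr_to_ldr(hdr, rng=None):
    """Project a linear HDR image to an LDR observation with a stochastic
    camera model: random exposure, a simple gamma-style camera response
    function, clipping, and 8-bit quantization (illustrative choices)."""
    if rng is None:
        rng = np.random.default_rng(0)
    exposure = 2.0 ** rng.uniform(-3.0, 3.0)  # exposure sampled in stops
    gamma = rng.uniform(1.8, 2.4)             # random response curvature
    exposed = np.clip(hdr * exposure, 0.0, 1.0)
    ldr = exposed ** (1.0 / gamma)            # gamma-style CRF
    return np.round(ldr * 255.0) / 255.0      # 8-bit quantization

# a GAN discriminator would compare such projections against real LDR photos
hdr = np.random.default_rng(1).uniform(0.0, 10.0, size=(8, 8, 3))
ldr = project_hdr_to_ldr(hdr)
```

Sampling a fresh exposure per projection is what lets the discriminator see the same scene statistics at many brightness levels, which is the source of the HDR supervision signal.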


    Towards Efficient SDRTV-to-HDRTV by Learning from Image Formation

    Modern displays are capable of rendering video content with high dynamic range (HDR) and wide color gamut (WCG). However, the majority of available resources are still in standard dynamic range (SDR), so there is significant value in transforming existing SDR content into the HDRTV standard. In this paper, we define and analyze the SDRTV-to-HDRTV task by modeling the formation of SDRTV/HDRTV content. Our analysis and observations indicate that a naive end-to-end supervised training pipeline suffers from severe gamut transition errors. To address this issue, we propose a novel three-step solution pipeline called HDRTVNet++, which includes adaptive global color mapping, local enhancement, and highlight refinement. The adaptive global color mapping step uses global statistics as guidance to perform image-adaptive color mapping. A local enhancement network is then deployed to enhance local details. Finally, we combine the two sub-networks above as a generator and achieve highlight consistency through GAN-based joint training. Our method is primarily designed for ultra-high-definition TV content and is therefore effective and lightweight for processing 4K-resolution images. We also construct a dataset named HDRTV1K from HDR videos in the HDR10 standard; it contains 1235 training images and 117 testing images, all in 4K resolution. In addition, we select five metrics to evaluate the results of SDRTV-to-HDRTV algorithms. Our final results demonstrate state-of-the-art performance both quantitatively and visually. The code, model and dataset are available at https://github.com/xiaom233/HDRTVNet-plus. Comment: Extended version of HDRTVNet.
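The first stage described above, a color mapping whose parameters are chosen from global statistics of the input, can be sketched like this. The 3x3 matrix-plus-bias parameterization and the stand-in `weight_net` are assumptions for illustration, not HDRTVNet++'s actual architecture:

```python
import numpy as np

def adaptive_global_color_mapping(img, weight_net):
    """Apply one color transform to every pixel, with the transform's
    parameters predicted from global statistics of the input image."""
    stats = img.reshape(-1, 3).mean(axis=0)  # global per-channel means
    params = weight_net(stats)               # a learned net in the real pipeline
    M, b = params[:9].reshape(3, 3), params[9:]
    return img @ M.T + b                     # spatially uniform 3x3 mapping

# stand-in "network": returns the identity transform for any statistics
identity_net = lambda stats: np.concatenate([np.eye(3).ravel(), np.zeros(3)])
img = np.random.default_rng(0).random((4, 4, 3))
out = adaptive_global_color_mapping(img, identity_net)
```

Because the transform is the same at every pixel, this stage is cheap even at 4K; the per-pixel detail work is left to the later local-enhancement stage.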

    JSI-GAN: GAN-Based Joint Super-Resolution and Inverse Tone-Mapping with Pixel-Wise Task-Specific Filters for UHD HDR Video

    Joint learning of super-resolution (SR) and inverse tone-mapping (ITM) has been explored recently to convert legacy low-resolution (LR) standard dynamic range (SDR) videos to high-resolution (HR) high dynamic range (HDR) videos for the growing needs of UHD HDR TV/broadcasting applications. However, previous CNN-based methods directly reconstruct the HR HDR frames from LR SDR frames and are only trained with a simple L2 loss. In this paper, we take a divide-and-conquer approach in designing a novel GAN-based joint SR-ITM network, called JSI-GAN, which is composed of three task-specific subnets: an image reconstruction subnet, a detail restoration (DR) subnet, and a local contrast enhancement (LCE) subnet. We delicately design these subnets so that each is trained for its intended purpose: the DR subnet learns a pair of pixel-wise 1D separable filters for detail restoration, and the LCE subnet learns a pixel-wise 2D local filter for contrast enhancement. Moreover, to train JSI-GAN effectively, we propose a novel detail GAN loss alongside the conventional GAN loss, which helps enhance both local details and contrast to reconstruct high-quality HR HDR results. When all subnets are jointly trained, the predicted HR HDR results achieve at least a 0.41 dB gain in PSNR over those generated by previous methods. Comment: The first two authors contributed equally to this work. Accepted at AAAI 2020. (Camera-ready version)
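The pixel-wise separable filtering idea can be sketched as follows: each pixel owns a k-tap vertical filter and a k-tap horizontal filter, which together act like a per-pixel k x k kernel at far lower cost. The shapes, padding, and single-channel setting are illustrative assumptions, not JSI-GAN's exact formulation:

```python
import numpy as np

def apply_pixelwise_separable_filters(img, fv, fh):
    """Filter a 2D image with per-pixel separable 1D filters: fv[y, x] holds
    the k vertical taps and fh[y, x] the k horizontal taps for pixel (y, x)."""
    H, W = img.shape
    k = fv.shape[-1]
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.zeros_like(img)
    for y in range(H):
        for x in range(W):
            patch = padded[y:y + k, x:x + k]   # k x k neighbourhood
            # vertical taps combine the rows, horizontal taps the columns
            out[y, x] = fh[y, x] @ (fv[y, x] @ patch)
    return out

# sanity demo: delta filters at the center tap reproduce the input
H, W, k = 4, 5, 3
img = np.random.default_rng(0).random((H, W))
fv = np.zeros((H, W, k)); fv[..., k // 2] = 1.0
fh = np.zeros((H, W, k)); fh[..., k // 2] = 1.0
out = apply_pixelwise_separable_filters(img, fv, fh)
```

The separable form needs 2k values per pixel instead of k*k, which is why a subnet can afford to predict a distinct filter for every output pixel.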

    High Dynamic Range Image Generation via Feature Disentanglement of Multi-Exposure Inputs

    ํ•™์œ„๋…ผ๋ฌธ(์„์‚ฌ) -- ์„œ์šธ๋Œ€ํ•™๊ต๋Œ€ํ•™์› : ๊ณต๊ณผ๋Œ€ํ•™ ํ˜‘๋™๊ณผ์ • ์ธ๊ณต์ง€๋Šฅ์ „๊ณต, 2022. 8. ์กฐ๋‚จ์ต.Multi-exposure high dynamic range (HDR) imaging aims to generate an HDR image from multiple differently exposed low dynamic range (LDR) images. Multi-exposure HDR imaging is a challenging task due to two major problems. One is misalignments among the input LDR images, which can cause ghosting artifacts on result HDR, and the other is missing information on LDR images due to under-/over-exposed region. Although previous methods tried to align input LDR images with traditional methods(e.g., homography, optical flow), they still suffer undesired artifacts on the result HDR image due to estimation errors that occurred in aligning step. In this dissertation, disentangled feature-guided HDR network (DFGNet) is proposed to alleviate the above-stated problems. Specifically, exposure features and spatial features are first extracted from input LDR images, and they are disentangled from each other. Then, these features are processed through the proposed DFG modules, which produce a high-quality HDR image. The proposed DFGNet shows outstanding performance compared to previous methods, achieving the PSNR-โ„“ of 41.89dB and the PSNR-ฮผ of 44.19dB.๋‹ค์ค‘ ๋…ธ์ถœ(Multiple-exposure) ํ•˜์ด ๋‹ค์ด๋‚˜๋ฏน ๋ ˆ์ธ์ง€(High Dynamic Range, HDR) ์ด๋ฏธ์ง•์€ ๊ฐ๊ฐ ๋‹ค๋ฅธ ๋…ธ์ถœ ์ •๋„๋กœ ์ดฌ์˜๋œ ๋‹ค์ˆ˜์˜ ๋กœ์šฐ ๋‹ค์ด๋‚˜๋ฏน ๋ ˆ์ธ์ง€(Low Dynamic Range, LDR) ์ด๋ฏธ์ง€๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ํ•˜๋‚˜์˜ HDR ์ด๋ฏธ์ง€๋ฅผ ์ƒ์„ฑํ•˜๋Š” ๊ฒƒ์„ ๋ชฉํ‘œ๋กœ ํ•œ๋‹ค. ๋‹ค์ค‘ ๋…ธ์ถœ HDR ์ด๋ฏธ์ง•์€ ๋‘ ๊ฐ€์ง€ ์ฃผ์š” ๋ฌธ์ œ์  ๋•Œ๋ฌธ์— ์–ด๋ ค์›€์ด ์žˆ๋Š”๋ฐ, ํ•˜๋‚˜๋Š” ์ž…๋ ฅ LDR ์ด๋ฏธ์ง€๋“ค์ด ์ •๋ ฌ๋˜์ง€ ์•Š์•„ ๊ฒฐ๊ณผ HDR ์ด๋ฏธ์ง€์—์„œ ๊ณ ์ŠคํŠธ ์•„ํ‹ฐํŒฉํŠธ(Ghosting Artifact)๊ฐ€ ๋ฐœ์ƒํ•  ์ˆ˜ ์žˆ๋‹ค๋Š” ์ ๊ณผ, ๋˜ ๋‹ค๋ฅธ ํ•˜๋‚˜๋Š” LDR ์ด๋ฏธ์ง€๋“ค์˜ ๊ณผ์†Œ๋…ธ์ถœ(Under-exposure) ๋ฐ ๊ณผ๋‹ค๋…ธ์ถœ(Over-exposure) ๋œ ์˜์—ญ์—์„œ ์ •๋ณด ์†์‹ค์ด ๋ฐœ์ƒํ•œ๋‹ค๋Š” ์ ์ด๋‹ค. 
๊ณผ๊ฑฐ์˜ ๋ฐฉ๋ฒ•๋“ค์ด ๊ณ ์ „์ ์ธ ์ด๋ฏธ์ง€ ์ •๋ ฌ ๋ฐฉ๋ฒ•๋“ค(e.g., homography, optical flow)์„ ์‚ฌ์šฉํ•˜์—ฌ ์ž…๋ ฅ LDR ์ด๋ฏธ์ง€๋“ค์„ ์ „์ฒ˜๋ฆฌ ๊ณผ์ •์—์„œ ์ •๋ ฌํ•˜ ์—ฌ ๋ณ‘ํ•ฉํ•˜๋Š” ์‹œ๋„๋ฅผ ํ–ˆ์ง€๋งŒ, ์ด ๊ณผ์ •์—์„œ ๋ฐœ์ƒํ•˜๋Š” ์ถ”์ • ์˜ค๋ฅ˜๋กœ ์ธํ•ด ์ดํ›„ ๋‹จ๊ณ„์— ์•…์˜ํ•ญ์„ ๋ฏธ์นจ์œผ๋กœ์จ ๋ฐœ์ƒํ•˜๋Š” ์—ฌ๋Ÿฌ๊ฐ€์ง€ ๋ถ€์ ์ ˆํ•œ ์•„ํ‹ฐํŒฉํŠธ๋“ค์ด ๊ฒฐ๊ณผ HDR ์ด๋ฏธ์ง€์—์„œ ๋‚˜ํƒ€๋‚˜๊ณ  ์žˆ๋‹ค. ๋ณธ ์‹ฌ์‚ฌ์—์„œ๋Š” ํ”ผ์ณ ๋ถ„ํ•ด๋ฅผ ์‘์šฉํ•œ HDR ๋„คํŠธ์›Œํฌ๋ฅผ ์ œ์•ˆํ•˜์—ฌ, ์–ธ๊ธ‰๋œ ๋ฌธ์ œ๋“ค์„ ๊ฒฝ๊ฐํ•˜๊ณ ์ž ํ•œ๋‹ค. ๊ตฌ์ฒด์ ์œผ๋กœ, ๋จผ์ € LDR ์ด๋ฏธ์ง€๋“ค์„ ๋…ธ์ถœ ํ”ผ์ณ์™€ ๊ณต๊ฐ„ ํ”ผ์ณ๋กœ ๋ถ„ํ•ดํ•˜๊ณ , ๋ถ„ํ•ด๋œ ํ”ผ์ณ๋ฅผ HDR ๋„คํŠธ์›Œํฌ์—์„œ ํ™œ์šฉํ•จ์œผ๋กœ์จ ๊ณ ํ’ˆ์งˆ์˜ HDR ์ด๋ฏธ์ง€ ๋ฅผ ์ƒ์„ฑํ•  ์ˆ˜ ์žˆ๋„๋ก ํ•œ๋‹ค. ์ œ์•ˆํ•œ ๋„คํŠธ์›Œํฌ๋Š” ์„ฑ๋Šฅ ์ง€ํ‘œ์ธ PSNR-โ„“๊ณผ PSNR-ฮผ์—์„œ ๊ฐ๊ฐ 41.89dB, 44.19dB์˜ ์„ฑ๋Šฅ์„ ๋‹ฌ์„ฑํ•จ์œผ๋กœ์จ, ๊ธฐ์กด ๋ฐฉ๋ฒ•๋“ค๋ณด๋‹ค ์šฐ์ˆ˜ํ•จ์„ ์ž…์ฆํ•œ๋‹ค.1 Introduction 1 2 Related Works 4 2.1 Single-frame HDR imaging 4 2.2 Multi-frame HDR imaging with dynamic scenes 6 3 Proposed Method 10 3.1 Disentangle Network for Feature Extraction 10 3.2 Disentangle Features Guided Network 16 4 Experimental Results 22 4.1 Implementation and Details 22 4.2 Comparison with State-of-the-art Methods 22 5 Ablation Study 30 5.1 Impact of Proposed Modules 30 6 Conclusion 32 Abstract (In Korean) 39์„
    • โ€ฆ