
    Fully-automatic inverse tone mapping algorithm based on dynamic mid-level tone mapping

    High Dynamic Range (HDR) displays can show images with higher color contrast levels and peak luminosities than common Low Dynamic Range (LDR) displays. However, most existing video content is recorded and/or graded in LDR format. To show LDR content on HDR displays, it needs to be expanded using a so-called inverse tone mapping algorithm. Several techniques for inverse tone mapping have been proposed in recent years, ranging from simple approaches based on global and local operators to more advanced algorithms such as neural networks. Drawbacks of existing techniques include the need for human intervention, the high computation time of the more advanced algorithms, limited peak brightness, and the failure to preserve the artistic intentions of the content. In this paper, we propose a fully-automatic inverse tone mapping operator based on mid-level mapping that is capable of real-time video processing. Our proposed algorithm expands LDR images into HDR images with peak brightness over 1000 nits while preserving the artistic intentions inherent to the HDR domain. We assessed our results using the full-reference objective quality metrics HDR-VDP-2.2 and DRIM, and by carrying out a subjective pairwise comparison experiment against the most recent methods found in the literature. Experimental results demonstrate that our proposed method outperforms the current state of the art in simple inverse tone mapping methods and performs similarly to more complex and time-consuming advanced techniques.
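    The core idea of a global, mid-level-anchored expansion can be illustrated with a short sketch. This is not the paper's operator; the exponent-based curve, the mid-grey anchor of 0.18, and the 100-nit LDR white level are all assumptions chosen only to show how mid-tones can be pinned while highlights expand toward a >1000-nit peak.

```python
import numpy as np

def expand_ldr(ldr, peak_nits=1000.0, mid_anchor=0.18, ldr_white=100.0):
    """Illustrative global inverse tone mapping (NOT the paper's method).

    Expands normalized LDR luminance in [0, 1] to absolute HDR luminance
    in nits, pinning mid-level tones: the LDR mid-grey `mid_anchor`
    keeps its original display luminance (mid_anchor * ldr_white nits)
    while full white expands to `peak_nits`.
    """
    ldr = np.clip(np.asarray(ldr, dtype=np.float64), 0.0, 1.0)
    # Choose exponent g so that:
    #   peak_nits * mid_anchor**g == ldr_white * mid_anchor   (mid-tone pinned)
    #   peak_nits * 1.0**g        == peak_nits                (white expanded)
    g = np.log(ldr_white * mid_anchor / peak_nits) / np.log(mid_anchor)
    return peak_nits * ldr ** g

# Black stays black, mid-grey keeps its ~18-nit LDR luminance,
# white reaches the 1000-nit peak.
hdr = expand_ldr(np.array([0.0, 0.18, 1.0]))
```

    Because the curve is global and closed-form, an expansion like this runs in real time on video; the paper's mid-level mapping is more sophisticated, but the anchoring principle is the same.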

    Subjective Annotation for a Frame Interpolation Benchmark using Artefact Amplification

    Current benchmarks for optical flow algorithms evaluate the estimation either directly, by comparing the predicted flow fields with the ground truth, or indirectly, by using the predicted flow fields for frame interpolation and then comparing the interpolated frames with the actual frames. In the latter case, objective quality measures such as the mean squared error are typically employed. However, it is well known that for image quality assessment, the actual quality experienced by the user cannot be fully deduced from such simple measures. Hence, we conducted a subjective quality assessment crowdsourcing study for the interpolated frames provided by one of the optical flow benchmarks, the Middlebury benchmark. We collected forced-choice paired comparisons between interpolated images and the corresponding ground truth. To increase the sensitivity of observers when judging minute differences in paired comparisons, we introduced a new method to the field of full-reference quality assessment, called artefact amplification. From the crowdsourcing data, we reconstructed absolute quality scale values according to Thurstone's model. As a result, we obtained a re-ranking of the 155 participating algorithms w.r.t. the visual quality of the interpolated frames. This re-ranking not only shows the necessity of visual quality assessment as another evaluation metric for optical flow and frame interpolation benchmarks; the results also provide the ground truth for designing novel image quality assessment (IQA) methods dedicated to the perceptual quality of interpolated images. As a first step, we proposed such a new full-reference method, called WAE-IQA. By weighting the local differences between an interpolated image and its ground truth, WAE-IQA performed slightly better than the currently best FR-IQA approach from the literature.
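    Reconstructing scale values from forced-choice paired comparisons via Thurstone's model can be sketched with the classic Case V least-squares solution: convert each pairwise win proportion to a probit (z-score) and average. This is a generic textbook illustration, not the study's actual pipeline; the clipping bound and the toy win counts are assumptions.

```python
import numpy as np
from statistics import NormalDist

def thurstone_case_v(wins):
    """Thurstone Case V scale reconstruction (illustrative sketch).

    wins[i, j] = number of times stimulus i was preferred over j.
    Returns one mean-centred scale value per stimulus.
    """
    wins = np.asarray(wins, dtype=np.float64)
    n = wins + wins.T                              # trials per pair
    p = np.where(n > 0, wins / np.maximum(n, 1), 0.5)
    p = np.clip(p, 0.01, 0.99)                     # keep probits finite
    np.fill_diagonal(p, 0.5)                       # an item never beats itself
    z = np.vectorize(NormalDist().inv_cdf)(p)      # probit of win proportions
    return z.mean(axis=1)                          # row means give the scale

# Hypothetical counts for three stimuli, 10 judgments per pair.
wins = np.array([[0, 8, 9],
                 [2, 0, 7],
                 [1, 3, 0]])
scores = thurstone_case_v(wins)  # stimulus 0 ranks highest
```

    Because probits are antisymmetric, the reconstructed scores sum to zero; only differences between them are meaningful, which is exactly why a re-ranking of algorithms is the natural output.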

    Fully-automatic inverse tone mapping preserving the content creator's artistic intentions

    High Dynamic Range (HDR) displays can show images with higher color contrast levels and peak luminosities than common Low Dynamic Range (LDR) displays. However, most existing video content is recorded and/or graded in LDR format. To show this LDR content on HDR displays, a dynamic range expansion using an inverse tone mapping operator (iTMO) is required. In addition to requiring human intervention for tuning, most iTMOs do not consider the artistic intentions inherent to the HDR domain. Furthermore, the quality of their results decays at peak brightnesses above 1000 nits. In this paper, we propose a fully-automatic inverse tone mapping operator based on mid-level mapping. It expands LDR images into HDR with peak brightness over 1000 nits while preserving the artistic intentions inherent to the HDR domain. We assessed our results using the full-reference objective quality metrics HDR-VDP-2.2 and DRIM. Experimental results demonstrate that our proposed method outperforms the current state of the art.

    Image Alignment Using a Feature-Blending Network, with Applications to High Dynamic Range Imaging and Video Super-Resolution

    Doctoral dissertation -- Seoul National University Graduate School: Department of Electrical and Computer Engineering, College of Engineering, August 2020. Advisor: Nam Ik Cho. This dissertation presents a deep end-to-end network for high dynamic range (HDR) imaging of dynamic scenes with background and foreground motions. Generating an HDR image from a sequence of multi-exposure images is a challenging process when the images are misaligned because they were taken in a dynamic situation. Hence, recent methods first align the multi-exposure images to the reference by using patch matching, optical flow, homography transformation, or an attention module before merging. In this dissertation, a deep network that synthesizes the aligned images by blending the information from the multi-exposure images is proposed, because explicitly aligning photos with different exposures is inherently a difficult problem. Specifically, the proposed network generates under/over-exposed images that are structurally aligned to the reference by blending all the information from the dynamic multi-exposure images. The primary idea is that blending two images in the deep-feature domain is effective for synthesizing multi-exposure images that are structurally aligned to the reference, resulting in better-aligned images than pixel-domain blending or geometric transformation methods. Specifically, the proposed alignment network consists of a two-way encoder that extracts features from the two images separately, several convolution layers that blend the deep features, and a decoder that constructs the aligned images. The proposed network is shown to generate well-aligned images across a wide range of exposure differences and can thus be effectively used for HDR imaging of dynamic scenes. Moreover, by adding a simple merging network after the alignment network and training the overall system end-to-end, a performance gain over recent state-of-the-art methods is obtained. This dissertation also presents a deep end-to-end network for video super-resolution (VSR) of frames with motion.
Reconstructing a high-resolution frame from a sequence of adjacent frames is a challenging process when the frames are misaligned. Hence, recent methods first align the adjacent frames to the reference by using optical flow or a spatial transformer network (STN). In this dissertation, a deep network that synthesizes the aligned frames by blending the information from adjacent frames is proposed, because explicitly aligning frames is inherently a difficult problem. Specifically, the proposed network generates adjacent frames that are structurally aligned to the reference by blending all the information from the neighboring frames. The primary idea is again that blending two images in the deep-feature domain is effective for synthesizing frames that are structurally aligned to the reference, resulting in better-aligned images than pixel-domain blending or geometric transformation methods. The alignment network consists of a two-way encoder that extracts features from the two images separately, several convolution layers that blend the deep features, and a decoder that constructs the aligned frames. The proposed network is shown to align adjacent frames very well and can thus be effectively used for VSR. Moreover, by adding a simple reconstruction network after the alignment network and training the overall system end-to-end, a performance gain over recent state-of-the-art methods is obtained. In addition to the individual HDR imaging and VSR networks, this dissertation presents a deep end-to-end network for joint HDR-SR of dynamic scenes with background and foreground motions. The proposed HDR imaging and VSR networks enhance the dynamic range and the resolution of images, respectively. However, both can be enhanced simultaneously by a single network; in this dissertation, a network with the same structure as the proposed VSR network is used for this joint task.
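The data flow of the described alignment network (two-way encoder, feature-blending convolutions, decoder) can be sketched in a toy form. This is a structural illustration only: the random weights stand in for trained parameters, the layer counts and channel widths are assumptions, and real implementations would use a deep-learning framework rather than hand-rolled numpy convolutions.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv3x3(x, w):
    """3x3 convolution with 'same' zero padding and ReLU, channels-first.
    x: (C_in, H, W), w: (C_out, C_in, 3, 3)."""
    c_out, h, wd = w.shape[0], x.shape[1], x.shape[2]
    xp = np.pad(x, ((0, 0), (1, 1), (1, 1)))
    out = np.zeros((c_out, h, wd))
    for o in range(c_out):
        for i in range(x.shape[0]):
            for dy in range(3):
                for dx in range(3):
                    out[o] += w[o, i, dy, dx] * xp[i, dy:dy + h, dx:dx + wd]
    return np.maximum(out, 0)

# Hypothetical random weights standing in for trained parameters.
w_enc = rng.normal(0, 0.1, (8, 3, 3, 3))     # shared two-way encoder
w_blend = rng.normal(0, 0.1, (8, 16, 3, 3))  # blends concatenated features
w_dec = rng.normal(0, 0.1, (3, 8, 3, 3))     # decoder back to image space

def align(reference, neighbor):
    f_ref = conv3x3(reference, w_enc)   # encode each input separately
    f_nbr = conv3x3(neighbor, w_enc)
    # Blending happens in the deep-feature domain, not the pixel domain:
    blended = conv3x3(np.concatenate([f_ref, f_nbr]), w_blend)
    return conv3x3(blended, w_dec)      # decode an image aligned to the reference

ref = rng.random((3, 16, 16))
nbr = rng.random((3, 16, 16))
aligned = align(ref, nbr)               # same shape as the reference image
```

The key design point the sketch captures is that the two inputs are never warped geometrically; their encoded features are concatenated and mixed by convolutions, and the decoder synthesizes the aligned result.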
This network is shown to reconstruct final results with both higher dynamic range and higher resolution. Compared with several pipelines built by combining existing HDR imaging and VSR networks, it produces qualitatively and quantitatively better results.

    Deep HDR hallucination for inverse tone mapping

    Inverse Tone Mapping (ITM) methods attempt to reconstruct High Dynamic Range (HDR) information from Low Dynamic Range (LDR) image content. The dynamic range of well-exposed areas must be expanded, and any information missing due to over- or under-exposure must be recovered (hallucinated). The majority of methods focus on the former and are relatively successful, while most attempts at the latter are not of sufficient quality, even ones based on Convolutional Neural Networks (CNNs). A major factor in the reduced inpainting quality of some works is the choice of loss function. Work based on Generative Adversarial Networks (GANs) shows promising results for image synthesis and LDR inpainting, suggesting that GAN losses can improve inverse tone mapping results. This work presents a GAN-based method that hallucinates missing information from badly exposed areas in LDR images and compares its efficacy with alternative variations. The proposed method is quantitatively competitive with state-of-the-art inverse tone mapping methods, providing good dynamic range expansion for well-exposed areas and plausible hallucinations for saturated and under-exposed areas. A density-based normalisation method targeted at HDR content is also proposed, as well as a data augmentation method targeted at HDR hallucination.

    Evaluation of Dynamic Range Reconstruction Approaches and a Mobile Application for HDR Photo Capture

    Digital photography became widespread with the global use of smartphones. However, most captured images do not fully use the camera's capabilities, because the captured photos are stored in a format with limited dynamic range. Dynamic range expansion and reconstruction has been researched since the early 2000s and has recently given rise to several new reconstruction methods using convolutional neural networks (CNNs), whose performance had not yet been comprehensively compared. By implementing and using our dynamic range reconstruction evaluation framework, we compare the reconstruction quality of individual CNN-based approaches. We also implement a mobile HDR camera application and evaluate the feasibility of running the best-performing reconstruction method directly on a mobile device. Department of Software and Computer Science Education, Faculty of Mathematics and Physics.

    Dynamic Range Expansion Based on Image Statistics

    As the dynamic range of displays keeps increasing, there is a need for reverse tone mapping methods, which aim at expanding the dynamic range of legacy low dynamic range images for viewing on higher dynamic range displays. While a number of strategies have been proposed, most of them are designed for well-exposed input images and are not optimal when dealing with ill-exposed (under- or over-exposed) content. Further, this type of content is more prone to the artifacts that may arise when using local methods. In this work, we build on an existing automatic, global reverse tone mapping operator based on a gamma expansion. We improve this method by providing a new way to compute its parameter automatically from the image statistics. We show that this method yields better results across the whole range of exposures.
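    The shape of such an operator can be sketched as follows. This is a generic illustration, not the paper's formula: the expansion is L_hdr = L_ldr ** gamma, and gamma is derived here from the image's geometric-mean ("key") luminance by a simple linear interpolation whose bounds (1.0 to 3.0) are assumptions.

```python
import numpy as np

def auto_gamma_expand(ldr_lum, gamma_min=1.0, gamma_max=3.0):
    """Illustrative global reverse tone mapping via gamma expansion,
    with the exponent chosen automatically from image statistics
    (NOT the paper's parameter formula).

    A dark, under-exposed image gets a gentle expansion (gamma near
    gamma_min); a bright image gets a stronger one (gamma near
    gamma_max). Returns the expanded luminance and the chosen gamma.
    """
    ldr_lum = np.clip(np.asarray(ldr_lum, dtype=np.float64), 0.0, 1.0)
    eps = 1e-6                                         # avoid log(0)
    key = np.exp(np.mean(np.log(ldr_lum + eps)))       # geometric mean in [0, 1]
    gamma = gamma_min + (gamma_max - gamma_min) * key  # brighter -> larger gamma
    return ldr_lum ** gamma, gamma

dark = np.full((8, 8), 0.1)     # under-exposed image: gamma near 1.2
bright = np.full((8, 8), 0.8)   # bright image: gamma near 2.6
_, g_dark = auto_gamma_expand(dark)
_, g_bright = auto_gamma_expand(bright)
```

    Driving the single global parameter from a robust statistic is what makes such an operator fully automatic across the whole range of exposures; a fixed gamma would over-darken already-dark content.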