
    HDRFusion: HDR SLAM using a low-cost auto-exposure RGB-D sensor

    We describe a new method for comparing frame appearance in a frame-to-model 3-D mapping and tracking system using a low dynamic range (LDR) RGB-D camera, one that is robust to brightness changes caused by auto-exposure. It is based on a normalised radiance measure that is invariant to exposure changes; it not only makes tracking robust under changing lighting conditions but also enables the subsequent exposure compensation to be performed accurately, allowing online building of high dynamic range (HDR) maps. The latter helps the frame-to-model tracking minimise drift and better captures the light variation within the scene. Experiments with synthetic and real data demonstrate that the method provides both improved tracking and maps with a far greater dynamic range of luminosity. Comment: 14 pages.
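    As a rough illustration of an exposure-invariant appearance measure of this kind (not HDRFusion's exact formulation), the sketch below maps intensities through an assumed inverse camera response `inv_crf` and the exposure time, then normalises the log-radiance. The names and the normalisation scheme are assumptions for illustration.

```python
import numpy as np

def normalized_radiance(intensity, inv_crf, exposure_time, eps=1e-6):
    """Illustrative exposure-invariant appearance measure (a sketch,
    not the paper's exact normalisation).

    intensity: LDR pixel values; inv_crf: assumed inverse camera
    response function (callable on arrays); exposure_time: shutter
    time of the frame.
    """
    radiance = inv_crf(intensity) / exposure_time  # E = f^{-1}(I) / dt
    log_r = np.log(radiance + eps)                 # exposure gain -> additive offset
    # Zero-mean / unit-variance normalisation removes the offset (and gain),
    # so the measure is comparable across frames with different exposures.
    return (log_r - log_r.mean()) / (log_r.std() + eps)
```

    Because a global exposure change scales radiance by a constant factor, it becomes an additive offset in the log domain, which the normalisation cancels.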

    LAN-HDR: Luminance-based Alignment Network for High Dynamic Range Video Reconstruction

    As demands for high-quality videos continue to rise, high-resolution and high dynamic range (HDR) imaging techniques are drawing attention. To generate an HDR video from low dynamic range (LDR) images, one of the critical steps is motion compensation between LDR frames, for which most existing works employ optical flow. However, these methods suffer from flow-estimation errors when saturation or complicated motion is present. In this paper, we propose an end-to-end HDR video composition framework that aligns LDR frames in the feature space and then merges the aligned features into an HDR frame, without relying on pixel-domain optical flow. Specifically, we propose a luminance-based alignment network for HDR (LAN-HDR) consisting of an alignment module and a hallucination module. The alignment module aligns a frame to the adjacent reference by evaluating luminance-based attention, excluding color information. The hallucination module generates sharp details, especially for areas washed out by saturation. The aligned and hallucinated features are then blended adaptively to complement each other. Finally, we merge the features to generate the final HDR frame. In training, we adopt a temporal loss, in addition to frame-reconstruction losses, to enhance temporal consistency and thus reduce flickering. Extensive experiments demonstrate that our method performs better than or comparably to state-of-the-art methods on several benchmarks. Comment: ICCV 2023.
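    To make the feature-space alignment idea concrete, here is a minimal PyTorch sketch of attention over luminance-derived features (shapes and names are assumptions, not the authors' code): each reference position attends over all neighbor positions and gathers a weighted combination, replacing explicit optical-flow warping.

```python
import torch

def luminance_attention_align(ref_feat, nbr_feat, nbr_value):
    """Sketch of attention-based alignment on luminance features.

    ref_feat, nbr_feat: (B, C, H, W) features extracted from the
    luminance channels of the reference and neighbor frames;
    nbr_value: (B, C, H, W) neighbor features to be aligned.
    """
    B, C, H, W = ref_feat.shape
    q = ref_feat.flatten(2).transpose(1, 2)         # (B, HW, C) queries
    k = nbr_feat.flatten(2)                         # (B, C, HW) keys
    v = nbr_value.flatten(2).transpose(1, 2)        # (B, HW, C) values
    attn = torch.softmax(q @ k / C ** 0.5, dim=-1)  # (B, HW, HW) similarity
    aligned = (attn @ v).transpose(1, 2).reshape(B, -1, H, W)
    return aligned
</code>
```

    A full HW x HW attention map is memory-hungry at video resolutions; practical implementations typically restrict attention to local windows around each reference position.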

    Image Alignment Method Using a Feature Blending Network and Its Applications to High Dynamic Range Imaging and Video Super-Resolution

    Ph.D. dissertation, Department of Electrical and Computer Engineering, College of Engineering, Seoul National University, August 2020. Advisor: Nam Ik Cho. This dissertation presents a deep end-to-end network for high dynamic range (HDR) imaging of dynamic scenes with background and foreground motions. Generating an HDR image from a sequence of multi-exposure images is challenging when the images are misaligned because they were taken in a dynamic situation. Hence, recent methods first align the multi-exposure images to the reference using patch matching, optical flow, homography transformation, or an attention module before merging. In this dissertation, a deep network that synthesizes aligned images by blending the information from the multi-exposure images is proposed, because explicitly aligning photos with different exposures is inherently a difficult problem. Specifically, the proposed network generates under/over-exposed images that are structurally aligned to the reference by blending all the information from the dynamic multi-exposure inputs. The primary idea is that blending two images in the deep-feature domain is effective for synthesizing multi-exposure images that are structurally aligned to the reference, yielding better-aligned images than pixel-domain blending or geometric-transformation methods. The proposed alignment network consists of a two-way encoder that extracts features from the two images separately, several convolution layers that blend the deep features, and a decoder that constructs the aligned images. The network is shown to handle a wide range of exposure differences very well and can thus be used effectively for HDR imaging of dynamic scenes. Moreover, by adding a simple merging network after the alignment network and training the overall system end-to-end, a performance gain over recent state-of-the-art methods is obtained. This dissertation also presents a deep end-to-end network for video super-resolution (VSR) of frames with motion. Reconstructing a high-resolution frame from a sequence of adjacent frames is challenging when the frames are misaligned. Hence, recent methods first align the adjacent frames to the reference using optical flow or a spatial transformer network (STN). Here, too, a deep network that synthesizes aligned frames by blending the information from adjacent frames is proposed, because explicitly aligning frames is inherently a difficult problem. The same feature-blending idea applies: the alignment network again consists of a two-way encoder, several convolution layers for blending deep features, and a decoder that constructs the aligned frames. The network is shown to align adjacent frames very well and can thus be used effectively for VSR. Moreover, by adding a simple reconstruction network after the alignment network and training the overall system end-to-end, a performance gain over recent state-of-the-art methods is obtained.
    In addition to the individual HDR imaging and VSR networks, this dissertation presents a deep end-to-end network for joint HDR-SR of dynamic scenes with background and foreground motions. The proposed HDR imaging and VSR networks enhance the dynamic range and the resolution of images, respectively, but both can be enhanced simultaneously by a single network. For this, a network with the same structure as the proposed VSR network is used; it is shown to reconstruct final results with both a higher dynamic range and a higher resolution. Compared with several methods built by combining existing HDR imaging and VSR networks, it gives both qualitatively and quantitatively better results.
    Contents:
    1 Introduction
    2 Related Work
      2.1 High Dynamic Range Imaging
        2.1.1 Rejecting Regions with Motions
        2.1.2 Alignment Before Merging
        2.1.3 Patch-based Reconstruction
        2.1.4 Deep-learning-based Methods
        2.1.5 Single-Image HDRI
      2.2 Video Super-resolution
        2.2.1 Deep Single Image Super-resolution
        2.2.2 Deep Video Super-resolution
    3 High Dynamic Range Imaging
      3.1 Motivation
      3.2 Proposed Method
        3.2.1 Overall Pipeline
        3.2.2 Alignment Network
        3.2.3 Merging Network
        3.2.4 Integrated HDR Imaging Network
      3.3 Datasets
        3.3.1 Kalantari Dataset and Ground-Truth Aligned Images
        3.3.2 Preprocessing
        3.3.3 Patch Generation
      3.4 Experimental Results
        3.4.1 Evaluation Metrics
        3.4.2 Ablation Studies
        3.4.3 Comparisons with State-of-the-Art Methods
        3.4.4 Application to Larger Numbers of Exposures
        3.4.5 Pre-processing for Other HDR Imaging Methods
    4 Video Super-resolution
      4.1 Motivation
      4.2 Proposed Method
        4.2.1 Overall Pipeline
        4.2.2 Alignment Network
        4.2.3 Reconstruction Network
        4.2.4 Integrated VSR Network
      4.3 Experimental Results
        4.3.1 Dataset
        4.3.2 Ablation Study
        4.3.3 Capability of DSBN for Alignment
        4.3.4 Comparisons with State-of-the-Art Methods
    5 Joint HDR and SR
      5.1 Proposed Method
        5.1.1 Feature Blending Network
        5.1.2 Joint HDR-SR Network
        5.1.3 Existing VSR Network
        5.1.4 Existing HDR Network
      5.2 Experimental Results
    6 Conclusion
    Abstract (In Korean)
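    As a concrete (hypothetical) reading of the two-way encoder / feature-blending / decoder alignment network described in this abstract, the PyTorch sketch below wires those three parts together; the layer widths and depths are illustrative assumptions, not the dissertation's configuration.

```python
import torch
import torch.nn as nn

class FeatureBlendingAlignNet(nn.Module):
    """Sketch of the two-way encoder / blend / decoder alignment idea."""

    def __init__(self, ch=64):
        super().__init__()
        def encoder():
            return nn.Sequential(
                nn.Conv2d(3, ch, 3, padding=1), nn.ReLU(inplace=True),
                nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(inplace=True))
        self.enc_ref = encoder()     # features of the reference exposure/frame
        self.enc_src = encoder()     # features of the image to be aligned
        self.blend = nn.Sequential(  # convolution layers that mix the features
            nn.Conv2d(2 * ch, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(inplace=True))
        self.dec = nn.Conv2d(ch, 3, 3, padding=1)  # reconstructs the aligned image

    def forward(self, ref, src):
        f = torch.cat([self.enc_ref(ref), self.enc_src(src)], dim=1)
        return self.dec(self.blend(f))  # src resynthesized at ref's geometry
```

    Blending in the feature domain lets the decoder resynthesize the source image's content at the reference's geometry, which is the stated advantage over pixel-domain blending or geometric transformation.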

    High-Brightness Image Enhancement Algorithm

    In this paper, we introduce a tone mapping algorithm for processing high-brightness video images. The method recovers as much information as possible from high-brightness areas while preserving detail. Along with benchmark data, real-life data from a practical application were used to test the proposed method; the experimental objects were license plates. We reconstruct the image in the RGB channels and apply gamma correction; local linear adjustment is then performed through a tone mapping window to restore the detail of high-brightness regions. The experimental results show that our algorithm clearly restores the details of locally high-brightness areas. The processed image conforms to the visual effect observed by human eyes but with higher definition. Compared with other algorithms, the proposed algorithm has advantages in terms of both subjective and objective evaluation and can fully satisfy the needs of various practical applications.
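    A minimal sketch of such a pipeline is given below, assuming an input normalized to [0, 1]; the gamma value, window size, and blending weight are illustrative assumptions rather than the paper's reported parameters.

```python
import numpy as np

def local_linear_tone_map(img, gamma=2.2, win=32, alpha=0.8, eps=1e-6):
    """Per-channel gamma correction followed by a local linear stretch
    in non-overlapping windows (a sketch in the spirit of the described
    pipeline; all parameters here are assumptions)."""
    out = np.power(np.clip(img, 0.0, 1.0), 1.0 / gamma)  # gamma correction
    h, w = out.shape[:2]
    for y in range(0, h, win):
        for x in range(0, w, win):
            block = out[y:y + win, x:x + win]
            lo, hi = block.min(), block.max()
            # Linear remap of the window's own range; alpha blends it with
            # the globally gamma-corrected values to limit block artifacts.
            stretched = (block - lo) / (hi - lo + eps)
            out[y:y + win, x:x + win] = alpha * block + (1 - alpha) * stretched
    return out
```

    The local stretch is what recovers detail inside bright regions: a nearly saturated window spans a narrow intensity range, and remapping that range to the full output range amplifies its internal contrast.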

    A Comprehensive Study on Tone Mapping of High Dynamic Range Images with Subjective Tests

    A high dynamic range (HDR) image has a very wide range of luminance levels that traditional low dynamic range (LDR) displays cannot visualize. For this reason, HDR images are usually transformed into 8-bit representations in which the alpha channel of each pixel is used as an exponent value, sometimes referred to as exponential notation [43]. Tone mapping operators (TMOs) transform the high dynamic range to the low dynamic range domain by compressing pixel values so that traditional LDR displays can visualize them. The purpose of this thesis is to identify and analyse differences and similarities among the wide range of tone mapping operators available in the literature. Each TMO has been analysed through subjective studies under different conditions, including environment, luminance, and colour. Several inverse tone mapping operators, HDR mappings with exposure fusion, histogram adjustment, and retinex have also been analysed in this study. Nineteen different TMOs have been examined using a variety of HDR images. The mean opinion score (MOS) is calculated for the selected TMOs from the opinions of 25 independent people, taking into account the candidates' age, vision, and colour blindness.
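    As an example of the kind of operator such a study compares, here is a minimal global TMO, the photographic operator of Reinhard et al. (2002), which scales luminance by the scene "key" and then compresses it via L_d = L / (1 + L); the key value and clipping here are conventional defaults, not choices made by this thesis.

```python
import numpy as np

def reinhard_global(hdr_rgb, key=0.18, eps=1e-6):
    """Minimal global tone mapping operator (Reinhard et al. 2002 sketch).

    hdr_rgb: float array (H, W, 3) of linear scene-referred RGB.
    Returns display-referred RGB clipped to [0, 1].
    """
    # Luminance from linear RGB (Rec. 709 weights).
    lum = (0.2126 * hdr_rgb[..., 0] + 0.7152 * hdr_rgb[..., 1]
           + 0.0722 * hdr_rgb[..., 2])
    log_avg = np.exp(np.mean(np.log(lum + eps)))  # geometric mean luminance
    scaled = key / log_avg * lum                  # map scene to the chosen "key"
    ld = scaled / (1.0 + scaled)                  # compress to [0, 1)
    ratio = ld / (lum + eps)                      # per-pixel luminance ratio
    return np.clip(hdr_rgb * ratio[..., None], 0.0, 1.0)
```

    Applying the tone-mapped/original luminance ratio to all three channels preserves hue while compressing brightness, which is the usual way global TMOs are applied to colour images.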