
    Self-Supervised Spatially Variant PSF Estimation for Aberration-Aware Depth-from-Defocus

    In this paper, we address the task of aberration-aware depth-from-defocus (DfD), which takes into account the spatially variant point spread functions (PSFs) of a real camera. To obtain the spatially variant PSFs of a real camera without requiring any ground-truth PSFs, we propose a novel self-supervised learning method that leverages a pair of real sharp and blurred images, which can easily be captured by changing the camera's aperture setting. In our PSF estimation, we assume rotationally symmetric PSFs and introduce a polar coordinate system to learn the PSF estimation network more accurately. We also handle the focus breathing phenomenon that occurs in real DfD situations. Experimental results on synthetic and real data demonstrate the effectiveness of our method for both PSF estimation and depth estimation.
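The rotationally symmetric PSF assumption in the abstract above amounts to representing each PSF by a 1D radial profile and lifting it to a 2D kernel. A minimal NumPy sketch of that lifting step; `radial_profile_to_psf`, the Gaussian-like profile, and the kernel sizes are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def radial_profile_to_psf(profile, size):
    """Build a rotationally symmetric 2D PSF from a 1D radial profile.

    `profile[i]` gives the PSF intensity at integer radius i; radii in
    between are linearly interpolated, and radii beyond the profile's
    support are set to zero.
    """
    c = (size - 1) / 2.0
    yy, xx = np.meshgrid(np.arange(size), np.arange(size), indexing="ij")
    r = np.hypot(yy - c, xx - c)                   # radius of each pixel
    radii = np.arange(len(profile), dtype=float)
    psf = np.interp(r, radii, profile, right=0.0)  # zero outside support
    return psf / psf.sum()                         # normalize to unit mass

# Example: a Gaussian-like radial profile lifted to a 7x7 kernel
profile = np.exp(-0.5 * (np.arange(4) / 1.5) ** 2)
psf = radial_profile_to_psf(profile, 7)
```

Because the kernel depends on the pixel position only through its radius, the result is symmetric under any rotation or flip about its center, which is what lets the network learn the profile in polar coordinates.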

    Dual-Pixel Image-Based Zero-Shot Defocus Deblurring

    Thesis (Master's) -- Seoul National University Graduate School: College of Engineering, Interdisciplinary Program in Artificial Intelligence, August 2022. Advisor: Bohyung Han. Defocus deblurring in dual-pixel (DP) images is a challenging problem due to diverse camera optics and scene structures. Most existing algorithms rely on supervised learning approaches trained on the Canon DSLR dataset and often generalize poorly to out-of-distribution images, including those captured by smartphones. We propose a novel zero-shot defocus deblurring algorithm that requires only a pair of DP images, without any training data or a pre-calibrated ground-truth blur kernel. Specifically, our approach first initializes a sharp latent map using a parametric blur kernel with a symmetry constraint. It then uses a convolutional neural network (CNN) to estimate the defocus map that best describes the observed DP image. Finally, it employs a generative model to learn scene-specific non-uniform blur kernels and compute the final enhanced images. We demonstrate that the proposed unsupervised technique outperforms supervised counterparts when training and testing are run on different datasets. We also show that our model achieves competitive accuracy when tested on in-distribution data.
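The symmetry constraint on the parametric blur kernel reflects dual-pixel optics: the left and right DP views see mirrored halves of the lens aperture, so the two kernels are related by k_left(x, y) = k_right(-x, y). A sketch of one common parameterization under that constraint; the half-disc model and the name `dp_kernel_pair` are assumptions for illustration, not the thesis's exact kernel:

```python
import numpy as np

def dp_kernel_pair(radius, size):
    """Parametric dual-pixel blur kernels: a disc of the given radius is
    split into left/right halves, and the symmetry constraint
    k_left(x, y) = k_right(-x, y) ties the two views together."""
    c = (size - 1) / 2.0
    yy, xx = np.meshgrid(np.arange(size) - c, np.arange(size) - c,
                         indexing="ij")
    disc = (np.hypot(yy, xx) <= radius).astype(float)
    left = disc * (xx <= 0)     # left half-disc -> left DP view
    right = left[:, ::-1]       # horizontal mirror -> right DP view
    return left / left.sum(), right / right.sum()

left, right = dp_kernel_pair(3, 9)
```

Tying both views to one shared parameter (here, the disc radius) is what makes the kernel cheap to fit from a single DP pair without any calibration data.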
특히, λ³Έ λ…Όλ¬Έμ—μ„œλŠ” λŒ€μΉ­μ μœΌλ‘œ λͺ¨λΈλ§ 된 Blur Kernel을 μ‚¬μš©ν•˜μ—¬ 초기 μ˜μƒμ„ λ³΅μ›ν•˜λ©°, 이후 CNN(Convolutional Neural Network)을 μ‚¬μš©ν•˜μ—¬ κ΄€μ°°λœ DP 이미지λ₯Ό κ°€μž₯ 잘 μ„€λͺ…ν•˜λŠ” Defocus Map을 μΆ”μ •ν•©λ‹ˆλ‹€. λ§ˆμ§€λ§‰μœΌλ‘œ CNN을 μ‚¬μš©ν•˜μ—¬ μž₯λ©΄ 별 Non-uniformν•œ Blur Kernel을 ν•™μŠ΅ν•˜μ—¬ μ΅œμ’… 볡원 μ˜μƒμ˜ μ„±λŠ₯을 κ°œμ„ ν•©λ‹ˆλ‹€. ν•™μŠ΅κ³Ό 좔둠이 λ‹€λ₯Έ 데이터 μ„ΈνŠΈμ—μ„œ 싀행될 λ•Œ, μ œμ•ˆλœ 방법은 비지도 기술 μž„μ—λ„ λΆˆκ΅¬ν•˜κ³  μ΅œκ·Όμ— λ°œν‘œλœ 지도 ν•™μŠ΅μ„ 기반의 방법듀보닀 μš°μˆ˜ν•œ μ„±λŠ₯을 λ³΄μ—¬μ€λ‹ˆλ‹€. λ˜ν•œ ν•™μŠ΅ 된 것과 같은 뢄포 λ‚΄ λ°μ΄ν„°μ—μ„œ μΆ”λ‘ ν•  λ•Œλ„ 지도 ν•™μŠ΅ 기반의 방법듀과 μ •λŸ‰μ  λ˜λŠ” μ •μ„±μ μœΌλ‘œ λΉ„μŠ·ν•œ μ„±λŠ₯을 λ³΄μ΄λŠ” 것을 확인할 수 μžˆμ—ˆμŠ΅λ‹ˆλ‹€.1. Introduction 6 1.1. Background 6 1.2. Overview 9 1.3. Contribution 11 2. Related Works 12 2.1.Defocus Deblurring 12 2.2.Defocus Map 13 2.3.Multiplane Image Representation 14 2.4.DP Blur Kernel 14 3. Proposed Methods 16 3.1. Latent Map Initialization 17 3.2. Defocus Map Estimation 20 3.3. Learning Blur Kernel s 22 3.4. Implementation Details 25 4. Experiments 28 4.1. Dataset 28 4.2. Quantitative Results 29 4.3. Qualitative Results 31 5. Conclusions 37 5.1.Summary 37 5.2. Discussion 38석