3,925 research outputs found

    DeepToF: Off-the-shelf real-time correction of multipath interference in time-of-flight imaging

    Time-of-flight (ToF) imaging has become a widespread technique for depth estimation, allowing affordable off-the-shelf cameras to provide depth maps in real time. However, multipath interference (MPI) resulting from indirect illumination significantly degrades the captured depth. Most previous works have tried to solve this problem by means of complex hardware modifications or costly computations. In this work, we avoid these approaches and propose a new technique to correct depth errors caused by MPI, which requires no camera modifications and takes just 10 milliseconds per frame. Our observations about the nature of MPI suggest that most of its information is available in image space; this allows us to formulate the depth imaging process as a spatially-varying convolution and use a convolutional neural network to correct MPI errors. Since the input and output data share a similar structure, we base our network on an autoencoder, which we train in two stages. First, we use the encoder (convolution filters) to learn a suitable basis to represent MPI-corrupted depth images; then, we train the decoder (deconvolution filters) to correct depth using synthetic scenes generated by a physically-based, time-resolved renderer. This approach allows us to tackle a key problem in ToF, the lack of ground-truth data, by using a large-scale captured training set with MPI-corrupted depth to train the encoder, and a smaller synthetic training set with ground-truth depth to train the decoder stage of the network. We demonstrate and validate our method on both synthetic and real complex scenarios, using an off-the-shelf ToF camera and only the captured, incorrect depth as input.
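    To make the two-stage training concrete, here is a minimal PyTorch sketch under stated assumptions: the layer sizes, optimizer settings, and data loaders are hypothetical stand-ins rather than the paper's actual architecture or pipeline. Stage 1 trains the full autoencoder to reconstruct captured MPI-corrupted depth (no ground truth needed), so the encoder learns a basis for corrupted depth; stage 2 freezes the encoder and fits only the decoder on synthetic depth with ground truth.

```python
# Hypothetical sketch of the two-stage autoencoder training described above.
import torch
import torch.nn as nn

class DepthAutoencoder(nn.Module):
    def __init__(self, ch=32):
        super().__init__()
        # Encoder: convolution filters learning a basis for MPI-corrupted depth.
        self.encoder = nn.Sequential(
            nn.Conv2d(1, ch, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(ch, ch * 2, 3, stride=2, padding=1), nn.ReLU(),
        )
        # Decoder: deconvolution filters mapping the code to corrected depth.
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(ch * 2, ch, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(ch, 1, 4, stride=2, padding=1),
        )

    def forward(self, depth):
        return self.decoder(self.encoder(depth))

model = DepthAutoencoder()
loss_fn = nn.L1Loss()

# Stand-ins for the captured set (MPI-corrupted, no ground truth) and the
# smaller synthetic set (rendered, with ground-truth depth).
captured_loader = [torch.rand(4, 1, 64, 64)]
synthetic_loader = [(torch.rand(4, 1, 64, 64), torch.rand(4, 1, 64, 64))]

# Stage 1: self-supervised reconstruction of captured depth.
opt = torch.optim.Adam(model.parameters(), lr=1e-4)
for noisy in captured_loader:
    opt.zero_grad()
    loss_fn(model(noisy), noisy).backward()
    opt.step()

# Stage 2: freeze the encoder; train only the decoder to map synthetic
# MPI-corrupted depth to ground-truth depth.
for p in model.encoder.parameters():
    p.requires_grad = False
opt = torch.optim.Adam(model.decoder.parameters(), lr=1e-4)
for noisy, gt in synthetic_loader:
    opt.zero_grad()
    loss_fn(model(noisy), gt).backward()
    opt.step()
```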

    Real-World Image Denoising Using Generative Adversarial Networks

    Master's thesis -- Seoul National University, Graduate School: College of Engineering, Department of Electrical and Computer Engineering, February 2021. Advisor: Kyoung Mu Lee.
    Learning-based image denoising models have been limited to situations where well-aligned pairs of noisy and clean images are given, or where training samples can be synthesized from predetermined noise models. While recent generative methods introduce a methodology to accurately simulate the unknown distribution of real-world noise, several limitations still exist. Existing methods are restricted to cases where unrealistic assumptions are made or where data from the actual noise distribution is available. In a real situation, a noise generator should learn to simulate the general and complex noise distribution without using paired noisy and clean images. Since a noise generator trained in this setting tends to fail to express complex noise maps, instead collapsing to specific texture patterns, we propose an architecture designed to resolve this problem: C2N, a Clean-to-Noisy image generation framework that imitates complex real-world noise without using any paired examples. Combined with a conventional denoising model, our C2N outperforms existing unsupervised methods on a challenging real-world denoising benchmark by a large margin, validating the effectiveness of the proposed formulation. We also extend our method to a practical situation with several data constraints, an area not explored by previous generative noise modeling methods.
    Table of contents:
    Abstract - i
    Contents - ii
    List of Tables - iv
    List of Figures - v
    1 Introduction - 1
    2 Related Work - 5
      2.1 Deep Image Denoising - 5
      2.2 Deep Denoising of Real-World Noise - 5
    3 C2N: Clean-to-Noisy Image Generation Framework - 8
      3.1 Complexity of Real-World Noise - 8
      3.2 Learning to Generate Pseudo-Noisy Images - 9
      3.3 C2N Architecture - 12
        3.3.1 Signal-Independent Pixel-Wise Transforms - 12
        3.3.2 Signal-Dependent Sampling and Transforms - 12
        3.3.3 Spatially Correlated Transforms - 13
        3.3.4 Discriminator - 14
      3.4 Learning to Denoise with the Generated Pairs - 14
    4 Experiment - 16
      4.1 Experimental Setup - 16
        4.1.1 Dataset - 16
        4.1.2 Implementation Details and Optimization - 17
      4.2 Model Analysis - 17
      4.3 Results on Real-World Noise - 23
      4.4 Performance Under Practical Data Constraints - 26
      4.5 Generating Noise by Interpolation in Latent Space - 30
      4.6 Verifying C2N in Denoiser Training - 31
    5 Conclusion - 33
    Abstract (In Korean) - 40
    Acknowledgement - 41