51 research outputs found

    Spatio-Temporal Deformable Attention Network for Video Deblurring

    Full text link
    The key success factor of video deblurring methods is to compensate for the blurry pixels of the mid-frame with the sharp pixels of the adjacent video frames. Therefore, mainstream methods align the adjacent frames based on estimated optical flows and fuse the aligned frames for restoration. However, these methods sometimes generate unsatisfactory results because they rarely consider the blur level of each pixel and may therefore introduce blurry pixels from the adjacent frames. In fact, not all pixels in the video frames are sharp and beneficial for deblurring. To address this problem, we propose the spatio-temporal deformable attention network (STDANet) for video deblurring, which extracts the information of sharp pixels by considering the pixel-wise blur levels of the video frames. Specifically, STDANet is an encoder-decoder network combined with a motion estimator and a spatio-temporal deformable attention (STDA) module, where the motion estimator predicts coarse optical flows that are used as base offsets to find the corresponding sharp pixels in the STDA module. Experimental results indicate that the proposed STDANet performs favorably against state-of-the-art methods on the GoPro, DVD, and BSD datasets. Comment: ECCV 2022
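    A minimal sketch of the core idea, using a coarse optical flow as the base offset for sampling sharp pixels from a neighboring frame, is given below. It is not the authors' STDA module: the residual-offset head, the single-sample warping, and all names are illustrative assumptions standing in for full deformable attention.

```python
# Minimal sketch (not the authors' code): flow-guided sampling of neighbor-frame
# features, in the spirit of using coarse optical flow as a base offset.
import torch
import torch.nn as nn
import torch.nn.functional as F

class FlowGuidedSampler(nn.Module):
    """Warp neighbor features toward the mid-frame using a coarse flow plus a
    learned residual offset (a stand-in for full deformable attention)."""
    def __init__(self, channels):
        super().__init__()
        # Predict a residual offset from the concatenated mid/neighbor features.
        self.offset_head = nn.Conv2d(2 * channels, 2, kernel_size=3, padding=1)

    def forward(self, mid_feat, nbr_feat, coarse_flow):
        # mid_feat, nbr_feat: (B, C, H, W); coarse_flow: (B, 2, H, W) in pixels.
        _, _, h, w = mid_feat.shape
        residual = self.offset_head(torch.cat([mid_feat, nbr_feat], dim=1))
        flow = coarse_flow + residual  # base offset + learned refinement

        # Build a normalized sampling grid displaced by the flow.
        ys, xs = torch.meshgrid(
            torch.arange(h, device=mid_feat.device, dtype=flow.dtype),
            torch.arange(w, device=mid_feat.device, dtype=flow.dtype),
            indexing="ij",
        )
        grid_x = (xs + flow[:, 0]) / max(w - 1, 1) * 2 - 1
        grid_y = (ys + flow[:, 1]) / max(h - 1, 1) * 2 - 1
        grid = torch.stack([grid_x, grid_y], dim=-1)  # (B, H, W, 2)
        return F.grid_sample(nbr_feat, grid, align_corners=True)
```

    In the actual module, several offsets and attention weights per location would be learned; the sketch keeps only the flow-guided sampling step.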

    MC-Blur: A Comprehensive Benchmark for Image Deblurring

    Full text link
    Blur artifacts can seriously degrade the visual quality of images, and numerous deblurring methods have been proposed for specific scenarios. However, in most real-world images, blur is caused by different factors, e.g., motion and defocus. In this paper, we address how different deblurring methods perform in the presence of multiple types of blur. For in-depth performance evaluation, we construct a new large-scale multi-cause image deblurring dataset (called MC-Blur), including real-world and synthesized blurry images with mixed blur factors. The images in the proposed MC-Blur dataset are collected using different techniques: averaging sharp images captured by a 1000-fps high-speed camera, convolving Ultra-High-Definition (UHD) sharp images with large-size kernels, adding defocus to images, and capturing real-world blurry images with various camera models. Based on the MC-Blur dataset, we conduct extensive benchmarking studies to compare SOTA methods in different scenarios, analyze their efficiency, and investigate the capacity of the built dataset. These benchmarking results provide a comprehensive overview of the advantages and limitations of current deblurring methods and reveal the advantages of the proposed dataset.
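    To make the first collection technique concrete (emulating a long exposure by averaging sharp frames from a high-speed camera), here is a hedged sketch; the gamma value and the window length are assumptions, not the dataset's actual acquisition parameters.

```python
# Minimal sketch (assumed parameters): synthesize a motion-blurred frame by
# averaging consecutive sharp high-speed frames in (approximately) linear space.
import numpy as np

def synthesize_blur(sharp_frames, gamma=2.2):
    """sharp_frames: list of HxWx3 uint8 frames from a high-fps camera."""
    stack = np.stack(sharp_frames).astype(np.float64) / 255.0
    linear = stack ** gamma                 # undo display gamma (assumption)
    blurred = linear.mean(axis=0)           # temporal average ~ long exposure
    return (255.0 * blurred ** (1.0 / gamma)).clip(0, 255).astype(np.uint8)

# Example: average 13 consecutive 1000-fps frames (window size is illustrative).
frames = [np.random.randint(0, 256, (64, 64, 3), np.uint8) for _ in range(13)]
blurry = synthesize_blur(frames)
```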

    Event-guided Multi-patch Network with Self-supervision for Non-uniform Motion Deblurring

    Full text link
    Contemporary deep-learning multi-scale deblurring models suffer from several issues: 1) they perform poorly on non-uniformly blurred images/videos; 2) simply increasing the model depth with finer-scale levels does not improve deblurring; 3) individual RGB frames contain limited motion information for deblurring; 4) previous models have limited robustness to spatial transformations and noise. We extend the DMPHN model with several mechanisms to address these issues: I) we present a novel self-supervised event-guided deep hierarchical multi-patch network (MPN) to deal with blurry images and videos via fine-to-coarse hierarchical localized representations; II) we propose a novel stacked pipeline, StackMPN, to improve deblurring performance under increased network depth; III) we propose an event-guided architecture to exploit motion cues contained in videos to tackle complex blur; IV) we propose a novel self-supervised step that exposes the model to random transformations (rotations, scale changes) and makes it robust to Gaussian noise. Our MPN achieves state-of-the-art results on the GoPro and VideoDeblur datasets with a 40x faster runtime compared to current multi-scale methods. Taking 30 ms to process an image at 1280x720 resolution, it is the first real-time deep motion-deblurring model for 720p images at 30 fps. For StackMPN, we obtain significant improvements of over 1.2 dB on the GoPro dataset by increasing the network depth. Utilizing the event information and self-supervision further boosts results to 33.83 dB. Comment: International Journal of Computer Vision. arXiv admin note: substantial text overlap with arXiv:1904.0346
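    As a rough illustration of the fine-to-coarse multi-patch idea (not the DMPHN/MPN architecture itself), the sketch below splits an image into a hierarchy of non-overlapping patches and runs a placeholder encoder at each level; the level layout and the tiny encoder are assumptions.

```python
# Minimal sketch: a fine-to-coarse patch hierarchy (levels of 4, 2, and 1 patches).
# The tiny encoder is a placeholder, not the DMPHN/MPN architecture.
import torch
import torch.nn as nn

def split_patches(x, rows, cols):
    """Split (B, C, H, W) into a list of rows*cols non-overlapping patches."""
    patches = []
    for r in torch.chunk(x, rows, dim=2):
        patches.extend(torch.chunk(r, cols, dim=3))
    return patches

encoder = nn.Conv2d(3, 8, kernel_size=3, padding=1)  # placeholder per-level encoder
image = torch.randn(1, 3, 128, 128)

features = []
for rows, cols in [(2, 2), (1, 2), (1, 1)]:           # fine -> coarse levels
    level = [encoder(p) for p in split_patches(image, rows, cols)]
    features.append(level)                             # in the full model, finer-level
                                                        # outputs would feed coarser levels
print([len(level) for level in features])               # [4, 2, 1]
```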

    BANet: Blur-aware Attention Networks for Dynamic Scene Deblurring

    Full text link
    Image motion blur usually results from moving objects or camera shake. Such blur is generally directional and non-uniform. Previous research efforts attempt to solve non-uniform blur by using self-recurrent multi-scale or multi-patch architectures accompanied by self-attention. However, self-recurrent frameworks typically lead to longer inference times, while inter-pixel or inter-channel self-attention may cause excessive memory usage. This paper proposes blur-aware attention networks (BANet) that accomplish accurate and efficient deblurring in a single forward pass. Our BANet utilizes region-based self-attention with multi-kernel strip pooling to disentangle blur patterns of different degrees, and cascaded parallel dilated convolutions to aggregate multi-scale content features. Extensive experimental results on the GoPro and HIDE benchmarks demonstrate that the proposed BANet performs favorably against the state of the art in blurred image restoration and can provide deblurred results in real time.
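    The following is a loose sketch of the two ingredients named in the abstract, strip pooling as a directional attention cue and parallel dilated convolutions for multi-scale context, with made-up kernel sizes and channel counts; it is not the BANet implementation.

```python
# Loose sketch (made-up sizes): horizontal/vertical strip pooling to capture
# directional blur statistics, plus parallel dilated convolutions for
# multi-scale context. Not the BANet implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F

class StripPoolDilated(nn.Module):
    def __init__(self, c):
        super().__init__()
        self.mix = nn.Conv2d(2 * c, c, kernel_size=1)
        self.dilated = nn.ModuleList(
            nn.Conv2d(c, c, kernel_size=3, padding=d, dilation=d) for d in (1, 2, 4)
        )

    def forward(self, x):
        h, w = x.shape[-2:]
        horiz = F.adaptive_avg_pool2d(x, (1, w)).expand_as(x)  # 1 x W strip pool
        vert = F.adaptive_avg_pool2d(x, (h, 1)).expand_as(x)   # H x 1 strip pool
        attn = torch.sigmoid(self.mix(torch.cat([horiz, vert], dim=1)))
        x = x * attn                                            # blur-aware gating
        return sum(conv(x) for conv in self.dilated)            # multi-scale context

out = StripPoolDilated(16)(torch.randn(1, 16, 32, 32))
print(out.shape)  # torch.Size([1, 16, 32, 32])
```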

    A deep learning framework for quality assessment and restoration in video endoscopy

    Full text link
    Endoscopy is a routine imaging technique used for both diagnosis and minimally invasive surgical treatment. Artifacts such as motion blur, bubbles, specular reflections, floating objects and pixel saturation impede the visual interpretation and automated analysis of endoscopy videos. Given the widespread use of endoscopy in different clinical applications, we contend that robust and reliable identification of such artifacts and automated restoration of corrupted video frames is a fundamental medical imaging problem. Existing state-of-the-art methods only deal with the detection and restoration of selected artifacts; however, endoscopy videos typically contain numerous artifacts, which motivates a comprehensive solution. We propose a fully automatic framework that can: 1) detect and classify six different primary artifacts, 2) provide a quality score for each frame, and 3) restore mildly corrupted frames. To detect the different artifacts, our framework exploits a fast multi-scale, single-stage convolutional neural network detector. We introduce a quality metric to assess frame quality and predict image restoration success. Generative adversarial networks with carefully chosen regularization are finally used to restore corrupted frames. Our detector yields the highest mean average precision (mAP at 5% threshold) of 49.0 and the lowest computational time of 88 ms, allowing for accurate real-time processing. Our restoration models for blind deblurring, saturation correction and inpainting demonstrate significant improvements over previous methods. On a set of 10 test videos, we show that our approach preserves an average of 68.7% of frames, 25% more than are retained from the raw videos. Comment: 14 pages
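    The abstract does not specify how the per-frame quality score is computed; purely as an illustration of turning artifact detections into such a score, one might weight the image area covered by each detected artifact class. The class names loosely follow the abstract and the weights are invented.

```python
# Illustrative only: aggregate artifact detections into a frame quality score.
# Class names loosely follow the abstract; the weights are invented.
def frame_quality(detections, frame_area, weights=None):
    """detections: list of (artifact_class, box_area) pairs for one frame."""
    weights = weights or {
        "motion_blur": 1.0, "bubbles": 0.5, "specularity": 0.5,
        "floating_object": 0.7, "saturation": 0.8,
    }
    penalty = sum(weights.get(cls, 1.0) * area / frame_area
                  for cls, area in detections)
    return max(0.0, 1.0 - penalty)  # 1.0 = clean frame, 0.0 = heavily corrupted

print(frame_quality([("motion_blur", 5000), ("specularity", 2000)], 100_000))  # 0.94
```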

    New Datasets, Models, and Optimization

    Get PDF
    Thesis (Ph.D.) -- Seoul National University Graduate School, College of Engineering, Department of Electrical and Computer Engineering, August 2021. ์†ํ˜„ํƒœ.
    Obtaining a high-quality clean image is the ultimate goal of photography. In practice, everyday photographs are often taken in dynamic environments with moving objects and shaking cameras. The relative motion between the camera and the objects during the exposure causes motion blur in images and videos, degrading their visual quality. In dynamic environments, the blur strength and the shape of the motion trajectory vary from image to image and from pixel to pixel. This locally varying property makes the removal of motion blur in images and videos severely ill-posed. Rather than designing analytic solutions with physical modeling, machine learning-based approaches can serve as a practical solution for such a highly ill-posed problem; in particular, deep learning has become the recent standard in the computer vision literature. This dissertation introduces deep learning-based solutions for image and video deblurring, tackling practical issues from several angles. First, a new way of constructing datasets for the dynamic scene deblurring task is proposed. It is nontrivial to simultaneously obtain a temporally aligned pair of blurry and sharp images, and the lack of data prevents supervised learning techniques from being developed and deblurring algorithms from being evaluated. By mimicking the camera imaging pipeline with high-speed videos, realistic blurry images can be synthesized. In contrast to previous blur synthesis methods, the proposed approach reflects the natural, complex local blur arising from multiple moving objects, varying depth, and occlusion at motion boundaries. Second, based on the proposed datasets, a novel neural network architecture for the single-image deblurring task is presented. Adopting the coarse-to-fine strategy widely used in energy optimization-based image deblurring, a multi-scale neural network architecture is derived. Compared with a single-scale model of similar complexity, the multi-scale model exhibits higher accuracy and faster speed. Third, a lightweight recurrent neural network architecture for video deblurring is proposed. To obtain a high-quality video from deblurring, it is important to exploit both the intrinsic information in the target frame and the temporal relation between neighboring frames. Benefiting from both, the proposed intra-frame iterative scheme applied to RNNs improves accuracy without increasing the number of model parameters. Lastly, a novel loss function is proposed to better optimize deblurring models. Estimating dynamic blur for a clean, sharp image without motion information is another ill-posed problem. While the goal of deblurring is to completely remove motion blur, conventional loss functions fail to train neural networks to fulfill that goal, leaving traces of blur in the deblurred images. The proposed reblurring loss functions are designed to better eliminate motion blur and produce sharper images. Furthermore, the self-supervised learning process facilitates adaptation of the deblurring model at test time. With the proposed datasets, model architectures, and loss functions, deep learning-based single-image and video deblurring methods are presented. Extensive experimental results demonstrate state-of-the-art performance both quantitatively and qualitatively.
    Table of contents:
    1 Introduction
    2 Generating Datasets for Dynamic Scene Deblurring
        2.1 Introduction
        2.2 GOPRO dataset
        2.3 REDS dataset
        2.4 Conclusion
    3 Deep Multi-Scale Convolutional Neural Networks for Single Image Deblurring
        3.1 Introduction
            3.1.1 Related Works
            3.1.2 Kernel-Free Learning for Dynamic Scene Deblurring
        3.2 Proposed Method
            3.2.1 Model Architecture
            3.2.2 Training
        3.3 Experiments
            3.3.1 Comparison on GOPRO Dataset
            3.3.2 Comparison on Kohler Dataset
            3.3.3 Comparison on Lai et al. [54] Dataset
            3.3.4 Comparison on Real Dynamic Scenes
            3.3.5 Effect of Adversarial Loss
        3.4 Conclusion
    4 Intra-Frame Iterative RNNs for Video Deblurring
        4.1 Introduction
        4.2 Related Works
        4.3 Proposed Method
            4.3.1 Recurrent Video Deblurring Networks
            4.3.2 Intra-Frame Iteration Model
            4.3.3 Regularization by Stochastic Training
        4.4 Experiments
            4.4.1 Datasets
            4.4.2 Implementation Details
            4.4.3 Comparisons on GOPRO [72] Dataset
            4.4.4 Comparisons on [97] Dataset and Real Videos
        4.5 Conclusion
    5 Learning Loss Functions for Image Deblurring
        5.1 Introduction
        5.2 Related Works
        5.3 Proposed Method
            5.3.1 Clean Images are Hard to Reblur
            5.3.2 Supervision from Reblurring Loss
            5.3.3 Test-time Adaptation by Self-Supervision
        5.4 Experiments
            5.4.1 Effect of Reblurring Loss
            5.4.2 Effect of Sharpness Preservation Loss
            5.4.3 Comparison with Other Perceptual Losses
            5.4.4 Effect of Test-time Adaptation
            5.4.5 Comparison with State-of-the-Art Methods
            5.4.6 Real World Image Deblurring
            5.4.7 Combining Reblurring Loss with Other Perceptual Losses
            5.4.8 Perception vs. Distortion Trade-Off
            5.4.9 Visual Comparison of Loss Functions
            5.4.10 Implementation Details
            5.4.11 Determining Reblurring Module Size
        5.5 Conclusion
    6 Conclusion
    Abstract (Korean)
    Acknowledgements
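    To make the reblurring-loss idea more tangible, here is a schematic sketch of one plausible reading: a small reblurring network tries to amplify any blur remaining in the deblurred output, and the deblurring model is penalized when the reblurred result drifts from the sharp target (supervised) or from its own output (self-supervised, usable at test time). The module and the exact loss forms are placeholders, not the dissertation's implementation.

```python
# Schematic sketch (placeholder modules, not the dissertation's code): a reblurring
# network tries to amplify any blur left in the deblurred output; if the output is
# truly sharp, reblurring should change little.
import torch
import torch.nn as nn

reblur_net = nn.Sequential(                     # placeholder reblurring module
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 3, 3, padding=1),
)

def reblurring_loss(deblurred, sharp_gt):
    """Supervised variant: the reblurred prediction should stay close to the
    sharp target, i.e. contain no residual blur for reblur_net to amplify."""
    return nn.functional.l1_loss(reblur_net(deblurred), sharp_gt)

def self_reblurring_loss(deblurred):
    """Self-supervised variant usable at test time (no ground truth needed)."""
    return nn.functional.l1_loss(reblur_net(deblurred), deblurred)

loss = reblurring_loss(torch.rand(1, 3, 64, 64), torch.rand(1, 3, 64, 64))
```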

    Hierarchical Integration Diffusion Model for Realistic Image Deblurring

    Full text link
    Diffusion models (DMs) have recently been introduced into image deblurring and have exhibited promising performance, particularly in terms of detail reconstruction. However, the diffusion model requires a large number of inference iterations to recover a clean image from pure Gaussian noise, which consumes massive computational resources. Moreover, the distribution synthesized by the diffusion model is often misaligned with the target results, which limits performance on distortion-based metrics. To address the above issues, we propose the Hierarchical Integration Diffusion Model (HI-Diff) for realistic image deblurring. Specifically, we perform the DM in a highly compact latent space to generate the prior feature for the deblurring process. The deblurring process is implemented by a regression-based method to obtain better distortion accuracy, while the highly compact latent space ensures the efficiency of the DM. Furthermore, we design a hierarchical integration module to fuse the prior into the regression-based model at multiple scales, enabling better generalization in complex blurry scenarios. Comprehensive experiments on synthetic and real-world blur datasets demonstrate that our HI-Diff outperforms state-of-the-art methods. Code and trained models are available at https://github.com/zhengchen1999/HI-Diff. Comment: Code is available at https://github.com/zhengchen1999/HI-Diff
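    A rough sketch of the fusion idea, injecting a compact latent prior into restorer features at one scale via cross-attention, is shown below; the shapes, module names, and residual injection are assumptions, not the released HI-Diff code.

```python
# Rough sketch (assumed shapes/names, not the released HI-Diff code): fuse a
# compact latent prior into restorer features at one scale via cross-attention.
import torch
import torch.nn as nn

class HierarchicalIntegration(nn.Module):
    def __init__(self, feat_dim, prior_dim, heads=4):
        super().__init__()
        self.to_kv = nn.Linear(prior_dim, feat_dim)
        self.attn = nn.MultiheadAttention(feat_dim, heads, batch_first=True)

    def forward(self, feat, prior):
        # feat: (B, C, H, W) restorer features; prior: (B, N, prior_dim) latent tokens.
        b, c, h, w = feat.shape
        q = feat.flatten(2).transpose(1, 2)          # (B, H*W, C) queries
        kv = self.to_kv(prior)                       # project prior to feature dim
        fused, _ = self.attn(q, kv, kv)              # cross-attention: prior -> features
        return feat + fused.transpose(1, 2).reshape(b, c, h, w)  # residual injection

# One scale of fusion; the full model would repeat this at several decoder scales.
out = HierarchicalIntegration(32, 64)(torch.randn(1, 32, 16, 16), torch.randn(1, 8, 64))
print(out.shape)  # torch.Size([1, 32, 16, 16])
```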

    Joint Video Multi-Frame Interpolation and Deblurring under Unknown Exposure Time

    Full text link
    Natural videos captured by consumer cameras often suffer from low frame rate and motion blur due to the combination of dynamic scene complexity, lens and sensor imperfections, and less-than-ideal exposure settings. As a result, computational methods that jointly perform video frame interpolation and deblurring have begun to emerge, though under the unrealistic assumption that the exposure time is known and fixed. In this work, we aim ambitiously for a more realistic and challenging task: joint video multi-frame interpolation and deblurring under unknown exposure time. Toward this goal, we first adopt a variant of supervised contrastive learning to construct an exposure-aware representation from the input blurred frames. We then train two U-Nets for intra-motion and inter-motion analysis, respectively, adapting to the learned exposure representation via gain tuning. We finally build our video reconstruction network upon the exposure and motion representations through progressive exposure-adaptive convolution and motion refinement. Extensive experiments on both simulated and real-world datasets show that our optimized method achieves notable performance gains over the state of the art on the joint video x8 interpolation and deblurring task. Moreover, on the seemingly implausible x16 interpolation task, our method outperforms existing methods by more than 1.5 dB in terms of PSNR. Comment: Accepted by CVPR 2023, available at https://github.com/shangwei5/VIDU
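    To illustrate the "gain tuning" idea of adapting features to a learned exposure representation, the following sketch modulates convolutional features with per-channel gains predicted from an exposure embedding; the layer sizes, the sigmoid gating, and all names are invented, not the paper's implementation.

```python
# Illustrative sketch (invented sizes): modulate features with per-channel gains
# predicted from a learned exposure embedding ("gain tuning").
import torch
import torch.nn as nn

class GainTuning(nn.Module):
    def __init__(self, channels, exposure_dim):
        super().__init__()
        self.to_gain = nn.Sequential(
            nn.Linear(exposure_dim, channels), nn.Sigmoid()  # gains in (0, 1)
        )

    def forward(self, feat, exposure_embed):
        # feat: (B, C, H, W); exposure_embed: (B, exposure_dim) from the exposure encoder.
        gain = self.to_gain(exposure_embed)[:, :, None, None]  # (B, C, 1, 1)
        return feat * (1.0 + gain)   # scale features according to exposure cues

out = GainTuning(32, 128)(torch.randn(2, 32, 40, 40), torch.randn(2, 128))
print(out.shape)  # torch.Size([2, 32, 40, 40])
```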