Learning Representations for Controllable Image Restoration
Deep Convolutional Neural Networks have sparked a renaissance in all the sub-fields of computer vision. Tremendous progress has been made in the area of image restoration. The research community has pushed the boundaries of image deblurring, super-resolution, and denoising. However, given a distorted image, most existing methods typically produce a single restored output. The tasks mentioned above are inherently ill-posed, leading to an infinite number of plausible solutions. This thesis focuses on designing image restoration techniques capable of producing multiple restored results and granting users more control over the restoration process. Towards this goal, we demonstrate how one could leverage the power of unsupervised representation learning.
Image restoration is vital when applied to distorted images of human faces due to their social significance. Generative Adversarial Networks enable an unprecedented level of generated facial detail combined with a smooth latent space. We leverage the power of GANs towards the goal of learning controllable neural face representations. We demonstrate how to learn an inverse mapping from image space to these latent representations, how to tune these representations towards a specific task, and finally how to manipulate latent codes in these spaces. For example, we show how GANs and their inverse mappings enable the restoration and editing of faces in the context of extreme face super-resolution, as well as the generation of sharp novel-view videos from a single motion-blurred image of a face.
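The inverse mapping from image space to a GAN's latent space is often obtained by per-image optimization. As a rough illustration of that idea (not the thesis's actual encoder), the sketch below inverts a toy linear "generator" G(z) = W z by gradient descent on the reconstruction error; the orthonormal W is an assumption chosen only to keep the toy problem well conditioned.

```python
import numpy as np

def invert(W, target, steps=200, lr=0.5):
    """Optimization-based GAN inversion sketch: find a latent code z such
    that G(z) is close to the target image, for a toy linear generator
    G(z) = W @ z standing in for a real GAN generator."""
    z = np.zeros(W.shape[1])
    for _ in range(steps):
        residual = W @ z - target           # G(z) - x
        z -= lr * (W.T @ residual)          # gradient of 0.5 * ||G(z) - x||^2
    return z

# Orthonormal columns keep the toy inversion well conditioned.
W = np.vstack([np.eye(3), np.zeros((1, 3))])
z_true = np.array([1.0, 2.0, 3.0])
z_hat = invert(W, W @ z_true)
```

With a real generator the gradient comes from backpropagation rather than a closed form, and the recovered code can then be edited before re-synthesis.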
This thesis also addresses more general blind super-resolution, denoising, and scratch removal problems, where blur kernels and noise levels are unknown. We resort to contrastive representation learning and first learn a latent space of degradations. We demonstrate that the learned representation allows inference of the ground-truth degradation parameters and can guide the restoration process. Moreover, it enables control over the amount of deblurring and denoising in the restoration via manipulation of the latent degradation features.
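The abstract does not spell out the contrastive objective; a common choice for learning such degradation embeddings is an InfoNCE-style loss, where two crops of the same degraded image form a positive pair and crops from differently degraded images serve as negatives. The NumPy sketch below is illustrative only (embedding size, temperature, and sampling are assumptions):

```python
import numpy as np

def info_nce(anchor, positive, negatives, temperature=0.1):
    """InfoNCE loss for one anchor: pull the positive embedding close,
    push the negatives away (all embeddings are L2-normalized first)."""
    def unit(v):
        return v / np.linalg.norm(v)
    a = unit(anchor)
    logits = np.array([unit(positive) @ a] + [unit(n) @ a for n in negatives])
    logits = logits / temperature
    logits -= logits.max()                        # numerical stability
    probs = np.exp(logits) / np.exp(logits).sum()
    return -np.log(probs[0])                      # positive sits at index 0

# Two crops of the same degraded image share a degradation -> positive pair;
# crops from differently degraded images act as negatives.
rng = np.random.default_rng(0)
z_anchor = rng.normal(size=32)
z_pos = z_anchor + 0.05 * rng.normal(size=32)     # near-identical degradation
z_negs = [rng.normal(size=32) for _ in range(8)]
loss = info_nce(z_anchor, z_pos, z_negs)
```

Minimizing such a loss clusters images by degradation rather than content, which is what lets the embedding later predict degradation parameters.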
Continuous Facial Motion Deblurring
We introduce a novel framework for continuous facial motion deblurring that
restores the continuous sharp moment latent in a single motion-blurred face
image via a moment control factor. Although a motion-blurred image is the
accumulated signal of continuous sharp moments during the exposure time, most
existing single image deblurring approaches aim to restore a fixed number of
frames using multiple networks and training stages. To address this problem, we
propose a continuous facial motion deblurring network based on GAN (CFMD-GAN),
which is a novel framework for restoring the continuous moment latent in a
single motion-blurred face image with a single network and a single training
stage. To stabilize the network training, we train the generator to restore
continuous moments in the order determined by our facial motion-based
reordering process (FMR) utilizing domain-specific knowledge of the face.
Moreover, we propose an auxiliary regressor that helps our generator produce
more accurate images by estimating continuous sharp moments. Furthermore, we
introduce a control-adaptive (ContAda) block that performs spatially deformable
convolution and channel-wise attention as a function of the control factor.
Extensive experiments on the 300VW dataset demonstrate that the proposed framework generates a variable number of continuous output frames by varying the moment control factor. Compared with recent single-to-single image deblurring networks trained on the same 300VW training set, the proposed method shows superior performance in restoring the central sharp frame in terms of perceptual metrics, including LPIPS, FID, and ArcFace identity distance. The proposed method also outperforms the existing single-to-video deblurring method in both qualitative and quantitative comparisons.
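The ContAda block combines spatially deformable convolution with channel-wise attention conditioned on the control factor. The attention half of that idea can be sketched in a few lines; the linear head `w`, `b` is a hypothetical stand-in for the paper's actual layers.

```python
import numpy as np

def control_channel_attention(features, control, w, b):
    """Channel-wise attention gates computed from the moment control factor.
    features: (C, H, W) feature map; control: scalar in [0, 1];
    w, b: per-channel parameters of a tiny linear head (hypothetical)."""
    gates = 1.0 / (1.0 + np.exp(-(w * control + b)))  # per-channel sigmoid
    return features * gates[:, None, None]

rng = np.random.default_rng(1)
feat = rng.normal(size=(4, 8, 8))
w, b = rng.normal(size=4), np.zeros(4)
early = control_channel_attention(feat, 0.0, w, b)  # first sharp moment
late = control_channel_attention(feat, 1.0, w, b)   # last sharp moment
```

Because the gates are a function of the control factor, one set of weights can modulate the features differently for every requested moment, which is what allows a single network to emit a continuum of frames.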
Motion deblurring of faces
Face analysis is a core part of computer vision, in which remarkable progress has been observed in the past decades. Current methods achieve recognition and tracking with invariance to fundamental modes of variation such as illumination, 3D pose, and expressions. Notwithstanding, a much less studied mode of variation is motion blur, which presents substantial challenges in face analysis. Recent approaches either make oversimplifying assumptions, e.g. in cases of joint optimization with other tasks, or fail to preserve the highly structured shape/identity information. Therefore, we propose a data-driven method that encourages identity preservation. The proposed model includes two parallel streams (sub-networks): the first deblurs the image, while the second implicitly extracts and projects the identities of both the sharp and the blurred image into similar subspaces. To train our model, we devise a method for creating realistic motion blur by averaging a variable number of frames. The averaged images originate from the 2MF2 dataset with 10 million facial frames, which we introduce for the task. Considering deblurring as an intermediate step, we utilize the deblurred outputs to conduct thorough experimentation on high-level face analysis tasks, i.e. landmark localization and face verification. The experimental evaluation demonstrates the superiority of our method.
New Datasets, Models, and Optimization
Ph.D. dissertation, Seoul National University Graduate School, College of Engineering, Department of Electrical and Computer Engineering, August 2021.

Obtaining a high-quality clean image is the ultimate goal of photography. In practice, daily photographs are often taken in dynamic environments, with moving objects as well as shaken cameras. The relative motion between the camera and the objects during the exposure causes motion blur in images and videos, degrading the visual quality. The blur strength and the shape of the motion trajectory vary with every image and every pixel in dynamic scenes. This locally varying property makes the removal of motion blur in images and videos severely ill-posed.
Rather than designing analytic solutions based on physical modeling, machine learning-based approaches can serve as a practical solution for such a highly ill-posed problem. In particular, deep learning has become the standard in the recent computer vision literature. This dissertation introduces deep learning-based solutions for image and video deblurring, tackling practical issues in various aspects.
First, a new way of constructing datasets for the dynamic scene deblurring task is proposed. It is nontrivial to simultaneously obtain a pair of blurry and sharp images that are temporally aligned. The lack of data prevents both the development of supervised learning techniques and the evaluation of deblurring algorithms. By mimicking the camera imaging pipeline with high-speed videos, realistic blurry images can be synthesized. In contrast to previous blur synthesis methods, the proposed approach can reflect the natural, complex local blur caused by multiple moving objects, varying depth, and occlusion at motion boundaries.
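Mimicking sensor integration is the core of this synthesis: consecutive high-speed frames are averaged in (approximately) linear intensity, with an inverse gamma curve standing in for the camera response function. A minimal sketch, assuming gamma-encoded inputs in [0, 1]:

```python
import numpy as np

def synthesize_blur(sharp_frames, gamma=2.2):
    """Average consecutive high-speed frames in (approximately) linear
    intensity to mimic sensor integration during the exposure.
    sharp_frames: (N, H, W) gamma-encoded images in [0, 1]."""
    linear = np.power(sharp_frames, gamma)      # invert the display gamma
    accumulated = linear.mean(axis=0)           # the sensor integrates light
    return np.power(accumulated, 1.0 / gamma)   # back to gamma-encoded space

# Three uniform frames of increasing brightness stand in for a moving scene.
frames = np.stack([np.full((4, 4), v) for v in (0.2, 0.5, 0.8)])
blurry = synthesize_blur(frames)
```

Averaging in linear space rather than directly on the gamma-encoded pixels matters: because the gamma curve is convex, the two give different results, and only the linear-space average matches what a real sensor accumulates.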
Second, based on the proposed datasets, a novel neural network architecture for the single-image deblurring task is presented. Adopting the coarse-to-fine strategy widely used in energy optimization-based methods for image deblurring, a multi-scale neural network architecture is derived. Compared with a single-scale model of similar complexity, the multi-scale model exhibits higher accuracy and faster speed.
Third, a lightweight recurrent neural network model architecture for video deblurring is proposed. In order to obtain a high-quality video from deblurring, it is important to exploit the intrinsic information in the target frame as well as the temporal relation between neighboring frames. Benefiting from both, the proposed intra-frame iterative scheme applied to RNNs achieves accuracy improvements without increasing the number of model parameters.
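The intra-frame iteration idea, reusing the same recurrent cell several times per frame so accuracy grows without extra parameters, can be caricatured in a few lines; the averaging cell below is a placeholder for the learned RNN cell, not the dissertation's architecture.

```python
import numpy as np

def deblur_video(frames, n_inner=3):
    """Recurrent deblurring caricature: a hidden state carries temporal
    context across frames, and each frame is refined n_inner times with the
    same cell, so extra iterations cost no extra parameters."""
    h = np.zeros_like(frames[0])
    outputs = []
    for x in frames:
        for _ in range(n_inner):         # intra-frame iterations
            h = 0.5 * h + 0.5 * x        # placeholder for the learned RNN cell
        outputs.append(h.copy())
    return outputs

frames = [np.ones((2, 2)), 2.0 * np.ones((2, 2))]
restored = deblur_video(frames)
```

Each inner iteration refines the state toward the current frame while the state still carries information from earlier frames, which is the trade-off the scheme exploits.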
Lastly, a novel loss function is proposed to better optimize the deblurring models. Synthesizing dynamic blur from a clean, sharp image without motion information is itself another ill-posed problem. While the goal of deblurring is to completely remove motion blur, conventional loss functions fail to train neural networks to fulfill this goal, leaving traces of blur in the deblurred images. The proposed reblurring loss functions are designed to better eliminate motion blur and to produce sharper images. Furthermore, the self-supervised learning process facilitates adaptation of the deblurring model at test time.
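The reblurring idea can be illustrated with a fixed box filter standing in for the learned reblurring module: a well-deblurred image, once reblurred, should match the original blurry input, while residual blur shows up as a mismatch. This is only a sketch of the principle; the actual method trains the reblurring module instead of fixing the kernel.

```python
import numpy as np

def box_blur(img, k=3):
    """Toy reblurring operator: a k x k box filter (the actual method learns
    a reblurring module rather than fixing the kernel)."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.zeros_like(img)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def reblurring_loss(deblurred, blurry):
    """Residual blur penalty: reblur the prediction and compare it with the
    original blurry image."""
    return float(np.mean((box_blur(deblurred) - blurry) ** 2))

rng = np.random.default_rng(0)
sharp = rng.uniform(size=(16, 16))
blurry = box_blur(sharp)
loss_sharp_pred = reblurring_loss(sharp, blurry)    # ideal deblurring
loss_blurry_pred = reblurring_loss(blurry, blurry)  # no deblurring at all
```

A prediction that still contains blur is "easy to reblur" into something over-smoothed, so the loss penalizes it, whereas a truly sharp prediction reproduces the blurry input under reblurring.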
With the proposed datasets, model architectures, and loss functions, deep learning-based single-image and video deblurring methods are presented. Extensive experimental results demonstrate state-of-the-art performance both quantitatively and qualitatively.

1 Introduction
2 Generating Datasets for Dynamic Scene Deblurring
2.1 Introduction
2.2 GOPRO dataset
2.3 REDS dataset
2.4 Conclusion
3 Deep Multi-Scale Convolutional Neural Networks for Single Image Deblurring
3.1 Introduction
3.1.1 Related Works
3.1.2 Kernel-Free Learning for Dynamic Scene Deblurring
3.2 Proposed Method
3.2.1 Model Architecture
3.2.2 Training
3.3 Experiments
3.3.1 Comparison on GOPRO Dataset
3.3.2 Comparison on Köhler Dataset
3.3.3 Comparison on Lai et al. [54] Dataset
3.3.4 Comparison on Real Dynamic Scenes
3.3.5 Effect of Adversarial Loss
3.4 Conclusion
4 Intra-Frame Iterative RNNs for Video Deblurring
4.1 Introduction
4.2 Related Works
4.3 Proposed Method
4.3.1 Recurrent Video Deblurring Networks
4.3.2 Intra-Frame Iteration Model
4.3.3 Regularization by Stochastic Training
4.4 Experiments
4.4.1 Datasets
4.4.2 Implementation Details
4.4.3 Comparisons on GOPRO [72] Dataset
4.4.4 Comparisons on [97] Dataset and Real Videos
4.5 Conclusion
5 Learning Loss Functions for Image Deblurring
5.1 Introduction
5.2 Related Works
5.3 Proposed Method
5.3.1 Clean Images are Hard to Reblur
5.3.2 Supervision from Reblurring Loss
5.3.3 Test-time Adaptation by Self-Supervision
5.4 Experiments
5.4.1 Effect of Reblurring Loss
5.4.2 Effect of Sharpness Preservation Loss
5.4.3 Comparison with Other Perceptual Losses
5.4.4 Effect of Test-time Adaptation
5.4.5 Comparison with State-of-the-Art Methods
5.4.6 Real World Image Deblurring
5.4.7 Combining Reblurring Loss with Other Perceptual Losses
5.4.8 Perception vs. Distortion Trade-Off
5.4.9 Visual Comparison of Loss Functions
5.4.10 Implementation Details
5.4.11 Determining Reblurring Module Size
5.5 Conclusion
6 Conclusion
Abstract (in Korean)
Acknowledgements
FCL-GAN: A Lightweight and Real-Time Baseline for Unsupervised Blind Image Deblurring
Blind image deblurring (BID) remains a challenging and significant task. Benefiting from the strong fitting ability of deep learning, paired data-driven supervised BID methods have made great progress. However, paired data are usually synthesized by hand, and realistic blurs are more complex than synthetic ones, which makes supervised methods inept at modeling realistic blurs and hinders their real-world applications. As such, unsupervised deep BID methods without paired data offer certain advantages, but current methods still suffer from drawbacks, e.g., bulky model size, long inference time, and strict image resolution and domain requirements. In this paper, we propose a lightweight and real-time unsupervised BID baseline, termed Frequency-domain Contrastive Loss Constrained Lightweight CycleGAN (shortly, FCL-GAN), with attractive properties, i.e., no image domain limitation, no image resolution limitation, 25x lighter than SOTA, and 5x faster than SOTA. To guarantee the lightweight property and performance superiority, two new collaboration units, a lightweight domain conversion unit (LDCU) and a parameter-free frequency-domain contrastive unit (PFCU), are designed. LDCU mainly implements inter-domain conversion in a lightweight manner. PFCU further explores the similarity measure, external difference, and internal connection between the blurred-domain and sharp-domain images in the frequency domain, without involving extra parameters. Extensive experiments on several image datasets demonstrate the effectiveness of our FCL-GAN in terms of performance, model size, and inference time.
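The abstract leaves PFCU's exact formulation open; the parameter-free frequency-domain comparison it builds on can be sketched as a distance between log-amplitude spectra, which separates blurry from sharp images because blur suppresses high frequencies. The function below is an illustration of that principle, not the paper's unit.

```python
import numpy as np

def freq_distance(img_a, img_b):
    """Parameter-free distance between the log-amplitude spectra of two
    images (a stand-in for PFCU's frequency-domain similarity measure)."""
    amp_a = np.log1p(np.abs(np.fft.fft2(img_a)))
    amp_b = np.log1p(np.abs(np.fft.fft2(img_b)))
    return float(np.mean((amp_a - amp_b) ** 2))

rng = np.random.default_rng(0)
sharp = rng.normal(size=(16, 16))
blurry = 0.5 * (sharp + np.roll(sharp, 1, axis=1))  # crude horizontal blur
d_same = freq_distance(sharp, sharp)    # identical spectra
d_blur = freq_distance(sharp, blurry)   # blur attenuated high frequencies
```

Because the measure involves only FFTs and element-wise arithmetic, it adds no trainable parameters, consistent with the "parameter-free" claim.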