Joint Blind Motion Deblurring and Depth Estimation of Light Field
Removing camera motion blur from a single light field is a challenging task,
since it is a highly ill-posed inverse problem. The problem becomes even worse
when the blur kernel varies spatially due to scene depth variation and high-order
camera motion. In this paper, we propose a novel algorithm that jointly estimates
all blur model variables, including the latent sub-aperture images, camera motion,
and scene depth, from the blurred 4D light field. Exploiting the multi-view nature
of a light field alleviates the ill-posedness of the optimization by providing
strong depth cues and multi-view blur observations. The proposed joint
estimation achieves high-quality light field deblurring and depth estimation
simultaneously under arbitrary 6-DOF camera motion and unconstrained scene
depth. Extensive experiments on real and synthetic blurred light fields confirm
that the proposed algorithm outperforms state-of-the-art light field
deblurring and depth estimation methods.
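Joint schemes of this kind typically alternate between a non-blind latent-image update and a motion/depth update. A minimal 1D sketch of the two sub-problems (illustrative assumptions only: a box-kernel motion model and a Wiener solver, not the paper's actual 4D light field algorithm with 6-DOF motion):

```python
import numpy as np

def box_kernel(n, length):
    # Zero-phase box kernel on a length-n circular grid: a crude stand-in
    # for a constant-velocity motion blur kernel (illustrative assumption).
    k = np.zeros(n)
    k[:length] = 1.0 / length
    return np.roll(k, -(length // 2))

def blur(x, k):
    # Circular convolution via FFT.
    return np.real(np.fft.ifft(np.fft.fft(x) * np.fft.fft(k)))

def wiener_step(y, k, eps=1e-3):
    # Latent-image sub-problem: non-blind Wiener deconvolution.
    K, Y = np.fft.fft(k), np.fft.fft(y)
    return np.real(np.fft.ifft(np.conj(K) * Y / (np.abs(K) ** 2 + eps)))

def motion_step(y, x, lengths=range(1, 12)):
    # Motion sub-problem: grid search for the kernel parameter whose
    # re-blurred latent estimate best matches the observation.
    errs = [np.sum((blur(x, box_kernel(len(y), L)) - y) ** 2) for L in lengths]
    return list(lengths)[int(np.argmin(errs))]

# Toy run: blur a 1D signal with a support-5 kernel, then query a sub-problem.
rng = np.random.default_rng(0)
x_true = np.cumsum(rng.standard_normal(64))
y = blur(x_true, box_kernel(64, 5))
print(motion_step(y, x_true))  # -> 5 (true support recovered given the true latent)
```

Given the current latent estimate, `motion_step` refits the blur parameter; given the motion, `wiener_step` refits the latent signal. Naive alternation of this kind is prone to the trivial no-blur solution, which is why strong priors, such as the multi-view depth cues the paper exploits, matter.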
Learning to Extract a Video Sequence from a Single Motion-Blurred Image
We present a method to extract a video sequence from a single motion-blurred
image. Motion-blurred images are the result of an averaging process, where
instant frames are accumulated over time during the exposure of the sensor.
Unfortunately, reversing this process is nontrivial. Firstly, averaging
destroys the temporal ordering of the frames. Secondly, the recovery of a
single frame is a blind deconvolution task, which is highly ill-posed. We
present a deep learning scheme that gradually reconstructs a temporal ordering
by sequentially extracting pairs of frames. Our main contribution is to
introduce loss functions invariant to the temporal order. This lets a neural
network choose during training what frame to output among the possible
combinations. We also address the ill-posedness of deblurring by designing a
network with a large receptive field, implemented via resampling for higher
computational efficiency. Our proposed method can successfully retrieve sharp
image sequences from a single motion-blurred image and generalizes well on
synthetic and real datasets captured with different cameras.
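The two core ideas, blur formation as temporal averaging and a loss invariant to temporal order, can be sketched in a few lines (a toy numpy version assuming an L1 photometric distance, not the paper's exact training loss):

```python
import numpy as np

def motion_blurred(frames):
    # Blur formation: the blurred image is the average of the sharp
    # instant frames accumulated during the sensor's exposure.
    return np.mean(np.stack(frames), axis=0)

def order_invariant_pair_loss(pred_a, pred_b, gt_early, gt_late):
    # Averaging destroys temporal ordering, so the network cannot know
    # which frame comes first; penalize only the better of the two
    # possible assignments, letting it output the pair in either order.
    direct = np.abs(pred_a - gt_early).mean() + np.abs(pred_b - gt_late).mean()
    swapped = np.abs(pred_a - gt_late).mean() + np.abs(pred_b - gt_early).mean()
    return min(direct, swapped)
```

Swapping the two predictions leaves the loss unchanged, so gradient descent never forces the network to commit to an (unrecoverable) absolute temporal order.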
Simultaneous Stereo Video Deblurring and Scene Flow Estimation
Videos of outdoor scenes often show unpleasant blur effects due to the large
relative motion between the camera and the dynamic objects and large depth
variations. Existing works typically focus on monocular video deblurring. In this
paper, we propose a novel approach to deblurring from stereo videos. In
particular, we exploit the piece-wise planar assumption about the scene and
leverage the scene flow information to deblur the image. Unlike the existing
approach [31], which used a pre-computed scene flow, we propose a single
framework to jointly estimate the scene flow and deblur the image, where the
motion cues from scene flow estimation and the blur information reinforce
each other and produce results superior to conventional scene flow
estimation or stereo deblurring methods. We evaluate our method extensively on
two available datasets and achieve significant improvements in flow estimation
and blur removal over the state-of-the-art methods.
Comment: Accepted to IEEE International Conference on Computer Vision and Pattern Recognition (CVPR) 201
Multi-view Stereo Matching and Image Restoration Using a Unified System
Ph.D. dissertation, Department of Electrical and Computer Engineering, Graduate School of Seoul National University, February 2017. Kyoung Mu Lee.
Estimating camera pose and scene structures from seriously degraded images is a challenging problem. Most existing multi-view stereo algorithms assume high-quality input images and therefore produce unreliable results for blurred, noisy, or low-resolution images. Experimental results show that using off-the-shelf image reconstruction algorithms as independent preprocessing is generally ineffective or even counterproductive. This is because naive frame-wise image reconstruction methods fundamentally ignore the consistency between images, even though they may produce visually plausible results.
In this thesis, based on the fact that image reconstruction and multi-view stereo problems are interrelated, we present a unified framework to solve these problems jointly. The validity of this approach is empirically verified on four different problems: dense depth map reconstruction, camera pose estimation, super-resolution, and deblurring from images obtained by a single moving camera. By reflecting the physical imaging process, we cast our objective as a cost minimization problem and solve it using alternating optimization techniques. Experiments show that the proposed method can restore high-quality depth maps from seriously degraded images for both synthetic and real video, in contrast to the failure of naive multi-view stereo methods. Our algorithm also produces superior super-resolution and deblurring results compared to simple preprocessing with conventional super-resolution and deblurring techniques.
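The alternating scheme described above amounts to block-coordinate descent on the joint cost: fix all variable blocks but one (latent images, depth maps, or camera poses in the thesis), minimize over that block, and cycle. A generic sketch on a toy two-variable quadratic (the cost and closed-form update rules are illustrative assumptions, not the thesis's actual energy):

```python
def alternating_minimize(cost, steps, state, iters=20):
    # Block-coordinate descent: apply each per-block update rule in turn;
    # exact per-block minimization never increases the joint cost.
    history = [cost(state)]
    for _ in range(iters):
        for step in steps:
            state = step(state)
        history.append(cost(state))
    return state, history

# Toy joint cost E(x, y) = (x - 2y)^2 + (y - 1)^2 with closed-form block updates.
cost = lambda s: (s[0] - 2 * s[1]) ** 2 + (s[1] - 1) ** 2
step_x = lambda s: (2 * s[1], s[1])            # argmin over x with y fixed
step_y = lambda s: (s[0], (2 * s[0] + 1) / 5)  # argmin over y with x fixed
state, history = alternating_minimize(cost, [step_x, step_y], (0.0, 0.0))
```

The cost sequence is monotonically non-increasing, which is what makes such alternation a safe outer loop even when each block sub-problem (deconvolution, stereo matching, pose refinement) is solved by a different specialized method.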
Moreover, we show that the proposed framework can be generalized to handle more common scenarios. First, it can solve image reconstruction and multi-view stereo problems for multi-view single-shot images captured by a light field camera. By using information of calibrated multi-view images, it recovers the motions of individual objects in the input image as well as the unknown camera motion during the shutter time.
The contribution of this thesis is a new perspective on existing computer vision problems from an integrated viewpoint. We show that by solving interrelated problems jointly, we can obtain physically more plausible solutions and better performance, especially when the input images are challenging. The proposed optimization scheme also keeps our algorithm practical in terms of computational complexity.
1 Introduction
1.1 Outline of Dissertation
2 Background
3 Generalized Imaging Model
3.1 Camera Projection Model
3.2 Depth and Warping Operation
3.3 Representation of Camera Pose in SE(3)
3.4 Proposed Imaging Model
4 Rendering Synthetic Datasets
4.1 Making Blurred Image Sequences using Depth-based Image Rendering
4.2 Making Blurred Image Sequences using Blender
5 A Unified Framework for Single-shot Multi-view Images
5.1 Introduction
5.2 Related Works
5.3 Deblurring with 4D Light Fields
5.3.1 Motion Blur Formulation in Light Fields
5.3.2 Initialization
5.4 Joint Estimation
5.4.1 Energy Formulation
5.4.2 Update Latent Image
5.4.3 Update Camera Pose and Depth Map
5.5 Experimental Results
5.5.1 Synthetic Data
5.5.2 Real Data
5.6 Conclusion
6 A Unified Framework for a Monocular Image Sequence
6.1 Introduction
6.2 Related Works
6.3 Modeling Imaging Process
6.4 Unified Energy Formulation
6.4.1 Matching term
6.4.2 Self-consistency term
6.4.3 Regularization term
6.5 Optimization
6.5.1 Update of the depth maps and camera poses
6.5.2 Update of the latent images
6.5.3 Initialization
6.5.4 Occlusion Handling
6.6 Experimental Results
6.6.1 Synthetic datasets
6.6.2 Real datasets
6.6.3 The effect of parameters
6.7 Conclusion
7 A Unified Framework for SLAM
7.1 Motivation
7.2 Baseline
7.3 Proposed Method
7.4 Experimental Results
7.4.1 Quantitative comparison
7.4.2 Qualitative results
7.4.3 Runtime
7.5 Conclusion
8 Conclusion
8.1 Summary and Contribution of the Dissertation
8.2 Future Works
Bibliography
Abstract (in Korean)
Recent Progress in Image Deblurring
This paper comprehensively reviews the recent development of image
deblurring, including non-blind/blind and spatially invariant/variant deblurring
techniques. These techniques share the same objective of inferring a
latent sharp image from one or several corresponding blurry images, while
blind deblurring techniques must additionally derive an accurate blur
kernel. Given the critical role of image restoration in modern imaging
systems, which must provide high-quality images under complex conditions such as
motion, undesirable lighting, and imperfect system components, image
deblurring has attracted growing attention in recent years. From the viewpoint
of how they handle ill-posedness, a crucial issue in deblurring
tasks, existing methods can be grouped into five categories: Bayesian inference
frameworks, variational methods, sparse representation-based methods,
homography-based modeling, and region-based methods. Despite a
certain level of progress, image deblurring, especially the blind case, remains
limited by complex application conditions that make the blur
kernel hard to obtain and spatially variant. We provide a holistic
understanding of and deep insight into image deblurring in this review. An
analysis of the empirical evidence for representative methods, practical
issues, and a discussion of promising future directions are also presented.
Comment: 53 pages, 17 figures
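As a concrete instance of the Bayesian-inference category the review describes, non-blind Richardson-Lucy deconvolution (Poisson maximum likelihood) fits in a few lines. This 1D circular-convolution sketch assumes the blur kernel is known, which is exactly what the blind methods surveyed must additionally estimate:

```python
import numpy as np

def conv(x, k):
    # Circular convolution via FFT.
    return np.real(np.fft.ifft(np.fft.fft(x) * np.fft.fft(k)))

def richardson_lucy(y, k, iters=50):
    # Non-blind Richardson-Lucy: multiplicative updates that maximize the
    # Poisson likelihood of the blurry observation y under a known kernel k.
    x = np.full_like(y, y.mean())  # flat nonnegative initialization
    k_adj = np.roll(k[::-1], 1)    # adjoint of circular convolution (correlation)
    for _ in range(iters):
        ratio = y / np.maximum(conv(x, k), 1e-8)
        x = x * conv(ratio, k_adj)
    return x
```

The iterates stay nonnegative by construction. Blind variants wrap such a solver in an outer loop that also updates `k`, which is where the ill-posedness the review discusses enters.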