358 research outputs found
Rotational motion deblurring of a rigid object from a single image
Most previous motion deblurring methods restore the degraded image assuming a shift-invariant linear blur filter. These methods are not applicable if the blur is caused by spatially variant motions. In this paper, we model the physical properties of a 2-D rigid body movement and propose a practical framework to deblur rotational motions from a single image. Our main observation is that the transparency cue of a blurred object, which represents the motion blur formation from an imaging perspective, provides sufficient information in determining the object movements. Comparatively, single image motion deblurring using pixel color/gradient information has large uncertainties in motion representation and computation. Our results are produced by minimizing a new energy function combining rotation, possible translations, and the transparency map using an iterative optimizing process. The effectiveness of our method is demonstrated using challenging image examples.
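The contrast the abstract draws can be sketched concretely: shift-invariant blur is one convolution kernel applied everywhere, while rotational blur averages the image over rotated positions, so the effective kernel differs per pixel. A minimal NumPy illustration (function names and the nearest-neighbour resampling are illustrative choices, not the paper's method):

```python
import numpy as np

def shift_invariant_blur(img, kernel):
    """Classical blur model: one spatially-invariant kernel for the whole image."""
    kh, kw = kernel.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(img, ((ph, ph), (pw, pw)), mode="edge")
    out = np.zeros_like(img, dtype=float)
    for dy in range(kh):
        for dx in range(kw):
            out += kernel[dy, dx] * padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out

def rotational_blur(img, angle_deg, steps=8):
    """Blur by averaging the image over rotations spanning the exposure.
    The effective kernel grows with distance from the rotation centre,
    so no single convolution kernel can reproduce it."""
    h, w = img.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    ys, xs = np.mgrid[0:h, 0:w]
    acc = np.zeros_like(img, dtype=float)
    for t in np.linspace(0.0, np.radians(angle_deg), steps):
        # inverse-rotate the sampling grid (nearest neighbour for brevity)
        sy = cy + (ys - cy) * np.cos(t) - (xs - cx) * np.sin(t)
        sx = cx + (ys - cy) * np.sin(t) + (xs - cx) * np.cos(t)
        sy = np.clip(np.round(sy).astype(int), 0, h - 1)
        sx = np.clip(np.round(sx).astype(int), 0, w - 1)
        acc += img[sy, sx]
    return acc / steps
```

Note that the centre of rotation stays sharp under `rotational_blur` while pixels far from it are smeared over long arcs, which is exactly the spatial variance the shift-invariant model cannot express.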
Joint Blind Motion Deblurring and Depth Estimation of Light Field
Removing camera motion blur from a single light field is challenging, since it is a highly ill-posed inverse problem. The problem becomes even harder when the blur kernel varies spatially due to scene depth variation and high-order camera motion. In this paper, we propose a novel algorithm that jointly estimates all blur model variables, including the latent sub-aperture image, the camera motion, and the scene depth, from the blurred 4D light field. Exploiting the multi-view nature of a light field eases the ill-posedness of the optimization by providing strong depth cues and multi-view blur observations. The proposed joint estimation achieves high-quality light field deblurring and depth estimation simultaneously under arbitrary 6-DOF camera motion and unconstrained scene depth. Intensive experiments on real and synthetic blurred light fields confirm that the proposed algorithm outperforms state-of-the-art light field deblurring and depth estimation methods.
The World of Fast Moving Objects
The notion of a Fast Moving Object (FMO), i.e. an object that moves over a
distance exceeding its size within the exposure time, is introduced. FMOs may,
and typically do, rotate with high angular speed. FMOs are very common in
sports videos, but are not rare elsewhere. In a single frame, such objects are
often barely visible and appear as semi-transparent streaks.
A method for the detection and tracking of FMOs is proposed. The method
consists of three distinct algorithms, which form an efficient localization
pipeline that operates successfully in a broad range of conditions. We show
that it is possible to recover the appearance of the object and its axis of
rotation, despite its blurred appearance. The proposed method is evaluated on a
new annotated dataset. The results show that existing trackers are inadequate
for the problem of FMO localization and a new approach is required. Two
applications of localization, temporal super-resolution and highlighting, are
presented.
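The defining criterion above is quantitative: an object is an FMO when its displacement within one exposure exceeds its own size. A minimal sketch of that test (names are illustrative, not from the paper):

```python
def streak_length(speed_px_per_s: float, exposure_s: float) -> float:
    """Distance the object travels during a single exposure, i.e. the
    length of the semi-transparent streak it leaves in the frame."""
    return speed_px_per_s * exposure_s

def is_fmo(displacement_px: float, diameter_px: float) -> bool:
    """A Fast Moving Object moves farther than its own size within
    the exposure time."""
    return displacement_px > diameter_px

# A ball 20 px across travelling 1000 px/s during a 20 ms exposure
# moves 20 px -- right at the FMO boundary; at 55 px it clearly is one.
```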
Correct spatially varying image blur by Projective Motion Richardson-Lucy Algorithm and Blur Image alignment
Master's thesis (Master of Engineering).
Three-dimensional double helical DNA structure directly revealed from its X-ray fiber diffraction pattern by iterative phase retrieval
Coherent diffraction imaging (CDI) allows the retrieval of the structure of
an isolated object, such as a macromolecule, from its diffraction pattern. CDI
requires the fulfilment of two conditions: the imaging radiation must be
coherent and the object must be isolated. We show that it is possible to
directly retrieve the molecular structure from a diffraction pattern that
was acquired neither with coherent radiation nor from an individual molecule,
provided the molecule exhibits periodicity in one direction, as in the case of
fiber diffraction. We demonstrate that by applying iterative phase retrieval
methods to a fiber diffraction pattern, the repeating unit, that is, the
molecule structure, can directly be reconstructed without any prior modeling.
As an example, we recover the structure of the DNA double helix in
three dimensions from its two-dimensional X-ray fiber diffraction pattern,
Photograph 51, acquired in the famous experiment by Raymond Gosling and
Rosalind Franklin, at a resolution of 3.4 Angstrom.
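The abstract's "iterative phase retrieval" family includes the classic error-reduction scheme: alternately enforce the measured Fourier magnitudes and the real-space constraints (known support, non-negativity). A generic NumPy sketch of that loop, under the simplifying assumption of a known support mask (not the authors' exact procedure):

```python
import numpy as np

def error_reduction(measured_mag, support, n_iter=200, seed=0):
    """Error-reduction phase retrieval: alternate projections between the
    measured Fourier-magnitude constraint and real-space constraints
    (support mask + non-negativity). Returns the estimate and the
    per-iteration relative Fourier-magnitude error."""
    rng = np.random.default_rng(seed)
    x = rng.random(measured_mag.shape) * support  # random start inside support
    errs = []
    for _ in range(n_iter):
        X = np.fft.fft2(x)
        errs.append(np.linalg.norm(np.abs(X) - measured_mag)
                    / np.linalg.norm(measured_mag))
        X = measured_mag * np.exp(1j * np.angle(X))  # keep phase, impose magnitude
        x = np.real(np.fft.ifft2(X))
        x = np.where(support & (x > 0), x, 0.0)      # support + non-negativity
    return x, errs
```

The recovered object may be translated or inverted relative to the truth (the usual phase-retrieval ambiguities); the error metric is nevertheless non-increasing for this scheme.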
Motion-From-Blur: 3D Shape and Motion Estimation of Motion-Blurred Objects in Videos
We propose a method for jointly estimating the 3D motion, 3D shape, and
appearance of highly motion-blurred objects from a video. To this end, we model
the blurred appearance of a fast moving object in a generative fashion by
parametrizing its 3D position, rotation, velocity, acceleration, bounces,
shape, and texture over the duration of a predefined time window spanning
multiple frames. Using differentiable rendering, we are able to estimate all
parameters by minimizing the pixel-wise reprojection error to the input video
via backpropagating through a rendering pipeline that accounts for motion blur
by averaging the graphics output over short time intervals. For that purpose,
we also estimate the camera exposure gap time within the same optimization. To
account for abrupt motion changes like bounces, we model the motion trajectory
as a piece-wise polynomial, and we are able to estimate the specific time of
the bounce at sub-frame accuracy. Experiments on established benchmark datasets
demonstrate that our method outperforms previous methods for fast moving object
deblurring and 3D reconstruction. (CVPR 2022 camera-ready.)
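The blur-formation step described here, averaging the renderer's output over short sub-intervals of the exposure, can be sketched in a toy 1-D form. The "renderer" below is just a box object moving at constant velocity, an illustration of the averaging idea rather than the paper's differentiable pipeline:

```python
import numpy as np

def render_sharp(width, center, half_size):
    """Toy renderer: a 1-D image of a box-shaped object at a given position."""
    xs = np.arange(width)
    return (np.abs(xs - center) <= half_size).astype(float)

def render_blurred(width, x0, velocity, exposure, half_size, steps=32):
    """Synthesize motion blur by averaging sharp renderings taken at
    sub-frame time steps spanning the exposure, as in the generative
    blur model."""
    ts = np.linspace(0.0, exposure, steps)
    frames = [render_sharp(width, x0 + velocity * t, half_size) for t in ts]
    return np.mean(frames, axis=0)
```

Because the frames are averaged, the blurred object covers a longer streak than the sharp one but with proportionally lower intensity, which is why fast objects appear as semi-transparent streaks.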
A Study on New Models, Algorithms, and Interpretations for Dynamic Scene Deblurring
Thesis (Ph.D.) -- Graduate School of Seoul National University, Dept. of Electrical and Computer Engineering, August 2016. Advisor: Kyoung Mu Lee.
Blurring artifacts are the most common flaws in photographs. To remove them, many deblurring methods that restore sharp images from blurry ones have been studied in the field of computational photography. However, state-of-the-art deblurring methods rest on the strong assumption that the captured scene is static, so much remains to be done. In particular, these conventional methods fail on blurry images captured in dynamic environments, whose spatially varying blurs arise from sources such as camera shake (including out-of-plane motion), moving objects, and depth variation. The deblurring problem is therefore far more challenging for dynamic scenes.
This dissertation thus addresses the deblurring problem for general dynamic scenes and introduces new solutions that remove spatially varying blurs, unlike conventional methods built on the assumption that the captured scene is static.
Three kinds of dynamic scene deblurring methods are proposed to achieve this goal, based on (1) segmentation, (2) sharp exemplars, and (3) kernel-parametrization. The approaches progress from segment-wise to pixel-wise, ultimately handling general pixel-wise varying blurs.
First, the segmentation-based deblurring method jointly estimates the latent image, multiple blur kernels, and their associated segments. Thanks to this joint approach, it recovers an accurate blur kernel within each segment, removes segment-wise varying blurs, and reduces the artifacts at motion boundaries that are common in conventional approaches. Next, an exemplar-based deblurring method is proposed, which uses a sharp exemplar to estimate a highly accurate blur kernel and overcomes the limitation of the segmentation-based method, which cannot handle small or texture-less segments. Lastly, the kernel-parametrization method approximates the locally varying kernel as linear along the motion flow; it is therefore applicable to general pixel-wise varying blurs and estimates the latent image and the motion flow at the same time.
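The kernel-parametrization idea, approximating each pixel's blur kernel as a short line segment along its motion-flow vector, can be sketched as follows. This is a generic illustration under my own discretization choices, not the dissertation's exact formulation:

```python
import numpy as np

def linear_kernel(u, v, size=15, samples=64):
    """Build a normalized per-pixel blur kernel as a line segment from the
    kernel centre along the motion-flow vector (u, v): the linear
    approximation used by kernel-parametrized deblurring."""
    k = np.zeros((size, size))
    c = size // 2
    for t in np.linspace(0.0, 1.0, samples):
        y = int(round(c + t * v))
        x = int(round(c + t * u))
        if 0 <= y < size and 0 <= x < size:
            k[y, x] += 1.0
    return k / k.sum()
```

Parametrizing the kernel by just two numbers per pixel (the flow) is what makes pixel-wise varying blur tractable: the unknowns reduce from a full kernel per pixel to a motion-flow field.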
With the proposed methods, significantly improved deblurring quality is achieved, and intensive experimental evaluations demonstrate their superiority on dynamic scenes where state-of-the-art methods fail.
Chapter 1 Introduction
Chapter 2 Image Deblurring with Segmentation
2.1 Introduction and Related Work
2.2 Segmentation-based Dynamic Scene Deblurring Model
2.2.1 Adaptive blur model selection
2.2.2 Regularization
2.3 Optimization
2.3.1 Sharp image restoration
2.3.2 Weight estimation
2.3.3 Kernel estimation
2.3.4 Overall procedure
2.4 Experiments
2.5 Summary
Chapter 3 Image Deblurring with Exemplar
3.1 Introduction and Related Work
3.2 Method Overview
3.3 Stage I: Exemplar Acquisition
3.3.1 Sharp image acquisition and preprocessing
3.3.2 Exemplar from blur-aware optical flow estimation
3.4 Stage II: Exemplar-based Deblurring
3.4.1 Exemplar-based latent image restoration
3.4.2 Motion-aware segmentation
3.4.3 Robust kernel estimation
3.4.4 Unified energy model and optimization
3.5 Stage III: Post-processing and Refinement
3.6 Experiments
3.7 Summary
Chapter 4 Image Deblurring with Kernel-Parametrization
4.1 Introduction and Related Work
4.2 Preliminary
4.3 Proposed Method
4.3.1 Image-statistics-guided motion
4.3.2 Adaptive variational deblurring model
4.4 Optimization
4.4.1 Motion estimation
4.4.2 Latent image restoration
4.4.3 Kernel re-initialization
4.5 Experiments
4.6 Summary
Chapter 5 Video Deblurring with Kernel-Parametrization
5.1 Introduction and Related Work
5.2 Generalized Video Deblurring
5.2.1 A new data model based on kernel-parametrization
5.2.2 A new optical flow constraint and temporal regularization
5.2.3 Spatial regularization
5.3 Optimization Framework
5.3.1 Sharp video restoration
5.3.2 Optical flows estimation
5.3.3 Defocus blur map estimation
5.4 Implementation Details
5.4.1 Initialization and duty cycle estimation
5.4.2 Occlusion detection and refinement
5.5 Motion Blur Dataset
5.5.1 Dataset generation
5.6 Experiments
5.7 Summary
Chapter 6 Conclusion
Bibliography
Abstract (in Korean)
- …