Blind Image Deblurring via Reweighted Graph Total Variation
Blind image deblurring, i.e., deblurring without knowledge of the blur
kernel, is a highly ill-posed problem. The problem can be solved in two parts:
i) estimate a blur kernel from the blurry image, and ii) given the estimated
blur kernel, deconvolve the blurry input to restore the target image. In this paper, by
interpreting an image patch as a signal on a weighted graph, we first argue
that a skeleton image---a proxy that retains the strong gradients of the target
but smooths out the details---can be used to accurately estimate the blur
kernel and has a unique bi-modal edge weight distribution. We then design a
reweighted graph total variation (RGTV) prior that can efficiently promote
bi-modal edge weight distribution given a blurry patch. However, minimizing a
blind image deblurring objective with RGTV results in a non-convex
non-differentiable optimization problem. We propose a fast algorithm that
solves for the skeleton image and the blur kernel alternately. Finally with the
computed blur kernel, recent non-blind image deblurring algorithms can be
applied to restore the target image. Experimental results show that our
algorithm can robustly estimate blur kernels of large size, and the
reconstructed sharp images are competitive with state-of-the-art methods.
Comment: 5 pages, submitted to IEEE International Conference on Acoustics,
Speech and Signal Processing, Calgary, Alberta, Canada, April, 201
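The abstract splits blind deblurring into kernel estimation followed by non-blind deconvolution. As a minimal sketch of the second step only, assuming a circular-convolution blur model, Wiener deconvolution recovers an image from a blurry input and a known kernel; the `noise_balance` parameter and the synthetic box-kernel test are illustrative assumptions, not the paper's RGTV method:

```python
import numpy as np

def wiener_deconvolve(blurred, kernel, noise_balance=1e-2):
    """Non-blind deconvolution in the frequency domain (Wiener filter)."""
    # Zero-pad the kernel to the image size; this assumes circular convolution.
    H = np.fft.fft2(kernel, s=blurred.shape)
    G = np.fft.fft2(blurred)
    # Wiener filter: conj(H) / (|H|^2 + k), where k balances noise amplification.
    F = np.conj(H) * G / (np.abs(H) ** 2 + noise_balance)
    return np.real(np.fft.ifft2(F))

# Synthetic check: blur an image with a 5x5 box kernel, then deconvolve.
rng = np.random.default_rng(0)
sharp = rng.random((64, 64))
kernel = np.ones((5, 5)) / 25.0
H = np.fft.fft2(kernel, s=sharp.shape)
blurred = np.real(np.fft.ifft2(np.fft.fft2(sharp) * H))
restored = wiener_deconvolve(blurred, kernel, noise_balance=1e-4)
```

In the full pipeline, the kernel passed in here would come from the alternating skeleton-image/kernel estimation described above rather than being known exactly.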
Motion Deblurring in the Wild
The task of image deblurring is a severely ill-posed problem, as both the image
and the blur are unknown. Moreover, when pictures are taken in the wild, the
task becomes even more challenging because the blur varies spatially and
objects occlude one another. Due to the complexity of the general image model,
we propose a novel convolutional network architecture that directly generates
the sharp image. This network is built in three stages, and exploits the
benefits of pyramid schemes often used in blind deconvolution. One of the main
difficulties in training such a network is to design a suitable dataset. While
useful data can be obtained by synthetically blurring a collection of images,
more realistic data must be collected in the wild. To obtain such data we use a
high frame rate video camera and keep one frame as the sharp image and frame
average as the corresponding blurred image. We show that this realistic dataset
is key to achieving state-of-the-art performance and dealing with occlusions.
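The dataset construction above pairs one frame of a high-frame-rate video (the sharp ground truth) with the average of surrounding frames (the blurred input). A toy sketch of that pairing, using a shifted random pattern as a stand-in for real video frames (the middle-frame choice and frame count are assumptions for illustration):

```python
import numpy as np

def make_blur_pair(frames, sharp_index=None):
    """Average consecutive frames to synthesize motion blur;
    keep one frame as the sharp ground truth."""
    frames = np.asarray(frames, dtype=np.float64)
    if sharp_index is None:
        sharp_index = len(frames) // 2  # middle frame; an assumed convention
    blurred = frames.mean(axis=0)       # frame average approximates motion blur
    return frames[sharp_index], blurred

# Toy stand-in for a high-fps clip: a pattern translating one pixel per frame.
rng = np.random.default_rng(1)
base = rng.random((32, 32))
frames = [np.roll(base, shift=k, axis=1) for k in range(7)]
sharp, blurred = make_blur_pair(frames)
```

Averaging the translated frames smears detail along the motion direction, which is why the blurred output is a realistic training input for the sharp target.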
Understanding Kernel Size in Blind Deconvolution
Most blind deconvolution methods pre-define a large kernel size to
guarantee that the kernel's support is covered. However, this is likely to
introduce blur kernel estimation error, yielding severe artifacts in the
deblurring results. In this paper, we first analyze, theoretically and
experimentally, the mechanism behind the estimation error of oversized
kernels, and show that it persists even on noise-free blurry images. To
suppress this adverse effect, we propose a low-rank regularization on the
blur kernel that exploits the structural information in degraded kernels,
by which the oversized-kernel effect can be effectively suppressed, and we
propose an efficient optimization algorithm to solve the resulting problem.
Experimental results on benchmark datasets show that the proposed method is
comparable with the state of the art when a proper kernel size is set, and
performs much better, both quantitatively and qualitatively, when handling
larger kernel sizes. The deblurring results on real-world blurry images
further validate the effectiveness of the proposed method.
Comment: Accepted by WACV 201
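One simple way to picture low-rank regularization of a blur kernel is a hard projection onto a truncated SVD; this is an illustrative stand-in, not the paper's formulation, and the rank, noise level, and motion-blur example are assumptions:

```python
import numpy as np

def low_rank_project(kernel, rank=1):
    """Project an estimated kernel onto a rank-limited approximation,
    then restore the non-negativity and sum-to-one kernel constraints."""
    U, s, Vt = np.linalg.svd(kernel, full_matrices=False)
    s[rank:] = 0.0                      # keep only the leading singular values
    k = U @ np.diag(s) @ Vt
    k = np.clip(k, 0.0, None)           # blur kernels are non-negative
    return k / k.sum()                  # and sum to one

# Example: an oversized 15x15 estimate of a small horizontal motion blur
# (true kernel is rank 1), corrupted by estimation noise.
rng = np.random.default_rng(2)
true = np.zeros((15, 15))
true[7, 4:11] = 1.0 / 7.0
noisy = true + 0.01 * rng.random((15, 15))
noisy /= noisy.sum()
cleaned = low_rank_project(noisy, rank=1)
```

The projection discards the low-energy components that the oversized support accumulates, which is the intuition behind suppressing the larger-kernel effect with structural (low-rank) information.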