Motion Deblurring in the Wild
The task of image deblurring is a very ill-posed problem as both the image
and the blur are unknown. Moreover, when pictures are taken in the wild, this
task becomes even more challenging due to the blur varying spatially and the
occlusions between objects. Due to the complexity of the general image model,
we propose a novel convolutional network architecture that directly generates
the sharp image. This network is built in three stages, and exploits the
benefits of pyramid schemes often used in blind deconvolution. One of the main
difficulties in training such a network is to design a suitable dataset. While
useful data can be obtained by synthetically blurring a collection of images,
more realistic data must be collected in the wild. To obtain such data we use a
high frame rate video camera and keep one frame as the sharp image and frame
average as the corresponding blurred image. We show that this realistic dataset
is key in achieving state-of-the-art performance and dealing with occlusions.
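The frame-averaging data collection described above can be sketched as follows (a minimal illustration; the array layout, frame count, and choice of ground-truth frame are assumptions, not the paper's exact protocol):

```python
import numpy as np

def make_blur_pair(frames, sharp_idx=None):
    """Given a stack of consecutive high-frame-rate video frames
    (N, H, W, C), float values in [0, 1], return a (sharp, blurred)
    training pair: one frame is kept as the ground truth and the
    temporal average serves as the synthetic motion-blurred input,
    mimicking the accumulation over a long exposure."""
    frames = np.asarray(frames, dtype=np.float64)
    if sharp_idx is None:
        sharp_idx = len(frames) // 2  # middle frame as ground truth
    sharp = frames[sharp_idx]
    blurred = frames.mean(axis=0)  # averaging approximates sensor exposure
    return sharp, blurred
```

Averaging more frames simulates a longer exposure and hence stronger blur, which is how such datasets trade off realism against blur severity.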
Learning to Extract a Video Sequence from a Single Motion-Blurred Image
We present a method to extract a video sequence from a single motion-blurred
image. Motion-blurred images are the result of an averaging process, where
instant frames are accumulated over time during the exposure of the sensor.
Unfortunately, reversing this process is nontrivial. Firstly, averaging
destroys the temporal ordering of the frames. Secondly, the recovery of a
single frame is a blind deconvolution task, which is highly ill-posed. We
present a deep learning scheme that gradually reconstructs a temporal ordering
by sequentially extracting pairs of frames. Our main contribution is to
introduce loss functions invariant to the temporal order. This lets a neural
network choose during training what frame to output among the possible
combinations. We also address the ill-posedness of deblurring by designing a
network with a large receptive field, implemented via resampling to achieve
higher computational efficiency. Our proposed method can successfully
retrieve sharp image sequences from a single motion blurred image and can
generalize well on synthetic and real datasets captured with different cameras.
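The temporal-order-invariant loss can be illustrated with a minimal sketch (an L2 reconstruction error minimized over permutations of the target frames; this conveys the idea only, not the authors' exact formulation):

```python
import numpy as np
from itertools import permutations

def order_invariant_loss(preds, targets):
    """Compare predicted frames against every permutation of the
    target frames and keep the smallest total mean-squared error,
    so the network is free to output frames in either temporal
    direction without being penalized for the ordering."""
    best = float("inf")
    for perm in permutations(range(len(targets))):
        err = sum(float(np.mean((preds[i] - targets[p]) ** 2))
                  for i, p in enumerate(perm))
        best = min(best, err)
    return best
```

Because averaging destroys temporal ordering, both the forward and reversed sequences explain the same blurred image; a loss of this shape lets training proceed without committing to one of them.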
Convolutional Deblurring for Natural Imaging
In this paper, we propose a novel design of image deblurring in the form of
one-shot convolution filtering that can directly convolve with naturally
blurred images for restoration. Optical blurring is a common drawback in many
imaging applications that suffer from optical imperfections. Although numerous
deconvolution methods blindly estimate blurring in either inclusive or
exclusive forms, they are practically challenging due to high computational
cost and low image reconstruction
quality. Both conditions of high accuracy and high speed are prerequisites for
high-throughput imaging platforms in digital archiving. In such platforms,
deblurring is required after image acquisition, before images are stored,
previewed, or processed for high-level interpretation. Therefore, on-the-fly
correction of such images is important to avoid time delays, mitigate
computational expense, and improve perceived image quality. We bridge this gap by
synthesizing a deconvolution kernel as a linear combination of Finite Impulse
Response (FIR) even-derivative filters that can be directly convolved with
blurry input images to boost the frequency fall-off of the Point Spread
Function (PSF) associated with the optical blur. We employ a Gaussian low-pass
filter to decouple the image denoising problem from image edge deblurring.
Furthermore, we propose a blind approach to estimate the PSF statistics for two
Gaussian and Laplacian models that are common in many imaging pipelines.
Thorough experiments are designed to test and validate the efficiency of the
proposed method using 2054 naturally blurred images across six imaging
applications and seven state-of-the-art deconvolution methods. Comment: 15 pages, for publication in IEEE Transactions on Image Processing
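A minimal sketch of synthesizing a one-shot deconvolution kernel from even-derivative FIR filters (here only the lowest even-order term, a discrete Laplacian, is used, and the strength parameter `alpha` is a hypothetical illustration; the paper's kernel synthesis is more elaborate):

```python
import numpy as np

def deconv_kernel(alpha=0.8):
    """Build a sharpening kernel as a linear combination of FIR
    even-derivative filters: the identity minus a scaled Laplacian
    (a second-derivative filter). Subtracting the Laplacian boosts
    high frequencies, countering the frequency fall-off of a
    low-pass PSF. `alpha` controls the boost strength (assumed)."""
    identity = np.zeros((3, 3))
    identity[1, 1] = 1.0
    laplacian = np.array([[0,  1, 0],
                          [1, -4, 1],
                          [0,  1, 0]], dtype=float)
    return identity - alpha * laplacian

def convolve_same(img, k):
    """Naive 'same'-size 2-D convolution with zero padding, so the
    kernel can be applied directly to a blurry image in one shot."""
    kh, kw = k.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(img, ((ph, ph), (pw, pw)))
    out = np.zeros_like(img, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.sum(padded[i:i + kh, j:j + kw] * k[::-1, ::-1])
    return out
```

Because the Laplacian sums to zero, the combined kernel preserves the image mean (it sums to one), which is the property that makes such a filter usable as a direct, single-pass correction.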
Event-guided Multi-patch Network with Self-supervision for Non-uniform Motion Deblurring
Contemporary deep learning multi-scale deblurring models suffer from many
issues: 1) They perform poorly on non-uniformly blurred images/videos; 2)
Simply increasing the model depth with finer-scale levels cannot improve
deblurring; 3) Individual RGB frames contain limited motion information for
deblurring; 4) Previous models have limited robustness to spatial
transformations and noise. Below, we extend the DMPHN model by several
mechanisms to address the above issues: I) We present a novel self-supervised
event-guided deep hierarchical Multi-patch Network (MPN) to deal with blurry
images and videos via fine-to-coarse hierarchical localized representations;
II) We propose a novel stacked pipeline, StackMPN, to improve the deblurring
performance under the increased network depth; III) We propose an event-guided
architecture to exploit motion cues contained in videos to tackle complex blur
in videos; IV) We propose a novel self-supervised step to expose the model to
random transformations (rotations, scale changes), and make it robust to
Gaussian noise. Our MPN achieves the state of the art on the GoPro and
VideoDeblur datasets with a 40x faster runtime compared to current multi-scale
methods. With 30ms to process an image at 1280x720 resolution, it is the first
real-time deep motion deblurring model for 720p images at 30fps. For StackMPN,
we obtain significant improvements of over 1.2dB on the GoPro dataset by
increasing the network depth. Utilizing the event information and
self-supervision further boosts results to 33.83dB. Comment: International Journal of Computer Vision. arXiv admin note:
substantial text overlap with arXiv:1904.0346
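The fine-to-coarse multi-patch decomposition such hierarchical models operate on can be sketched as follows (DMPHN-style grids of 1, 2, and 4 patches; the exact split pattern per level is an assumption for illustration):

```python
import numpy as np

def patch_hierarchy(img, levels=3):
    """Split an image into a fine-to-coarse patch pyramid: level k
    uses a grid of 2**((k+1)//2) rows by 2**(k//2) columns, giving
    1, 2, 4, ... patches per level. Finer levels see small localized
    patches; the coarsest level sees the whole image."""
    h, w = img.shape[:2]
    pyramid = []
    for k in range(levels):
        rows, cols = 2 ** ((k + 1) // 2), 2 ** (k // 2)
        patches = [img[i * h // rows:(i + 1) * h // rows,
                       j * w // cols:(j + 1) * w // cols]
                   for i in range(rows) for j in range(cols)]
        pyramid.append(patches)
    return pyramid
```

Unlike a multi-scale pyramid, every level works at full resolution; only the spatial extent of each patch changes, which is what keeps the runtime low compared to coarse-to-fine multi-scale methods.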