Depth and IMU aided image deblurring based on deep learning
Abstract. With the widespread use of camera phones, it has become necessary to tackle the problem of image blur. Embedding a camera in such small devices implies a small sensor size compared with the sensors in professional cameras such as full-frame Digital Single-Lens Reflex (DSLR) cameras. As a result, the amount of photons collected on the image sensor can be dramatically reduced. To compensate, a long exposure time is needed, but with the slight motions that often occur in handheld devices, image blur becomes inevitable. Our interest in this thesis is motion blur, which can be caused by camera motion, scene (object) motion, or, more generally, relative motion between the camera and the scene. We use deep neural network (DNN) models, in contrast to conventional (non-DNN) methods, which are computationally expensive and time-consuming. The deblurring process is guided by the scene depth and the camera’s inertial measurement unit (IMU) records. One challenge of adopting DNN solutions is that a relatively large amount of data is needed to train the network. Moreover, several hyperparameters need to be tuned, including the network architecture itself.
To train our network, we propose a novel method for synthesizing spatially-variant motion blur that accounts for depth variations in the scene and improves results over other methods. In addition to the synthetic dataset generation algorithm, we design a setup for collecting real blurry and sharp image pairs. This setup can provide thousands of real blurry and sharp images, which can be of paramount benefit for DNN training or fine-tuning.
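The idea of depth-dependent blur synthesis can be illustrated with a toy sketch (not the thesis's actual algorithm): assuming a purely horizontal camera translation during exposure, a per-pixel streak length is derived from a normalized depth map, so nearer pixels receive longer blur. All function and parameter names here are illustrative.

```python
import numpy as np

def synthesize_depth_variant_blur(sharp, depth, max_len=9):
    """Toy sketch: horizontal motion blur whose length shrinks with depth.

    sharp : (H, W) float image in [0, 1]
    depth : (H, W) positive depth map (larger = farther)
    Near pixels (small depth) get longer blur streaks, mimicking a
    horizontal camera translation during exposure.
    """
    h, w = sharp.shape
    # Map depth to a per-pixel blur length: near -> max_len, far -> 1.
    d = (depth - depth.min()) / (np.ptp(depth) + 1e-8)
    lengths = np.maximum(1, np.round((1.0 - d) * max_len)).astype(int)
    out = np.zeros_like(sharp)
    for y in range(h):
        for x in range(w):
            length = lengths[y, x]
            x0 = max(0, x - length // 2)
            x1 = min(w, x + length // 2 + 1)
            # Average over the streak: a 1-D box kernel of per-pixel length.
            out[y, x] = sharp[y, x0:x1].mean()
    return out
```

A real pipeline would instead integrate the IMU-recorded camera trajectory into per-pixel kernels; this sketch only conveys the depth-to-kernel-size coupling.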
Deep Model-Based Super-Resolution with Non-uniform Blur
We propose a state-of-the-art method for super-resolution with non-uniform
blur. Single-image super-resolution methods seek to restore a high-resolution
image from blurred, subsampled, and noisy measurements. Despite their
impressive performance, existing techniques usually assume a uniform blur
kernel. Hence, these techniques do not generalize well to the more general case
of non-uniform blur. Instead, in this paper, we address the more realistic and
computationally challenging case of spatially-varying blur. To this end, we
first propose a fast deep plug-and-play algorithm, based on linearized ADMM
splitting techniques, which can solve the super-resolution problem with
spatially-varying blur. Second, we unfold our iterative algorithm into a single
network and train it end-to-end. In this way, we overcome the intricacy of
manually tuning the parameters involved in the optimization scheme. Our
algorithm achieves remarkable performance and, after a single training,
generalizes well to a large family of spatially-varying blur kernels, noise
levels, and scale factors.
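As a rough illustration of the plug-and-play structure described above (a minimal sketch, not the paper's implementation): the data-fidelity x-update is linearized into a gradient step so that no operator inversion is required, and the prior is applied through a black-box denoiser. `A`, `At`, `denoise`, and the step sizes below are illustrative placeholders.

```python
import numpy as np

def pnp_linearized_admm(y, A, At, denoise, rho=1.0, mu=0.1, iters=50):
    """Minimal plug-and-play linearized-ADMM sketch (names illustrative).

    Solves min_x 0.5 * ||A x - y||^2 + R(x), with R handled implicitly
    by a denoiser as in plug-and-play schemes. A / At stand in for the
    forward operator (e.g. spatially-varying blur plus downsampling)
    and its adjoint. The x-update is linearized, so no inverse of
    A^T A is ever formed.
    """
    x = At(y)                 # crude initialization from the adjoint
    z = x.copy()
    u = np.zeros_like(x)
    for _ in range(iters):
        # Linearized x-update: one gradient step on the augmented Lagrangian.
        grad = At(A(x) - y) + rho * (x - z + u)
        x = x - mu * grad
        # z-update: the denoiser plays the role of the proximal map of R.
        z = denoise(x + u)
        # Dual ascent on the splitting constraint x = z.
        u = u + x - z
    return x
```

Unfolding, as the paper describes, would fix `iters` to a small number and learn `rho`, `mu`, and the denoiser weights end-to-end instead of hand-tuning them.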
Parallel Diffusion Models of Operator and Image for Blind Inverse Problems
Diffusion model-based inverse problem solvers have demonstrated
state-of-the-art performance in cases where the forward operator is known (i.e.
non-blind). However, the applicability of the method to blind inverse problems
has yet to be explored. In this work, we show that we can indeed solve a family
of blind inverse problems by constructing another diffusion prior for the
forward operator. Specifically, parallel reverse diffusion guided by gradients
from the intermediate stages enables joint optimization of both the forward
operator parameters as well as the image, such that both are jointly estimated
at the end of the parallel reverse diffusion procedure. We show the efficacy of
our method on two representative tasks -- blind deblurring, and imaging through
turbulence -- and show that our method yields state-of-the-art performance,
while also being flexible enough to apply to general blind inverse problems
when the functional forms are known.
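The joint image/operator guidance can be caricatured without any diffusion machinery: in the sketch below, both unknowns are nudged by gradients of the data-consistency term, which is the role the intermediate-stage gradients play in the parallel reverse diffusion (the two learned diffusion priors, one per unknown, are omitted entirely). Names, the scalar operator parameter, and step sizes are illustrative.

```python
import numpy as np

def joint_guidance_step(x, k, y, forward, step_x=0.1, step_k=0.1):
    """One joint data-consistency update on image x and operator parameter k.

    Toy stand-in for the guidance gradients in parallel reverse diffusion:
    both x and the scalar k are nudged to reduce ||forward(x, k) - y||^2.
    Gradients are forward-difference numerical estimates, which is only
    practical for tiny toy problems.
    """
    eps = 1e-4
    loss = 0.5 * np.sum((forward(x, k) - y) ** 2)
    # Numerical gradient w.r.t. the scalar operator parameter k.
    gk = (0.5 * np.sum((forward(x, k + eps) - y) ** 2) - loss) / eps
    # Numerical gradient w.r.t. x, coordinate by coordinate.
    gx = np.zeros_like(x)
    for i in range(x.size):
        xp = x.copy()
        xp.flat[i] += eps
        gx.flat[i] = (0.5 * np.sum((forward(xp, k) - y) ** 2) - loss) / eps
    return x - step_x * gx, k - step_k * gk
```

Note that data consistency alone leaves blind problems ill-posed (e.g. a scale ambiguity between image and operator); it is exactly the two parallel diffusion priors that resolve this in the paper's method.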