Non-Uniform Blind Deblurring with a Spatially-Adaptive Sparse Prior
Typical blur from camera shake often deviates from the standard uniform
convolutional model, in part because of camera rotations that produce
greater blurring farther from some unknown center point. Consequently, successful
blind deconvolution requires the estimation of a spatially-varying or
non-uniform blur operator. Using ideas from Bayesian inference and convex
analysis, this paper derives a non-uniform blind deblurring algorithm with
several desirable, yet previously-unexplored attributes. The underlying
objective function includes a spatially adaptive penalty which couples the
latent sharp image, non-uniform blur operator, and noise level together. This
coupling allows the penalty to automatically adjust its shape based on the
estimated degree of local blur and image structure such that regions with large
blur or few prominent edges are discounted. Remaining regions with modest blur
and revealing edges therefore dominate the overall estimation process without
explicitly incorporating structure-selection heuristics. The algorithm can be
implemented using a majorization-minimization strategy that is virtually
parameter free. Detailed theoretical analysis and experiments on real
images validate the proposed method.
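A minimal sketch of the majorization-minimization (MM) pattern the abstract names, assuming a known, uniform blur for illustration (the paper's operator is non-uniform, and its penalty additionally couples image, blur, and noise level): each MM step bounds a sparse penalty sum_i |∇x|_i^p with a quadratic, yielding a reweighted least-squares problem solved here by conjugate gradients. All names and parameter values are illustrative.

```python
import numpy as np
from numpy.fft import fft2, ifft2
from scipy.sparse.linalg import LinearOperator, cg

def psf2otf(k, shape):
    # Zero-pad the kernel to the image size and circularly center it.
    pad = np.zeros(shape)
    pad[:k.shape[0], :k.shape[1]] = k
    pad = np.roll(pad, (-(k.shape[0] // 2), -(k.shape[1] // 2)), axis=(0, 1))
    return fft2(pad)

def grad(x):
    # Circular forward differences (horizontal, vertical).
    return x - np.roll(x, 1, axis=1), x - np.roll(x, 1, axis=0)

def grad_T(vx, vy):
    # Adjoint of `grad`.
    return (vx - np.roll(vx, -1, axis=1)) + (vy - np.roll(vy, -1, axis=0))

def mm_deblur(y, k, lam=2e-3, p=0.8, outer=15, eps=1e-6):
    shape = y.shape
    K = psf2otf(k, shape)
    b = np.real(ifft2(np.conj(K) * fft2(y))).ravel()  # K^T y
    x = y.copy()
    for _ in range(outer):
        gx, gy = grad(x)
        # IRLS weights: the quadratic majorizer of sum |grad x|^p at the
        # current iterate. Small gradients get large weights (smoothed),
        # strong edges get small weights (preserved) -- a crude stand-in
        # for the paper's spatially adaptive penalty.
        wx = p * np.maximum(np.abs(gx), eps) ** (p - 2)
        wy = p * np.maximum(np.abs(gy), eps) ** (p - 2)

        def matvec(v):
            v2 = v.reshape(shape)
            KtKv = np.real(ifft2(np.abs(K) ** 2 * fft2(v2)))
            dvx, dvy = grad(v2)
            return (KtKv + lam * grad_T(wx * dvx, wy * dvy)).ravel()

        A = LinearOperator((y.size, y.size), matvec=matvec, dtype=np.float64)
        x, _ = cg(A, b, x0=x.ravel(), maxiter=50)
        x = x.reshape(shape)
    return x
```

The reweighting step is what discounts weak-gradient regions between solves, loosely mirroring the adaptive behavior the abstract describes.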
Reflection Separation and Deblurring of Plenoptic Images
In this paper, we address the problem of reflection removal and deblurring
from a single image captured by a plenoptic camera. We develop a two-stage
approach to recover the scene depth and high resolution textures of the
reflected and transmitted layers. For depth estimation in the presence of
reflections, we train a classifier through convolutional neural networks. For
recovering high resolution textures, we assume that the scene is composed of
planar regions and perform the reconstruction of each layer by using an
explicit form of the plenoptic camera point spread function. The proposed
framework also recovers sharp scene textures when different motion blurs are
applied to each layer. We demonstrate our method on challenging real and
synthetic images.
Comment: ACCV 201
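A loose sketch of the kind of layered image formation involved, not the paper's method: if each plenoptic sub-view is modeled as the sum of transmitted and reflected layers blurred by known per-view, per-layer PSFs, the layers decouple per spatial frequency into tiny regularized least-squares problems. The PSFs, shapes, and regularizer below are hypothetical stand-ins for the paper's explicit plenoptic PSF model.

```python
import numpy as np
from numpy.fft import fft2, ifft2

def separate_layers(views, psfs, reg=1e-3):
    """views: (J, H, W) observations; psfs: (J, 2, H, W), padded to image size."""
    Y = fft2(views)                               # per-view spectra, (J, H, W)
    K = fft2(psfs)                                # per-view, per-layer OTFs
    Km = np.moveaxis(K, (0, 1), (2, 3))           # (H, W, J, 2)
    Ym = np.moveaxis(Y, 0, 2)[..., None]          # (H, W, J, 1)
    # One tiny Tikhonov-regularized least-squares problem per frequency:
    # (K^H K + reg * I) x = K^H y, solved for all frequencies at once via
    # numpy's stacked linear algebra.
    Kh = np.swapaxes(Km.conj(), -1, -2)
    X = np.linalg.solve(Kh @ Km + reg * np.eye(2), Kh @ Ym)[..., 0]
    return np.real(ifft2(np.moveaxis(X, 2, 0)))   # (2, H, W): T and R layers
```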
DAVANet: Stereo Deblurring with View Aggregation
Nowadays, stereo cameras are increasingly adopted in emerging devices such as
dual-lens smartphones and unmanned aerial vehicles. However, they also suffer
from blurry images in dynamic scenes, which lead to visual discomfort and
hamper further image processing. Previous works have succeeded in monocular
deblurring, yet there are few studies on deblurring for stereoscopic images. By
exploiting the two-view nature of stereo images, we propose a novel stereo
image deblurring network with Depth Awareness and View Aggregation, named
DAVANet. In our proposed network, 3D scene cues from the depth and varying
information from two views are incorporated, which help to remove complex
spatially-varying blur in dynamic scenes. Specifically, with our proposed
fusion network, we integrate bidirectional disparity estimation and
deblurring into a unified framework. Moreover, we present a large-scale
multi-scene dataset for stereo deblurring, containing 20,637 blurry-sharp
stereo image pairs from 135 diverse sequences and their corresponding
bidirectional disparities. The experimental results on our dataset demonstrate
that DAVANet outperforms state-of-the-art methods in terms of accuracy, speed,
and model size.
Comment: CVPR 2019 (Oral)
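A speculative PyTorch sketch of view aggregation in the spirit described above, not DAVANet's actual architecture: features from the other view are warped by the estimated disparity and fused with the current view's features, with the disparity itself kept as a depth cue. Module and tensor names are invented for illustration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ViewAggregation(nn.Module):
    def __init__(self, ch):
        super().__init__()
        self.fuse = nn.Sequential(
            nn.Conv2d(2 * ch + 1, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, padding=1))

    def forward(self, feat_left, feat_right, disp_left):
        """disp_left: (B, 1, H, W) horizontal disparity from left to right."""
        B, _, H, W = feat_left.shape
        ys, xs = torch.meshgrid(torch.arange(H), torch.arange(W), indexing="ij")
        grid = torch.stack((xs, ys), dim=-1).float().to(feat_left.device)
        grid = grid.unsqueeze(0).expand(B, -1, -1, -1).clone()
        grid[..., 0] = grid[..., 0] - disp_left.squeeze(1)  # shift x by disparity
        # Normalize coordinates to [-1, 1] as required by grid_sample.
        grid[..., 0] = 2 * grid[..., 0] / (W - 1) - 1
        grid[..., 1] = 2 * grid[..., 1] / (H - 1) - 1
        warped = F.grid_sample(feat_right, grid, align_corners=True)
        # Concatenate both views plus the disparity as a depth cue, then fuse.
        return self.fuse(torch.cat((feat_left, warped, disp_left), dim=1))
```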
Distributed Deblurring of Large Images of Wide Field-Of-View
Image deblurring is an economical way to reduce certain degradations (blur
and noise) in acquired images. Thus, it has become an essential tool in high
resolution imaging in many applications, e.g., astronomy, microscopy or
computational photography. In applications such as astronomy and satellite
imaging, the size of acquired images can be extremely large (up to gigapixels)
covering a wide field of view and suffering from shift-variant blur. Most
existing image deblurring techniques are designed and implemented to work
efficiently on a centralized computing system with multiple processors and
shared memory. Thus, the largest image that can be handled is limited by the
size of the physical memory available on the system. In this paper, we propose
a distributed nonblind image deblurring algorithm in which several connected
processing nodes (with reasonable computational resources) simultaneously
process different portions of a large image while maintaining a certain
coherency among them to finally obtain a single crisp image. Unlike the
existing centralized techniques, image deblurring in a distributed fashion
raises several issues. To tackle them, we consider approximations that trade
off the quality of the deblurred image against the computational
resources required to achieve it. The experimental results show that our
algorithm produces images of quality similar to existing centralized
techniques while allowing distribution, making it cost-effective for
extremely large images.
Comment: 16 pages, 10 figures, submitted to IEEE Trans. on Image Processing
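A single-machine sketch of the tiling idea, with the paper's inter-node coherency scheme reduced to plain overlap-and-blend: split the image into overlapping tiles, deblur each independently, and feather the overlaps to hide seams. `deblur_tile` stands for any non-blind deblurring routine.

```python
import numpy as np

def deblur_tiled(img, deblur_tile, tile=512, overlap=64):
    H, W = img.shape
    out = np.zeros_like(img, dtype=np.float64)
    wsum = np.zeros_like(out)
    step = tile - overlap
    for i in range(0, H, step):
        for j in range(0, W, step):
            i1, j1 = min(i + tile, H), min(j + tile, W)
            patch = deblur_tile(img[i:i1, j:j1])
            # Linear feathering window so overlapping tiles blend smoothly.
            wy = np.minimum(np.arange(i1 - i) + 1, np.arange(i1 - i)[::-1] + 1)
            wx = np.minimum(np.arange(j1 - j) + 1, np.arange(j1 - j)[::-1] + 1)
            w = np.minimum.outer(wy, wx).astype(np.float64)
            out[i:i1, j:j1] += w * patch
            wsum[i:i1, j:j1] += w
    return out / wsum
```

In a real distributed setting each tile would live on a different node; the overlap is the minimal coherency that keeps seams from appearing in the result.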
Spatially-Varying Blur Detection Based on Multiscale Fused and Sorted Transform Coefficients of Gradient Magnitudes
Detecting spatially-varying blur without any information about the blur type
is a challenging task. In this paper, we propose a novel and effective
approach to the blur detection problem from a single image, requiring no
knowledge of the blur type, level, or camera settings.
Our approach computes blur detection maps based on a novel High-frequency
multiscale Fusion and Sort Transform (HiFST) of gradient magnitudes. The
evaluations of the proposed approach on a diverse set of blurry images with
different blur types, levels, and contents demonstrate that the proposed
algorithm performs favorably against the state-of-the-art methods qualitatively
and quantitatively.
Comment: Accepted to CVPR 201
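A rough sketch of the flavor of such a detector, not the published HiFST algorithm: take gradient magnitudes, measure high-frequency DCT energy in local blocks at several scales, and fuse the per-scale responses into a map that is low where the image is blurry. The multiscale sorting of coefficients that gives HiFST its name is simplified away here.

```python
import numpy as np
from scipy.fft import dctn

def highfreq_energy(gm, block):
    # Mean magnitude of the high-frequency DCT coefficients of each block.
    H, W = gm.shape
    nb, mb = H // block, W // block
    out = np.zeros((nb, mb))
    hf = np.add.outer(np.arange(block), np.arange(block)) >= block // 2
    for i in range(nb):
        for j in range(mb):
            c = dctn(gm[i*block:(i+1)*block, j*block:(j+1)*block], norm="ortho")
            out[i, j] = np.abs(c[hf]).mean()
    up = np.kron(out, np.ones((block, block)))    # back to pixel resolution
    return np.pad(up, ((0, H - up.shape[0]), (0, W - up.shape[1])), mode="edge")

def blur_map(img, scales=(8, 16, 32)):
    gy, gx = np.gradient(img.astype(np.float64))
    gm = np.hypot(gx, gy)
    maps = [highfreq_energy(gm, b) for b in scales]
    maps = [(m - m.min()) / (np.ptp(m) + 1e-12) for m in maps]
    return np.maximum.reduce(maps)   # high where sharp, low where blurry
```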
Modelling the Scene Dependent Imaging in Cameras with a Deep Neural Network
We present a novel deep learning framework that models the scene dependent
image processing inside cameras. Often called radiometric calibration,
the process of recovering RAW images from processed images (JPEG format in the
sRGB color space) is essential for many computer vision tasks that rely on
physically accurate radiance values. All previous works rely on a
deterministic imaging model in which the color transformation stays the same
regardless of the scene, and thus they can only be applied to images taken
under manual mode. In this paper, we propose a data-driven approach to learn
the scene-dependent and locally varying image processing inside cameras under
auto mode. Our method incorporates both global and local scene context into
pixel-wise features via a multi-scale pyramid of learnable histogram layers.
The results show that we can accurately model the imaging pipelines of
different cameras operating under auto mode in both directions (RAW to sRGB
and sRGB to RAW), and we show how our method can improve the performance of
image deblurring.
Comment: To appear in ICCV 201
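A rough PyTorch sketch of a learnable soft-histogram layer of the kind the abstract alludes to, with all details assumed: each pixel votes into a number of bins through triangular basis functions whose centers and widths are learned, and average pooling turns the votes into differentiable local histograms.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LearnableHistogram(nn.Module):
    def __init__(self, bins=8):
        super().__init__()
        # Bin centers and widths are trainable parameters.
        self.centers = nn.Parameter(torch.linspace(0.0, 1.0, bins))
        self.widths = nn.Parameter(torch.full((bins,), 1.0 / bins))

    def forward(self, x, pool=16):
        """x: (B, 1, H, W) in [0, 1] -> (B, bins, H/pool, W/pool)."""
        d = x - self.centers.view(1, -1, 1, 1)           # distance to each bin
        # Triangular soft assignment: linear falloff within one bin width.
        votes = F.relu(1.0 - d.abs() / self.widths.abs().view(1, -1, 1, 1))
        return F.avg_pool2d(votes, pool)                 # local histograms
```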
Spatio-Temporal Filter Adaptive Network for Video Deblurring
Video deblurring is a challenging task due to the spatially variant blur
caused by camera shake, object motion, and depth variations. Existing
methods usually estimate optical flow in the blurry video to align consecutive
frames or approximate blur kernels. However, they tend to generate artifacts or
cannot effectively remove blur when the estimated optical flow is not accurate.
To overcome the limitation of separate optical flow estimation, we propose a
Spatio-Temporal Filter Adaptive Network (STFAN) for the alignment and
deblurring in a unified framework. The proposed STFAN takes both the blurry
and restored images of the previous frame and the blurry image of the current
frame as input, and dynamically generates spatially adaptive filters for
alignment and deblurring. We then propose a new Filter Adaptive
Convolutional (FAC) layer to align the deblurred features of the previous frame
with the current frame and remove the spatially variant blur from the features
of the current frame. Finally, we develop a reconstruction network which takes
the fusion of two transformed features to restore the clear frames. Both
quantitative and qualitative evaluation results on the benchmark datasets and
real-world videos demonstrate that the proposed algorithm performs favorably
against state-of-the-art methods in terms of accuracy, speed as well as model
size.
Comment: ICCV 201
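A minimal sketch of a filter-adaptive convolution, assuming the simplest variant: the network predicts a distinct k x k filter per spatial location, and each output pixel is the inner product of that filter with its local neighborhood (extracted via `unfold`). STFAN's actual FAC layer may differ in details such as channel handling.

```python
import torch
import torch.nn.functional as F

def filter_adaptive_conv(feat, filters, k=5):
    """feat: (B, C, H, W); filters: (B, k*k, H, W) predicted per pixel."""
    B, C, H, W = feat.shape
    # Extract k x k neighborhoods around every pixel: (B, C, k*k, H*W).
    patches = F.unfold(feat, k, padding=k // 2).view(B, C, k * k, H * W)
    w = filters.view(B, 1, k * k, H * W)   # broadcast one filter over channels
    return (patches * w).sum(dim=2).view(B, C, H, W)
```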
Image Restoration Using Joint Statistical Modeling in Space-Transform Domain
This paper presents a novel strategy for high-fidelity image restoration by
characterizing both local smoothness and nonlocal self-similarity of natural
images in a unified statistical manner. The main contributions are threefold.
First, from the perspective of image statistics, a joint statistical modeling
(JSM) in an adaptive hybrid space-transform domain is established, which offers
a powerful mechanism of combining local smoothness and nonlocal self-similarity
simultaneously to ensure a more reliable and robust estimation. Second, a new
minimization functional for solving image inverse problems is formulated
using JSM under a regularization-based framework. Finally, to make JSM
tractable and robust, a new Split-Bregman-based algorithm is developed to
efficiently solve the resulting severely underdetermined inverse problem,
together with a theoretical proof of convergence. Extensive experiments on image
inpainting, image deblurring and mixed Gaussian plus salt-and-pepper noise
removal applications verify the effectiveness of the proposed algorithm.
Comment: 14 pages, 18 figures, 7 tables, to be published in IEEE Transactions
on Circuits and Systems for Video Technology (TCSVT). A high-resolution PDF
and code can be found at: http://idm.pku.edu.cn/staff/zhangjian/IRJSM
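For orientation, a compact Split-Bregman sketch for plain anisotropic-TV deblurring with circular boundaries, a standard instance of the algorithm family the paper builds on rather than its JSM model: the x-step is a quadratic solve that is diagonal in the Fourier domain, the d-step is soft-thresholding, and the Bregman variables accumulate the constraint residual.

```python
import numpy as np
from numpy.fft import fft2, ifft2

def psf2otf(k, shape):
    pad = np.zeros(shape)
    pad[:k.shape[0], :k.shape[1]] = k
    pad = np.roll(pad, (-(k.shape[0] // 2), -(k.shape[1] // 2)), axis=(0, 1))
    return fft2(pad)

def shrink(v, t):
    # Soft-thresholding: the closed-form proximal step for the L1 term.
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def split_bregman_tv(y, k, mu=100.0, lam=5.0, iters=50):
    K = psf2otf(k, y.shape)
    Dx = psf2otf(np.array([[1.0, -1.0]]), y.shape)
    Dy = psf2otf(np.array([[1.0], [-1.0]]), y.shape)
    denom = mu * np.abs(K) ** 2 + lam * (np.abs(Dx) ** 2 + np.abs(Dy) ** 2)
    Kty = mu * np.conj(K) * fft2(y)
    x = y.copy()
    dx = dy = bx = by = np.zeros_like(y)
    for _ in range(iters):
        # x-step: quadratic subproblem, diagonal in the Fourier domain.
        rhs = Kty + lam * (np.conj(Dx) * fft2(dx - bx) + np.conj(Dy) * fft2(dy - by))
        x = np.real(ifft2(rhs / denom))
        gx = np.real(ifft2(Dx * fft2(x)))
        gy = np.real(ifft2(Dy * fft2(x)))
        # d-step and Bregman updates enforce sparse image gradients.
        dx, dy = shrink(gx + bx, 1.0 / lam), shrink(gy + by, 1.0 / lam)
        bx, by = bx + gx - dx, by + gy - dy
    return x
```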
Generalized Video Deblurring for Dynamic Scenes
Several state-of-the-art video deblurring methods rest on the strong
assumption that the captured scenes are static, and they fail on blurry
videos of dynamic scenes. In contrast, we propose a video deblurring method
that handles the general blur inherent in dynamic scenes. To
handle locally varying and general blurs caused by various sources, such as
camera shake, moving objects, and depth variation in a scene, we approximate
the pixel-wise blur kernel with bidirectional optical flows. Accordingly, we propose a
single energy model that simultaneously estimates optical flows and latent
frames to solve our deblurring problem. We also provide a framework and
efficient solvers to optimize the energy model. By minimizing the proposed
energy function, we achieve significant improvements in removing blurs and
estimating accurate optical flows in blurry frames. Extensive experimental
results demonstrate the superiority of the proposed method on real and
challenging videos where state-of-the-art methods fail at either deblurring
or optical flow estimation.
Comment: CVPR 2015 oral
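A toy rendering of the abstract's kernel approximation, in the forward direction only: blur at each pixel is modeled as averaging the latent frame along its bidirectional optical flow across the exposure window. The paper jointly estimates flows and latent frames; this sketch merely synthesizes blur from known flows.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def render_flow_blur(latent, flow_fwd, flow_bwd, steps=15):
    """latent: (H, W); flows: (H, W, 2) as (dy, dx) to next/previous frame."""
    H, W = latent.shape
    ys, xs = np.mgrid[0:H, 0:W].astype(np.float64)
    acc = np.zeros_like(latent, dtype=np.float64)
    ts = np.linspace(-1.0, 1.0, steps)  # position within the exposure window
    for t in ts:
        # Sample along the forward flow for the second half of the exposure
        # and along the backward flow for the first half.
        f = flow_fwd if t >= 0 else flow_bwd
        cy, cx = ys + abs(t) * f[..., 0], xs + abs(t) * f[..., 1]
        acc += map_coordinates(latent, [cy, cx], order=1, mode="nearest")
    return acc / steps
```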
Kernel Estimation from Salient Structure for Robust Motion Deblurring
Blind image deblurring algorithms have improved steadily over the past
years. Most state-of-the-art algorithms, however, still cannot perform
perfectly in challenging cases, especially when the blur is large. In this
paper, we focus on how to obtain a good kernel estimate from a single blurred
image based on the image structure. We found that fine image details can
adversely affect kernel estimation, especially when the blur kernel is large.
One effective way to eliminate these details is to apply an image denoising
model based on Total Variation (TV). First, we develop a novel method for
computing image structures based on the TV model, such that structures
undermining kernel estimation are removed. Second, to mitigate the possible
adverse effect of salient edges and improve the robustness of kernel
estimation, we apply a gradient selection method. Third, we propose a novel
kernel estimation method that preserves the continuity and sparsity of the
kernel and reduces noise. Finally, we develop an adaptive weighted spatial
prior to preserve sharp edges in latent image restoration. The effectiveness
of our method is
demonstrated by experiments on various challenging examples.
Comment: This work has been accepted by Signal Processing: Image
Communication, 201
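A hedged sketch of the structure-extraction and gradient-selection steps only; the TV solver here is scikit-image's Chambolle implementation, a stand-in for the paper's TV model, and the kernel estimator with its continuity and sparsity priors is beyond this snippet.

```python
import numpy as np
from skimage.restoration import denoise_tv_chambolle

def salient_gradients(blurred, tv_weight=0.05, keep=0.03):
    # TV denoising strips fine, blur-corrupted detail, leaving the
    # large-scale structure used for kernel estimation.
    structure = denoise_tv_chambolle(blurred, weight=tv_weight)
    gy, gx = np.gradient(structure)
    mag = np.hypot(gx, gy)
    # Gradient selection: keep only the strongest few percent of gradients.
    thr = np.quantile(mag, 1.0 - keep)
    mask = mag >= thr
    return gx * mask, gy * mask
```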