7,871 research outputs found
Semi-Blind Spatially-Variant Deconvolution in Optical Microscopy with Local Point Spread Function Estimation By Use Of Convolutional Neural Networks
We present a semi-blind, spatially-variant deconvolution technique aimed at
optical microscopy that combines a local estimation step of the point spread
function (PSF) and deconvolution using a spatially variant, regularized
Richardson-Lucy algorithm. To find the local PSF map in a computationally
tractable way, we train a convolutional neural network to perform regression of
an optical parametric model on synthetically blurred image patches. We
deconvolved both synthetic and experimentally-acquired data, and achieved an
improvement of image SNR of 1.00 dB on average, compared to other deconvolution
algorithms.Comment: 2018/02/11: submitted to IEEE ICIP 2018 - 2018/05/04: accepted to
IEEE ICIP 201
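In its spatially invariant, unregularized limit, the Richardson-Lucy scheme this abstract builds on is the classic multiplicative update. A minimal sketch, with names of my own choosing (the paper's variant additionally applies locally estimated PSFs per region and regularization):

```python
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(blurred, psf, n_iter=30, eps=1e-12):
    """Classical (spatially invariant) Richardson-Lucy deconvolution.

    Iterates estimate <- estimate * H^T(blurred / H(estimate)),
    where H is convolution with the PSF and H^T uses the flipped PSF.
    """
    estimate = np.full_like(blurred, 0.5)
    psf_flipped = psf[::-1, ::-1]  # adjoint of convolution
    for _ in range(n_iter):
        reblurred = fftconvolve(estimate, psf, mode="same")
        ratio = blurred / (reblurred + eps)  # eps avoids division by zero
        estimate *= fftconvolve(ratio, psf_flipped, mode="same")
    return estimate
```

The spatially variant version would partition the image into patches and run this update with the PSF the CNN regresses for each patch.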
Recent Progress in Image Deblurring
This paper comprehensively reviews the recent development of image
deblurring, including non-blind/blind, spatially invariant/variant deblurring
techniques. Indeed, these techniques share the same objective of inferring a
latent sharp image from one or several corresponding blurry images, while the
blind deblurring techniques are also required to derive an accurate blur
kernel. Considering the critical role of image restoration in modern imaging
systems to provide high-quality images under complex environments such as
motion, undesirable lighting conditions, and imperfect system components, image
deblurring has attracted growing attention in recent years. From the viewpoint
of how to handle the ill-posedness which is a crucial issue in deblurring
tasks, existing methods can be grouped into five categories: Bayesian inference
framework, variational methods, sparse representation-based methods,
homography-based modeling, and region-based methods. Despite this progress,
image deblurring, and blind deblurring in particular, remains limited by
complex application conditions that make the blur kernel hard to estimate and
often spatially variant. This review provides a holistic understanding of and
deep insight into image deblurring, together with an analysis of the empirical
evidence for representative methods, a discussion of practical issues, and
promising future directions.
Comment: 53 pages, 17 figures
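Of the five categories named above, the Bayesian inference framework is the easiest to make concrete: for Gaussian signal and noise priors, the non-blind MAP/MMSE estimate reduces to the classical Wiener filter, with a noise-to-signal ratio playing the regularizing role against ill-posedness. A minimal sketch assuming a known, periodic (circular-convolution) blur kernel; all names are illustrative:

```python
import numpy as np

def wiener_deconvolve(blurred, psf, nsr=1e-2):
    """Non-blind Wiener deconvolution in the Fourier domain.

    `nsr` is the assumed noise-to-signal power ratio; larger values
    regularize more strongly where the blur transfer function is small.
    """
    H = np.fft.fft2(psf, s=blurred.shape)
    G = np.conj(H) / (np.abs(H) ** 2 + nsr)  # regularized inverse filter
    return np.real(np.fft.ifft2(np.fft.fft2(blurred) * G))
```

Blind methods must additionally estimate `psf` itself, which is where the categories above diverge most.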
Light Field Blind Motion Deblurring
We study the problem of deblurring light fields of general 3D scenes captured
under 3D camera motion and present both theoretical and practical
contributions. By analyzing the motion-blurred light field in the primal and
Fourier domains, we develop intuition into the effects of camera motion on the
light field, show the advantages of capturing a 4D light field instead of a
conventional 2D image for motion deblurring, and derive simple methods of
motion deblurring in certain cases. We then present an algorithm to blindly
deblur light fields of general scenes without any estimation of scene geometry,
and demonstrate that we can recover both the sharp light field and the 3D
camera motion path of real and synthetically blurred light fields.
Comment: To be presented at CVPR 2017
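The forward model underlying such analyses can be made concrete in its simplest special case: for purely translational in-plane motion at a fixed depth, the blurred image is the average of the sharp image shifted along the camera path. A toy sketch (integer-pixel shifts and periodic boundaries are my simplifications, not the paper's general 3D model):

```python
import numpy as np

def motion_blur(img, path):
    """Average of the sharp image translated along a camera path.

    `path` is a list of (dx, dy) integer offsets; periodic boundaries
    via np.roll keep the sketch self-contained.
    """
    acc = np.zeros_like(img, dtype=float)
    for dx, dy in path:
        acc += np.roll(img, shift=(dy, dx), axis=(0, 1))
    return acc / len(path)
```

In the Fourier domain each shift is a phase ramp, so this blur multiplies the spectrum by an average of phase ramps; the 4D light field carries enough such constraints to recover both image and path, which a single 2D image does not.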
Characterization of the near-Earth Asteroid 2002 NY40
In August 2002, the near-Earth asteroid 2002 NY40 made its closest approach
to the Earth. This provided an opportunity to study a near-Earth asteroid with
a variety of instruments. Several of the telescopes at the Maui Space
Surveillance System were trained at the asteroid and collected adaptive optics
images, photometry, and spectroscopy. Analysis of the imagery reveals that the
asteroid is triangular in shape with significant self-shadowing. The photometry
reveals a 20-hour period, and the spectroscopy shows that the asteroid is a
Q-type.
Convolutional Deblurring for Natural Imaging
In this paper, we propose a novel design of image deblurring in the form of
one-shot convolution filtering that can directly convolve with naturally
blurred images for restoration. Optical blur is a common drawback in many
imaging applications that suffer from optical imperfections. Although numerous
deconvolution methods blindly estimate the blur in either inclusive or
exclusive form, they remain practically challenging due to high computational
cost and low image-reconstruction quality. Both high accuracy and high speed
are prerequisites for
high-throughput imaging platforms in digital archiving. In such platforms,
deblurring is required after image acquisition before being stored, previewed,
or processed for high-level interpretation. Therefore, on-the-fly correction of
such images is important to avoid possible time delays, mitigate computational
expenses, and increase image perception quality. We bridge this gap by
synthesizing a deconvolution kernel as a linear combination of Finite Impulse
Response (FIR) even-derivative filters that can be directly convolved with
blurry input images to boost the frequency fall-off of the Point Spread
Function (PSF) associated with the optical blur. We employ a Gaussian low-pass
filter to decouple the image denoising problem for image edge deblurring.
Furthermore, we propose a blind approach to estimate the PSF statistics for two
Gaussian and Laplacian models that are common in many imaging pipelines.
Thorough experiments are designed to test and validate the efficiency of the
proposed method using 2054 naturally blurred images across six imaging
applications and seven state-of-the-art deconvolution methods.
Comment: 15 pages, for publication in IEEE Transactions on Image Processing
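The kernel construction can be illustrated by truncating the even-derivative series at its first nontrivial term, which yields the familiar delta-minus-Laplacian sharpening filter; the coefficient would, in the paper's setting, be tied to the estimated PSF width, while this sketch (with names and the fixed `alpha` as my assumptions) just demonstrates the one-shot FIR idea:

```python
import numpy as np
from scipy.signal import convolve2d

def even_derivative_sharpen(blurred, alpha=0.5):
    """One-shot FIR deblurring via an even-derivative kernel.

    kernel = delta - alpha * Laplacian boosts the high-frequency
    fall-off that optical blur introduces; higher even-derivative
    terms (omitted here) would refine the approximation.
    """
    lap = np.array([[0, 1, 0],
                    [1, -4, 1],
                    [0, 1, 0]], dtype=float)
    delta = np.zeros((3, 3)); delta[1, 1] = 1.0
    kernel = delta - alpha * lap
    return convolve2d(blurred, kernel, mode="same", boundary="symm")
```

Because the correction is a single small convolution, it suits the on-the-fly, high-throughput setting the abstract describes.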
Learning Wavefront Coding for Extended Depth of Field Imaging
Depth of field is an important characteristic of imaging systems that strongly
affects the quality of the acquired spatial information. Extended depth of
field (EDoF)
imaging is a challenging ill-posed problem and has been extensively addressed
in the literature. We propose a computational imaging approach for EDoF, where
we employ wavefront coding via a diffractive optical element (DOE) and we
achieve deblurring through a convolutional neural network. Thanks to the
end-to-end differentiable modeling of optical image formation and computational
post-processing, we jointly optimize the optical design, i.e., DOE, and the
deblurring through standard gradient descent methods. Based on the properties
of the underlying refractive lens and the desired EDoF range, we provide an
analytical expression for the search space of the DOE, which is instrumental in
the convergence of the end-to-end network. We achieve superior EDoF imaging
performance compared to the state of the art, where we demonstrate results with
minimal artifacts in various scenarios, including deep 3D scenes and broadband
imaging.
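The principle behind wavefront coding can be sketched without the learned DOE: in a scalar Fourier-optics model, the incoherent PSF is the squared magnitude of the Fourier transform of the pupil function, and the textbook cubic phase mask makes that PSF far less sensitive to defocus than a plain aperture. A minimal sketch (not the authors' optimized element; all names and the phase strengths are my assumptions):

```python
import numpy as np

def psf_from_pupil(phase, defocus):
    """Scalar Fourier-optics PSF of a circular pupil.

    `phase` is the DOE phase profile (radians); `defocus` scales a
    quadratic defocus aberration added to it.
    """
    n = phase.shape[0]
    x = np.linspace(-1, 1, n)
    X, Y = np.meshgrid(x, x)
    aperture = (X**2 + Y**2 <= 1).astype(float)
    pupil = aperture * np.exp(1j * (phase + defocus * (X**2 + Y**2)))
    psf = np.abs(np.fft.fftshift(np.fft.fft2(pupil))) ** 2
    return psf / psf.sum()

def cubic_phase(n, strength=20.0):
    """Cubic phase mask, the classic wavefront-coding DOE."""
    x = np.linspace(-1, 1, n)
    X, Y = np.meshgrid(x, x)
    return strength * (X**3 + Y**3)
```

The end-to-end approach of the abstract replaces the fixed cubic profile with a DOE parameterization optimized jointly with the deblurring CNN by gradient descent.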