22,785 research outputs found
Joint Blind Motion Deblurring and Depth Estimation of Light Field
Removing camera motion blur from a single light field is a challenging task
since it is a highly ill-posed inverse problem. The problem becomes even
worse when the blur kernel varies spatially due to scene depth variation and
high-order camera motion. In this paper, we propose a novel algorithm to
estimate all blur model variables jointly, including the latent sub-aperture
image, camera motion, and scene depth, from the blurred 4D light field.
Exploiting the multi-view nature of a light field alleviates the
ill-posedness of the optimization by providing strong depth cues and
multi-view blur observations. The proposed joint estimation achieves
high-quality light field deblurring and depth estimation simultaneously
under arbitrary 6-DOF camera motion and unconstrained scene depth. Extensive
experiments on real and synthetic blurred light fields confirm that the
proposed algorithm outperforms state-of-the-art light field deblurring and
depth estimation methods.
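The joint-estimation idea can be illustrated, in heavily simplified form, by alternating least-squares on a 1D blind-deconvolution toy problem. This is a minimal sketch of the alternating scheme only, not the paper's model: the 1D setting, the flat kernel initialization, and the helper `conv_matrix` are all illustrative assumptions, and the depth and multi-view terms that make the paper's problem tractable are deliberately omitted.

```python
import numpy as np

def conv_matrix(k, n):
    """n-by-n matrix A such that A @ x == np.convolve(x, k, mode='same')."""
    m = len(k)
    c = (m - 1) // 2
    A = np.zeros((n, n))
    for i in range(n):
        for j in range(m):
            t = i + c - j
            if 0 <= t < n:
                A[i, t] = k[j]
    return A

def joint_estimate(y, n_k=3, iters=10):
    """Alternate exact least-squares solves for the latent signal x and the
    blur kernel k given the blurred observation y (a toy analogue of jointly
    estimating the latent image, camera motion, and depth)."""
    n = len(y)
    k = np.ones(n_k) / n_k        # flat kernel initialization (assumption)
    x = y.copy()
    c = (n_k - 1) // 2
    for _ in range(iters):
        # Fix k, solve min_x ||A(k) x - y||^2 exactly.
        x = np.linalg.lstsq(conv_matrix(k, n), y, rcond=None)[0]
        # Fix x, solve min_k ||B(x) k - y||^2, where B holds shifted copies of x.
        B = np.zeros((n, n_k))
        for i in range(n):
            for j in range(n_k):
                t = i + c - j
                if 0 <= t < n:
                    B[i, j] = x[t]
        k = np.linalg.lstsq(B, y, rcond=None)[0]
    return x, k
```

Each least-squares solve can only decrease the data residual, but without priors the problem remains ill-posed (x = y with a delta kernel also fits perfectly), which is exactly the gap the paper fills with depth cues and multi-view blur observations.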
WPU-Net: Boundary Learning by Using Weighted Propagation in Convolution Network
Deep learning has driven great progress in natural and biological image
processing. In materials science and engineering, however, microscopic
images often contain flaws and indistinct regions, caused by complex sample
preparation or even by the material itself, that hinder the detection of
target objects. In this work, we propose WPU-Net, which redesigns the
architecture and weighted loss of U-Net, forcing the network to integrate
information from adjacent slices and to pay more attention to topology in
the boundary detection task. WPU-Net is then applied to a typical materials
problem: grain boundary detection in polycrystalline materials. Experiments
demonstrate that the proposed method achieves promising performance and
outperforms state-of-the-art methods. In addition, we propose a new method
for object tracking between adjacent slices, which can effectively
reconstruct the 3D structure of the whole material. Finally, we present a
material microscopic image dataset with the goal of advancing the state of
the art in image processing for materials science.
Comment: technical report
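The weighted-loss idea can be sketched as a per-pixel cross-entropy in which boundary pixels are up-weighted, so that missing a grain boundary costs more than mislabeling background. The constant boundary weight below is a hypothetical simplification; WPU-Net's actual loss and its propagation of information from adjacent slices are more involved.

```python
import numpy as np

def weighted_boundary_bce(pred, target, boundary_weight=5.0):
    """Binary cross-entropy with boundary pixels (target == 1) up-weighted
    by `boundary_weight` (hypothetical value), penalizing a missed grain
    boundary more than a mislabeled background pixel."""
    eps = 1e-7
    p = np.clip(pred, eps, 1.0 - eps)               # avoid log(0)
    w = np.where(target == 1, boundary_weight, 1.0)
    ce = -(target * np.log(p) + (1.0 - target) * np.log(1.0 - p))
    return float(np.mean(w * ce))
```

With symmetric raw errors, the weighting makes an under-predicted boundary pixel cost several times more than an over-predicted background pixel, nudging the network toward topologically complete boundaries.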
Rational-operator-based depth-from-defocus approach to scene reconstruction
This paper presents a rational-operator-based approach to depth from defocus (DfD) for the reconstruction of three-dimensional scenes from two-dimensional images, which enables fast DfD computation that is independent of scene textures. Two variants of the approach are considered: one using Gaussian rational operators (ROs), based on the Gaussian point spread function (PSF), and the other based on the generalized Gaussian PSF. A novel DfD correction method is also presented to further improve the performance of the approach. Experimental results on real scenes show that both approaches outperform existing RO-based methods.
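The texture independence that such operators exploit can be seen in a frequency-domain toy example: for Gaussian PSFs, the ratio of the two defocused images' spectra cancels the unknown scene texture and depends only on the blur difference. The 1D circular-convolution setup and the single-frequency estimator below are illustrative assumptions, not the paper's rational operators.

```python
import numpy as np

def gaussian_blur(x, sigma):
    """Circular Gaussian blur via the Gaussian OTF exp(-sigma^2 w^2 / 2)."""
    n = len(x)
    w = 2.0 * np.pi * np.fft.rfftfreq(n)          # angular frequency per sample
    H = np.exp(-0.5 * (sigma * w) ** 2)
    return np.fft.irfft(np.fft.rfft(x) * H, n)

def relative_blur(i1, i2, freq_index=3):
    """Recover sigma1^2 - sigma2^2 from two defocused views of the same
    scene: |I1(w) / I2(w)| = exp(-(sigma1^2 - sigma2^2) w^2 / 2), so the
    unknown texture spectrum cancels in the ratio."""
    n = len(i1)
    w = 2.0 * np.pi * np.fft.rfftfreq(n)
    r = np.abs(np.fft.rfft(i1)[freq_index]) / np.abs(np.fft.rfft(i2)[freq_index])
    return -2.0 * np.log(r) / w[freq_index] ** 2
```

In an actual DfD pipeline, the recovered blur difference at each pixel would then be mapped to depth through the lens model; rational operators make that mapping fast and local rather than requiring explicit Fourier transforms.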
Online Video Deblurring via Dynamic Temporal Blending Network
State-of-the-art video deblurring methods are capable of removing non-uniform
blur caused by unwanted camera shake and/or object motion in dynamic scenes.
However, most existing methods are based on batch processing and thus need
access to all recorded frames, rendering them computationally demanding and
time-consuming, which limits their practical use. In contrast, we propose
an online (sequential) video deblurring method based on a spatio-temporal
recurrent network that allows for real-time performance. In particular, we
introduce a novel architecture which extends the receptive field while keeping
the overall size of the network small to enable fast execution. In doing so,
our network is able to remove even large blur caused by strong camera shake
and/or fast moving objects. Furthermore, we propose a novel network layer that
enforces temporal consistency between consecutive frames by dynamic temporal
blending, which compares and adaptively (at test time) shares features obtained
at different time steps. We show the superiority of the proposed method in an
extensive experimental evaluation.
Comment: 10 pages
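The dynamic-blending idea (compare features from consecutive time steps and adaptively share them at test time) can be sketched as an element-wise soft gate. The similarity measure and sigmoid gate below are hypothetical simplifications of the learned network layer.

```python
import numpy as np

def dynamic_temporal_blend(feat_prev, feat_cur):
    """Blend features from consecutive frames with a weight computed at
    test time from the features themselves: where they agree, share the
    previous features (temporal consistency); where they differ strongly
    (e.g. motion), fall back to the current frame's features."""
    sim = -np.abs(feat_prev - feat_cur)      # 0 when equal, very negative when not
    w = 1.0 / (1.0 + np.exp(-sim))           # gate in (0, 0.5]
    return w * feat_prev + (1.0 - w) * feat_cur
```

Because the gate is computed from the features at inference time rather than fixed, the layer can enforce consistency on static regions without smearing fast-moving content across frames.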
Semi-Blind Spatially-Variant Deconvolution in Optical Microscopy with Local Point Spread Function Estimation By Use Of Convolutional Neural Networks
We present a semi-blind, spatially-variant deconvolution technique aimed at
optical microscopy that combines a local estimation step of the point spread
function (PSF) and deconvolution using a spatially variant, regularized
Richardson-Lucy algorithm. To find the local PSF map in a computationally
tractable way, we train a convolutional neural network to perform regression of
an optical parametric model on synthetically blurred image patches. We
deconvolved both synthetic and experimentally acquired data, achieving an
average image-SNR improvement of 1.00 dB over other deconvolution
algorithms.
Comment: 2018/02/11: submitted to IEEE ICIP 2018 - 2018/05/04: accepted to
IEEE ICIP 2018
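The classical non-blind, spatially invariant Richardson-Lucy iteration at the core of such methods looks as follows in 1D. The paper's contributions, the CNN-regressed spatially variant PSF map and the regularization, are omitted here, so this is only a baseline sketch.

```python
import numpy as np

def richardson_lucy(y, psf, iters=50, eps=1e-12):
    """Plain Richardson-Lucy deconvolution: multiplicative updates that keep
    the estimate nonnegative while fitting the blur model y ~ x * psf."""
    psf = psf / psf.sum()                      # PSF must integrate to 1
    psf_flip = psf[::-1]                       # adjoint of the convolution
    x = np.full_like(y, max(y.mean(), eps))    # flat positive initialization
    for _ in range(iters):
        est = np.convolve(x, psf, mode="same")
        ratio = y / (est + eps)                # observed / currently predicted
        x = x * np.convolve(ratio, psf_flip, mode="same")
    return x
```

In the spatially variant setting of the paper, the single global `psf` is replaced by the locally estimated PSF map regressed by the CNN, and the update is regularized to suppress noise amplification.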