351 research outputs found
Semi-Blind Spatially-Variant Deconvolution in Optical Microscopy with Local Point Spread Function Estimation By Use Of Convolutional Neural Networks
We present a semi-blind, spatially-variant deconvolution technique aimed at
optical microscopy that combines a local estimation step of the point spread
function (PSF) and deconvolution using a spatially variant, regularized
Richardson-Lucy algorithm. To find the local PSF map in a computationally
tractable way, we train a convolutional neural network to perform regression of
an optical parametric model on synthetically blurred image patches. We
deconvolved both synthetic and experimentally acquired data, achieving an average
image SNR improvement of 1.00 dB over other deconvolution algorithms.
Comment: 2018/02/11: submitted to IEEE ICIP 2018 - 2018/05/04: accepted to IEEE ICIP 2018
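For context, the Richardson-Lucy algorithm that the paper builds on is a multiplicative update derived from a Poisson noise model. The sketch below is a minimal, spatially invariant, unregularized version in NumPy/SciPy; it is not the authors' implementation, and the Gaussian PSF and iteration counts in the example are assumed values for illustration only.

import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(blurred, psf, num_iters=30, eps=1e-7):
    # Classic Richardson-Lucy deconvolution with a single, spatially
    # invariant PSF. The paper extends this to a regularized, spatially
    # variant update driven by a CNN-estimated local PSF map.
    blurred = blurred.astype(np.float64)
    estimate = np.full(blurred.shape, blurred.mean())
    psf_mirror = psf[::-1, ::-1]
    for _ in range(num_iters):
        reblurred = fftconvolve(estimate, psf, mode="same")       # forward model
        ratio = blurred / (reblurred + eps)                       # data ratio
        estimate *= fftconvolve(ratio, psf_mirror, mode="same")   # multiplicative update
    return estimate

# Example with a synthetic Gaussian PSF (hypothetical parameters).
yy, xx = np.mgrid[-7:8, -7:8]
psf = np.exp(-(xx**2 + yy**2) / (2 * 2.0**2))
psf /= psf.sum()
sharp = np.random.rand(128, 128)
blurred = fftconvolve(sharp, psf, mode="same")
restored = richardson_lucy(blurred, psf, num_iters=25)

In the spatially variant setting described in the abstract, a single global psf is replaced by the locally estimated PSF map; how the local updates are regularized and blended is specific to the paper's method and not reproduced here.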
Recent Progress in Image Deblurring
This paper comprehensively reviews recent developments in image
deblurring, covering non-blind/blind and spatially invariant/variant deblurring
techniques. These techniques share the same objective of inferring a
latent sharp image from one or several corresponding blurry images, while
blind deblurring techniques must additionally estimate an accurate blur
kernel. Considering the critical role of image restoration in modern imaging
systems to provide high-quality images under complex environments such as
motion, undesirable lighting conditions, and imperfect system components, image
deblurring has attracted growing attention in recent years. From the viewpoint
of how to handle the ill-posedness, which is a crucial issue in deblurring
tasks, existing methods can be grouped into five categories: the Bayesian inference
framework, variational methods, sparse representation-based methods,
homography-based modeling, and region-based methods. Despite this progress,
image deblurring, especially the blind case, remains limited by complex
application conditions that make the blur kernel spatially variant and hard to
estimate. This review aims to provide a holistic understanding of and deep
insight into image deblurring. An analysis of the empirical evidence for
representative methods and practical issues, as well as a discussion of
promising future directions, is also presented.
Comment: 53 pages, 17 figures
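All five categories reviewed here address the same ill-posed inverse problem. In standard notation (ours, not quoted from the paper), the degradation model and a typical regularized formulation read:

y = k \ast x + n
(\hat{x}, \hat{k}) = \arg\min_{x,\,k} \; \tfrac{1}{2}\,\lVert k \ast x - y \rVert_2^2 + \lambda\, R(x) + \gamma\, Q(k)

where y is the observed blurry image, x the latent sharp image, k the blur kernel, n additive noise, and R and Q are regularizers (priors) on the image and kernel. Non-blind methods fix k and estimate only x; blind methods estimate both, which is what makes an accurate, possibly spatially variant blur kernel so hard to obtain.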
Retinal image analysis: Image processing and feature extraction oriented to the clinical task
Medical digital imaging has become a key element of modern health care procedures. It provides visual documentation and a permanent record for the patient and, most importantly, the ability to extract quantitative information about many diseases. Modern ophthalmology relies on the advances in digital imaging and computing power. In this paper we present an overview of the results from the doctoral dissertation by Andrés G. Marrugo. This dissertation contributes to the digital analysis of retinal images and to the problems that arise along the imaging pipeline of fundus photography, a field commonly referred to as retinal image analysis. We have dealt with and proposed solutions to problems that arise in retinal image acquisition and in the longitudinal monitoring of retinal disease evolution, specifically non-uniform illumination compensation, poor image quality, automated focusing, image segmentation, change detection, and space-invariant (SI) and space-variant (SV) blind deconvolution (BD). Digital retinal image analysis can be effective and cost-efficient for disease management, computer-aided diagnosis, screening, and telemedicine, and is applicable to a variety of disorders such as glaucoma, macular degeneration, and retinopathy. © 2017. Sociedad Española de Óptica. All rights reserved.
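One of the pipeline steps mentioned above, non-uniform illumination compensation, is commonly handled by estimating a smooth illumination field and normalizing the fundus image by it. The sketch below shows that generic background-division approach in Python; it illustrates the idea only, is not the method from the dissertation, and the smoothing scale sigma is an assumed value.

import numpy as np
from scipy.ndimage import gaussian_filter

def compensate_illumination(image, sigma=50.0, eps=1e-6):
    # Estimate the slowly varying illumination field with a wide Gaussian
    # blur, divide it out, then rescale to the original mean intensity.
    image = image.astype(np.float64)
    background = gaussian_filter(image, sigma=sigma)
    corrected = image / (background + eps)
    corrected *= image.mean() / (corrected.mean() + eps)
    return corrected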
Learning Lens Blur Fields
Optical blur is an inherent property of any lens system and is challenging to
model in modern cameras because of their complex optical elements. To tackle
this challenge, we introduce a high-dimensional neural representation of
blur and a practical method for acquiring
it. The lens blur field is a multilayer perceptron (MLP) designed to (1)
accurately capture variations of the lens 2D point spread function over image
plane location, focus setting, and, optionally, depth, and (2) represent these
variations parametrically as a single, sensor-specific function. The
representation models the combined effects of defocus, diffraction, and
aberration, and accounts for sensor features such as pixel color filters and pixel-specific
micro-lenses. To learn the real-world blur field of a given device, we
formulate a generalized non-blind deconvolution problem that directly optimizes
the MLP weights using a small set of focal stacks as the only input. We also
provide a first-of-its-kind dataset of 5D blur fields for smartphone cameras,
camera bodies equipped with a variety of lenses, etc. Lastly, we show that
acquired 5D blur fields are expressive and accurate enough to reveal, for the
first time, differences in optical behavior of smartphone devices of the same
make and model.
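As a rough illustration of the kind of representation described above, the PyTorch sketch below maps a normalized (x, y, focus) query to a small PSF patch with an MLP. The layer widths, the 15x15 patch size, and the softmax output are assumptions chosen for the sketch; the paper's actual field is learned from focal stacks via a generalized non-blind deconvolution objective and may differ in architecture and output parameterization.

import torch
import torch.nn as nn

class BlurFieldMLP(nn.Module):
    # Toy coordinate MLP: (x, y, focus[, depth]) -> PSF patch.
    def __init__(self, in_dim=3, hidden=256, psf_size=15):
        super().__init__()
        self.psf_size = psf_size
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, psf_size * psf_size),
        )

    def forward(self, coords):
        # coords: (batch, in_dim) of normalized image-plane / focus values.
        logits = self.net(coords)
        # Softmax keeps each predicted PSF non-negative and sum-to-one.
        psf = torch.softmax(logits, dim=-1)
        return psf.view(-1, self.psf_size, self.psf_size)

# Query the (untrained) field at a few hypothetical coordinates.
model = BlurFieldMLP()
psfs = model(torch.rand(4, 3))   # -> (4, 15, 15) PSF patches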
Learning Optimization-inspired Image Propagation with Control Mechanisms and Architecture Augmentations for Low-level Vision
In recent years, building deep learning models from an optimization perspective
has become a promising direction for solving low-level vision problems. The
main idea of most existing approaches is to straightforwardly combine numerical
iterations with manually designed network architectures to generate image
propagations for specific kinds of optimization models. However, these
heuristic learning models often lack mechanisms to control the propagation and
rely heavily on architecture engineering. To mitigate these issues, this
paper proposes a unified optimization-inspired deep image propagation framework
to aggregate Generative, Discriminative and Corrective (GDC for short)
principles for a variety of low-level vision tasks. Specifically, we first
formulate low-level vision tasks using a generic optimization objective and
construct our fundamental propagative modules from three different viewpoints,
i.e., the solution can be obtained or learned 1) in a generative manner, 2)
based on a discriminative metric, and 3) with domain-knowledge correction. By designing
control mechanisms to guide image propagations, we then obtain convergence
guarantees of GDC for both fully- and partially-defined optimization
formulations. Furthermore, we introduce two architecture augmentation
strategies (i.e., normalization and automatic search) to enhance propagation
stability and task/data adaptation ability, respectively. Extensive experiments
on different low-level vision applications demonstrate the effectiveness and
flexibility of GDC.
Comment: 15 pages
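To make the Generative, Discriminative and Corrective decomposition concrete, the sketch below shows one GDC-style propagation step for a deconvolution-type task: a data-fidelity (generative) gradient step, a learned-prior (discriminative) step represented by an arbitrary denoiser callable, and a corrective projection onto a known intensity range. This is an interpretation under a standard y = k * x + n model, not the paper's released code; the step size, the [0, 1] range, and the Gaussian stand-in denoiser are assumptions.

import numpy as np
from scipy.signal import fftconvolve
from scipy.ndimage import gaussian_filter

def gdc_step(x, y, psf, denoiser, step=1.0):
    # Generative step: gradient descent on 0.5 * ||psf * x - y||^2.
    residual = fftconvolve(x, psf, mode="same") - y
    x = x - step * fftconvolve(residual, psf[::-1, ::-1], mode="same")
    # Discriminative step: any learned prior; here an arbitrary denoiser.
    x = denoiser(x)
    # Corrective step: project onto domain knowledge (valid intensities).
    return np.clip(x, 0.0, 1.0)

# Tiny example with a box blur and a Gaussian smoother as the denoiser.
denoise = lambda img: gaussian_filter(img, sigma=0.8)
psf = np.full((5, 5), 1.0 / 25.0)          # hypothetical box blur kernel
x_true = np.random.rand(64, 64)
y = fftconvolve(x_true, psf, mode="same")
x_next = gdc_step(y.copy(), y, psf, denoise, step=0.5)

The paper's control mechanisms and architecture augmentations (normalization, automatic search) govern how such steps are chained, stabilized, and adapted to data, which a single-step sketch does not capture.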