Learning Wavefront Coding for Extended Depth of Field Imaging
Depth of field is an important characteristic of imaging systems that strongly
affects the quality of the acquired spatial information. Extended depth of field (EDoF)
imaging is a challenging ill-posed problem and has been extensively addressed
in the literature. We propose a computational imaging approach for EDoF, where
we employ wavefront coding via a diffractive optical element (DOE) and we
achieve deblurring through a convolutional neural network. Thanks to the
end-to-end differentiable modeling of optical image formation and computational
post-processing, we jointly optimize the optical design, i.e., DOE, and the
deblurring through standard gradient descent methods. Based on the properties
of the underlying refractive lens and the desired EDoF range, we provide an
analytical expression for the search space of the DOE, which is instrumental in
the convergence of the end-to-end network. We achieve superior EDoF imaging
performance compared to the state of the art, where we demonstrate results with
minimal artifacts in various scenarios, including deep 3D scenes and broadband
imaging.
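The joint optics/post-processing design builds on a classical property of wavefront coding: a suitably chosen pupil phase makes the point-spread function (PSF) nearly invariant to defocus, so a single deblurring step can serve the whole EDoF range. The sketch below illustrates that property with a fixed cubic phase mask as a hand-picked stand-in for the learned DOE; the grid size, phase strength, and defocus range are illustrative assumptions, not values from the paper.

```python
import numpy as np

def psf(defocus_waves, cubic_alpha=0.0, n=256):
    """PSF = |FFT of pupil|^2 for a circular pupil with defocus
    and an optional cubic (wavefront-coding) phase, in waves."""
    x = np.linspace(-1.0, 1.0, n)
    X, Y = np.meshgrid(x, x)
    rho2 = X**2 + Y**2
    aperture = (rho2 <= 1.0).astype(float)
    phase = 2.0 * np.pi * (defocus_waves * rho2 + cubic_alpha * (X**3 + Y**3))
    pupil = aperture * np.exp(1j * phase)
    p = np.abs(np.fft.fftshift(np.fft.fft2(pupil)))**2
    return p / p.sum()

def corr(a, b):
    """Pearson correlation between two PSFs."""
    return np.corrcoef(a.ravel(), b.ravel())[0, 1]

# PSF stability across a 2-wave defocus range, without and with coding:
uncoded = corr(psf(0.0), psf(2.0))
coded = corr(psf(0.0, cubic_alpha=10.0), psf(2.0, cubic_alpha=10.0))
```

The coded PSF changes far less over the defocus range than the uncoded one, which is what makes a single shared deblurring network feasible.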
Recent Progress in Image Deblurring
This paper comprehensively reviews the recent development of image
deblurring, including non-blind/blind, spatially invariant/variant deblurring
techniques. Indeed, these techniques share the same objective of inferring a
latent sharp image from one or several corresponding blurry images, while the
blind deblurring techniques are also required to derive an accurate blur
kernel. Considering the critical role of image restoration in modern imaging
systems to provide high-quality images under complex environments such as
motion, undesirable lighting conditions, and imperfect system components, image
deblurring has attracted growing attention in recent years. From the viewpoint
of how to handle the ill-posedness which is a crucial issue in deblurring
tasks, existing methods can be grouped into five categories: Bayesian inference
framework, variational methods, sparse representation-based methods,
homography-based modeling, and region-based methods. Despite this progress,
image deblurring, especially the blind case, remains limited by complex
application conditions that make the blur kernel hard to obtain and often
spatially variant. We provide a holistic
understanding and deep insight into image deblurring in this review. An
analysis of the empirical evidence for representative methods, practical
issues, as well as a discussion of promising future directions are also
presented.

Comment: 53 pages, 17 figures
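Among the Bayesian-inference-framework methods such a review covers, Richardson-Lucy deconvolution is one of the simplest non-blind examples. The toy sketch below, with a synthetic square image, circular convolution, and a known 3x3 box kernel all invented for illustration, shows the multiplicative update reducing the reconstruction error.

```python
import numpy as np

def conv(img, K):
    """Circular convolution; K is the kernel's 2D DFT."""
    return np.real(np.fft.ifft2(np.fft.fft2(img) * K))

sharp = np.zeros((32, 32))
sharp[8:24, 8:24] = 1.0                 # synthetic latent image
kernel = np.zeros((32, 32))
kernel[:3, :3] = 1.0 / 9.0              # known 3x3 box blur
K = np.fft.fft2(kernel)
blurry = conv(sharp, K)                 # noiseless blurry observation

# Richardson-Lucy: multiplicative update derived from a Poisson likelihood
x = np.full_like(blurry, blurry.mean())
for _ in range(100):
    ratio = blurry / np.maximum(conv(x, K), 1e-8)
    x = x * conv(ratio, np.conj(K)).clip(min=0.0)
```

With a noiseless observation and an invertible kernel the estimate approaches the latent image; in practice the iterations are stopped early to avoid amplifying noise, which is exactly the ill-posedness issue the review organizes its taxonomy around.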
Joint Image and Depth Estimation With Mask-Based Lensless Cameras
Mask-based lensless cameras replace the lens of a conventional camera with a
custom mask. These cameras can potentially be very thin and even flexible.
Recently, it has been demonstrated that such mask-based cameras can recover
light intensity and depth information of a scene. Existing depth recovery
algorithms either assume that the scene consists of a small number of depth
planes or solve a sparse recovery problem over a large 3D volume. Both these
approaches fail to recover the scenes with large depth variations. In this
paper, we propose a new approach for depth estimation based on an alternating
gradient descent algorithm that jointly estimates a continuous depth map and
light distribution of the unknown scene from its lensless measurements. We
present simulation results on image and depth reconstruction for a variety of
3D test scenes. A comparison between the proposed algorithm and other methods
shows that our algorithm is more robust for natural scenes with a large range
of depths. We built a prototype lensless camera and present experimental
results for reconstruction of intensity and depth maps of different real
objects.
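The alternating scheme can be illustrated on a deliberately simplified bilinear model: a single scalar depth parameter d scales a fixed linear operator A, and gradient steps alternate between the light distribution x and d. The matrix, dimensions, step size, and iteration count below are all invented for illustration; the paper's actual model uses a continuous depth map and a mask-based forward operator.

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.normal(size=(40, 20))        # stand-in for the mask's transfer matrix
x_true = rng.uniform(size=20)        # unknown light distribution
d_true = 2.5                         # unknown depth parameter (scalar toy)
y = d_true * (A @ x_true)            # noiseless lensless measurements

# Alternating gradient descent on ||y - d * A x||^2
x = np.ones(20)
d = 1.0
lr = 1e-3
for _ in range(2000):
    r = d * (A @ x) - y              # residual with d fixed
    x = x - lr * d * (A.T @ r)       # gradient step in the image
    r = d * (A @ x) - y              # refreshed residual with x fixed
    d = d - lr * ((A @ x) @ r)       # gradient step in the depth parameter
```

The pair (x, d) is only identifiable up to a scale factor (c x, d / c), so success is measured by the data residual rather than by recovering d_true itself.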
Computational Spectral Imaging: A Contemporary Overview
Spectral imaging collects and processes information along spatial and
spectral coordinates quantified in discrete voxels, which can be treated as a
3D spectral data cube. The spectral images (SIs) allow identifying objects,
crops, and materials in the scene through their spectral behavior. Since most
spectral optical systems can employ only 1D or at most 2D sensors, it is
challenging to acquire the 3D information directly with available commercial
sensors. As an alternative, computational spectral imaging (CSI) has emerged as
a sensing tool where the 3D data can be obtained using 2D encoded projections.
Then, a computational recovery process must be employed to retrieve the SI. CSI
enables the development of snapshot optical systems that reduce acquisition
time and provide low computational storage costs compared to conventional
scanning systems. Recent advances in deep learning (DL) have allowed the design
of data-driven CSI to improve the SI reconstruction or, even more, perform
high-level tasks such as classification, unmixing, or anomaly detection
directly from 2D encoded projections. This work summarises the advances in CSI,
starting with SIs and their relevance, and continuing with the most relevant
compressive spectral optical systems. Then, CSI with DL is introduced, along
with the recent advances in combining the physical optical design with
computational DL algorithms to solve high-level tasks.
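A minimal sketch of the 2D-encoded-projection idea, assuming a toy forward model in which each spectral band is masked by its own binary code and the coded bands are summed onto a single 2D sensor; recovery here uses projected Landweber iterations (plain least-squares gradient steps with a nonnegativity clip). The sizes, codes, and step size are illustrative and do not correspond to any specific CSI architecture.

```python
import numpy as np

rng = np.random.default_rng(2)
H = W = 16
L = 8                                     # number of spectral bands
cube = rng.uniform(size=(L, H, W))        # ground-truth spectral data cube
codes = rng.integers(0, 2, size=(L, H, W)).astype(float)  # binary coded apertures

def forward(x):
    """2D encoded projection: mask each band, sum onto one 2D sensor."""
    return (codes * x).sum(axis=0)

def adjoint(y):
    """Transpose of the forward operator."""
    return codes * y[None]

y = forward(cube)                         # single-snapshot measurement

# Projected Landweber recovery: gradient steps on ||y - forward(x)||^2
x = np.zeros_like(cube)
step = 1.0 / L                            # safe: per-pixel operator norm <= L
for _ in range(100):
    x = (x + step * adjoint(y - forward(x))).clip(min=0.0)
```

Each sensor pixel constrains L unknowns with one measurement, so the iterations reach a data-consistent nonnegative cube rather than the true one; this underdetermination is why practical CSI recovery relies on spatial-spectral priors or the learned, data-driven reconstructions discussed above.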