Aperture Supervision for Monocular Depth Estimation
We present a novel method to train machine learning algorithms to estimate
scene depths from a single image, by using the information provided by a
camera's aperture as supervision. Prior works use a depth sensor's outputs or
images of the same scene from alternate viewpoints as supervision, while our
method instead uses images from the same viewpoint taken with a varying camera
aperture. To enable learning algorithms to use aperture effects as supervision,
we introduce two differentiable aperture rendering functions that use the input
image and predicted depths to simulate the depth-of-field effects caused by
real camera apertures. We train a monocular depth estimation network end-to-end
to predict the scene depths that best explain these finite aperture images as
defocus-blurred renderings of the input all-in-focus image.
Comment: To appear at CVPR 2018 (updated to camera-ready version)
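The differentiable aperture rendering idea can be illustrated with a minimal sketch: a spatially varying Gaussian blur whose per-pixel width follows a simplified thin-lens circle-of-confusion model. All names and constants below are illustrative assumptions, not the paper's actual rendering functions.

```python
import numpy as np

def coc_sigma(depth, focus_depth, aperture, focal_length=0.05):
    """Thin-lens circle-of-confusion width (simplified, in pixels)."""
    return aperture * focal_length * np.abs(1.0 / depth - 1.0 / focus_depth)

def render_defocus(image, depth, focus_depth, aperture, max_r=3):
    """Spatially varying Gaussian defocus blur driven by a depth map.

    Naive gather implementation: each output pixel averages a neighbourhood
    with a Gaussian whose width is the local circle of confusion. The same
    operations are differentiable w.r.t. `depth` in an autodiff framework,
    which is what lets depth be supervised through the rendered blur.
    """
    h, w = image.shape
    sigma = np.maximum(coc_sigma(depth, focus_depth, aperture), 1e-3)
    ys, xs = np.mgrid[-max_r:max_r + 1, -max_r:max_r + 1]
    out = np.zeros_like(image, dtype=float)
    for i in range(h):
        for j in range(w):
            yy = np.clip(i + ys, 0, h - 1)   # clamp at image borders
            xx = np.clip(j + xs, 0, w - 1)
            wgt = np.exp(-(ys**2 + xs**2) / (2.0 * sigma[i, j]**2))
            out[i, j] = (image[yy, xx] * wgt).sum() / wgt.sum()
    return out
```

Pixels whose depth matches the focus plane get a near-zero blur width and pass through unchanged, while depth mismatch spreads them out, so wrongly predicted depths produce the wrong amount of defocus and incur a loss against the real finite-aperture image.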
Recent Progress in Image Deblurring
This paper comprehensively reviews the recent development of image
deblurring, including non-blind/blind, spatially invariant/variant deblurring
techniques. Indeed, these techniques share the same objective of inferring a
latent sharp image from one or several corresponding blurry images, while the
blind deblurring techniques are also required to derive an accurate blur
kernel. Considering the critical role of image restoration in modern imaging
systems to provide high-quality images under complex environments such as
motion, undesirable lighting conditions, and imperfect system components, image
deblurring has attracted growing attention in recent years. From the viewpoint
of how to handle ill-posedness, a crucial issue in deblurring
tasks, existing methods can be grouped into five categories: Bayesian inference
framework, variational methods, sparse representation-based methods,
homography-based modeling, and region-based methods. Despite this progress,
image deblurring, especially the blind case, remains limited by complex
application conditions that make the blur kernel hard to obtain and often
spatially variant. We provide a holistic
understanding and deep insight into image deblurring in this review. An
analysis of the empirical evidence for representative methods, practical
issues, as well as a discussion of promising future directions are also
presented.
Comment: 53 pages, 17 figures
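As a concrete instance of the non-blind, spatially invariant case: with a known kernel and Gaussian noise, the Bayesian formulation reduces to a closed-form Wiener filter. A minimal sketch, assuming periodic boundary conditions; names are illustrative:

```python
import numpy as np

def wiener_deblur(blurred, kernel, snr=1e8):
    """Non-blind deblurring with a Wiener filter (known, shift-invariant kernel).

    Inverts the blur in the Fourier domain; the 1/snr term regularises the
    ill-posed inversion at frequencies the kernel nearly annihilates.
    Assumes periodic (circular) boundary conditions.
    """
    h, w = blurred.shape
    kh, kw = kernel.shape
    K = np.zeros((h, w))
    K[:kh, :kw] = kernel
    K = np.roll(K, (-(kh // 2), -(kw // 2)), axis=(0, 1))  # centre at origin
    Kf = np.fft.fft2(K)
    Bf = np.fft.fft2(blurred)
    Xf = np.conj(Kf) * Bf / (np.abs(Kf) ** 2 + 1.0 / snr)  # Wiener estimate
    return np.real(np.fft.ifft2(Xf))
```

The ill-posedness discussed above is visible directly in the formula: where |K| is close to zero a naive inverse filter would divide by almost nothing and amplify noise, and the 1/snr term is the simplest prior that tames this. Blind deblurring must additionally estimate `kernel` itself.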
High-order myopic coronagraphic phase diversity (COFFEE) for wave-front control in high-contrast imaging systems
The estimation and compensation of quasi-static aberrations is mandatory to
reach the ultimate performance of high-contrast imaging systems. COFFEE is a
focal-plane wave-front sensing method that extends phase diversity to
high-contrast imaging systems. Based on a Bayesian approach, it
estimates the quasi-static aberrations from two focal plane images recorded
from the scientific camera itself. In this paper, we present COFFEE's extension
which allows an estimation of low and high order aberrations with nanometric
precision for any coronagraphic device. The performance is evaluated by
realistic simulations, performed in the SPHERE instrument framework. We develop
a myopic estimation that takes into account imperfect knowledge of the
diversity phase used. Lastly, we evaluate COFFEE's performance in a
compensation process, to optimize the contrast on the detector, and show it
allows one to reach the 10^-6 contrast required by SPHERE at a few resolution
elements from the star. Notably, we present a non-linear energy minimization
method which can be used to reach very high contrast levels (better than 10^-7
in a SPHERE-like context).
Comment: Accepted in Optics Express
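The phase-diversity principle behind COFFEE can be sketched in miniature: given a focused image and one taken through a known diversity phase, retain the aberration that best explains both frames under a Fraunhofer imaging model. The sketch below estimates a single mode coefficient by grid search, a deliberately crude stand-in for the paper's Bayesian, nanometric-precision estimation; all names are illustrative.

```python
import numpy as np

def psf(phase, pupil):
    """Focal-plane intensity from a pupil-plane phase (Fraunhofer model)."""
    field = pupil * np.exp(1j * phase)
    return np.abs(np.fft.fftshift(np.fft.fft2(field))) ** 2

def phase_diversity_estimate(img_foc, img_div, diversity, mode, pupil, grid):
    """Grid-search estimate of one aberration coefficient from two images.

    Miniature version of the phase-diversity idea: keep the coefficient
    whose model images (focused, and defocused by the known diversity)
    best explain both data frames in the least-squares sense.
    """
    def criterion(a):
        m_foc = psf(a * mode, pupil)
        m_div = psf(a * mode + diversity, pupil)
        return np.sum((m_foc - img_foc) ** 2) + np.sum((m_div - img_div) ** 2)
    costs = [criterion(a) for a in grid]
    return grid[int(np.argmin(costs))]
```

The diversity image is what breaks the sign and phase ambiguities of a single focal-plane frame; COFFEE's "myopic" extension additionally treats the diversity phase itself as imperfectly known.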
Deep Eyes: Binocular Depth-from-Focus on Focal Stack Pairs
The human visual system relies on both binocular stereo cues and monocular
focusness cues to gain effective 3D perception. In computer vision, the two
problems are traditionally solved in separate tracks. In this paper, we present
a unified learning-based technique that simultaneously uses both types of cues
for depth inference. Specifically, we use a pair of focal stacks as input to
emulate human perception. We first construct a comprehensive focal stack
training dataset synthesized by depth-guided light field rendering. We then
construct three individual networks: a Focus-Net to extract depth from a single
focal stack, an EDoF-Net to obtain the extended depth of field (EDoF) image from
the focal stack, and a Stereo-Net to conduct stereo matching. We show how to
integrate them into a unified BDfF-Net to obtain high-quality depth maps.
Comprehensive experiments show that our approach outperforms the
state-of-the-art in both accuracy and speed and effectively emulates the human
visual system.
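The focusness cue exploited by Focus-Net has a classical non-learned counterpart: score every slice of the focal stack with a local contrast measure and take the per-pixel argmax over slices. A minimal sketch using a modified-Laplacian focus measure; names are illustrative, and the learned network replaces this hand-crafted rule:

```python
import numpy as np

def focus_measure(img):
    """Modified-Laplacian focus measure: responds strongly to sharp detail."""
    lap_x = np.abs(2 * img - np.roll(img, 1, axis=1) - np.roll(img, -1, axis=1))
    lap_y = np.abs(2 * img - np.roll(img, 1, axis=0) - np.roll(img, -1, axis=0))
    return lap_x + lap_y

def depth_from_focus(stack):
    """Per-pixel index of the sharpest slice in a focal stack (S x H x W).

    A scene point is sharpest in the slice focused at its depth, so the
    winning slice index is a quantised depth map.
    """
    measures = np.stack([focus_measure(s) for s in stack])
    return np.argmax(measures, axis=0)
```

This rule fails in textureless regions where every slice looks equally sharp, which is precisely where the binocular stereo branch of the approach above can contribute a complementary cue.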
Lyot-based Low Order Wavefront Sensor for Phase-mask Coronagraphs: Principle, Simulations and Laboratory Experiments
High performance coronagraphic imaging of faint structures around bright
stars at small angular separations requires fine control of tip, tilt and other
low order aberrations. When such errors occur upstream of a coronagraph, they
result in starlight leakage, which reduces the dynamic range of the instrument.
This issue has been previously addressed for occulting coronagraphs by sensing
the starlight before or at the coronagraphic focal plane. One such solution,
the coronagraphic low order wave-front sensor (CLOWFS) uses a partially
reflective focal plane mask to measure pointing errors for Lyot-type
coronagraphs.
To deal with pointing errors in low inner working angle phase mask
coronagraphs which do not have a reflective focal plane mask, we have adapted
the CLOWFS technique. This new concept relies on starlight diffracted by the
focal plane phase mask being reflected by the Lyot stop towards a sensor which
reliably measures low order aberrations such as tip and tilt. This reflective
Lyot-based wavefront sensor is a linear reconstructor which provides high
sensitivity tip-tilt error measurements with phase mask coronagraphs.
Simulations show that the measurement accuracy of pointing errors with
realistic post-adaptive-optics residuals is approx. 10^-2 lambda/D per mode at
lambda = 1.6 micron for a four-quadrant phase mask. In addition, we demonstrate
an open-loop pointing measurement accuracy of 10^-2 lambda/D at 638 nm for a
four-quadrant phase mask in the laboratory.
Comment: 9 pages, 11 figures, to be published in the PASP June 2014 issue
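The linear-reconstructor idea can be sketched with a toy calibration: poke each low-order mode in turn, record the sensor response to build an interaction matrix, then invert it by least squares to read tip and tilt off a new frame. The sensor model below is synthetic and all names are illustrative.

```python
import numpy as np

def calibrate(sensor, n_modes, poke=1e-2):
    """Build an interaction matrix by poking each low-order mode in turn."""
    ref = sensor(np.zeros(n_modes))            # reference (zero-aberration) frame
    cols = []
    for k in range(n_modes):
        a = np.zeros(n_modes)
        a[k] = poke
        cols.append((sensor(a) - ref) / poke)  # response slope of mode k
    return ref, np.stack(cols, axis=1)

def reconstruct(frame, ref, imat):
    """Least-squares estimate of mode coefficients from one sensor frame."""
    return np.linalg.lstsq(imat, frame - ref, rcond=None)[0]
```

The reconstructor is linear only over the small-aberration range where the sensor response is itself linear, which is why the quoted accuracies are for post-adaptive-optics residuals rather than raw seeing.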