Aperture Supervision for Monocular Depth Estimation
We present a novel method to train machine learning algorithms to estimate
scene depths from a single image, by using the information provided by a
camera's aperture as supervision. Prior works use a depth sensor's outputs or
images of the same scene from alternate viewpoints as supervision, while our
method instead uses images from the same viewpoint taken with a varying camera
aperture. To enable learning algorithms to use aperture effects as supervision,
we introduce two differentiable aperture rendering functions that use the input
image and predicted depths to simulate the depth-of-field effects caused by
real camera apertures. We train a monocular depth estimation network end-to-end
to predict the scene depths that best explain these finite aperture images as
defocus-blurred renderings of the input all-in-focus image.Comment: To appear at CVPR 2018 (updated to camera ready version
Deep Eyes: Binocular Depth-from-Focus on Focal Stack Pairs
The human visual system relies on both binocular stereo cues and monocular
focusness cues to gain effective 3D perception. In computer vision, the two
problems are traditionally solved in separate tracks. In this paper, we present
a unified learning-based technique that simultaneously uses both types of cues
for depth inference. Specifically, we use a pair of focal stacks as input to
emulate human perception. We first construct a comprehensive focal stack
training dataset synthesized by depth-guided light field rendering. We then
construct three individual networks: a Focus-Net to extract depth from a single
focal stack, an EDoF-Net to obtain the extended depth of field (EDoF) image from
the focal stack, and a Stereo-Net to conduct stereo matching. We show how to
integrate them into a unified BDfF-Net to obtain high-quality depth maps.
Comprehensive experiments show that our approach outperforms the
state-of-the-art in both accuracy and speed, and effectively emulates the
human visual system.
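A rough sketch of how the three sub-networks could be composed into a single BDfF-Net is given below. The sub-network bodies are placeholder convolutions, not the architectures from the paper, and the input shapes and layer sizes are assumptions made for illustration only.

```python
import torch
import torch.nn as nn

class BDfFNet(nn.Module):
    """Illustrative composition of Focus-Net, EDoF-Net and Stereo-Net into one
    network that maps a pair of focal stacks to a fused depth map."""
    def __init__(self, stack_size=16):
        super().__init__()
        # Focus-Net: one focal stack -> single-view depth (placeholder body).
        self.focus_net = nn.Sequential(
            nn.Conv2d(3 * stack_size, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 1, 3, padding=1))
        # EDoF-Net: one focal stack -> extended depth-of-field image.
        self.edof_net = nn.Sequential(
            nn.Conv2d(3 * stack_size, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 3, 3, padding=1))
        # Stereo-Net: left/right EDoF images -> stereo depth.
        self.stereo_net = nn.Sequential(
            nn.Conv2d(6, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 1, 3, padding=1))
        # Fusion head: combine focus-based and stereo-based depth estimates.
        self.fuse = nn.Conv2d(3, 1, 3, padding=1)

    def forward(self, stack_left, stack_right):
        # Stacks: (B, stack_size, 3, H, W), flattened along the channel axis.
        b, s, c, h, w = stack_left.shape
        left = stack_left.reshape(b, s * c, h, w)
        right = stack_right.reshape(b, s * c, h, w)
        depth_focus_l = self.focus_net(left)
        depth_focus_r = self.focus_net(right)
        edof_l, edof_r = self.edof_net(left), self.edof_net(right)
        depth_stereo = self.stereo_net(torch.cat([edof_l, edof_r], dim=1))
        return self.fuse(
            torch.cat([depth_focus_l, depth_focus_r, depth_stereo], dim=1))
```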
Swept source optical coherence tomography Gabor fusion splicing technique for microscopy of thick samples using a deformable mirror
We present a swept source optical coherence tomography (OCT) system at 1060 nm equipped with a wavefront sensor at 830 nm and a deformable mirror in a closed-loop adaptive optics (AO) system. Due to the AO correction, the confocal profile of the interface optics becomes narrower than the OCT axial range, restricting the part of the B-scan (cross section) with good contrast. By actuating on the deformable mirror, the depth of the focus is changed, and the system is used to demonstrate Gabor filtering in order to produce B-scan OCT images with enhanced sensitivity throughout the axial range from a Drosophila larva. The focus adjustment is achieved by manipulating the curvature of the deformable mirror between two user-defined limits. Particularities of controlling the focus for Gabor filtering using the deformable mirror are presented. © 2015 Society of Photo-Optical Instrumentation Engineers
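The splicing step of Gabor fusion can be illustrated with a minimal sketch: from each B-scan, acquired with the focus set to a different depth, keep the axial band nearest to that focus and stitch the bands together. Real Gabor fusion blends the bands with smooth windows rather than switching hard between them; the hard switch below, and the function and parameter names, are illustrative assumptions rather than the system described in the paper.

```python
import numpy as np

def gabor_fuse(bscans, focus_depths):
    # bscans: array of shape (n_acquisitions, axial_pixels, lateral_pixels),
    # each acquired with the focus set to the depth given in focus_depths
    # (expressed in pixels along the axial direction).
    bscans = np.asarray(bscans, dtype=float)
    focus_depths = np.asarray(focus_depths, dtype=float)
    _, axial_pixels, _ = bscans.shape
    fused = np.empty_like(bscans[0])
    for z in range(axial_pixels):
        # Hard switch: take each depth row from the acquisition whose focus
        # was nearest to that depth.
        nearest = int(np.argmin(np.abs(focus_depths - z)))
        fused[z] = bscans[nearest, z]
    return fused
```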
Extended depth-of-field imaging and ranging in a snapshot
Traditional approaches to imaging require that an increase in depth of field is associated with a reduction in
numerical aperture, and hence with a reduction in resolution and optical throughput. In their seminal
work, Dowski and Cathey reported how the asymmetric point-spread function generated by a cubic-phase
aberration encodes the detected image such that digital recovery can yield images with an extended depth of
field without sacrificing resolution [Appl. Opt. 34, 1859 (1995)]. Unfortunately, recovered images are
generally visibly degraded by artifacts arising from subtle variations in point-spread functions with defocus.
We report a technique that determines the spatially variant translation of image components that
accompanies defocus, and from it the spatially variant defocus itself. This in turn enables recovery
of artifact-free, extended depth-of-field images together with a two-dimensional defocus and range map
of the imaged scene. We demonstrate the technique for high-quality macroscopic and microscopic imaging
of scenes presenting an extended defocus of up to two waves, and for generation of defocus maps with an
uncertainty of 0.036 waves.
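A small simulation can illustrate the property the technique relies on: with a cubic-phase mask, the point-spread function changes little in shape with defocus but translates laterally as defocus grows, so measuring local translations yields a defocus estimate. The sketch below is an assumption-based illustration (parameter values and function names are not from the paper): it generates such PSFs and estimates the translation between two of them by cross-correlation.

```python
import numpy as np

def cubic_phase_psf(defocus_waves, alpha=5.0, n=256, aperture_radius=0.5):
    # Incoherent PSF of a pupil carrying a cubic phase alpha*(x^3 + y^3) plus
    # a defocus aberration of defocus_waves * (x^2 + y^2), in normalized
    # pupil coordinates.
    x = np.linspace(-1, 1, n)
    xx, yy = np.meshgrid(x, x)
    pupil = (xx ** 2 + yy ** 2) <= aperture_radius ** 2
    phase = 2 * np.pi * (alpha * (xx ** 3 + yy ** 3)
                         + defocus_waves * (xx ** 2 + yy ** 2))
    field = pupil * np.exp(1j * phase)
    psf = np.abs(np.fft.fftshift(np.fft.fft2(field))) ** 2
    return psf / psf.sum()

def psf_translation(psf_a, psf_b):
    # Relative translation of two PSFs (or image patches), estimated from the
    # peak of their FFT-based cross-correlation.
    corr = np.real(np.fft.ifft2(np.fft.fft2(psf_a) * np.conj(np.fft.fft2(psf_b))))
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    return [p if p <= s // 2 else p - s for p, s in zip(peak, corr.shape)]

# Example: the shift between PSFs at 0 and 1.5 waves of defocus reflects the
# translation-vs-defocus relation that a spatially variant version of this
# measurement, applied over image patches, would invert to form a defocus map.
shift = psf_translation(cubic_phase_psf(0.0), cubic_phase_psf(1.5))
```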