Computational localization microscopy with extended axial range
A new single-aperture 3D particle-localization and tracking technique is presented that increases depth range by more than an order of magnitude without compromising optical resolution or throughput. We exploit the extended depth range and depth-dependent translation of an Airy-beam PSF for 3D localization over an extended volume in a single snapshot. The technique is applicable to all bright-field and fluorescence modalities for particle localization and tracking, ranging from super-resolution microscopy through to the tracking of fluorescent beads and endogenous particles within cells. We demonstrate and validate its application to real-time 3D velocity imaging of fluid flow in capillaries using fluorescent tracer beads. An axial localization precision of 50 nm was obtained over a depth range of 120 μm using a 0.4 NA, 20× microscope objective. We believe this to be the highest ratio of axial range to precision reported to date.
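The localization step described above can be sketched as a calibration problem: because the Airy-beam PSF translates laterally with defocus, a polynomial fitted to calibration-bead data maps a measured centroid translation back to an axial position. A minimal sketch, assuming a hypothetical linear 0.5 px/μm calibration (the function names and numbers are illustrative, not from the paper):

```python
import numpy as np

def fit_translation_to_depth(translations_px, depths_um, order=3):
    """Fit a polynomial depth(translation) from calibration-bead data."""
    return np.polyfit(translations_px, depths_um, order)

def localize_axial(translation_px, calib_coeffs):
    """Map a measured PSF-centroid translation to an axial position."""
    return np.polyval(calib_coeffs, translation_px)

# Synthetic calibration: assume 0.5 px of lateral translation per um of
# depth over a 120 um range (purely illustrative numbers).
depths = np.linspace(0.0, 120.0, 25)
translations = 0.5 * depths
coeffs = fit_translation_to_depth(translations, depths)

# A 30 px measured translation should map back to ~60 um depth.
z = localize_axial(30.0, coeffs)
```

In a real pipeline, the translation would come from fitting the PSF centroid in each snapshot, and the calibration would be measured with beads on a piezo stage.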
Aperture Supervision for Monocular Depth Estimation
We present a novel method to train machine learning algorithms to estimate
scene depths from a single image, by using the information provided by a
camera's aperture as supervision. Prior works use a depth sensor's outputs or
images of the same scene from alternate viewpoints as supervision, while our
method instead uses images from the same viewpoint taken with a varying camera
aperture. To enable learning algorithms to use aperture effects as supervision,
we introduce two differentiable aperture rendering functions that use the input
image and predicted depths to simulate the depth-of-field effects caused by
real camera apertures. We train a monocular depth estimation network end-to-end
to predict the scene depths that best explain these finite aperture images as
defocus-blurred renderings of the input all-in-focus image. Comment: To appear at CVPR 2018 (updated to camera-ready version).
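An aperture-rendering function of this kind needs a per-pixel blur size derived from the predicted depth. A minimal sketch of the standard thin-lens circle-of-confusion model such a renderer would use to size its blur kernels (the paper's differentiable compositing is not reproduced here, and all parameter values below are illustrative):

```python
import numpy as np

def circle_of_confusion(depth, focus_depth, focal_len, aperture_diam):
    """Thin-lens circle-of-confusion diameter (same units as focal_len)
    for a scene point at `depth`, with the lens focused at `focus_depth`:
        c = A * f * |d - d_f| / (d * (d_f - f))
    """
    return aperture_diam * focal_len * np.abs(depth - focus_depth) / (
        depth * (focus_depth - focal_len))

# A point exactly at the focus distance has zero blur:
c0 = circle_of_confusion(2.0, 2.0, 0.05, 0.025)

# Points away from the focal plane blur; nearer points blur faster than
# equally "mis-focused" far points, because of the 1/d term:
c_near = circle_of_confusion(1.0, 2.0, 0.05, 0.025)
c_far = circle_of_confusion(4.0, 2.0, 0.05, 0.025)
```

Training with aperture supervision then amounts to penalizing the difference between the rendered shallow-depth-of-field image and the real wide-aperture photo.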
The coronagraphic Modal Wavefront Sensor: a hybrid focal-plane sensor for the high-contrast imaging of circumstellar environments
The raw coronagraphic performance of current high-contrast imaging
instruments is limited by the presence of a quasi-static speckle (QSS)
background, resulting from instrumental non-common path errors (NCPEs). Rapid
development of efficient speckle subtraction techniques in data reduction has
enabled final contrasts of up to 10^-6 to be obtained; however, it remains
preferable to eliminate the underlying NCPEs at the source. In this work we
introduce the coronagraphic Modal Wavefront Sensor (cMWS), a new wavefront
sensor suitable for real-time NCPE correction. This pupil-plane optic combines
the apodizing phase plate coronagraph with a holographic modal wavefront
sensor, to provide simultaneous coronagraphic imaging and focal-plane wavefront
sensing using the science point spread function. We first characterise the
baseline performance of the cMWS via idealised closed-loop simulations, showing
that the sensor successfully recovers diffraction-limited coronagraph
performance over an effective dynamic range of +/-2.5 radians root-mean-square
(RMS) wavefront error within 2-10 iterations. We then present the results of
initial on-sky testing at the William Herschel Telescope, and demonstrate that
the sensor is able to retrieve injected wavefront aberrations to an accuracy of
10nm RMS under realistic seeing conditions. We also find that the cMWS is
capable of real-time broadband measurement of atmospheric wavefront variance at
a cadence of 50Hz across an uncorrected telescope sub-aperture. When combined
with a suitable closed-loop adaptive optics system, the cMWS holds the
potential to deliver an improvement in raw contrast of up to two orders of
magnitude over the uncorrected QSS floor. Such a sensor would be eminently
suitable for the direct imaging and spectroscopy of exoplanets with both
existing and future instruments, including EPICS and METIS for the E-ELT. Comment: 14 pages, 12 figures; accepted for publication in Astronomy & Astrophysics.
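The modal-sensing principle behind such a holographic sensor can be illustrated numerically: each sensed mode produces a pair of PSF copies biased by ± a fixed amount of that mode, and the normalized intensity difference between the two spots is, to first order, proportional to the modal coefficient. A toy closed-loop sketch under an idealized linear sensor model (the gain, bias response, and loop parameters are illustrative, not taken from the paper):

```python
def modal_estimate(i_plus, i_minus, gain):
    """Estimate a modal coefficient from the paired biased-spot
    intensities via their normalized difference."""
    return gain * (i_plus - i_minus) / (i_plus + i_minus)

def closed_loop(a_true, gain=1.0, loop_gain=0.5, n_iter=10):
    """Toy closed loop: measure the residual coefficient each iteration
    and apply a fraction of it as a correction."""
    a_resid = a_true
    for _ in range(n_iter):
        # Idealized linear sensor: spot intensities vary linearly with
        # the residual coefficient about a unit mean intensity.
        i_plus = 1.0 + a_resid / gain
        i_minus = 1.0 - a_resid / gain
        a_resid -= loop_gain * modal_estimate(i_plus, i_minus, gain)
    return a_resid

# A 0.5 rad initial error shrinks geometrically toward zero.
resid = closed_loop(0.5)
```

The real sensor's response is linear only within a finite dynamic range (the quoted +/-2.5 rad RMS), beyond which the normalized-difference estimate saturates.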
Novel Approach to Ocular Photoscreening
Photoscreening is a technique typically applied in mass pediatric vision screening owing to its objective, binocular, and cost-effective nature. Ocular alignment and refractive status are evaluated through the retinal reflex image. In the USA, this method has screened millions of preschool children in recent years. Nevertheless, the efficacy of the screening has been contentious. In this dissertation, the technique is reviewed and reexamined. Revisions of the photoscreening technique are developed to detect and quantify strabismus, refractive errors, and high-order ocular aberrations. These new optical designs overcome traditional design deficiencies in three areas:
First, a Dynamic Hirschberg Test is conducted to detect strabismus. The test begins with both eyes following a moving fixation target under binocular viewing; during the test, each eye is inconspicuously occluded, which forces refixation in strabismic subjects and reveals latent strabismus. Photoscreening images taken under monocular viewing are used to calculate deviations from the expected binocular eye-movement path. A significant deviation of eye movement from binocular to monocular viewing indicates the presence of strabismus.
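The deviation measure described above can be sketched as an RMS distance between the expected binocular trajectory and the observed monocular trajectory of each eye (the function names and decision threshold below are hypothetical, not from the dissertation):

```python
import numpy as np

def movement_deviation(binocular_path, monocular_path):
    """RMS deviation (same units as the input) between two 2-D eye
    trajectories sampled at matching time points."""
    b = np.asarray(binocular_path, dtype=float)
    m = np.asarray(monocular_path, dtype=float)
    return float(np.sqrt(np.mean(np.sum((b - m) ** 2, axis=1))))

def flag_strabismus(binocular_path, monocular_path, threshold=2.0):
    """Flag strabismus when the monocular path deviates from the
    expected binocular path by more than a threshold."""
    return movement_deviation(binocular_path, monocular_path) > threshold

# An orthotropic eye stays on the binocular path under occlusion;
# a strabismic eye refixates and its path shifts.
path = [(0.0, 0.0), (1.0, 0.0), (2.0, 0.0)]
shifted = [(0.0, 5.0), (1.0, 5.0), (2.0, 5.0)]
```

In practice the trajectories would come from corneal-reflex positions extracted from the photoscreening image sequence.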
Second, a novel binocular adaptive photorefraction (APR) approach is developed to characterize the retinal reflex intensity profile according to the eye's refractive state. This approach calculates the retinal reflex profile by integrating the retinal reflex intensity from a coaxial and several eccentric photorefraction images. Theoretical simulations evaluate the influence of several human factors. An experimental APR device is constructed with 21 light sources to increase the spherical-refraction detection range; the additional light-source angular meridians detect astigmatism. The experimentally measured distribution is characterized into relevant parameters that describe the ocular refraction state.
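The profile-characterization step can be sketched as a regression of integrated reflex intensity against source eccentricity, with the fitted slope serving as the parameter that tracks refractive state. The linear model and numbers below are a hypothetical stand-in for the dissertation's characterization:

```python
import numpy as np

def reflex_profile(eccentricities_mm, intensities):
    """Fit integrated reflex intensity vs. source eccentricity with a
    straight line; return (slope, intercept). The slope is the profile
    parameter tracking the refractive state in this simplified model."""
    slope, intercept = np.polyfit(eccentricities_mm, intensities, 1)
    return slope, intercept

# Synthetic data: intensities measured at four source eccentricities,
# generated from a known linear profile (slope 0.3, intercept 1.0).
ecc = np.array([0.0, 2.0, 4.0, 6.0])
inten = 0.3 * ecc + 1.0
slope, intercept = reflex_profile(ecc, inten)
```

With 21 sources spanning multiple meridians, one such fit per meridian would expose the meridional variation that signals astigmatism.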
Last, the APR design is further applied to detect vision problems involving high-order aberrations (e.g., cataracts, dry eye, keratoconus). A monocular prototype APR device is constructed with coaxial and eccentric light sources to acquire 13 monocular photorefraction images. Light sources projected inside and along the camera aperture improve the detection sensitivity. The acquired reflex images are then decomposed into Zernike polynomials, and the complex reflex patterns are analyzed using the Zernike coefficient magnitudes.
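The decomposition step can be sketched as a linear least-squares fit of the reflex image onto a small Zernike basis over the pupil disk. Only three second-order modes are included here for illustration (the prototype fits a fuller set), and the synthetic image is constructed with known coefficients:

```python
import numpy as np

def zernike_basis(x, y):
    """Defocus and the two astigmatism terms on the unit disk
    (unnormalized Cartesian forms)."""
    r2 = x**2 + y**2
    return np.stack([
        2 * r2 - 1,     # Z(2, 0): defocus
        x**2 - y**2,    # Z(2, 2): astigmatism 0/90
        2 * x * y,      # Z(2, -2): astigmatism 45/135
    ], axis=-1)

def decompose(image, x, y):
    """Least-squares Zernike coefficients over pixels inside the pupil."""
    mask = x**2 + y**2 <= 1.0
    A = zernike_basis(x[mask], y[mask])
    coeffs, *_ = np.linalg.lstsq(A, image[mask], rcond=None)
    return coeffs

# Synthetic reflex pattern with known defocus and astigmatism content:
n = 64
x, y = np.meshgrid(np.linspace(-1, 1, n), np.linspace(-1, 1, n))
img = 0.7 * (2 * (x**2 + y**2) - 1) + 0.2 * (x**2 - y**2)
c = decompose(img, x, y)
```

The recovered coefficient magnitudes are then the features used to classify the complex reflex patterns produced by high-order aberrations.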