1,389,069 research outputs found

    Learning Wavefront Coding for Extended Depth of Field Imaging

    Depth of field is an important property of imaging systems that strongly affects the quality of the acquired spatial information. Extended depth of field (EDoF) imaging is a challenging ill-posed problem and has been extensively addressed in the literature. We propose a computational imaging approach for EDoF in which we employ wavefront coding via a diffractive optical element (DOE) and achieve deblurring with a convolutional neural network. Thanks to the end-to-end differentiable modeling of optical image formation and computational post-processing, we jointly optimize the optical design, i.e., the DOE, and the deblurring through standard gradient descent methods. Based on the properties of the underlying refractive lens and the desired EDoF range, we provide an analytical expression for the search space of the DOE, which is instrumental in the convergence of the end-to-end network. We achieve superior EDoF imaging performance compared to the state of the art and demonstrate results with minimal artifacts in various scenarios, including deep 3D scenes and broadband imaging.
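
    The end-to-end optimization described above can be pictured as one differentiable graph running from the DOE phase, through image formation, to the CNN output, so a single optimizer updates the optics and the deblurring network together. The sketch below illustrates that idea in PyTorch; the circular-pupil Fourier-optics PSF model, grid size, toy defocus term, and three-layer CNN are illustrative assumptions, not the authors' implementation.

import torch
import torch.nn as nn
import torch.nn.functional as F

N = 64  # pupil / PSF grid size (illustrative)

class WavefrontCodedCamera(nn.Module):
    def __init__(self):
        super().__init__()
        # Learnable DOE phase profile: the "optical design" being optimized.
        self.doe_phase = nn.Parameter(torch.zeros(N, N))
        # Placeholder deblurring CNN standing in for the paper's network.
        self.deblur = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1),
        )

    def psf(self, defocus_phase):
        # Fourier-optics PSF of a circular pupil carrying the DOE + defocus phase.
        y, x = torch.meshgrid(torch.linspace(-1, 1, N),
                              torch.linspace(-1, 1, N), indexing="ij")
        aperture = ((x**2 + y**2) <= 1.0).float()
        pupil = aperture * torch.exp(1j * (self.doe_phase + defocus_phase))
        psf = torch.fft.fftshift(torch.fft.fft2(pupil)).abs() ** 2
        return psf / psf.sum()

    def forward(self, sharp, defocus_phase):
        # Image formation (blur with the coded PSF), then CNN deblurring.
        k = self.psf(defocus_phase)[None, None]
        coded = F.conv2d(sharp, k, padding=N // 2)
        coded = coded[..., :sharp.shape[-2], :sharp.shape[-1]]
        return self.deblur(coded)

# One gradient step updates the DOE phase *and* the CNN weights jointly.
model = WavefrontCodedCamera()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
sharp = torch.rand(1, 1, 128, 128)           # toy ground-truth image
d = 4.0 * torch.linspace(-1, 1, N) ** 2      # toy quadratic defocus profile
defocus = d[None, :] + d[:, None]            # (N, N) defocus phase map
opt.zero_grad()
loss = F.mse_loss(model(sharp, defocus), sharp)
loss.backward()
opt.step()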

    Virtual Microscopy with Extended Depth of Field

    In this paper, we describe a virtual microscope system, based on JPEG 2000, which utilizes extended depth of field (EDF) imaging. Through a series of observer trials, we show that EDF imaging improves both the local image quality of individual fields of view (FOVs) and the accuracy with which the FOVs can be mosaiced (stitched) together. In addition, we estimate the bit rate required to adequately render a set of histology and cytology specimens at a quality suitable for on-line learning and collaboration. We show that, using JPEG 2000, we can efficiently represent high-quality, high-resolution colour images of microscopic specimens with less than 1 bit per pixel.
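
    As a rough illustration of the bit-rate budget mentioned above, the sketch below compresses one RGB field of view to roughly 1 bit per pixel with Pillow's OpenJPEG-backed JPEG 2000 encoder. The file name, the 24:1 ratio (24-bit RGB divided by a 1 bpp budget), and the encoder settings are assumptions for illustration, not the paper's configuration.

from PIL import Image
import os

fov = Image.open("fov_0001.tif").convert("RGB")   # hypothetical field-of-view tile

target_bpp = 1.0                 # budget suggested by the abstract
ratio = 24 / target_bpp          # 24 bpp uncompressed RGB -> 24:1 for ~1 bpp

fov.save(
    "fov_0001.jp2",
    quality_mode="rates",        # interpret quality_layers as compression ratios
    quality_layers=[ratio],      # single quality layer near the 1 bpp budget
    irreversible=True,           # lossy 9/7 wavelet transform
    num_resolutions=6,           # multi-resolution access for slide panning/zooming
)

bpp = 8 * os.path.getsize("fov_0001.jp2") / (fov.width * fov.height)
print(f"achieved about {bpp:.2f} bits per pixel")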

    Optical ptychography with extended depth of field

    Ptychography is an increasingly popular phase imaging technique. However, like any imaging technique, it has a finite depth of field that limits the volume of a thick specimen that can be imaged in focus. Here, we propose to extend the depth of field using a multislice calculation model; an optical experiment successfully demonstrates our proposal.
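
    The multislice model referred to above treats a thick specimen as a stack of thin slices with free-space propagation between them. The numpy sketch below shows that forward model using angular spectrum propagation; the wavelength, pixel pitch, slice spacing, and random weak-phase slices are illustrative assumptions, not values from the paper.

import numpy as np

wavelength = 500e-9   # illumination wavelength in metres (assumed)
pixel = 1e-6          # sample-plane pixel pitch in metres (assumed)
dz = 10e-6            # slice separation in metres (assumed)
N = 256               # grid size

def angular_spectrum(field, dz):
    """Propagate a complex field by dz with the angular spectrum method."""
    fx = np.fft.fftfreq(N, d=pixel)
    FX, FY = np.meshgrid(fx, fx)
    arg = 1.0 / wavelength**2 - FX**2 - FY**2
    kz = 2 * np.pi * np.sqrt(np.maximum(arg, 0.0))
    H = np.where(arg > 0, np.exp(1j * kz * dz), 0.0)  # drop evanescent waves
    return np.fft.ifft2(np.fft.fft2(field) * H)

def multislice_exit_wave(probe, slices, dz):
    """Multiply by each thin-slice transmission, propagating between slices."""
    field = probe
    for t in slices[:-1]:
        field = angular_spectrum(field * t, dz)
    return field * slices[-1]

# Toy example: a flat probe through three random weak-phase slices.
probe = np.ones((N, N), dtype=complex)
slices = [np.exp(1j * 0.1 * np.random.rand(N, N)) for _ in range(3)]
exit_wave = multislice_exit_wave(probe, slices, dz)
farfield = np.abs(np.fft.fftshift(np.fft.fft2(exit_wave))) ** 2  # detector intensity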

    Video-rate 3D Particle Tracking with Extended Depth-of-field in Thick Biological Samples

    We present a single-aperture 3D particle localisation and tracking technique with a vastly increased depth of field that does not compromise optical resolution or throughput. Flow measurements in an FEP capillary and a zebrafish blood vessel are demonstrated experimentally.

    Computational localization microscopy with extended axial range

    A new single-aperture 3D particle-localization and tracking technique is presented that demonstrates an increase in depth range by more than an order of magnitude without compromising optical resolution or throughput. We exploit the extended depth range and depth-dependent translation of an Airy-beam PSF for 3D localization over an extended volume in a single snapshot. The technique is applicable to all bright-field and fluorescence modalities for particle localization and tracking, ranging from super-resolution microscopy to the tracking of fluorescent beads and endogenous particles within cells. We demonstrate and validate its application to real-time 3D velocity imaging of fluid flow in capillaries using fluorescent tracer beads. An axial localization precision of 50 nm was obtained over a depth range of 120 μm using a 0.4 NA, 20× microscope objective. We believe this to be the highest ratio of axial range to precision reported to date.
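
    The depth-dependent translation mentioned above suggests a simple localization recipe: calibrate how far the PSF shifts laterally at known depths, then invert that curve for each detected particle. The sketch below illustrates only that calibration-and-inversion step in numpy; the curve, units, and numbers are made up for illustration and do not come from the paper.

import numpy as np

# Calibration: bead depths (micrometres) and the lateral PSF shift measured at
# each depth. The curve is a made-up monotonic stand-in for the real Airy-beam
# trajectory one would measure with beads at known axial positions.
z_known = np.linspace(-60.0, 60.0, 25)
shift_known = 0.25 * z_known + 0.002 * z_known * np.abs(z_known)

def depth_from_shift(shift_obs):
    """Invert the (monotonic) calibration curve by 1-D interpolation."""
    return np.interp(shift_obs, shift_known, z_known)

# Localization: a detected particle's PSF centroid is shifted by 9.3 um
# relative to the in-focus reference; map that shift back to depth.
z_est = depth_from_shift(9.3)
print(f"estimated depth: {z_est:.1f} um")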

    Time-sequential Pipelined Imaging with Wavefront Coding and Super Resolution

    Wavefront coding has long offered the prospect of mitigating optical aberrations and extending the depth of field, but image quality and noise performance are inevitably reduced. We report on progress in the use of agile encoding and pipelined fusion of image sequences to recover image quality.

    Shaded-Mask Filtering for Extended Depth-of-Field Microscopy

    This paper proposes a new spatial filtering approach for increasing the depth-of-field (DOF) of imaging systems, which is very useful for obtaining sharp images over a wide range of axial positions of the object. Many different techniques have been reported to increase the depth of field; the main advantage of our method, however, is its simplicity, since we propose the use of purely absorbing beam-shaping elements, which allows a high focal depth with minimal modification of the optical architecture. In the filter design, we use the analogy between the axial behavior of a system with spherical aberration and the transverse impulse response of a 1D defocused system, which leads to the design of a ring-shaded filter. Finally, experimental verification of the theoretical statements is provided.
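
    The key point above is that the pupil filter is amplitude-only: a radially shaded transmittance reshapes the through-focus PSF without any phase element. The numpy sketch below compares how fast the PSF peak decays with defocus for a clear pupil versus a toy shaded ring; the Gaussian-ring profile is an illustrative stand-in, not the paper's optimized filter.

import numpy as np

N = 256
y, x = np.mgrid[-1:1:N*1j, -1:1:N*1j]
r = np.hypot(x, y)
aperture = (r <= 1.0).astype(float)

def psf(transmittance, defocus_waves):
    """Incoherent PSF for a given pupil transmittance and defocus (in waves)."""
    phase = np.exp(1j * 2 * np.pi * defocus_waves * r**2)   # defocus aberration
    field = np.fft.fftshift(np.fft.fft2(aperture * transmittance * phase))
    p = np.abs(field) ** 2
    return p / p.sum()

clear = np.ones_like(r)                    # unmodified pupil
ring = np.exp(-((r - 0.7) / 0.25) ** 2)    # toy radially shaded ("ring") mask

# Compare how quickly the PSF peak collapses as the object defocuses.
for w20 in (0.0, 1.0, 2.0):
    print(w20, psf(clear, w20).max(), psf(ring, w20).max())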

    Deep Eyes: Binocular Depth-from-Focus on Focal Stack Pairs

    The human visual system relies on both binocular stereo cues and monocular focusness cues to gain effective 3D perception. In computer vision, the two problems are traditionally solved in separate tracks. In this paper, we present a unified learning-based technique that simultaneously uses both types of cues for depth inference. Specifically, we use a pair of focal stacks as input to emulate human perception. We first construct a comprehensive focal-stack training dataset synthesized by depth-guided light field rendering. We then construct three individual networks: a Focus-Net to extract depth from a single focal stack, an EDoF-Net to obtain the extended depth of field (EDoF) image from the focal stack, and a Stereo-Net to conduct stereo matching. We show how to integrate them into a unified BDfF-Net to obtain high-quality depth maps. Comprehensive experiments show that our approach outperforms the state of the art in both accuracy and speed and effectively emulates the human visual system.
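
    The wiring of the three sub-networks described above can be sketched in a few lines of PyTorch: Focus-Net and EDoF-Net each consume a focal stack, Stereo-Net matches the two EDoF views, and a fusion stage combines the depth cues. In the sketch below only that wiring follows the abstract; the layer choices, stack size, and shared-weight Focus-Net are placeholder assumptions, not the paper's architectures.

import torch
import torch.nn as nn

S = 8  # number of slices per focal stack (assumption)

def conv_block(cin, cout):
    return nn.Sequential(nn.Conv2d(cin, 64, 3, padding=1), nn.ReLU(),
                         nn.Conv2d(64, cout, 3, padding=1))

class BDfFNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.focus_net = conv_block(S, 1)       # focal stack -> depth map
        self.edof_net = conv_block(S, 1)        # focal stack -> EDoF image
        self.stereo_net = conv_block(2, 1)      # left/right EDoF pair -> disparity
        self.fusion = conv_block(3, 1)          # fuse the three depth cues

    def forward(self, stack_left, stack_right):
        depth_l = self.focus_net(stack_left)    # monocular focus cue (left)
        depth_r = self.focus_net(stack_right)   # monocular focus cue (right)
        edof_l = self.edof_net(stack_left)      # all-in-focus left view
        edof_r = self.edof_net(stack_right)     # all-in-focus right view
        disparity = self.stereo_net(torch.cat([edof_l, edof_r], dim=1))
        return self.fusion(torch.cat([depth_l, depth_r, disparity], dim=1))

net = BDfFNet()
left = torch.rand(1, S, 128, 128)   # grayscale focal stack, left eye
right = torch.rand(1, S, 128, 128)  # grayscale focal stack, right eye
depth = net(left, right)            # (1, 1, 128, 128) fused depth map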

    S-CANDELS: The Spitzer-Cosmic Assembly Near-Infrared Deep Extragalactic Survey. Survey Design, Photometry, and Deep IRAC Source Counts

    The Spitzer-Cosmic Assembly Deep Near-Infrared Extragalactic Legacy Survey (S-CANDELS; PI G. Fazio) is a Cycle 8 Exploration Program designed to detect galaxies at very high redshifts (z > 5). To mitigate the effects of cosmic variance and to take advantage of deep coextensive coverage in multiple bands by the Hubble Space Telescope Multi-Cycle Treasury Program CANDELS, S-CANDELS was carried out within five widely separated extragalactic fields: the UKIDSS Ultra-Deep Survey, the Extended Chandra Deep Field South, COSMOS, the HST Deep Field North, and the Extended Groth Strip. S-CANDELS builds upon the existing coverage of these fields from the Spitzer Extended Deep Survey (SEDS) by increasing the integration time from 12 hours to a total of 50 hours, but within a smaller area of 0.16 square degrees. The additional depth significantly increases the survey completeness at faint magnitudes. This paper describes the S-CANDELS survey design, processing, and publicly available data products. We present IRAC dual-band 3.6+4.5 micron catalogs reaching a depth of 26.5 AB mag. Deep IRAC counts for the roughly 135,000 galaxies detected by S-CANDELS are consistent with models based on known galaxy populations. The increase in depth beyond earlier Spitzer/IRAC surveys does not reveal a significant additional contribution from discrete sources to the diffuse Cosmic Infrared Background (CIB). Thus it remains true that only roughly half of the estimated CIB flux from COBE/DIRBE is resolved.