
    3D Computational Ghost Imaging

    Computational ghost imaging retrieves the spatial information of a scene using a single-pixel detector. By projecting a series of known random patterns and measuring the back-reflected intensity for each one, it is possible to reconstruct a 2D image of the scene. In this work we overcome previous limitations of computational ghost imaging and capture the 3D spatial form of an object by using several single-pixel detectors in different locations. From each detector we derive a 2D image of the object that appears to be illuminated from a different direction, using only a single digital projector as illumination. Comparing the shading of the images allows the surface gradient, and hence the 3D form of the object, to be reconstructed. We compare our result to that obtained from a stereo-photogrammetric system utilizing multiple high-resolution cameras. Our low-cost approach is compatible with consumer applications and can readily be extended to non-visible wavebands. Comment: 13 pages, 4 figures
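The 2D reconstruction step described above can be sketched with a standard correlation-based ghost-imaging estimate: each projected pattern is weighted by its mean-subtracted single-pixel signal and the weighted patterns are averaged. This is a minimal simulation, not the paper's code; the scene, pattern count, and resolution are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical ground-truth scene (illustrative, not from the paper): a bright square.
scene = np.zeros((32, 32))
scene[10:22, 10:22] = 1.0

# Known random binary illumination patterns, as projected by the digital projector.
n_patterns = 5000
patterns = rng.integers(0, 2, size=(n_patterns, 32, 32)).astype(float)

# Single-pixel measurement: one total reflected intensity per pattern.
signals = np.einsum("nij,ij->n", patterns, scene)

# Correlation reconstruction: average patterns weighted by mean-subtracted signals.
image = np.einsum("n,nij->ij", signals - signals.mean(), patterns) / n_patterns

# The reconstruction is brighter inside the square than in the background.
inside = image[10:22, 10:22].mean()
outside = image[:8, :8].mean()
```

With several single-pixel detectors at different positions, the same procedure yields one such image per detector, each with different apparent shading, which is what the shading comparison then exploits.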

    Serial optical coherence microscopy for label-free volumetric histopathology

    The observation of histopathology using an optical microscope is an essential procedure for examining tissue biopsies or surgically excised specimens in biological and clinical laboratories. However, slide-based microscopic pathology is not suitable for visualizing large-scale tissue and native 3D organ structure due to its sampling limitations and shallow imaging depth. Here, we demonstrate a serial optical coherence microscopy (SOCM) technique that offers label-free, high-throughput, and large-volume imaging of ex vivo mouse organs. A 3D histopathology of the whole mouse brain and kidney, including blood vessel structure, is reconstructed by deep-tissue optical imaging combined with serial sectioning. Our results demonstrate that SOCM has unique advantages, as it can visualize both native 3D structures and quantitative regional volumes without the introduction of any contrast agents.

    Axial plane optical microscopy.

    We present axial plane optical microscopy (APOM) that can, in contrast to conventional microscopy, directly image a sample's cross-section parallel to the optical axis of an objective lens without scanning. APOM combined with conventional microscopy simultaneously provides two orthogonal images of a 3D sample. More importantly, APOM uses only a single lens near the sample to achieve selective-plane illumination microscopy, as we demonstrated by three-dimensional (3D) imaging of fluorescent pollens and brain slices. This technique allows fast, high-contrast, and convenient 3D imaging of structures that are hundreds of microns beneath the surfaces of large biological tissues.

    Three-dimensional virtual refocusing of fluorescence microscopy images using deep learning

    Three-dimensional (3D) fluorescence microscopy in general requires axial scanning to capture images of a sample at different planes. Here we demonstrate that a deep convolutional neural network can be trained to virtually refocus a 2D fluorescence image onto user-defined 3D surfaces within the sample volume. With this data-driven computational microscopy framework, we imaged the neuron activity of a Caenorhabditis elegans worm in 3D using a time sequence of fluorescence images acquired at a single focal plane, digitally increasing the depth of field of the microscope by 20-fold without any axial scanning, additional hardware, or a trade-off in imaging resolution or speed. Furthermore, we demonstrate that this learning-based approach can correct for sample drift, tilt, and other image aberrations, all performed digitally after the acquisition of a single fluorescence image. This unique framework also cross-connects different imaging modalities to each other, enabling 3D refocusing of a single wide-field fluorescence image to match confocal microscopy images acquired at different sample planes. This deep learning-based 3D image refocusing method might be transformative for imaging and tracking of 3D biological samples, especially over extended periods of time, mitigating the photo-toxicity, sample drift, aberration, and defocusing-related challenges associated with standard 3D fluorescence microscopy techniques. Comment: 47 pages, 5 figures (main text)

    Volume measurement using 3D Range Imaging

    The use of 3D range imaging has widespread applications, one of which is obtaining the volumes of different objects. In this paper, 3D range imaging has been utilised to find the volumes of different objects using two algorithms that are based on a straightforward means of calculating volume. The algorithms implemented successfully calculate the volume of objects provided that the objects have a uniform colour. Objects that have multi-coloured and glossy surfaces presented particular difficulties in determining volume.
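The paper does not spell out its two algorithms here, but a straightforward volume calculation from a range image can be sketched as follows: with the camera looking straight down at an object on a flat reference surface, each pixel contributes its height above that surface times its footprint area. The depth map, pixel footprint, and object dimensions below are illustrative assumptions.

```python
import numpy as np

# Hypothetical range image (depth map) of a box on a flat table, viewed from
# directly above; values are camera-to-surface distances in metres.
table_depth = 1.0
depth = np.full((100, 100), table_depth)
depth[40:60, 40:60] = 0.9          # a 20x20-pixel box, 0.1 m tall

# Assumed footprint of one pixel on the table: 5 mm x 5 mm.
pixel_area = 0.005 ** 2

# Volume = sum over pixels of (height above the table) * pixel footprint area.
height = np.clip(table_depth - depth, 0.0, None)
volume = height.sum() * pixel_area   # 400 px * 0.1 m * 2.5e-5 m^2 = 1e-3 m^3
```

In practice the pixel footprint varies with depth and viewing angle, which is where the per-pixel colour and glossiness problems noted above enter: unreliable range values on glossy or multi-coloured surfaces corrupt the summed heights.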

    Adopting multiview pixel mapping for enhancing quality of holoscopic 3D scene in parallax barriers based holoscopic 3D displays

    Autostereoscopic multiview 3D displays are robustly developed and widely available in commercial markets. Significant improvements have been made using pixel mapping techniques, achieving an acceptable 3D resolution with a balanced pixel aspect ratio in lens-array technology. This paper proposes adopting multiview pixel mapping for enhancing the quality of the constructed holoscopic 3D scene in parallax-barrier-based holoscopic 3D displays, achieving good results. Holoscopic imaging technology mimics the imaging system of insects, such as the fly, utilizing a single camera equipped with a large number of micro-lenses to capture a scene, offering rich parallax information and an enhanced 3D feeling without the need to wear specific eyewear. In addition, pixel mapping and holoscopic 3D rendering tools are developed, including a custom-built holoscopic 3D display, to test the proposed method and carry out a like-for-like comparison. This work has been supported by the European Commission under Grant FP7-ICT-2009-4 (3DVIVANT). The authors wish to express their gratitude and thanks for the support given throughout the project.
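The basic pixel-mapping operation behind multiview extraction from a holoscopic (integral) image can be sketched as follows: taking the pixel at the same offset under every micro-lens yields one viewpoint image per offset. This is a generic integral-imaging sketch under assumed lens and sensor sizes, not the paper's specific mapping for parallax-barrier displays.

```python
import numpy as np

# Hypothetical holoscopic image: an H x W grid of micro-lenses, each covering
# an m x m block of sensor pixels (sizes are illustrative).
m, H, W = 4, 6, 8
rng = np.random.default_rng(1)
integral_image = rng.random((H * m, W * m))

def extract_view(img, k, l, m):
    """Viewpoint (k, l): take pixel (k, l) from under every micro-lens."""
    return img[k::m, l::m]

# The m*m views are each an H x W image seen from a slightly different angle.
views = [extract_view(integral_image, k, l, m)
         for k in range(m) for l in range(m)]
```

Remapping these views to the sub-pixel layout of a parallax-barrier panel is the display-side counterpart of this extraction.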