59 research outputs found

    Deep DIH : Statistically Inferred Reconstruction of Digital In-Line Holography by Deep Learning

    Full text link
    Digital in-line holography is commonly used to reconstruct 3D images from 2D holograms of microscopic objects. One of the technical challenges in the signal-processing stage is removing the twin image caused by the phase-conjugate wavefront in the recorded holograms. Twin-image removal is typically formulated as a non-linear inverse problem due to the irreversible scattering process that generates the hologram. Recently, end-to-end deep-learning methods have been used to reconstruct the object wavefront (as a surrogate for the 3D structure of the object) directly from a single-shot in-line digital hologram. However, massive data pairs are required to train deep-learning models to acceptable reconstruction precision. In contrast to typical image-processing problems, well-curated datasets for in-line digital holography do not exist. Moreover, the trained model is highly influenced by the morphological properties of the object and hence can vary across applications, making data collection prohibitively cumbersome in practice and a major hindrance to using deep learning for digital holography. In this paper, we propose a novel autoencoder-based deep-learning architecture for single-shot hologram reconstruction based solely on the current sample, without the need for massive datasets to train the model. Simulation results demonstrate the superior performance of the proposed method compared to the state-of-the-art single-shot compressive digital in-line hologram reconstruction method.
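    The twin-image artifact described above can be reproduced in a few lines: form an in-line hologram of a simple object by free-space propagation, then naively back-propagate the square root of the recorded intensity. The sketch below uses the standard angular spectrum method with illustrative parameters (wavelength, pixel pitch, and distance are assumptions, not values from the paper).

```python
import numpy as np

def angular_spectrum(field, wavelength, dz, dx):
    """Propagate a complex field by distance dz via the angular spectrum method."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=dx)
    FX, FY = np.meshgrid(fx, fx)
    arg = 1.0 / wavelength**2 - FX**2 - FY**2
    kz = 2 * np.pi * np.sqrt(np.maximum(arg, 0.0))  # evanescent components dropped
    H = np.exp(1j * kz * dz)
    return np.fft.ifft2(np.fft.fft2(field) * H)

# Illustrative parameters: 0.5 um light, 5 um pixels, 1 mm propagation distance.
wl, dx, dz, n = 0.5e-6, 5e-6, 1e-3, 256
obj = np.ones((n, n), dtype=complex)
obj[n//2-2:n//2+2, n//2-2:n//2+2] = 0.2          # small absorbing object
holo = np.abs(angular_spectrum(obj, wl, dz, dx))**2  # recorded in-line hologram
# Naive back-propagation of sqrt(intensity) recovers the object superposed with
# its phase-conjugate twin image -- the artifact the paper's autoencoder removes.
recon = angular_spectrum(np.sqrt(holo).astype(complex), wl, -dz, dx)
```

    Because the hologram records only intensity, the phase information lost at the sensor reappears as the conjugate (twin) term in `recon`; the paper's contribution is suppressing it without a large training set.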

    Volumetric performance capture from minimal camera viewpoints

    Get PDF
    We present a convolutional autoencoder that enables high-fidelity volumetric reconstructions of human performance to be captured from multi-view video comprising only a small set of camera views. Our method yields end-to-end reconstruction error similar to that of a probabilistic visual hull computed using significantly more (double or more) viewpoints. We use a deep prior implicitly learned by the autoencoder, trained over a dataset of view-ablated multi-view video footage of a wide range of subjects and actions. This opens up the possibility of high-end volumetric performance capture in on-set and prosumer scenarios where time or cost prohibit a high witness-camera count.
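    The baseline the paper compares against, the visual hull, is the intersection of the back-projected silhouettes from each camera. A toy orthographic two-view voxel carving (deliberately simplified; the paper uses a probabilistic visual hull from calibrated perspective views plus the learned autoencoder prior) looks like this:

```python
import numpy as np

n = 32
sil_front = np.zeros((n, n), dtype=bool)  # viewed along z: mask over (x, y)
sil_side = np.zeros((n, n), dtype=bool)   # viewed along x: mask over (y, z)
sil_front[8:24, 8:24] = True
sil_side[8:24, 12:20] = True

# A voxel survives only if it projects inside every silhouette.
vol = np.ones((n, n, n), dtype=bool)      # axes: (x, y, z)
vol &= sil_front[:, :, None]              # carve with the front view
vol &= sil_side[None, :, :]               # carve with the side view
```

    With only two views the carved volume grossly over-estimates the true shape, which is why many witness cameras are normally needed; the paper's learned prior recovers comparable fidelity from far fewer views.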

    From Hours to Seconds: Towards 100x Faster Quantitative Phase Imaging via Differentiable Microscopy

    Full text link
    With applications ranging from metabolomics to histopathology, quantitative phase microscopy (QPM) is a powerful label-free imaging modality. Despite significant advances in fast multiplexed imaging sensors and deep-learning-based inverse solvers, the throughput of QPM is currently limited by the speed of electronic hardware. To improve throughput further, we propose acquiring images in a compressed form so that more information can be transferred past the existing electronic-hardware bottleneck. To this end, we present a learnable optical compression-decompression framework that learns content-specific features. The proposed differentiable quantitative phase microscopy (∂μ) first uses learnable optical feature extractors as image compressors. The intensity representation produced by these networks is then captured by the imaging sensor. Finally, a reconstruction network running on electronic hardware decompresses the QPM images. In numerical experiments, the proposed system achieves 64× compression while maintaining an SSIM of ~0.90 and a PSNR of ~30 dB on cells. These results open up a new pathway toward end-to-end optimized (i.e., optics and electronics) compact QPM systems that may provide unprecedented throughput improvements.
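    To make the reported "~30 dB PSNR" figure concrete: PSNR is defined from the mean squared error against the image's peak value, so on a unit-range image 30 dB corresponds to an RMS error of about 0.03. A quick numerical check (synthetic data, not from the paper):

```python
import numpy as np

def psnr(ref, est, peak=1.0):
    """Peak signal-to-noise ratio in dB between two images in [0, peak]."""
    mse = np.mean((ref - est) ** 2)
    return 10 * np.log10(peak ** 2 / mse)

# Gaussian noise with std 10**(-30/20) ~= 0.0316 on a unit-range image
# should land near 30 dB, matching the scale of the paper's reported PSNR.
rng = np.random.default_rng(0)
ref = rng.random((64, 64))
est = ref + rng.normal(0.0, 10 ** (-30 / 20), ref.shape)
```
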

    Restoring Application Traffic of Latency-Sensitive Networked Systems using Adversarial Autoencoders

    Get PDF
    The Internet of Things (IoT), coupled with the edge-computing paradigm, is enabling several pervasive networked applications with stringent real-time requirements, such as telemedicine and haptic telecommunications. Recent advances in network virtualization and artificial intelligence are helping solve network latency and capacity problems by learning from several states of the network stack. Despite such advances, however, a network architecture able to meet the demands of next-generation networked applications with stringent real-time requirements still faces untackled challenges. In this paper, we argue that using only network (or transport) layer information to predict traffic evolution and other network states may be insufficient, and that a more holistic approach considering predictions of application-layer states is needed to repair the inefficiencies of the TCP/IP architecture. Based on this intuition, we present the design and implementation of Reparo. At its core, our solution detects a packet loss and restores the lost traffic using a Hidden Markov Model (HMM) empowered with adversarial autoencoders. In our evaluation, we considered a telemedicine use case, specifically a telepathology session in which a microscope is controlled remotely in real time to assess histological imagery. Our results confirm that the use of adversarial autoencoders enhances the accuracy of the prediction method, satisfying our telemedicine application’s requirements with a notable improvement in throughput and latency perceived by the user.
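    The HMM component can be sketched with the standard forward (filtering) recursion: maintain a belief over hidden application states from observed packets, then propagate that belief through the transition matrix to stand in for a lost packet. The two-state model below is a toy illustration; the matrices are invented, and Reparo additionally couples the HMM with adversarial autoencoders.

```python
import numpy as np

A = np.array([[0.9, 0.1],    # hidden-state transition probabilities (illustrative)
              [0.2, 0.8]])
B = np.array([[0.8, 0.2],    # per-state emission probabilities (illustrative)
              [0.3, 0.7]])
pi = np.array([0.5, 0.5])    # initial state distribution

def forward(obs):
    """Forward algorithm with per-step normalization: filtered state belief."""
    alpha = pi * B[:, obs[0]]
    alpha /= alpha.sum()
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
        alpha /= alpha.sum()
    return alpha

belief = forward([0, 0, 1])  # belief after the packets received so far
predicted = belief @ A       # one-step prediction bridging a lost packet
```

    The predicted distribution selects which application-layer content to synthesize for the missing packet; in Reparo that synthesis step is handled by the adversarial autoencoders.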

    Improvements in Digital Holographic Microscopy

    Get PDF
    This Ph.D. dissertation develops a series of innovative computational methods for improving digital holographic microscopy (DHM). DHM systems are widely used in quantitative phase imaging for studying micrometer-size biological and non-biological samples. As with any imaging technique, DHM systems have limitations that reduce their applicability. Current limitations in DHM systems are: i) the number of holograms (more than three) required in slightly off-axis DHM systems to reconstruct the object phase information without applying complex computational algorithms; ii) the lack of an automatic and robust algorithm to compensate for the interference angle and reconstruct the object phase information without phase distortions in off-axis DHM systems operating in telecentric and image-plane conditions; iii) the need for an automatic algorithm to simultaneously compensate for the interference angle and numerically focus out-of-focus holograms when reconstructing the object phase information without phase distortions in off-axis DHM systems operating in the telecentric regime; iv) the inability to reconstruct phase images without phase distortions at video-rate speed in off-axis DHM operating in the telecentric regime and image-plane conditions; v) the lack of an open-source library covering any DHM optical configuration; and, finally, vi) the tradeoff between speckle contrast and spatial resolution in current computational strategies to reduce speckle contrast. This dissertation aims to overcome, or at least reduce, the six limitations listed above. Each chapter presents and discusses a novel computational method, from theoretical and experimental points of view, addressing one of these limitations.
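    The interference-angle compensation mentioned in limitations ii) and iii) amounts to removing a linear phase ramp (the off-axis carrier) from the reconstructed field. A common automatic approach, sketched here on synthetic data with made-up parameters, is to locate the carrier as the peak of the field's Fourier spectrum and multiply by the conjugate ramp:

```python
import numpy as np

n = 128
x = np.arange(n)
X, Y = np.meshgrid(x, x)
kx, ky = 5, -3                                    # carrier frequency (illustrative)
tilt = np.exp(2j * np.pi * (kx * X + ky * Y) / n)  # off-axis linear phase ramp
obj_phase = np.exp(1j * 0.3 * np.sin(2 * np.pi * X / n))  # weak sample phase
field = obj_phase * tilt                           # field distorted by the tilt

# Estimate the carrier as the dominant spatial frequency of the field.
spec = np.fft.fft2(field)
iy, ix = np.unravel_index(np.argmax(np.abs(spec)), spec.shape)
fx = ix if ix <= n // 2 else ix - n                # unwrap FFT bin to signed freq
fy = iy if iy <= n // 2 else iy - n
comp = field * np.exp(-2j * np.pi * (fx * X + fy * Y) / n)  # tilt-compensated field
```

    This peak-search strategy works when the sample phase is weak enough that the carrier dominates the spectrum; the dissertation's methods target the harder cases (robustness, telecentric/image-plane conditions, video-rate operation).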

    All-optical image denoising using a diffractive visual processor

    Full text link
    Image denoising, one of the essential inverse problems, aims to remove noise and artifacts from input images. In general, digital image-denoising algorithms executed on computers incur latency due to the several iterations implemented on, e.g., graphics processing units (GPUs). While deep-learning-enabled methods can operate non-iteratively, they also introduce latency and impose a significant computational burden, leading to increased power consumption. Here, we introduce an analog diffractive image denoiser to all-optically and non-iteratively clean various forms of noise and artifacts from input images, implemented at the speed of light propagation within a thin diffractive visual processor. This all-optical image denoiser comprises passive transmissive layers optimized using deep learning to physically scatter the optical modes that represent various noise features, causing them to miss the output image field-of-view (FoV) while retaining the object features of interest. Our results show that these diffractive denoisers can efficiently remove salt-and-pepper noise and image-rendering-related spatial artifacts from input phase or intensity images while achieving an output power efficiency of ~30-40%. We experimentally demonstrated the effectiveness of this analog denoiser architecture using a 3D-printed diffractive visual processor operating in the terahertz spectrum. Owing to their speed, power efficiency, and minimal computational overhead, all-optical diffractive denoisers can be transformative for various image display and projection systems, including, e.g., holographic displays.
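    For context on what the optical processor replaces: the standard digital baseline for salt-and-pepper noise is a median filter, which costs computation and latency per frame, whereas the diffractive denoiser performs the cleanup passively during light propagation. A minimal digital baseline on synthetic data (noise level and image chosen for illustration only):

```python
import numpy as np

# Synthetic flat image corrupted with ~5% salt and ~5% pepper noise.
rng = np.random.default_rng(1)
img = np.full((32, 32), 0.5)
noisy = img.copy()
mask = rng.random(img.shape)
noisy[mask < 0.05] = 0.0          # pepper: pixels forced to black
noisy[mask > 0.95] = 1.0          # salt: pixels forced to white

# Classic 3x3 median filter: replace each pixel with its neighborhood median.
pad = np.pad(noisy, 1, mode='edge')
windows = np.lib.stride_tricks.sliding_window_view(pad, (3, 3))
denoised = np.median(windows, axis=(-2, -1))
```

    The median filter suppresses isolated outlier pixels because the corrupted values land at the extremes of each sorted 3x3 neighborhood; the paper achieves a comparable cleanup all-optically, with zero digital operations at inference time.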