145 research outputs found
Deep learning approach to scalable imaging through scattering media
We propose a deep learning technique to exploit "deep speckle correlations". Our work paves the way to a highly scalable deep learning approach for imaging through scattering media.
Deep speckle correlation: a deep learning approach toward scalable imaging through scattering media
Imaging through scattering is an important yet challenging problem. Tremendous progress has been made by exploiting the deterministic input-output "transmission matrix" for a fixed medium. However, this "one-to-one" mapping is highly susceptible to speckle decorrelations: small perturbations to the scattering medium lead to model errors and severe degradation of the imaging performance. Our goal here is to develop a new framework that is highly scalable to both medium perturbations and measurement requirements. To do so, we propose a statistical "one-to-all" deep learning (DL) technique that encapsulates a wide range of statistical variations for the model to be resilient to speckle decorrelations. Specifically, we develop a convolutional neural network (CNN) that is able to learn the statistical information contained in the speckle intensity patterns captured on a set of diffusers having the same macroscopic parameter. We then show for the first time, to the best of our knowledge, that the trained CNN is able to generalize and make high-quality object predictions through an entirely different set of diffusers of the same class. Our work paves the way to a highly scalable DL approach for imaging through scattering media.
National Science Foundation (NSF) (1711156); Directorate for Engineering (ENG).
Illumination coding meets uncertainty learning: toward reliable AI-augmented phase imaging
We propose a physics-assisted deep learning (DL) framework for large space-bandwidth product (SBP) phase imaging. We design an asymmetric coded illumination scheme to encode high-resolution phase information across a wide field-of-view. We then develop a matching DL algorithm to provide large-SBP phase estimation. We show that this illumination coding scheme is highly scalable in achieving flexible resolution, and robust to experimental variations. We demonstrate this technique on both static and dynamic biological samples, and show that it can reliably achieve 5X resolution enhancement across 4X FOVs using only five multiplexed measurements, a more than 10X data reduction over the state-of-the-art. Typical DL algorithms tend to provide over-confident predictions, whose errors are only discovered in hindsight. We develop an uncertainty learning framework to overcome this limitation and provide a predictive assessment of the reliability of the DL prediction. We show that the predicted uncertainty maps can be used as a surrogate for the true error. We validate the robustness of our technique by analyzing the model uncertainty. We quantify the effect of noise, model errors, incomplete training data, and "out-of-distribution" testing data by assessing the data uncertainty. We further demonstrate that the predicted credibility maps allow identifying spatially and temporally rare biological events. Our technique enables scalable AI-augmented large-SBP phase imaging with dependable predictions.
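Uncertainty maps of the kind described above are commonly obtained by having the network predict a per-pixel log-variance alongside the phase and training with a Gaussian negative log-likelihood. The sketch below illustrates that standard loss in NumPy; the exact loss form used in the paper may differ, so treat this as an illustrative assumption.

```python
import numpy as np

def heteroscedastic_loss(pred, log_var, target):
    """Per-pixel Gaussian negative log-likelihood.

    A large predicted log-variance down-weights the squared residual but is
    itself penalized, so the network is rewarded for flagging exactly those
    pixels where its prediction is unreliable.
    """
    sq_err = (pred - target) ** 2
    return np.mean(0.5 * np.exp(-log_var) * sq_err + 0.5 * log_var)
```

At test time, the predicted log-variance map plays the role of the "surrogate for the true error" mentioned in the abstract.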
Deep learning approach to Fourier ptychographic microscopy
Convolutional neural networks (CNNs) have gained tremendous success in solving complex inverse problems. The aim of this work is to develop a novel CNN framework to reconstruct video sequences of dynamic live cells captured using a computational microscopy technique, Fourier ptychographic microscopy (FPM). The unique feature of the FPM is its capability to reconstruct images with both wide field-of-view (FOV) and high resolution, i.e. a large space-bandwidth-product (SBP), by taking a series of low resolution intensity images. For live cell imaging, a single FPM frame contains thousands of cell samples with different morphological features. Our idea is to fully exploit the statistical information provided by these large spatial ensembles so as to make predictions in a sequential measurement, without using any additional temporal dataset. Specifically, we show that it is possible to reconstruct high-SBP dynamic cell videos by a CNN trained only on the first FPM dataset captured at the beginning of a time-series experiment. Our CNN approach reconstructs a 12800×10800 pixel phase image in only ~25 seconds, a 50× speedup compared to the model-based FPM algorithm. In addition, the CNN further reduces the required number of images in each time frame by ~6×. Overall, this significantly improves the imaging throughput by reducing both the acquisition and computational times. The proposed CNN is based on the conditional generative adversarial network (cGAN) framework. We further propose a mixed loss function that combines the standard image domain loss and a weighted Fourier domain loss, which leads to improved reconstruction of the high frequency information. Additionally, we also exploit transfer learning so that our pre-trained CNN can be further optimized to image other cell types.
Our technique demonstrates a promising deep learning approach to continuously monitor large live-cell populations over an extended time and gather useful spatial and temporal information with sub-cellular resolution.
We would like to thank NVIDIA Corporation for supporting us with the GeForce Titan Xp through the GPU Grant Program.
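The mixed loss described above, a standard image-domain term plus a weighted Fourier-domain term, can be sketched as follows. The choice of L1 distances and the weight `alpha` are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def mixed_loss(pred, target, alpha=0.1):
    """Image-domain L1 plus a weighted Fourier-domain L1.

    Penalizing the spectrum directly pushes the reconstruction to match
    high-frequency content that a plain image-domain loss tends to smooth out.
    """
    image_term = np.mean(np.abs(pred - target))
    fourier_term = np.mean(np.abs(np.fft.fft2(pred) - np.fft.fft2(target)))
    return image_term + alpha * fourier_term
```

In the cGAN setting, a term of this shape would be added to the adversarial loss when training the generator.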
Computational miniature mesoscope for large-scale 3D fluorescence imaging
Fluorescence imaging is indispensable to biology and neuroscience. The need for large-scale imaging in freely behaving animals has further driven the development of miniaturized microscopes (miniscopes). However, conventional microscopes and miniscopes are inherently constrained by their limited space-bandwidth-product, shallow depth-of-field, and inability to resolve 3D distributed emitters such as neurons. In this thesis, I present a Computational Miniature Mesoscope (CM2) that leverages two computation frameworks to overcome these bottlenecks and enable single-shot 3D imaging across a wide imaging field-of-view (FOV) of 7~8 mm and an extended depth-of-field (DOF) of 0.8~2.5 mm with high lateral (7 um) and axial (25 um) resolution.
The CM2 is a novel fluorescence imaging device that achieves large-scale illumination and single-shot 3D imaging on a compact platform. This expanded imaging capability is enabled by computational imaging that jointly designs optics and algorithms. In this thesis, I present two versions of CM2 platforms and two 3D reconstruction algorithms. In addition, pilot studies of in vivo imaging experiments using a wearable CM2 prototype are conducted to demonstrate the CM2 platform's potential applications in large-scale neural imaging.
First, I present the CM2 V1 platform and a model-based 3D reconstruction algorithm. The CM2 V1 system has a compact lightweight design that integrates a microlens array (MLA) for 3D imaging and an LED array for excitation on a single compact platform. The model-based 3D deconvolution algorithm is developed to perform volumetric reconstructions from single-shot CM2 measurements, achieving 7 um lateral and 200 um axial resolution across a wide 8 mm FOV and 2.5 mm DOF in clear volumes. This mesoscale 3D imaging capability of the CM2 is validated on various fluorescent samples, including resolution targets, fibers, and particle phantoms in different geometries. I further quantify the effects of bulk scattering and background fluorescence in phantom experiments.
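As a rough illustration of model-based deconvolution, the sketch below runs gradient descent on a least-squares objective with a nonnegativity constraint, in 2D for brevity. It is a generic stand-in under stated assumptions, not the thesis's actual 3D algorithm or regularization.

```python
import numpy as np

def deconvolve(meas, psf, n_iter=50, step=0.5):
    """Gradient descent on ||A x - meas||^2, where A is convolution with psf.

    FFTs implement the (circular) convolution and its adjoint; clipping to
    zero enforces nonnegativity of the fluorescence distribution.
    """
    otf = np.fft.fft2(np.fft.ifftshift(psf))  # psf assumed centered
    est = np.zeros_like(meas)
    for _ in range(n_iter):
        resid = np.real(np.fft.ifft2(otf * np.fft.fft2(est))) - meas
        grad = np.real(np.fft.ifft2(np.conj(otf) * np.fft.fft2(resid)))
        est = np.clip(est - step * grad, 0.0, None)
    return est
```

The volumetric version replaces the 2D FFTs with per-depth-plane convolutions summed into a single 2D measurement, which is what makes single-shot 3D recovery an inverse problem.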
Next, I investigate and improve the CM2 V1 system in both the hardware and the reconstruction algorithm. Specifically, the low axial resolution (200 um), insufficient excitation efficiency (24%), and heavy computational cost of the model-based 3D deconvolution hinder CM2 V1's biomedical applications. I present and demonstrate an upgraded CM2 V2 platform augmented with a deep learning-based 3D reconstruction framework, termed CM2Net, to address the above limitations. Specifically, the CM2 V2 design features an array of freeform illuminators and hybrid emission filters to achieve 3 times higher excitation efficiency (80%) and 5 times better suppression of background fluorescence compared to the V1 design. The multi-stage CM2Net combines ideas from view demixing, light-field refocusing, and view synthesis to account for the CM2's multi-view geometry and achieve reliable 3D reconstruction with high axial resolution.
Finally, I show that the CM2Net, trained purely on simulated data, can generalize to experimental measurements. A key element of CM2Net's generalizability is a 3D Linear Shift Variant (LSV) model of the CM2 that simulates realistic measurements by accurately incorporating field-varying aberrations. I experimentally validate that the CM2 V2 platform and CM2Net achieve faster, artifact-free 3D reconstructions across a 7 mm wide FOV and 800 um DOF with 25 um axial and 7 um lateral resolution in phantom experiments.
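Linear shift-variant forward models of this kind are often implemented as a mask-weighted sum of shift-invariant convolutions, one per field region where the PSF was characterized. A minimal 2D sketch follows; the actual CM2 model is 3D and calibrated from measured PSFs, so the structure and names here are assumptions.

```python
import numpy as np

def lsv_forward(obj, psfs, masks):
    """Shift-variant imaging: sum_k mask_k * (psf_k convolved with obj).

    Each mask selects the field region where the corresponding locally
    shift-invariant PSF is valid, so field-varying aberrations are captured
    without storing a separate PSF for every pixel.
    """
    meas = np.zeros_like(obj)
    for psf, mask in zip(psfs, masks):
        otf = np.fft.fft2(np.fft.ifftshift(psf))  # psf assumed centered
        meas += mask * np.real(np.fft.ifft2(otf * np.fft.fft2(obj)))
    return meas
```

Embedding a simulator like this in the training pipeline is what lets a network trained purely on simulated data transfer to experimental measurements.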
Compared to the CM2 V1 with model-based deconvolution, the CM2Net achieves a 10 times better axial resolution at 1400 times faster reconstruction speed without sacrificing the imaging FOV or lateral resolution. The new system design of CM2 V2 with the LSV-embedded CM2Net provides an intriguing solution to large-scale fluorescence imagers with a small form factor.
I envision that this low-cost and compact computational imaging system, built from off-the-shelf and 3D-printed components, can be adopted in various biomedical and neuroscience labs. The CM2 systems and the developed computational tools can have impact on a wide range of large-scale 3D fluorescence imaging applications.
A deep-learning approach for high-speed Fourier ptychographic microscopy
We demonstrate a new convolutional neural network architecture to perform Fourier ptychographic microscopy (FPM) reconstruction, which achieves high-resolution phase recovery with considerably less data than standard FPM.
https://www.researchgate.net/profile/Thanh_Nguyen68/publication/325829575_A_deep-learning_approach_for_high-speed_Fourier_ptychographic_microscopy/links/5b2beec20f7e9b0df5ba4872/A-deep-learning-approach-for-high-speed-Fourier-ptychographic-microscopy.pdf
Scalable and reliable deep learning for computational microscopy in complex media
Emerging deep learning based computational microscopy techniques promise novel imaging capabilities beyond traditional techniques. In this talk, I will discuss two microscopy applications.
First, high space-bandwidth product microscopy typically requires a large number of measurements. I will present a novel physics-assisted deep learning (DL) framework for large space-bandwidth product (SBP) phase imaging [1], enabling a significant reduction of the required measurements and opening up real-time applications. In this technique, we design asymmetric coded illumination patterns to encode high-resolution phase information across a wide field-of-view. We then develop a matching DL algorithm to provide large-SBP phase estimation. We demonstrate this technique on both static and dynamic biological samples, and show that it can reliably achieve 5× resolution enhancement across 4× FOVs using only five multiplexed measurements. In addition, we develop an uncertainty learning framework to provide a predictive assessment of the reliability of the DL prediction. We show that the predicted uncertainty maps can be used as a surrogate for the true error. We validate the robustness of our technique by analyzing the model uncertainty. We quantify the effect of noise, model errors, incomplete training data, and "out-of-distribution" testing data by assessing the data uncertainty. We further demonstrate that the predicted credibility maps allow identifying spatially and temporally rare biological events. Our technique enables scalable DL-augmented large-SBP phase imaging with reliable predictions and uncertainty quantifications.
Second, I will turn to the pervasive problem of imaging in scattering media. I will discuss a new deep learning-based technique that is highly generalizable and resilient to statistical variations of the scattering media [2]. We develop a statistical "one-to-all" deep learning technique that encapsulates a wide range of statistical variations for the model to be resilient to speckle decorrelations. Specifically, we develop a convolutional neural network (CNN) that is able to learn the statistical information contained in the speckle intensity patterns captured on a set of diffusers having the same macroscopic parameter. We then show that the trained CNN is able to generalize and make high-quality object predictions through an entirely different set of diffusers of the same class. Our work paves the way to a highly scalable deep learning approach for imaging through scattering media.
REFERENCES
[1] Xue, Y., Cheng, S., Li, Y., and Tian, L., "Illumination coding meets uncertainty learning: toward reliable AI-augmented phase imaging," arXiv:1901.02038 (2019).
[2] Li, Y., Xue, Y., and Tian, L., "Deep speckle correlation: a deep learning approach toward scalable imaging through scattering media," Optica 5, 1181 (2018).
- …