71 research outputs found
Deep learning approach to Fourier ptychographic microscopy
Convolutional neural networks (CNNs) have gained tremendous success in solving complex inverse problems. The aim of this work is to develop a novel CNN framework to reconstruct video sequences of dynamic live cells captured using a computational microscopy technique, Fourier ptychographic microscopy (FPM). The unique feature of FPM is its capability to reconstruct images with both wide field-of-view (FOV) and high resolution, i.e. a large space-bandwidth product (SBP), by taking a series of low-resolution intensity images. For live-cell imaging, a single FPM frame contains thousands of cell samples with different morphological features. Our idea is to fully exploit the statistical information provided by these large spatial ensembles so as to make predictions in a sequential measurement, without using any additional temporal dataset. Specifically, we show that it is possible to reconstruct high-SBP dynamic cell videos with a CNN trained only on the first FPM dataset captured at the beginning of a time-series experiment. Our CNN approach reconstructs a 12800×10800-pixel phase image in only ∼25 seconds, a 50× speedup compared to the model-based FPM algorithm. In addition, the CNN further reduces the required number of images in each time frame by ∼6×. Overall, this significantly improves the imaging throughput by reducing both the acquisition and computational times. The proposed CNN is based on the conditional generative adversarial network (cGAN) framework. We further propose a mixed loss function that combines the standard image-domain loss and a weighted Fourier-domain loss, which leads to improved reconstruction of high-frequency information. Additionally, we exploit transfer learning so that our pre-trained CNN can be further optimized to image other cell types.
Our technique demonstrates a promising deep learning approach to continuously monitor large live-cell populations over an extended time and gather useful spatial and temporal information with sub-cellular resolution. We would like to thank NVIDIA Corporation for supporting us with the GeForce Titan Xp through the GPU Grant Program.
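The mixed loss described in this abstract combines a standard image-domain loss with a weighted Fourier-domain loss so that high-frequency content is penalised explicitly. A minimal NumPy sketch of the idea follows; the choice of L1 norms and the `fourier_weight` value are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

def mixed_loss(pred, target, fourier_weight=0.1):
    """Sketch of a mixed image/Fourier-domain loss (weight is illustrative)."""
    # Standard image-domain term: mean absolute error between the images.
    image_loss = np.mean(np.abs(pred - target))
    # Fourier-domain term: comparing the spectra penalises errors in high
    # spatial frequencies, which plain image-domain losses tend to blur out.
    fourier_loss = np.mean(np.abs(np.fft.fft2(pred) - np.fft.fft2(target)))
    return image_loss + fourier_weight * fourier_loss
```

In a real training loop this would be written with a differentiable framework's FFT rather than NumPy, but the structure of the objective is the same.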
Deep learning in computational microscopy
We propose to use deep convolutional neural networks (DCNNs) to perform 2D and 3D computational imaging. Specifically, we investigate three different applications. We first address the 3D inverse scattering problem by learning from a large number of paired training targets and speckle patterns. We also demonstrate a new DCNN architecture to perform Fourier ptychographic microscopy (FPM) reconstruction, which achieves high-resolution phase recovery with considerably less data than standard FPM. Finally, we employ DCNN models that can predict focused 2D fluorescence microscopy images from blurred images captured at overfocused or underfocused planes.
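For the third application, training data of (blurred, focused) pairs can be generated by applying a synthetic defocus to sharp images. The Gaussian optical transfer function (OTF) below is a simplifying assumption used only for illustration; a real fluorescence microscope's defocus behaviour is more complex:

```python
import numpy as np

def defocus_blur(image, sigma):
    """Approximate defocus with a Gaussian OTF applied in the Fourier domain.
    `sigma` (in pixels) is an illustrative stand-in for the true defocus."""
    fy = np.fft.fftfreq(image.shape[0])
    fx = np.fft.fftfreq(image.shape[1])
    FX, FY = np.meshgrid(fx, fy)
    # Fourier transform of a Gaussian PSF with standard deviation `sigma`.
    otf = np.exp(-2.0 * (np.pi * sigma) ** 2 * (FX**2 + FY**2))
    return np.real(np.fft.ifft2(np.fft.fft2(image) * otf))

# Generate one (blurred, focused) training pair from a sharp image.
rng = np.random.default_rng(0)
sharp = rng.random((64, 64))
blurred = defocus_blur(sharp, sigma=3.0)
```

Because the OTF equals 1 at zero frequency, the blur preserves total intensity while suppressing fine detail, which is the property the network must learn to invert.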
Phase retrieval from 4-dimensional electron diffraction datasets
We present a computational imaging mode for large scale electron microscopy
data, which retrieves a complex wave from noisy/sparse intensity recordings
using a deep learning approach and subsequently reconstructs an image of the
specimen from the Convolutional Neural Network (CNN) predicted exit waves. We
demonstrate that an appropriate forward model in combination with open data
frameworks can be used to generate large synthetic datasets for training. By
further augmenting the data with Poisson noise corresponding to varying dose
values, we effectively eliminate overfitting. The U-Net-based architecture of
the CNN is adapted to the task at hand and performs well while maintaining a
relatively small size and fast performance. The validity of
the approach is confirmed by comparing the reconstruction to well-established
methods using simulated as well as real electron microscopy data. The proposed
method is shown to be particularly effective in the low-dose range, as
evidenced by strong suppression of noise, good spatial resolution, and
sensitivity to different atom types, enabling the simultaneous visualisation of
light and heavy elements and making different atomic species distinguishable.
Since the method acts on a very local scale and is comparatively fast, it bears
the potential to be used for near-real-time reconstruction during data
acquisition.
Comment: Accepted conference paper of IEEE ICIP 202
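The dose-dependent Poisson augmentation described above can be sketched as follows; the `dose_augment` helper and its rescaling convention are illustrative assumptions, not the authors' code:

```python
import numpy as np

def dose_augment(intensity, dose, rng):
    """Simulate recording `intensity` at a given mean electron dose
    (counts per pixel): Poisson-sample the expected counts, then rescale
    back to the original intensity range."""
    scale = dose / intensity.mean()          # counts per unit intensity
    counts = rng.poisson(intensity * scale)  # shot noise at this dose
    return counts / scale

rng = np.random.default_rng(1)
clean = np.abs(np.sin(np.linspace(0.0, 3.0, 256))).reshape(16, 16) + 0.1
low_dose = dose_augment(clean, dose=10.0, rng=rng)   # strongly noisy
high_dose = dose_augment(clean, dose=1e6, rng=rng)   # nearly noise-free
```

Drawing a fresh `dose` per training sample exposes the network to the whole noise regime, which is what suppresses overfitting to any single dose level.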
On the use of deep learning for phase recovery
Phase recovery (PR) refers to calculating the phase of a light field from its
intensity measurements. In applications ranging from quantitative phase imaging
and coherent diffraction imaging to adaptive optics, PR is essential for
reconstructing the refractive index distribution or topography of an object and
for correcting the aberrations of an imaging system. In recent years, deep learning
(DL), often implemented through deep neural networks, has provided
unprecedented support for computational imaging, leading to more efficient
solutions for various PR problems. In this review, we first briefly introduce
conventional methods for PR. Then, we review how DL supports PR at three
stages: pre-processing, in-processing, and post-processing. We also review how
DL is used in phase image processing.
Finally, we summarize the work on DL for PR and offer an outlook on how to
better use DL to improve the reliability and efficiency of PR. Furthermore, we
present a live-updating resource (https://github.com/kqwang/phase-recovery) for
readers to learn more about PR.
Comment: 82 pages, 32 figures
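Among the conventional PR methods such a review covers, a classic baseline is the Gerchberg-Saxton algorithm, which alternates between amplitude constraints measured in the object and Fourier planes. A minimal NumPy sketch follows; the demo field and iteration count are arbitrary illustrative choices:

```python
import numpy as np

def gerchberg_saxton(obj_amp, fourier_amp, iters=100):
    """Recover a phase consistent with two measured amplitudes by
    alternating projections between the object and Fourier planes."""
    phase = np.zeros_like(obj_amp, dtype=float)
    for _ in range(iters):
        field = obj_amp * np.exp(1j * phase)              # object-plane constraint
        spectrum = np.fft.fft2(field)
        # Keep the computed Fourier phase, enforce the measured amplitude.
        spectrum = fourier_amp * np.exp(1j * np.angle(spectrum))
        # Back-propagate and keep only the phase for the next iteration.
        phase = np.angle(np.fft.ifft2(spectrum))
    return phase

# Demo: recover a smooth quadratic phase from the two amplitudes.
x = np.linspace(-1.0, 1.0, 32)
true_phase = np.add.outer(x**2, x**2)
obj_amp = np.ones((32, 32))
fourier_amp = np.abs(np.fft.fft2(obj_amp * np.exp(1j * true_phase)))
recovered = gerchberg_saxton(obj_amp, fourier_amp, iters=50)
```

The Fourier-amplitude mismatch is non-increasing over iterations, which is the error-reduction property that the DL approaches surveyed in the review aim to improve on in speed and robustness.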