Deep learning approach to Fourier ptychographic microscopy
Convolutional neural networks (CNNs) have achieved tremendous success in solving complex inverse problems. The aim of this work is to develop a novel CNN framework to reconstruct video sequences of dynamic live cells captured using a computational microscopy technique, Fourier ptychographic microscopy (FPM). The unique feature of FPM is its capability to reconstruct images with both a wide field-of-view (FOV) and high resolution, i.e. a large space-bandwidth product (SBP), from a series of low-resolution intensity images. For live-cell imaging, a single FPM frame contains thousands of cell samples with different morphological features. Our idea is to fully exploit the statistical information provided by this large spatial ensemble so as to make predictions in a sequential measurement, without using any additional temporal dataset. Specifically, we show that it is possible to reconstruct high-SBP dynamic cell videos with a CNN trained only on the first FPM dataset captured at the beginning of a time-series experiment. Our CNN approach reconstructs a 12800×10800-pixel phase image in only ∼25 seconds, a 50× speedup over the model-based FPM algorithm. In addition, the CNN further reduces the required number of images in each time frame by ∼6×. Overall, this significantly improves the imaging throughput by reducing both the acquisition and computation times. The proposed CNN is based on the conditional generative adversarial network (cGAN) framework. We further propose a mixed loss function that combines the standard image-domain loss and a weighted Fourier-domain loss, which leads to improved reconstruction of the high-frequency information. Additionally, we exploit transfer learning so that our pre-trained CNN can be further optimized to image other cell types.
Our technique demonstrates a promising deep learning approach to continuously monitor large live-cell populations over an extended time and gather useful spatial and temporal information with sub-cellular resolution. We would like to thank NVIDIA Corporation for supporting us with the GeForce Titan Xp through the GPU Grant Program.
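The mixed loss function described in the abstract combines a standard image-domain loss with a weighted Fourier-domain loss. A minimal NumPy sketch of that idea follows; the L1 penalty in both domains and the `fourier_weight` value are illustrative assumptions, not the paper's published choices:

```python
import numpy as np

def mixed_loss(pred, target, fourier_weight=0.1):
    """Combine a pixel-wise image-domain loss with a weighted
    Fourier-domain loss, so errors in high-frequency content are
    penalized explicitly. `fourier_weight` is a hypothetical
    hyperparameter chosen here for illustration."""
    # Image-domain term: mean absolute error between phase images.
    image_term = np.mean(np.abs(pred - target))
    # Fourier-domain term: mean absolute error between 2D spectra.
    pred_f = np.fft.fft2(pred)
    target_f = np.fft.fft2(target)
    fourier_term = np.mean(np.abs(pred_f - target_f))
    return image_term + fourier_weight * fourier_term
```

In a cGAN training loop this term would be added to the adversarial loss; the Fourier term up-weights errors that a pure pixel-wise loss tends to blur away.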
High-resolution ab initio three-dimensional X-ray diffraction microscopy
Coherent X-ray diffraction microscopy is a method of imaging non-periodic
isolated objects at resolutions only limited, in principle, by the largest
scattering angles recorded. We demonstrate X-ray diffraction imaging with high
resolution in all three dimensions, as determined by a quantitative analysis of
the reconstructed volume images. These images are retrieved from the 3D
diffraction data using no a priori knowledge about the shape or composition of
the object, which has never before been demonstrated on a non-periodic object.
We also construct 2D images of thick objects with infinite depth of focus
(without loss of transverse spatial resolution). These methods can be used to
image biological and materials science samples at high resolution using X-ray
undulator radiation, and they establish the techniques to be used in
atomic-resolution ultrafast imaging at X-ray free-electron laser sources.
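The abstract does not name the reconstruction algorithm; image recovery from diffraction magnitudes alone is commonly illustrated with the classic error-reduction (Gerchberg-Saxton-style) scheme, which alternates between the measured Fourier magnitudes and a real-space support constraint. A minimal 2D sketch, assuming the support is known, not the authors' reconstruction code:

```python
import numpy as np

def error_reduction(magnitudes, support, n_iter=200, seed=0):
    """Iterative phase retrieval by alternating projections:
    enforce measured Fourier magnitudes, then zero the object
    outside a known real-space support. A hypothetical minimal
    sketch of the error-reduction algorithm."""
    rng = np.random.default_rng(seed)
    # Start from random phases consistent with the measured magnitudes.
    phases = np.exp(2j * np.pi * rng.random(magnitudes.shape))
    obj = np.fft.ifft2(magnitudes * phases)
    for _ in range(n_iter):
        # Fourier constraint: keep phases, replace magnitudes by data.
        spectrum = np.fft.fft2(obj)
        spectrum = magnitudes * np.exp(1j * np.angle(spectrum))
        obj = np.fft.ifft2(spectrum)
        # Real-space constraint: real, zero outside the support.
        obj = np.where(support, obj.real, 0.0)
    return obj
```

Real reconstructions typically use more robust variants (e.g. hybrid input-output) and, as in the paper, work on full 3D diffraction volumes rather than a 2D toy.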
Linear-scaling algorithm for rapid computation of inelastic transitions in the presence of multiple electron scattering
Strong multiple scattering of the probe in scanning transmission electron microscopy (STEM) means image simulations are usually required for quantitative interpretation and analysis of elemental maps produced by electron energy-loss spectroscopy (EELS). These simulations require a full quantum-mechanical treatment of multiple scattering of the electron beam, both before and after a core-level inelastic transition. Current algorithms scale quadratically, can take up to a week to run on desktop machines even for simple crystal unit cells, and do not scale well to the nanoscale heterogeneous systems that are often of interest to materials science researchers. We introduce an algorithm with linear scaling that typically results in an order-of-magnitude reduction in computation time for these calculations without introducing additional error, and discuss approximations that further improve computational scaling for larger-scale objects with modest penalties in calculation error. We demonstrate these speedups by calculating the atomic-resolution STEM-EELS map using the L-edge transition of Fe, for a nanoparticle 80 Å in diameter, in 16 hours, a calculation that would have taken at least 80 days using a conventional multislice approach.
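For context, the "conventional multislice approach" mentioned above alternates transmission of the wavefunction through thin specimen slices with Fresnel free-space propagation between them. A textbook phase-object sketch of that propagation loop (parameter names, values, and the sign convention of the propagator are illustrative assumptions, not the paper's linear-scaling code):

```python
import numpy as np

def multislice(psi, phase_slices, wavelength, dz, dx):
    """Conventional multislice propagation: at each slice, multiply
    the wavefunction by a phase-only transmission function, then
    propagate a distance dz with the Fresnel propagator applied in
    Fourier space. Units: wavelength, dz, dx in the same length unit."""
    n = psi.shape[0]
    k = np.fft.fftfreq(n, d=dx)                 # spatial frequencies
    kx, ky = np.meshgrid(k, k, indexing="ij")
    # Fresnel free-space propagator for one slice spacing dz.
    propagator = np.exp(-1j * np.pi * wavelength * dz * (kx**2 + ky**2))
    for phase in phase_slices:
        psi = psi * np.exp(1j * phase)          # transmission through slice
        psi = np.fft.ifft2(np.fft.fft2(psi) * propagator)
    return psi
```

Because the exit wave must be recomputed for every probe position, and inelastic channels multiply the work further, full STEM-EELS simulations built on this loop scale poorly; reducing that cost is exactly what the linear-scaling algorithm above targets.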