Deep learning approach to Fourier ptychographic microscopy
Convolutional neural networks (CNNs) have achieved tremendous success in solving complex inverse problems. The aim of this work is to develop a novel CNN framework to reconstruct video sequences of dynamic live cells captured using a computational microscopy technique, Fourier ptychographic microscopy (FPM). The unique feature of FPM is its capability to reconstruct images with both a wide field-of-view (FOV) and high resolution, i.e. a large space-bandwidth product (SBP), from a series of low-resolution intensity images. For live-cell imaging, a single FPM frame contains thousands of cell samples with different morphological features. Our idea is to fully exploit the statistical information provided by these large spatial ensembles so as to make predictions in a sequential measurement, without using any additional temporal dataset. Specifically, we show that it is possible to reconstruct high-SBP dynamic cell videos with a CNN trained only on the first FPM dataset captured at the beginning of a time-series experiment. Our CNN approach reconstructs a 12800×10800-pixel phase image in only ∼25 seconds, a 50× speedup compared to the model-based FPM algorithm. In addition, the CNN reduces the required number of images in each time frame by ∼6×. Overall, this significantly improves the imaging throughput by reducing both the acquisition and computational times. The proposed CNN is based on the conditional generative adversarial network (cGAN) framework. We further propose a mixed loss function that combines the standard image-domain loss and a weighted Fourier-domain loss, which leads to improved reconstruction of the high-frequency information. Additionally, we exploit transfer learning so that our pre-trained CNN can be further optimized to image other cell types. Our technique demonstrates a promising deep learning approach to continuously monitor large live-cell populations over an extended time and gather useful spatial and temporal information with sub-cellular resolution. We would like to thank NVIDIA Corporation for supporting us with the GeForce Titan Xp through the GPU Grant Program.
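For illustration, a minimal PyTorch sketch of such a mixed loss is shown below; the L1 distance in both domains and the fourier_weight parameter are assumptions, since the abstract does not specify the exact losses or weighting:

```python
import torch
import torch.nn.functional as F

def mixed_loss(pred, target, fourier_weight=0.1):
    """Combine an image-domain loss with a weighted Fourier-domain loss.

    The relative weighting (fourier_weight) and the use of L1 in both
    domains are illustrative assumptions, not the paper's exact settings.
    """
    # Standard image-domain loss on the reconstructed phase image.
    image_loss = F.l1_loss(pred, target)

    # Fourier-domain loss: compare complex spectra of prediction and target,
    # which emphasizes faithful recovery of high-frequency content.
    pred_f = torch.fft.fft2(pred)
    target_f = torch.fft.fft2(target)
    fourier_loss = (pred_f - target_f).abs().mean()

    return image_loss + fourier_weight * fourier_loss
```

In a cGAN setup, this term would be added to the generator's adversarial loss.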
A Transfer-Learning Approach for Accelerated MRI using Deep Neural Networks
Purpose: Neural networks have received recent interest for reconstruction of
undersampled MR acquisitions. Ideally network performance should be optimized
by drawing the training and testing data from the same domain. In practice,
however, large datasets comprising hundreds of subjects scanned under a common
protocol are rare. The goal of this study is to introduce a transfer-learning
approach to address the problem of data scarcity in training deep networks for
accelerated MRI.
  Methods: Neural networks were trained on thousands of samples from public
datasets of either natural images or brain MR images. The networks were then
fine-tuned using only a few tens of brain MR images in a distinct testing domain.
Domain-transferred networks were compared to networks trained directly in the
testing domain. Network performance was evaluated for varying acceleration
factors (2-10), number of training samples (0.5-4k) and number of fine-tuning
samples (0-100).
  Results: The proposed approach achieves successful domain transfer between MR
images acquired with different contrasts (T1- and T2-weighted images), and
between natural and MR images (ImageNet and T1- or T2-weighted images).
Networks obtained via transfer-learning using only tens of images in the
testing domain achieve nearly identical performance to networks trained
directly in the testing domain using thousands of images.
  Conclusion: The proposed approach might facilitate the use of neural networks
for MRI reconstruction without the need for collection of extensive imaging
datasets.
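As a rough sketch of the fine-tuning step, the following freezes the early layers of a network pre-trained in the source domain and updates only the remaining layers on the small target-domain set; the layer split (freeze_prefix), optimizer, and hyperparameters are illustrative assumptions, not the study's settings:

```python
import torch
import torch.nn.functional as F

def fine_tune(model, target_loader, lr=1e-4, epochs=50, freeze_prefix="encoder"):
    """Fine-tune a pre-trained reconstruction network on a few target-domain images.

    target_loader yields (undersampled, fully_sampled) image pairs from the
    testing domain; only a few tens of such pairs are assumed to exist.
    """
    # Freeze the early, domain-generic layers; adapt the rest.
    for name, p in model.named_parameters():
        p.requires_grad = not name.startswith(freeze_prefix)

    opt = torch.optim.Adam(
        [p for p in model.parameters() if p.requires_grad], lr=lr)

    for _ in range(epochs):
        for undersampled, fully_sampled in target_loader:
            opt.zero_grad()
            loss = F.l1_loss(model(undersampled), fully_sampled)
            loss.backward()
            opt.step()
    return model
```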
Using Machine Learning to Optimize Phase Contrast in a Low-Cost Cellphone Microscope
Cellphones equipped with high-quality cameras and powerful CPUs as well as
GPUs are widespread. This opens new prospects to use such existing
computational and imaging resources to perform medical diagnosis in developing
countries at a very low cost.
  Many relevant samples, such as biological cells or waterborne parasites, are
almost fully transparent. As they do not exhibit absorption but alter only the
light's phase, they are almost invisible in brightfield microscopy. Expensive
equipment and procedures for microscopic contrasting or sample staining are
often not available.
  By applying machine-learning techniques, such as a convolutional neural
network (CNN), it is possible to learn from a given dataset a relationship
between the samples to be examined and their optimal light-source shapes, in
order to increase, e.g., phase contrast and to enable real-time applications.
For the experimental setup, we developed a 3D-printed smartphone microscope
for less than $100 using only off-the-shelf components, such as a low-cost
video projector. The fully automated system ensures true Koehler illumination,
with an LCD as the condenser aperture and a reversed smartphone lens as the
microscope objective. We show that a varied light-source shape, produced by
the pre-trained CNN, not only improves the phase contrast but also gives the
impression of improved optical resolution without adding any special optics,
as demonstrated by measurements.
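As an illustration of the idea, a toy PyTorch network that maps a camera frame to an LCD illumination pattern might look as follows; the architecture and the 32×32 LCD resolution are assumptions for illustration, not the paper's design:

```python
import torch
import torch.nn as nn

class IlluminationNet(nn.Module):
    """Toy CNN mapping a camera frame to an LCD condenser-aperture pattern."""

    def __init__(self, lcd_size=32):
        super().__init__()
        self.lcd_size = lcd_size
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(8), nn.Flatten(),
        )
        self.head = nn.Sequential(
            nn.Linear(32 * 8 * 8, lcd_size * lcd_size),
            nn.Sigmoid(),  # per-pixel LCD transmittance in [0, 1]
        )

    def forward(self, x):
        # x: (batch, 1, H, W) grayscale camera frame of the sample.
        pattern = self.head(self.features(x))
        return pattern.view(-1, 1, self.lcd_size, self.lcd_size)
```

The predicted pattern would then be displayed on the LCD to shape the illumination for the next captured frame.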
Learning Wavefront Coding for Extended Depth of Field Imaging
Depth of field is an important property of imaging systems that strongly
affects the quality of the acquired spatial information. Extended depth of field (EDoF)
imaging is a challenging ill-posed problem and has been extensively addressed
in the literature. We propose a computational imaging approach for EDoF, where
we employ wavefront coding via a diffractive optical element (DOE) and we
achieve deblurring through a convolutional neural network. Thanks to the
end-to-end differentiable modeling of optical image formation and computational
post-processing, we jointly optimize the optical design, i.e., DOE, and the
deblurring through standard gradient descent methods. Based on the properties
of the underlying refractive lens and the desired EDoF range, we provide an
analytical expression for the search space of the DOE, which is instrumental in
the convergence of the end-to-end network. We achieve superior EDoF imaging
performance compared to the state of the art, where we demonstrate results with
minimal artifacts in various scenarios, including deep 3D scenes and broadband
imaging.
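A minimal sketch of such end-to-end optimization is given below, assuming a simplified single-wavelength pupil-plane model and a toy deblurring CNN in place of the paper's full optical model; the pupil size and kernel crop are likewise assumptions:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class EndToEndEDoF(nn.Module):
    """Sketch of joint optimization of a DOE phase profile and a deblurring CNN."""

    def __init__(self, pupil_size=64, kernel_size=31):
        super().__init__()
        # Learnable DOE phase profile: the optical design variable.
        self.doe_phase = nn.Parameter(torch.zeros(pupil_size, pupil_size))
        self.kernel_size = kernel_size
        self.deblur = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1),
        )

    def psf(self):
        # Incoherent PSF: squared magnitude of the Fourier transform of the
        # pupil function, center-cropped to a manageable kernel and normalized.
        pupil = torch.exp(1j * self.doe_phase)
        intensity = torch.fft.fftshift(torch.fft.fft2(pupil)).abs() ** 2
        c, k = intensity.shape[0] // 2, self.kernel_size
        kern = intensity[c - k // 2:c + k // 2 + 1, c - k // 2:c + k // 2 + 1]
        return (kern / kern.sum())[None, None]

    def forward(self, scene):
        # Differentiable image formation followed by CNN deblurring.
        blurred = F.conv2d(scene, self.psf(), padding=self.kernel_size // 2)
        return self.deblur(blurred)

# One gradient step updates the DOE and the CNN weights jointly.
model = EndToEndEDoF()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
scene = torch.rand(1, 1, 64, 64)
opt.zero_grad()
F.l1_loss(model(scene), scene).backward()
opt.step()
```

Because the PSF is a differentiable function of the DOE phase, gradients from the reconstruction loss flow back into the optical design, which is the core of the end-to-end approach.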
Optimized Quantification of Spin Relaxation Times in the Hybrid State
Purpose: The analysis of optimized spin ensemble trajectories for relaxometry
in the hybrid state.
  Methods: First, we constructed visual representations to elucidate the
differential equation that governs spin dynamics in the hybrid state. Subsequently,
numerical optimizations were performed to find spin ensemble trajectories that
minimize the Cramér-Rao bound for $T_1$-encoding, $T_2$-encoding, and their
weighted sum, respectively, followed by a comparison of the Cramér-Rao bounds
obtained with our optimized spin trajectories, as well as with Look-Locker and
multi-spin-echo methods. Finally, we experimentally tested our optimized spin
trajectories with in vivo scans of the human brain.
  Results: After a nonrecurring inversion segment on the southern hemisphere of
the Bloch sphere, all optimized spin trajectories pursue repetitive loops on
the northern half of the sphere in which the beginning of the first and the end
of the last loop deviate from the others. The numerical results obtained in
this work align well with intuitive insights gleaned directly from the
governing equation. Our results suggest that hybrid-state sequences outperform
traditional methods. Moreover, hybrid-state sequences that balance $T_1$- and
$T_2$-encoding still result in near-optimal signal-to-noise efficiency. Thus,
the second parameter can be encoded at virtually no extra cost.
  Conclusion: We provide insights regarding the optimal encoding processes of
spin relaxation times in order to guide the design of robust and efficient
pulse sequences. We find that joint acquisitions of $T_1$ and $T_2$ in the
hybrid state are substantially more efficient than sequential encoding
techniques.
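For reference, the Cramér-Rao bound minimized above can be written in its standard form; the notation below is generic, not taken from the paper. For an unbiased estimator of parameters $\theta = (T_1, T_2, M_0)$ from signal samples $s_n(\theta)$ corrupted by Gaussian noise of variance $\sigma^2$:

```latex
I_{jk}(\theta) = \frac{1}{\sigma^2} \sum_{n=1}^{N}
  \frac{\partial s_n}{\partial \theta_j}\,
  \frac{\partial s_n}{\partial \theta_k},
\qquad
\operatorname{Var}\!\left(\hat{\theta}_j\right)
  \ge \left( I^{-1}(\theta) \right)_{jj}.
```

Minimizing the diagonal entries of $I^{-1}$ over feasible spin-ensemble trajectories thus maximizes the attainable encoding efficiency per unit scan time.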
Fat fraction mapping using bSSFP Signal Profile Asymmetries for Robust multi-Compartment Quantification (SPARCQ)
Purpose: To develop a novel quantitative method for detection of different
tissue compartments based on bSSFP signal profile asymmetries (SPARCQ) and to
provide a validation and proof-of-concept for voxel-wise water-fat separation
and fat fraction mapping. Methods: The SPARCQ framework uses phase-cycled bSSFP
acquisitions to obtain bSSFP signal profiles. For each voxel, the profile is
decomposed into a weighted sum of simulated profiles with specific
off-resonance and relaxation time ratios. From the obtained set of weights,
voxel-wise estimations of the fractions of the different components and their
equilibrium magnetization are extracted. For the entire image volume,
component-specific quantitative maps as well as banding-artifact-free images
are generated. A SPARCQ proof-of-concept was provided for water-fat separation
and fat fraction mapping. Noise robustness was assessed using simulations. A
dedicated water-fat phantom was used to validate fat fractions estimated with
SPARCQ against gold-standard $^1$H MRS. Quantitative maps were obtained in knees
of six healthy volunteers, and SPARCQ repeatability was evaluated in
scan-rescan experiments. Results: Simulations showed that fat fraction estimations
are accurate and robust for signal-to-noise ratios above 20. Phantom
experiments showed good agreement between SPARCQ and gold-standard (GS) fat
fractions (fF(SPARCQ) = 1.02*fF(GS) + 0.00235). In volunteers, quantitative
maps and banding-artifact-free water-fat-separated images obtained with SPARCQ
demonstrated the expected contrast between fatty and non-fatty tissues. The
coefficient of repeatability of SPARCQ fat fraction was 0.0512. Conclusion: The
SPARCQ framework was proposed as a novel quantitative mapping technique for
detecting different tissue compartments, and its potential was demonstrated for
quantitative water-fat separation.
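The voxel-wise decomposition described above is, in essence, non-negative linear unmixing against a simulated dictionary; a minimal sketch using SciPy's non-negative least squares is shown below, where the dictionary construction and the normalization of weights into fractions are assumptions:

```python
import numpy as np
from scipy.optimize import nnls

def decompose_profile(measured, dictionary):
    """Decompose one voxel's phase-cycled bSSFP profile into dictionary weights.

    measured: (n_phase_cycles,) signal profile for one voxel.
    dictionary: (n_phase_cycles, n_atoms) simulated profiles spanning
        off-resonance values and relaxation-time ratios (construction assumed).
    Returns normalized non-negative weights, from which component fractions
    such as the fat fraction can be derived.
    """
    weights, _residual = nnls(dictionary, measured)
    total = weights.sum()
    return weights / total if total > 0 else weights
```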
