Deep Generative Adversarial Networks for Compressed Sensing Automates MRI
Magnetic resonance image (MRI) reconstruction is a severely ill-posed linear
inverse task demanding time- and resource-intensive computations that can
substantially trade off accuracy for speed in real-time imaging. In addition,
state-of-the-art compressed sensing (CS) analytics are not cognizant of the
image diagnostic quality. To cope with these challenges we put
forth a novel CS framework that draws on generative adversarial networks (GAN)
to learn a (low-dimensional) manifold of diagnostic-quality MR
images from historical patients. Leveraging a mixture of least-squares (LS)
GAN and pixel-wise costs, a deep residual network with skip connections is
trained as the generator, which learns to remove the aliasing artifacts by
projecting onto the manifold. The LSGAN cost learns the texture details, while
the pixel-wise cost controls the high-frequency noise. A multilayer
convolutional neural network is then jointly trained based on diagnostic
quality images to discriminate the projection quality. The test phase performs
feed-forward propagation over the generator network, which incurs a very low
computational overhead. Extensive evaluations are performed on a large
contrast-enhanced MR dataset of pediatric patients. In particular, ratings by
expert radiologists corroborate that the proposed scheme (GANCS) retrieves
high-contrast images with more detailed texture than conventional CS and
pixel-wise schemes. In addition, it reconstructs images in a few milliseconds,
two orders of magnitude faster than state-of-the-art CS-MRI schemes.
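The mixed generator objective described in the abstract can be sketched as follows. This is a minimal illustration with assumed names and an assumed weighting scheme, not the paper's actual code; it only shows how an LSGAN adversarial term and pixel-wise fidelity terms combine into one cost.

```python
import numpy as np

def generator_loss(d_fake, recon, target, lam=0.9):
    """Illustrative mixture of an LSGAN adversarial term and pixel-wise costs.

    d_fake : discriminator scores on generator outputs (LSGAN target = 1)
    recon, target : reconstructed and ground-truth images
    lam : assumed weight trading pixel fidelity against adversarial texture
    """
    adv = np.mean((d_fake - 1.0) ** 2)    # least-squares GAN loss for the generator
    l1 = np.mean(np.abs(recon - target))  # pixel-wise l1 term controls noise
    l2 = np.mean((recon - target) ** 2)   # pixel-wise l2 term
    return lam * (l1 + l2) + (1.0 - lam) * adv
```

In a real training loop this scalar would be minimized over the generator weights while the discriminator is trained on the opposing LSGAN objective.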
Motion Corrected Multishot MRI Reconstruction Using Generative Networks with Sensitivity Encoding
Multishot Magnetic Resonance Imaging (MRI) is a promising imaging modality
that can produce a high-resolution image with relatively less data acquisition
time. The downside of multishot MRI is that it is very sensitive to subject
motion and even small amounts of motion during the scan can produce artifacts
in the final MR image that may cause misdiagnosis. Numerous efforts have been
made to address this issue; however, all of these proposals are limited in
terms of how much motion they can correct and the required computational time.
In this paper, we propose a novel generative-network-based conjugate gradient
SENSE (CG-SENSE) reconstruction framework for motion correction in multishot
MRI. The proposed framework first employs CG-SENSE reconstruction to produce
the motion-corrupted image and then a generative adversarial network (GAN) is
used to correct the motion artifacts. The proposed method has been rigorously
evaluated on synthetically corrupted data with varying degrees of motion,
numbers of shots, and encoding trajectories. Our analyses (both quantitative
and qualitative/visual) establish that the proposed method is significantly
more robust, outperforms state-of-the-art motion correction techniques, and
reduces computational time severalfold.
Comment: This paper has been published in the Scientific Reports journal.
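The CG-SENSE stage the abstract builds on solves the encoding normal equations by conjugate gradients. Below is a toy sketch under the assumption that the coil-weighted encoding operator is available as an explicit complex matrix `E` (in practice it is an implicit FFT-based operator); the function name and defaults are illustrative.

```python
import numpy as np

def cg_sense(E, y, n_iter=50, tol=1e-10):
    """Toy CG-SENSE: solve the normal equations (E^H E) x = E^H y by
    conjugate gradients, where E stands in for the sensitivity-weighted
    Fourier encoding matrix and y for the acquired multishot k-space data."""
    EhE = E.conj().T @ E          # Hermitian positive (semi)definite system matrix
    b = E.conj().T @ y
    x = np.zeros_like(b)
    r = b - EhE @ x               # initial residual
    p = r.copy()
    rs = np.vdot(r, r).real
    for _ in range(n_iter):
        Ap = EhE @ p
        alpha = rs / np.vdot(p, Ap).real
        x = x + alpha * p
        r = r - alpha * Ap
        rs_new = np.vdot(r, r).real
        if rs_new < tol:          # residual small enough: done
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x
```

The paper's contribution then feeds this motion-corrupted CG-SENSE output into a GAN that removes the remaining motion artifacts.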
Unsupervised Reverse Domain Adaptation for Synthetic Medical Images via Adversarial Training
To realize the full potential of deep learning for medical imaging, large
annotated datasets are required for training. Such datasets are difficult to
acquire because labeled medical images are not usually available due to privacy
issues, lack of experts available for annotation, underrepresentation of rare
conditions and poor standardization. Lack of annotated data has been addressed
in conventional vision applications using synthetic images refined via
unsupervised adversarial training to look like real images. However, this
approach is difficult to extend to general medical imaging because of the
complex and diverse set of features found in real human tissues. We propose an
alternative framework that uses a reverse flow, where adversarial training is
used to make real medical images more like synthetic images, and hypothesize
that clinically-relevant features can be preserved via self-regularization.
These domain-adapted images can then be accurately interpreted by networks
trained on large datasets of synthetic medical images. We test this approach
for the notoriously difficult task of depth-estimation from endoscopy. We train
a depth estimator on a large dataset of synthetic images generated using an
accurate forward model of an endoscope and an anatomically-realistic colon.
This network predicts significantly better depths when using synthetic-like
domain-adapted images compared to the real images, confirming that the
clinically-relevant features of depth are preserved.
Comment: 10 pages, 8 figures
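The reverse-flow objective described above balances two forces: an adversarial term pushing real images toward the synthetic domain, and a self-regularization term keeping the adapted image close to its input so clinically relevant structure survives. A minimal sketch, with assumed names and an assumed weight:

```python
import numpy as np

def adaptation_loss(d_scores, adapted, real, lam=10.0):
    """Illustrative reverse domain-adaptation cost (not the paper's code).

    d_scores : discriminator scores on adapted images (target = synthetic-like)
    adapted, real : the domain-adapted image and its real input
    lam : assumed weight on the self-regularization penalty
    """
    adv = np.mean((d_scores - 1.0) ** 2)        # fool the synthetic-domain critic
    self_reg = np.mean(np.abs(adapted - real))  # preserve clinical content
    return adv + lam * self_reg
```

With this direction of adaptation, a depth network trained purely on synthetic images can be applied to adapted real endoscopy frames.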
MRI Image Reconstruction via Learning Optimization Using Neural ODEs
We propose to formulate MRI image reconstruction as an optimization problem
and model the optimization trajectory as a dynamic process using ordinary
differential equations (ODEs). We model the dynamics in ODE with a neural
network and solve the desired ODE with the off-the-shelf (fixed) solver to
obtain reconstructed images. We extend this model and incorporate the knowledge
of off-the-shelf ODE solvers into the network design (learned solvers). We
investigate several models based on three ODE solvers and compare models with
fixed solvers and learned solvers. Our models achieve better reconstruction
results and are more parameter efficient than other popular methods such as
UNet and cascaded CNN. We introduce a new way of tackling the MRI
reconstruction problem by modeling the continuous optimization dynamics using
neural ODEs.
Comment: Accepted by MICCAI 202
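The abstract's core idea, in miniature: view reconstruction as the trajectory of an ODE dx/dt = f(x) integrated by a fixed solver. In the sketch below f is the hand-written gradient flow of the data-fidelity term and the solver is forward Euler; the paper instead parameterizes f with a neural network (and later learns the solver itself). Names and defaults are illustrative.

```python
import numpy as np

def ode_recon(A, y, t1=5.0, steps=100):
    """Reconstruct by integrating dx/dt = -grad f(x), f(x) = ||Ax - y||^2 / 2,
    with a fixed forward-Euler solver from t = 0 to t = t1."""
    x = np.zeros(A.shape[1])
    h = t1 / steps                         # Euler step size
    for _ in range(steps):
        x = x - h * (A.T @ (A @ x - y))    # one Euler step along the gradient flow
    return x
```

As t1 grows the trajectory approaches the least-squares solution; the learned version follows a trajectory shaped by training data rather than by this fixed vector field.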
Learning-based Optimization of the Under-sampling Pattern in MRI
Acquisition of Magnetic Resonance Imaging (MRI) scans can be accelerated by
under-sampling in k-space (i.e., the Fourier domain). In this paper, we
consider the problem of optimizing the sub-sampling pattern in a data-driven
fashion. Since the reconstruction model's performance depends on the
sub-sampling pattern, we combine the two problems. For a given sparsity
constraint, our method optimizes the sub-sampling pattern and reconstruction
model, using an end-to-end learning strategy. Our algorithm learns from
full-resolution data that are under-sampled retrospectively, yielding a
sub-sampling pattern and reconstruction model that are customized to the type
of images represented in the training data. The proposed method, which we call
LOUPE (Learning-based Optimization of the Under-sampling PattErn), was
implemented by modifying a U-Net, a widely-used convolutional neural network
architecture, which we append with a forward model that encodes the
under-sampling process. Our experiments with T1-weighted structural brain MRI
scans show that the optimized sub-sampling pattern can yield significantly more
accurate reconstructions compared to standard random uniform, variable density
or equispaced under-sampling schemes. The code is made available at:
https://github.com/cagladbahadir/LOUPE
Comment: 13 pages, 5 figures, accepted as a conference paper in IPMI
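The forward model LOUPE appends to its U-Net is retrospective under-sampling: mask the k-space of a fully-sampled image, then invert. A fixed-mask sketch (LOUPE itself learns the mask probabilities jointly with the reconstructor; this only shows the forward model, with assumed names):

```python
import numpy as np

def undersample_recon(img, mask):
    """Retrospectively under-sample a fully-sampled image: keep only the
    k-space samples selected by a binary mask, then apply a zero-filled
    inverse FFT as the simplest reconstruction."""
    k = np.fft.fft2(img)                 # forward model: sample in k-space
    return np.fft.ifft2(k * mask).real   # zero-filled reconstruction
```

In the learned setting, the binary mask is replaced by a differentiable relaxation so the sampling pattern can be optimized end-to-end under a sparsity constraint.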
Recurrent Generative Adversarial Networks for Proximal Learning and Automated Compressive Image Recovery
Recovering images from undersampled linear measurements typically leads to an
ill-posed linear inverse problem that calls for proper statistical priors.
Building effective priors is, however, challenged by the low training and
testing overhead dictated by real-time tasks, and by the need to retrieve
visually "plausible" and physically "feasible" images with minimal
hallucination. To
cope with these challenges, we design a cascaded network architecture that
unrolls the proximal gradient iterations, using generative residual networks
(ResNets) to model the proximal operator. A mixture of pixel-wise and
perceptual costs is then deployed to train the proximals. The
overall architecture resembles back-and-forth projection onto the intersection
of feasible and plausible images. Extensive computational experiments are
examined for a global task of reconstructing MR images of pediatric patients,
and a more local task of super-resolving CelebA faces, which together yield
insights for designing efficient architectures. Our observations indicate that for MRI
reconstruction, a recurrent ResNet with a single residual block effectively
learns the proximal. This simple architecture appears to significantly
outperform the alternative deep ResNet architecture by 2 dB SNR, and
conventional compressed-sensing MRI by 4 dB SNR, with 100x faster inference. For
image superresolution, our preliminary results indicate that modeling the
denoising proximal demands deep ResNets.
Comment: 11 pages, 11 figures
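The unrolled proximal gradient iterations the abstract describes take the form x ← prox(x − η Aᵀ(Ax − y)). As a hand-written stand-in for the learned ResNet proximal, the sketch below uses soft-thresholding, which turns the unrolled scheme into plain ISTA; names and defaults are illustrative.

```python
import numpy as np

def soft_threshold(x, t):
    """Proximal operator of t * ||x||_1 (stand-in for a learned proximal)."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def unrolled_pgd(A, y, n_unroll=10, eta=0.5, lam=0.1):
    """Unrolled proximal gradient iterations: a data-fidelity gradient step
    followed by a proximal step, repeated for a fixed number of unrolls."""
    x = np.zeros(A.shape[1])
    for _ in range(n_unroll):
        x = x - eta * A.T @ (A @ x - y)       # gradient step on ||Ax - y||^2 / 2
        x = soft_threshold(x, eta * lam)      # proximal step (learned in the paper)
    return x
```

The paper's recurrent variant shares one small ResNet proximal across all unrolls, which is what makes a single residual block sufficient.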
Highly Scalable Image Reconstruction using Deep Neural Networks with Bandpass Filtering
To increase the flexibility and scalability of deep neural networks for image
reconstruction, a framework is proposed based on bandpass filtering. For many
applications, sensing measurements are performed indirectly. For example, in
magnetic resonance imaging, data are sampled in the frequency domain. The
introduction of bandpass filtering enables leveraging known imaging physics
while ensuring that the final reconstruction is consistent with actual
measurements to maintain reconstruction accuracy. We demonstrate this flexible
architecture for reconstructing subsampled datasets of MRI scans. The resulting
high subsampling rates increase the speed of MRI acquisitions and enable the
visualization of rapid hemodynamics.
Comment: 9 pages, 10 figures
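The bandpass idea rests on linearity of the Fourier transform: split k-space into frequency bands, reconstruct each band independently, and sum. A 1-D sketch with exact per-band inversion (the paper applies a neural network per band instead; names are illustrative):

```python
import numpy as np

def bandpass_split(kspace, n_bands):
    """Partition 1-D k-space into contiguous bands, invert each band
    independently, and sum the band images. With exact inversion the
    bands recombine to the full image by linearity of the inverse FFT."""
    n = kspace.shape[0]
    edges = np.linspace(0, n, n_bands + 1).astype(int)
    recon = np.zeros(n, dtype=complex)
    for lo, hi in zip(edges[:-1], edges[1:]):
        band = np.zeros_like(kspace)
        band[lo:hi] = kspace[lo:hi]    # isolate one frequency band
        recon += np.fft.ifft(band)     # reconstruct this band independently
    return recon
```

Because each band is processed independently, the bands can be reconstructed in parallel and the per-band networks kept small, which is the source of the claimed scalability.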
Low-Dose CT with Deep Learning Regularization via Proximal Forward Backward Splitting
Low dose X-ray computed tomography (LDCT) is desirable for reduced patient
dose. This work develops image reconstruction methods with deep learning (DL)
regularization for LDCT. Our methods are based on unrolling of proximal
forward-backward splitting (PFBS) framework with data-driven image
regularization via deep neural networks. In contrast with PFBS-IR that utilizes
standard data fidelity updates via iterative reconstruction (IR) method,
PFBS-AIR involves preconditioned data fidelity updates that fuse analytical
reconstruction (AR) method and IR in a synergistic way, i.e., fused analytical
and iterative reconstruction (AIR). The results suggest that the DL-regularized
methods (PFBS-IR and PFBS-AIR) provide better reconstruction quality than
conventional methods (AR or IR) and than the DL-based postprocessing method
(FBPConvNet). In addition, owing to AIR, PFBS-AIR noticeably outperformed
PFBS-IR.
Comment: 8 pages, 6 figures
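Proximal forward-backward splitting alternates a data-fidelity gradient step with a proximal (regularization) step. The paper learns the regularizer with a deep network and preconditions the fidelity step in PFBS-AIR; the sketch below keeps a plain Tikhonov regularizer, whose proximal is a closed-form shrinkage, purely to show the splitting structure. Names and defaults are illustrative.

```python
import numpy as np

def pfbs(A, y, lam=1.0, t=0.5, n_iter=60):
    """Proximal forward-backward splitting for
    min_x ||Ax - y||^2 / 2 + (lam / 2) ||x||^2.
    Forward step: gradient descent on the data-fidelity term.
    Backward step: proximal operator of the (here: Tikhonov) regularizer."""
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = x - t * A.T @ (A @ x - y)   # forward (gradient) step
        x = x / (1.0 + t * lam)         # backward (prox) step, closed form
    return x
```

Swapping the closed-form shrinkage for a trained denoising network yields the DL-regularized PFBS-IR scheme; replacing the plain gradient step with an AR-preconditioned update yields PFBS-AIR.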
Deep Adversarial Training for Multi-Organ Nuclei Segmentation in Histopathology Images
Nuclei segmentation is a fundamental task that is critical for various
computational pathology applications including nuclei morphology analysis, cell
type classification, and cancer grading. Conventional vision-based methods for
nuclei segmentation struggle in challenging cases and deep learning approaches
have proven to be more robust and generalizable. However, CNNs require large
amounts of labeled histopathology data. Moreover, conventional CNN-based
approaches lack structured prediction capabilities which are required to
distinguish overlapping and clumped nuclei. Here, we present an approach to
nuclei segmentation that overcomes these challenges by utilizing a conditional
generative adversarial network (cGAN) trained with synthetic and real data. We
generate a large dataset of H&E training images with perfect nuclei
segmentation labels using an unpaired GAN framework. This synthetic data along
with real histopathology data from six different organs are used to train a
conditional GAN with spectral normalization and gradient penalty for nuclei
segmentation. This adversarial regression framework enforces higher order
consistency when compared to conventional CNN models. We demonstrate that this
nuclei segmentation approach generalizes across different organs, sites,
patients and disease states, and outperforms conventional approaches,
especially in isolating individual and overlapping nuclei.
Learned Primal-dual Reconstruction
We propose the Learned Primal-Dual algorithm for tomographic reconstruction.
The algorithm accounts for a (possibly non-linear) forward operator in a deep
neural network by unrolling a proximal primal-dual optimization method, but
where the proximal operators have been replaced with convolutional neural
networks. The algorithm is trained end-to-end, working directly from raw
measured data, and does not depend on any initial reconstruction such as FBP.
We compare performance of the proposed method on low dose CT reconstruction
against FBP, TV, and deep learning based post-processing of FBP. For the
Shepp-Logan phantom we obtain >6dB PSNR improvement against all compared
methods. For human phantoms the corresponding improvement is 6.6dB over TV and
2.2dB over learned post-processing along with a substantial improvement in the
SSIM. Finally, our algorithm involves only ten forward-back-projection
computations, making the method feasible for time critical clinical
applications.
Comment: 11 pages, 5 figures
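The classical scheme that Learned Primal-Dual unrolls alternates a dual proximal update in data space with a primal update in image space. The sketch below keeps the closed-form proximals for a least-squares data term and a trivial (zero) regularizer; the paper replaces both updates with small CNNs trained end-to-end. Names, step sizes, and iteration count are illustrative.

```python
import numpy as np

def primal_dual(A, y, n_iter=200, sigma=0.5, tau=0.5):
    """Classical primal-dual (Chambolle-Pock style) iteration for
    min_x ||Ax - y||^2 / 2, as the hand-written analogue of the scheme
    that Learned Primal-Dual unrolls and parameterizes with CNNs."""
    x = np.zeros(A.shape[1])
    x_bar = x.copy()
    h = np.zeros(A.shape[0])                 # dual variable in data space
    for _ in range(n_iter):
        h = (h + sigma * (A @ x_bar - y)) / (1.0 + sigma)  # dual prox update
        x_new = x - tau * A.T @ h            # primal update (prox of g = 0)
        x_bar = 2.0 * x_new - x              # over-relaxation of the primal
        x = x_new
    return x
```

In the learned version, `A` is the (possibly non-linear) tomographic forward operator applied explicitly inside each unrolled iteration, which is why only about ten forward-back-projections are needed at inference.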