Multi-Mask Self-Supervised Learning for Physics-Guided Neural Networks in Highly Accelerated MRI
Purpose: To develop an improved self-supervised learning strategy that
efficiently uses the acquired data for training a physics-guided reconstruction
network without a database of fully-sampled data.
Methods: Currently, self-supervised learning for physics-guided reconstruction
networks splits the acquired undersampled data into two disjoint sets, where one
set is used for data consistency (DC) in the unrolled network and the other to define
the training loss. The proposed multi-mask self-supervised learning via data
undersampling (SSDU) splits the acquired measurements into multiple pairs of
disjoint sets for each training sample, using one set of each pair for the DC
units and the other to define the loss, thereby more efficiently using the
undersampled data. Multi-mask SSDU is applied to fully-sampled 3D knee and
prospectively undersampled 3D brain MRI datasets, which are retrospectively
subsampled to an acceleration rate of R = 8, and compared to CG-SENSE and single-mask
SSDU DL-MRI, as well as to supervised DL-MRI when fully-sampled data are available.
Results: On knee MRI, the proposed multi-mask SSDU outperforms
single-mask SSDU and performs comparably to supervised DL-MRI, while
significantly outperforming CG-SENSE. A clinical reader study further ranks the
multi-mask SSDU higher than supervised DL-MRI in terms of SNR and aliasing
artifacts. Results on brain MRI show that multi-mask SSDU achieves better
reconstruction quality than single-mask SSDU and CG-SENSE. A reader study demonstrates
that multi-mask SSDU at R=8 significantly improves reconstruction compared to
single-mask SSDU at R=8, as well as CG-SENSE at R=2.
Conclusion: The proposed multi-mask SSDU approach improves the training of
physics-guided neural networks without fully-sampled data by making more
efficient use of the undersampled data through multiple masks.
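The multi-mask splitting described in the Methods section can be illustrated with a minimal sketch. This is a hypothetical implementation, not the authors' code; the function name, the number of masks, and the loss fraction are illustrative assumptions. Each split assigns a random subset of the acquired k-space samples to the training loss and the remainder to data consistency, producing multiple disjoint (DC, loss) pairs per training sample:

```python
import numpy as np

def multi_mask_split(acquired_idx, num_masks=4, loss_frac=0.4, seed=0):
    """Illustrative sketch: split acquired k-space indices into multiple
    disjoint (DC, loss) pairs, one pair per mask (not the paper's code)."""
    rng = np.random.default_rng(seed)
    pairs = []
    for _ in range(num_masks):
        perm = rng.permutation(acquired_idx)
        n_loss = int(loss_frac * len(perm))
        loss_set = np.sort(perm[:n_loss])   # defines the training loss
        dc_set = np.sort(perm[n_loss:])     # enforced in the DC units
        pairs.append((dc_set, loss_set))    # disjoint by construction
    return pairs

# Example: 100 acquired k-space locations, 4 mask pairs
pairs = multi_mask_split(np.arange(100), num_masks=4)
```

Because every mask pair covers all acquired samples, each measurement contributes to both the DC units and the loss across the set of masks, which is the sense in which the data are used more efficiently than with a single split.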
Learned SVD: solving inverse problems via hybrid autoencoding
Our world is full of physics-driven data where effective mappings between
data manifolds are desired. There is an increasing demand for understanding
combined model-based and data-driven methods. We propose a nonlinear, learned
singular value decomposition (L-SVD), which combines autoencoders that
simultaneously learn and connect latent codes for desired signals and given
measurements. We provide a convergence analysis for a specifically structured
L-SVD that acts as a regularisation method. In a more general setting, we
investigate model reduction via data dimensionality reduction as a route to
regularised inversion. We present a promising direction for solving
inverse problems in cases where the underlying physics are not fully understood
or have very complex behaviour. We show that the building blocks of learned
inversion maps can be obtained automatically, with improved performance over
classical methods and better interpretability than black-box methods.
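For context, the classical linear building block that L-SVD generalises nonlinearly is regularised inversion via a truncated SVD. The sketch below is an illustrative baseline, not the paper's method; the function name and the toy system are assumptions:

```python
import numpy as np

def tsvd_inverse(A, y, k):
    """Reconstruct x from y = A x, keeping only the k largest singular
    values (classical truncated-SVD regularisation; illustrative names)."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    # Invert only the well-conditioned part of the spectrum; zero the rest
    s_inv = np.where(np.arange(len(s)) < k, 1.0 / s, 0.0)
    return Vt.T @ (s_inv * (U.T @ y))

# Tiny example: a 4x4 system, full rank, so k = 4 recovers x exactly
rng = np.random.default_rng(1)
A = rng.standard_normal((4, 4))
x_true = rng.standard_normal(4)
y = A @ x_true
x_rec = tsvd_inverse(A, y, k=4)
```

L-SVD replaces the fixed linear singular-vector bases with learned autoencoders, so that the latent codes play the role that the singular subspaces play in this classical scheme.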
Unified Supervised-Unsupervised (SUPER) Learning for X-ray CT Image Reconstruction
Traditional model-based image reconstruction (MBIR) methods combine forward
and noise models with simple object priors. Recent machine learning methods for
image reconstruction typically involve supervised learning or unsupervised
learning, both of which have their advantages and disadvantages. In this work,
we propose a unified supervised-unsupervised (SUPER) learning framework for
X-ray computed tomography (CT) image reconstruction. The proposed learning
formulation combines unsupervised learning-based priors (or even simple
analytical priors) with (supervised) deep network-based priors in a
unified MBIR framework based on a fixed-point iteration analysis. The proposed
training algorithm is also an approximate scheme for a bilevel supervised
training optimization problem, wherein the network-based regularizer in the
lower-level MBIR problem is optimized using an upper-level reconstruction loss.
The training problem is optimized by alternating between updating the network
weights and iteratively updating the reconstructions based on those weights. We
demonstrate the learned SUPER models' efficacy for low-dose CT image
reconstruction, for which we use the NIH AAPM Mayo Clinic Low Dose CT Grand
Challenge dataset for training and testing. In our experiments, we studied
different combinations of supervised deep network priors and unsupervised
learning-based or analytical priors. Both numerical and visual results show the
superiority of the proposed unified SUPER methods over standalone supervised
learning-based methods, iterative MBIR methods, and variations of SUPER
obtained via ablation studies. We also show that the proposed algorithm
converges rapidly in practice.
Comment: 15 pages, 16 figures, submitted journal paper
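The alternation between weight updates and reconstruction updates can be caricatured in a toy setting. Everything below is an illustrative assumption, not the paper's implementation: signals are 1-D vectors, and a single scalar weight stands in for the deep network-based regulariser. The lower level takes a regularised MBIR step with the weight fixed; the upper level refits the weight to a supervised reconstruction loss with the reconstructions fixed:

```python
import numpy as np

def mbir_step(y, x, w, lam=1.0):
    # Minimise ||x_new - y||^2 + lam * ||x_new - w*x||^2 in closed form:
    # a quadratic data term plus a prior pulling toward the "network" output
    return (y + lam * w * x) / (1.0 + lam)

def super_train(ys, x_refs, iters=20, lam=1.0):
    """Toy SUPER-style alternation (illustrative, scalar 'network' weight)."""
    w = 0.0
    xs = list(ys)  # initialise reconstructions at the measurements
    for _ in range(iters):
        # Lower level: update each reconstruction with the weight fixed
        xs = [mbir_step(y, x, w, lam) for y, x in zip(ys, xs)]
        # Upper level: least-squares fit of w to the reference reconstructions
        num = sum(float(r @ x) for r, x in zip(x_refs, xs))
        den = sum(float(x @ x) for x in xs) + 1e-12
        w = num / den
    return w, xs

# Sanity check: when the references equal the measurements, the scheme
# should settle at w = 1 and reproduce the measurements
ys = [np.array([1.0, 2.0]), np.array([3.0, -1.0])]
w_hat, xs_hat = super_train(ys, x_refs=ys)
```

In the actual framework the scalar weight is a deep network trained by (approximate) bilevel optimisation, and the lower-level step is an iterative MBIR solve; the alternation structure, however, is the same.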