ENSURE: A General Approach for Unsupervised Training of Deep Image Reconstruction Algorithms
Image reconstruction using deep learning algorithms offers improved
reconstruction quality and lower reconstruction time than classical compressed
sensing and model-based algorithms. Unfortunately, clean and fully sampled
ground-truth data to train the deep networks is often not available in several
applications, restricting the applicability of the above methods. This work
introduces the ENsemble Stein's Unbiased Risk Estimate (ENSURE) framework as a
general approach to train deep image reconstruction algorithms without fully
sampled and noise-free images. The proposed framework is the generalization of
the classical SURE and GSURE formulation to the setting where the images are
sampled by different measurement operators, chosen randomly from a set. We show
that the ENSURE loss function, which only uses the measurement data, is an
unbiased estimate for the true mean-square error. Our experiments show that the
networks trained with this loss function can offer reconstructions comparable
to the supervised setting. While we demonstrate this framework in the context
of MR image recovery, the ENSURE framework is generally applicable to arbitrary
inverse problems.
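The ENSURE loss builds on classical SURE, which estimates a denoiser's mean-square error from noisy data alone, with no ground truth. As a minimal illustration (not the authors' ENSURE code), the sketch below computes a Monte Carlo SURE estimate in the style of Ramani et al., approximating the divergence term with a random-probe finite difference; the function name `mc_sure` and the toy linear denoiser are assumptions for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def mc_sure(f, y, sigma, eps=1e-3, rng=rng):
    """Monte Carlo SURE for a denoiser f under i.i.d. Gaussian noise of
    std sigma: an unbiased per-pixel MSE estimate using only noisy y."""
    n = y.size
    b = rng.standard_normal(y.shape)  # random probe for the divergence
    div = b.ravel() @ (f(y + eps * b) - f(y)).ravel() / eps
    return np.sum((f(y) - y) ** 2) / n - sigma**2 + 2 * sigma**2 * div / n

# Toy check: a linear shrinkage "denoiser" f(y) = a*y on a known signal.
sigma, a = 0.1, 0.8
x = rng.standard_normal(10_000)              # ground truth, used only to verify
y = x + sigma * rng.standard_normal(x.size)  # noisy observation
f = lambda z: a * z

sure = mc_sure(f, y, sigma)       # computed from y alone
mse = np.mean((f(y) - x) ** 2)    # true MSE, needs ground truth
print(sure, mse)                  # the two should closely agree
```

Because the SURE value is computable without `x`, it can serve directly as a training loss; ENSURE extends this idea to measurements taken with randomly chosen sampling operators.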
Unsupervised training of deep learning based image denoisers from undersampled measurements
Department of Electrical Engineering
Compressive sensing is a method to recover the original image from undersampled measurements. In order to overcome the ill-posedness of this inverse problem, image priors are used such as sparsity, minimal total-variation, or self-similarity of images.
Recently, deep learning based compressive image recovery methods have been proposed and have yielded state-of-the-art performance. They use data-driven approaches instead of hand-crafted image priors to regularize ill-posed inverse problems with undersampled data. Ironically, training deep neural networks (DNNs) for these methods requires "clean" ground-truth images, but obtaining the best-quality images from undersampled data requires well-trained DNNs.
To resolve this dilemma, we propose novel methods based on two well-grounded theories: denoiser-approximate message passing (D-AMP) and Stein's unbiased risk estimator (SURE). Our proposed methods, LDAMP SURE and LDAMP SURE-T, were able to train deep learning based image denoisers from undersampled measurements without ground truth images and without additional image priors, and to recover images with state-of-the-art quality from undersampled data. We evaluated our methods on various compressive sensing recovery problems with Gaussian random, coded diffraction pattern, and compressive sensing MRI (CS-MRI) measurement matrices. Our proposed methods yielded state-of-the-art performance in all cases without ground truth images, and performed comparably to approaches trained with ground truth data. Moreover, we have extended our methods to handle Gaussian noise in the measurement domain and to further enhance reconstruction quality with an image-refining method called LDAMP SURE-FT.
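For context, D-AMP plugs a denoiser into the AMP iteration, where an Onsager correction term proportional to the denoiser's divergence keeps the effective noise approximately Gaussian. The sketch below is a minimal illustration using soft-thresholding as the denoiser on a synthetic sparse-recovery problem; it is not the paper's learned LDAMP network, and the threshold choice and problem sizes are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def soft(x, t):  # soft-thresholding: a simple sparsity-based "denoiser"
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

# Sparse signal, Gaussian random measurements (compressive sensing setup)
n, m, k = 400, 200, 20
x = np.zeros(n)
x[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
A = rng.standard_normal((m, n)) / np.sqrt(m)
y = A @ x

# AMP with an Onsager correction (D-AMP with a soft-threshold denoiser)
xt, z = np.zeros(n), y.copy()
for _ in range(30):
    sigma_t = np.linalg.norm(z) / np.sqrt(m)    # effective noise level
    xt_new = soft(xt + A.T @ z, 1.5 * sigma_t)  # denoise the pseudo-data
    # divergence of soft() is the output's support size
    onsager = z * np.count_nonzero(xt_new) / m
    z = y - A @ xt_new + onsager
    xt = xt_new

rel_err = np.linalg.norm(xt - x) / np.linalg.norm(x)
print(rel_err)
```

In LDAMP the hand-crafted `soft` step is replaced by a learned denoiser, and the SURE loss above each unrolled stage lets that denoiser be trained without clean images.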
Noise2Recon: Enabling Joint MRI Reconstruction and Denoising with Semi-Supervised and Self-Supervised Learning
Deep learning (DL) has shown promise for faster, high quality accelerated MRI
reconstruction. However, supervised DL methods depend on extensive amounts of
fully-sampled (labeled) data and are sensitive to out-of-distribution (OOD)
shifts, particularly low signal-to-noise ratio (SNR) acquisitions. To alleviate
this challenge, we propose Noise2Recon, a model-agnostic, consistency training
method for joint MRI reconstruction and denoising that can use both
fully-sampled (labeled) and undersampled (unlabeled) scans in semi-supervised
and self-supervised settings. With limited or no labeled training data,
Noise2Recon outperforms compressed sensing and deep learning baselines,
including supervised networks, augmentation-based training, fine-tuned
denoisers, and self-supervised methods, and matches performance of supervised
models, which were trained with 14x more fully-sampled scans. Noise2Recon also
outperforms all baselines, including state-of-the-art fine-tuning and
augmentation techniques, among low-SNR scans and when generalizing to other OOD
factors, such as changes in acceleration factors and different datasets.
Augmentation extent and loss weighting hyperparameters had negligible impact on
Noise2Recon compared to supervised methods, which may indicate increased
training stability. Our code is available at https://github.com/ad12/meddlr