Conditional Variational Autoencoder for Learned Image Reconstruction
Learned image reconstruction techniques using deep neural networks have recently gained popularity and have delivered promising empirical results. However, most approaches focus on a single recovery for each observation and thus neglect the uncertainty information. In this work, we develop a novel computational framework that approximates the posterior distribution of the unknown image at each query observation. The proposed framework is very flexible: it handles implicit noise models and priors, it incorporates the data formation process (i.e., the forward operator), and the learned reconstructive properties are transferable between different datasets. Once the network is trained using the conditional variational autoencoder loss, it provides a computationally efficient sampler for the approximate posterior distribution via feed-forward propagation, and the summary statistics of the generated samples are used for both point estimation and uncertainty quantification. We illustrate the proposed framework with extensive numerical experiments on positron emission tomography (with both moderate and low count levels), showing that the framework generates high-quality samples when compared with state-of-the-art methods.
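The feed-forward posterior sampling described above can be sketched in a few lines. This is a toy illustration only: the linear maps `W_mu` and `W_dec`, the dimensions, and the fixed latent scale all stand in for a trained CVAE's prior network and decoder, which the paper learns from data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "trained" CVAE: linear maps standing in for the learned
# prior network and decoder (weights and sizes are illustrative only).
D_obs, D_lat, D_img = 8, 4, 16
W_mu = rng.normal(size=(D_lat, D_obs)) * 0.1          # prior-net mean weights
W_dec = rng.normal(size=(D_img, D_lat + D_obs)) * 0.1  # decoder weights

def sample_posterior(y, n_samples=1000, sigma=0.5):
    """Draw approximate posterior samples x ~ p(x | y) by feed-forward passes."""
    mu = W_mu @ y                                       # conditional latent mean
    z = mu + sigma * rng.normal(size=(n_samples, D_lat))  # latent draws
    # decode each latent sample, conditioning on the observation y
    return z @ W_dec[:, :D_lat].T + W_dec[:, D_lat:] @ y

y = rng.normal(size=D_obs)        # a query observation
samples = sample_posterior(y)
x_hat = samples.mean(axis=0)      # point estimate (posterior mean)
x_unc = samples.std(axis=0)       # per-pixel uncertainty estimate
```

The key point the abstract makes is that, once trained, sampling costs only forward passes, so thousands of posterior samples are cheap.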
Image reconstruction through compressive sampling matching pursuit and curvelet transform
An interesting area of research is image reconstruction, which uses algorithms and techniques to transform a degraded image into a good one. The quality of the reconstructed image plays a vital role in the field of image processing. Compressive sampling is an innovative and rapidly growing method for reconstructing signals and is extensively used in image reconstruction. The literature uses a variety of matching pursuits for image reconstruction. In this paper, we propose a modified method based on compressive sampling matching pursuit (CoSaMP) for image reconstruction, which can recover sparse signals from far fewer observations than the signal’s dimension. The main advantage of CoSaMP is its excellent theoretical guarantee of convergence. The proposed technique combines CoSaMP with the curvelet transform for better reconstruction of images. Experiments are carried out to evaluate the proposed technique on different test images. The results indicate that its qualitative and quantitative performance is better than that of existing methods.
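The core CoSaMP iteration is standard and compact. The sketch below shows only the matching-pursuit loop (proxy, support merge, least squares, prune); the paper's contribution pairs this with a curvelet transform, which is omitted here.

```python
import numpy as np

def cosamp(A, y, s, iters=20, tol=1e-10):
    """Minimal CoSaMP sketch: recover an s-sparse x from y = A @ x.
    The curvelet sparsifying step of the proposed method is not shown."""
    n = A.shape[1]
    x = np.zeros(n)
    for _ in range(iters):
        r = y - A @ x                                  # current residual
        proxy = A.T @ r                                # correlation "signal proxy"
        omega = np.argsort(np.abs(proxy))[-2 * s:]     # 2s largest proxy entries
        T = np.union1d(omega, np.flatnonzero(x))       # merge with current support
        b = np.zeros(n)
        b[T] = np.linalg.lstsq(A[:, T], y, rcond=None)[0]  # LS on merged support
        keep = np.argsort(np.abs(b))[-s:]              # prune to the s largest
        x = np.zeros(n)
        x[keep] = b[keep]
        if np.linalg.norm(y - A @ x) < tol:
            break
    return x
```

With a well-conditioned Gaussian sensing matrix and noiseless measurements, a few iterations typically recover the sparse signal exactly, which is the convergence guarantee the abstract alludes to.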
Contrastive Diffusion Model with Auxiliary Guidance for Coarse-to-Fine PET Reconstruction
To obtain high-quality positron emission tomography (PET) scans while
reducing radiation exposure to the human body, various approaches have been
proposed to reconstruct standard-dose PET (SPET) images from low-dose PET
(LPET) images. One widely adopted technique is the generative adversarial
networks (GANs), yet recently, diffusion probabilistic models (DPMs) have
emerged as a compelling alternative due to their improved sample quality and
higher log-likelihood scores compared to GANs. Despite this, DPMs suffer from
two major drawbacks in real clinical settings, i.e., the computationally
expensive sampling process and the insufficient preservation of correspondence
between the conditioning LPET image and the reconstructed PET (RPET) image. To
address the above limitations, this paper presents a coarse-to-fine PET
reconstruction framework that consists of a coarse prediction module (CPM) and
an iterative refinement module (IRM). The CPM generates a coarse PET image via
a deterministic process, and the IRM samples the residual iteratively. By
delegating most of the computational overhead to the CPM, the overall sampling
speed of our method can be significantly improved. Furthermore, two additional
strategies, i.e., an auxiliary guidance strategy and a contrastive diffusion
strategy, are proposed and integrated into the reconstruction process, which
can enhance the correspondence between the LPET image and the RPET image,
further improving clinical reliability. Extensive experiments on two human
brain PET datasets demonstrate that our method outperforms the state-of-the-art
PET reconstruction methods. The source code is available at
\url{https://github.com/Show-han/PET-Reconstruction}.
Comment: Accepted and presented at MICCAI 2023; to be published in the proceedings.
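The coarse-to-fine split described above can be illustrated schematically. In this toy sketch, a deterministic smoothing stands in for the coarse prediction module (CPM), and a short shrinkage loop stands in for the iterative refinement module (IRM); a real DPM would instead denoise the residual with a learned score network over many fewer steps than a full diffusion over the image.

```python
import numpy as np

rng = np.random.default_rng(0)

def coarse_prediction(lpet):
    """CPM stand-in: a one-shot deterministic mapping (here, simple smoothing)."""
    k = np.ones(3) / 3.0
    return np.convolve(lpet, k, mode="same")

def iterative_refinement(lpet, coarse, steps=4):
    """IRM stand-in: iteratively refine only the residual on top of the coarse
    image. Sampling the (small) residual rather than the full image is what
    makes the overall sampling cheap."""
    residual = rng.normal(scale=0.5, size=lpet.shape)  # start from noise
    for _ in range(steps):
        residual = 0.5 * residual                      # toy denoising step
    return coarse + residual

lpet = rng.normal(size=64)                 # 1-D stand-in for a low-dose PET image
rpet = iterative_refinement(lpet, coarse_prediction(lpet))
```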
Scalable Bayesian inversion with Poisson data
Poisson data arise in many important inverse problems, e.g., medical imaging. The stochastic nature of noisy observation processes and imprecise prior information implies that there exists an ensemble of solutions consistent with the given Poisson data to various extents. Existing approaches, e.g., maximum likelihood and penalised maximum likelihood, incorporate the statistical information for point estimates, but fail to provide the important uncertainty information of the various possible solutions. While full Bayesian approaches can solve this problem, the posterior distributions are often intractable due to their complicated form and the curse of dimensionality. In this thesis, we investigate approximate Bayesian inference techniques, i.e., variational inference (VI), expectation propagation (EP) and Bayesian deep learning (BDL), for scalable posterior exploration. The scalability relies on leveraging 1) mathematical structures emerging in the problems, i.e., the low-rank structure of forward operators and the rank-one projection form of factors in the posterior distribution, and 2) the efficient feed-forward processes of neural networks, with training time further reduced by dimensional flexibility from incorporating the forward and adjoint operators. Apart from scalability, we also address theoretical analysis, algorithmic design and practical implementation. For VI, we derive explicit functional forms and analyse the convergence of the algorithms, which are long-standing problems in the literature. For EP, we discuss how to incorporate nonnegativity constraints and how to design stable moment evaluation schemes, which are vital and nontrivial practical concerns. For BDL, specifically conditional variational autoencoders (CVAEs), we investigate how to apply them for uncertainty quantification of inverse problems and develop flexible and novel frameworks for general Bayesian inversion.
Finally, we justify these contributions with numerical experiments and show the competitiveness of our proposed methods through comparisons with state-of-the-art benchmarks.
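For context, the classic point-estimate baseline that these Bayesian treatments extend is the ML-EM algorithm for Poisson data y ~ Poisson(A x). A minimal sketch, assuming a nonnegative forward matrix A (the problem sizes below are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)

def mlem(A, y, iters=50):
    """Classic ML-EM for Poisson inverse problems: multiplicative updates
    that keep the iterate nonnegative. Gives only a point estimate, with
    no uncertainty information."""
    x = np.ones(A.shape[1])
    sens = A.T @ np.ones(A.shape[0])          # sensitivity image A^T 1
    for _ in range(iters):
        ratio = y / np.maximum(A @ x, 1e-12)  # guard against division by zero
        x = x / sens * (A.T @ ratio)          # multiplicative EM update
    return x

A = rng.uniform(0.0, 1.0, size=(40, 10))      # toy nonnegative forward operator
x_true = rng.uniform(1.0, 5.0, size=10)
y = rng.poisson(A @ x_true).astype(float)     # Poisson-distributed counts
x_hat = mlem(A, y)
```

The thesis's point is precisely that such estimates come with no posterior spread, which motivates the VI, EP and CVAE machinery.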
FedFTN: Personalized Federated Learning with Deep Feature Transformation Network for Multi-institutional Low-count PET Denoising
Low-count PET is an efficient way to reduce radiation exposure and
acquisition time, but the reconstructed images often suffer from low
signal-to-noise ratio (SNR), thus affecting diagnosis and other downstream
tasks. Recent advances in deep learning have shown great potential in improving
low-count PET image quality, but acquiring a large, centralized, and diverse
dataset from multiple institutions for training a robust model is difficult due
to privacy and security concerns of patient data. Moreover, low-count PET data
at different institutions may have different data distributions, thus requiring
personalized models. While previous federated learning (FL) algorithms enable
multi-institution collaborative training without aggregating local
data, addressing the large domain shift in the application of
multi-institutional low-count PET denoising remains a challenge and is still
highly under-explored. In this work, we propose FedFTN, a personalized
federated learning strategy that addresses these challenges. FedFTN uses a
local deep feature transformation network (FTN) to modulate the feature outputs
of a globally shared denoising network, enabling personalized low-count PET
denoising for each institution. During the federated learning process, only the
denoising network's weights are communicated and aggregated, while the FTN
remains at the local institutions for feature transformation. We evaluated our
method using a large-scale dataset of multi-institutional low-count PET imaging
data from three medical centers located across three continents, and showed
that FedFTN provides high-quality low-count PET images, outperforming previous
baseline FL reconstruction methods across all low-count levels at all three
institutions.
Comment: 13 pages, 6 figures. Accepted at Medical Image Analysis Journal (MedIA).
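The communication rule described above (aggregate the shared denoiser, keep the personalisation network local) reduces to a filtered FedAvg. A minimal sketch, where the `"ftn."` key prefix is a hypothetical naming convention, not the paper's actual parameter names:

```python
import numpy as np

def fedavg_shared(local_states, n_samples):
    """Weighted FedAvg over the globally shared denoiser only. Parameters
    whose keys start with 'ftn.' (the local feature transformation network)
    are never communicated or aggregated; each institution keeps its own."""
    total = sum(n_samples)
    shared_keys = [k for k in local_states[0] if not k.startswith("ftn.")]
    return {
        k: sum(w * s[k] for w, s in zip(n_samples, local_states)) / total
        for k in shared_keys
    }
```

Each round, institutions would load the returned global state into their denoiser while leaving their local FTN weights untouched, which is what yields the personalisation.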
Joint multi-contrast Variational Network reconstruction (jVN) with application to rapid 2D and 3D imaging
Purpose: To improve the image quality of highly accelerated multi-channel MRI
data by learning a joint variational network that reconstructs multiple
clinical contrasts jointly.
Methods: Data from our multi-contrast acquisition was embedded into the
variational network architecture where shared anatomical information is
exchanged by mixing the input contrasts. Complementary k-space sampling across
imaging contrasts and Bunch-Phase/Wave-Encoding were used for data acquisition
to improve the reconstruction at high accelerations. At 3T, our joint
variational network approach across T1w, T2w and T2-FLAIR-weighted brain scans
was tested for retrospective under-sampling at R=6 (2D) and R=4x4 (3D)
acceleration. Prospective acceleration was also performed for 3D data where the
combined acquisition time for whole brain coverage at 1 mm isotropic resolution
across three contrasts was less than three minutes.
Results: Across all test datasets, our joint multi-contrast network better
preserved fine anatomical details with reduced image-blurring when compared to
the corresponding single-contrast reconstructions. Improvement in image quality
was also obtained through complementary k-space sampling and
Bunch-Phase/Wave-Encoding where the synergistic combination yielded the overall
best performance as evidenced by exemplary slices and quantitative error
metrics.
Conclusion: By leveraging shared anatomical structures across the jointly
reconstructed scans, our joint multi-contrast approach learnt more efficient
regularizers which helped to retain natural image appearance and avoid
over-smoothing. When synergistically combined with advanced encoding
techniques, the performance was further improved, enabling up to R=16-fold
acceleration with good image quality. This should help pave the way to very
rapid high-resolution brain exams.
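The data-consistency part of one variational-network iteration can be sketched directly; the learned regulariser that mixes anatomical information across the input contrasts is the part omitted here, and the masks and image sizes below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def vn_step(x, y, mask, lam=1.0):
    """One variational-network iteration: a gradient step on the k-space
    data term || M F x - y ||^2. A learned regulariser (not shown) would
    additionally exchange structure between the jointly reconstructed
    contrasts."""
    kspace = np.fft.fft2(x)
    grad = np.fft.ifft2(mask * (kspace - y))
    return x - lam * grad.real

# Toy joint setting: two contrasts with complementary undersampling masks,
# as in the complementary k-space sampling strategy described above.
n = 16
imgs = [rng.normal(size=(n, n)) for _ in range(2)]
masks = [(rng.random((n, n)) < 0.5).astype(float) for _ in range(2)]
ys = [m * np.fft.fft2(img) for img, m in zip(imgs, masks)]
recs = [vn_step(np.zeros((n, n)), y, m) for y, m in zip(ys, masks)]
```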
Contrastive Learning MRI Reconstruction
Purpose: We propose a novel contrastive learning latent space representation
for MRI datasets with partially acquired scans. We show that this latent space
can be utilized for accelerated MR image reconstruction.
Theory and Methods: Our novel framework, referred to as COLADA (which stands for
Contrastive Learning for highly accelerated MR image reconstruction), maximizes
the mutual information between differently accelerated images of an MRI scan by
using self-supervised contrastive learning. In other words, it attempts to
"pull" the latent representations of the same scan together and "push" the
latent representations of other scans away. The generated MRI latent space is
subsequently utilized for MR image reconstruction and the performance was
assessed in comparison to several baseline deep learning reconstruction
methods. Furthermore, the quality of the proposed latent space representation
was analyzed using Alignment and Uniformity.
Results: COLADA comprehensively outperformed other reconstruction methods
with robustness to variations in undersampling patterns, pathological
abnormalities, and noise in k-space during inference. COLADA demonstrated
high-quality reconstruction on unseen data with minimal fine-tuning. The analysis
of representation quality suggests that the contrastive features produced by
COLADA are optimally distributed in latent space.
Conclusion: To the best of our knowledge, this is the first attempt to
utilize contrastive learning on differently accelerated images for MR image
reconstruction. The proposed latent space representation has practical usage
due to a large number of existing partially sampled datasets. This implies the
possibility of exploring self-supervised contrastive learning further to
enhance the latent space of MRI for image reconstruction.
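The "pull together / push apart" objective described above is the standard InfoNCE contrastive loss, where paired rows are latents of differently accelerated versions of the same scan. A minimal sketch (standard contrastive loss, not COLADA's exact implementation):

```python
import numpy as np

def info_nce(z1, z2, tau=0.1):
    """InfoNCE over paired latents: z1[i] and z2[i] come from differently
    accelerated images of the same scan (positives); all other rows act
    as negatives. Lower loss means matched pairs are pulled together."""
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)   # unit-normalise
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    logits = z1 @ z2.T / tau                              # cosine similarities
    logp = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    idx = np.arange(len(z1))
    return -logp[idx, idx].mean()                         # NLL of matched pairs
```

Latents of the same scan under different accelerations should score a much lower loss than randomly paired latents, which is the mutual-information-maximising behaviour the abstract describes.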