Simultaneous use of Individual and Joint Regularization Terms in Compressive Sensing: Joint Reconstruction of Multi-Channel Multi-Contrast MRI Acquisitions
Purpose: A time-efficient strategy to acquire high-quality multi-contrast
images is to reconstruct undersampled data with joint regularization terms that
leverage common information across contrasts. However, these terms can cause
leakage of uncommon features among contrasts, compromising diagnostic utility.
The goal of this study is to develop a compressive sensing method for
multi-channel multi-contrast magnetic resonance imaging (MRI) that optimally
utilizes shared information while preventing feature leakage.
Theory: Joint regularization terms, namely group sparsity and colour total
variation, are used to exploit common features across images, while individual
sparsity and total variation terms are used to prevent leakage of distinct
features across contrasts. The multi-channel multi-contrast reconstruction
problem is solved via a fast algorithm based on the Alternating Direction
Method of Multipliers (ADMM).
Methods: The proposed method was compared against reconstructions using only
individual or only joint regularization terms. Comparisons were performed
on single-channel simulated and multi-channel in-vivo datasets in terms of
reconstruction quality and neuroradiologist reader scores.
Results: The proposed method demonstrates rapid convergence and improved
image quality for both simulated and in-vivo datasets. Furthermore, while
reconstructions that solely use joint regularization terms are prone to
leakage of features, the proposed method reliably avoids leakage via
simultaneous use of joint and individual terms.
Conclusion: The proposed compressive sensing method performs fast
reconstruction of multi-channel multi-contrast MRI data with improved image
quality. It offers reliability against feature leakage in joint
reconstructions, thereby holding great promise for clinical use.
Comment: 13 pages, 13 figures. Submitted for possible publication.
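The interplay of joint and individual regularization described above can be illustrated through the proximal operators that an ADMM or proximal-gradient solver would apply at each iteration. This is a minimal sketch, not the study's implementation: it assumes a toy group structure in which each transform coefficient is grouped across two contrasts, and the arrays and threshold are illustrative values.

```python
import numpy as np

def prox_l1(X, t):
    # Individual sparsity: per-contrast soft-thresholding (prox of the L1 norm).
    return np.sign(X) * np.maximum(np.abs(X) - t, 0.0)

def prox_group(X, t):
    # Group sparsity: joint soft-thresholding across contrasts.
    # X has shape (n_contrasts, n_coeffs); each column is one group.
    norms = np.linalg.norm(X, axis=0, keepdims=True)
    scale = np.maximum(1.0 - t / np.maximum(norms, 1e-12), 0.0)
    return X * scale

# Toy coefficients for two "contrasts": column 0 is a feature shared by
# both contrasts, column 1 is a feature unique to contrast 0.
X = np.array([[3.0, 2.0],
              [3.0, 0.0]])

print(prox_group(X, 1.0))  # joint term: shrinks whole groups together
print(prox_l1(X, 1.0))     # individual term: thresholds each contrast alone
```

The group operator shrinks a coefficient according to the joint energy across contrasts, which is how a joint term can "leak" a feature present in only one contrast into the others; combining it with the individual operator, as the abstract proposes, counteracts this.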
Fat fraction mapping using bSSFP Signal Profile Asymmetries for Robust multi-Compartment Quantification (SPARCQ)
Purpose: To develop a novel quantitative method for detection of different
tissue compartments based on bSSFP signal profile asymmetries (SPARCQ) and to
provide a validation and proof-of-concept for voxel-wise water-fat separation
and fat fraction mapping. Methods: The SPARCQ framework uses phase-cycled bSSFP
acquisitions to obtain bSSFP signal profiles. For each voxel, the profile is
decomposed into a weighted sum of simulated profiles with specific
off-resonance and relaxation time ratios. From the obtained set of weights,
voxel-wise estimations of the fractions of the different components and their
equilibrium magnetization are extracted. For the entire image volume,
component-specific quantitative maps as well as banding-artifact-free images
are generated. A SPARCQ proof-of-concept was provided for water-fat separation
and fat fraction mapping. Noise robustness was assessed using simulations. A
dedicated water-fat phantom was used to validate fat fractions estimated with
SPARCQ against gold-standard 1H MRS. Quantitative maps were obtained in knees
of six healthy volunteers, and SPARCQ repeatability was evaluated in
scan-rescan experiments. Results: Simulations showed that fat fraction estimations
are accurate and robust for signal-to-noise ratios above 20. Phantom
experiments showed good agreement between SPARCQ and gold-standard (GS) fat
fractions (fF(SPARCQ) = 1.02*fF(GS) + 0.00235). In volunteers, quantitative
maps and banding-artifact-free water-fat-separated images obtained with SPARCQ
demonstrated the expected contrast between fatty and non-fatty tissues. The
coefficient of repeatability of SPARCQ fat fraction was 0.0512. Conclusion: The
SPARCQ framework was proposed as a novel quantitative mapping technique for
detecting different tissue compartments, and its potential was demonstrated for
quantitative water-fat separation.
Comment: 20 pages, 7 figures, submitted to Magnetic Resonance in Medicine.
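The per-voxel decomposition step described above, in which a measured bSSFP profile is expressed as a weighted sum of simulated profiles with non-negative weights, can be sketched as a non-negative least-squares fit against a dictionary. The dictionary below is random toy data standing in for simulated profiles; sizes and values are illustrative assumptions, not the SPARCQ implementation.

```python
import numpy as np
from scipy.optimize import nnls

# Hypothetical dictionary: each column stands in for one simulated bSSFP
# profile (a specific off-resonance and relaxation-time ratio). Here it is
# random toy data with 8 phase cycles and 5 candidate profiles.
rng = np.random.default_rng(0)
D = np.abs(rng.standard_normal((8, 5)))

# Synthesize a "measured" voxel profile as a known two-component mixture.
w_true = np.array([0.7, 0.0, 0.3, 0.0, 0.0])
y = D @ w_true

# Decompose the profile into non-negative component weights.
w, residual = nnls(D, y)
fractions = w / w.sum()  # per-component fractions, as in fat fraction mapping
```

With noiseless data and a full-rank dictionary the recovered weights match the true mixture; in practice, noise robustness (as assessed in the abstract's simulations) limits how finely the off-resonance and relaxation grid can be resolved.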