8 research outputs found

    VORTEX: Physics-Driven Data Augmentations Using Consistency Training for Robust Accelerated MRI Reconstruction

    Full text link
    Deep neural networks have enabled improved image quality and fast inference times for various inverse problems, including accelerated magnetic resonance imaging (MRI) reconstruction. However, such models require a large number of fully-sampled ground-truth datasets, which are difficult to curate, and are sensitive to distribution drifts. In this work, we propose applying physics-driven data augmentations for consistency training that leverage our domain knowledge of the forward MRI data acquisition process and MRI physics to achieve improved label efficiency and robustness to clinically-relevant distribution drifts. Our approach, termed VORTEX, (1) demonstrates strong improvements over supervised baselines with and without data augmentation in robustness to signal-to-noise ratio changes and motion corruption in data-limited regimes; (2) considerably outperforms state-of-the-art purely image-based data augmentation techniques and self-supervised reconstruction methods on both in-distribution and out-of-distribution data; and (3) enables composing heterogeneous image-based and physics-driven data augmentations. Our code is available at https://github.com/ad12/meddlr. (Accepted to MIDL 2022.)
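
    The core consistency-training idea is compact enough to sketch. The PyTorch snippet below is a minimal illustration, not the meddlr implementation: the model's (kspace, mask) signature and the single additive-noise augmentation are assumptions, and VORTEX itself composes multiple physics-driven corruptions (e.g., noise and motion) and pairs this term with a supervised loss on the labeled subset.

        import torch
        import torch.nn.functional as F

        def consistency_loss(model, kspace, mask, noise_std=0.01):
            """One VORTEX-style consistency term: the reconstruction of a
            physics-augmented input should agree with the reconstruction of
            the clean input. Here `kspace` is a real-valued tensor with the
            real/imaginary parts stacked as channels."""
            # Pseudo-label: reconstruction of the clean undersampled k-space,
            # held fixed (no gradient flows through the target branch).
            with torch.no_grad():
                target = model(kspace, mask)

            # Physics-driven augmentation in the measurement domain: additive
            # Gaussian noise in k-space, emulating an acquisition SNR shift.
            recon_aug = model(kspace + noise_std * torch.randn_like(kspace), mask)

            return F.mse_loss(recon_aug, target)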

    Quantitative magnetic particle imaging monitors the transplantation, biodistribution, and clearance of stem cells in vivo

    No full text
    Stem cell therapies have enormous potential for treating many debilitating diseases, including heart failure, stroke, and traumatic brain injury. For maximal efficacy, these therapies require targeted cell delivery to specific tissues followed by successful cell engraftment. However, targeted delivery remains an open challenge. As one example, it is common for intravenous deliveries of mesenchymal stem cells (MSCs) to become entrapped in lung microvasculature instead of the target tissue. Hence, a robust, quantitative imaging method would be essential for developing efficacious cell therapies. Here we show that Magnetic Particle Imaging (MPI), a novel technique that directly images iron-oxide nanoparticle-tagged cells, can longitudinally monitor and quantify MSC administration in vivo. MPI offers near-ideal image contrast, depth penetration, and robustness; these properties make MPI both ultra-sensitive and linearly quantitative. Here, we imaged, for the first time, the dynamic trafficking of intravenous MSC administrations using MPI. Our results indicate that labeled MSC injections are immediately entrapped in lung tissue and then clear to the liver within one day, whereas standard iron-oxide particle (Resovist) injections are immediately taken up by the liver and spleen. Longitudinal MPI-CT imaging also indicated a clearance half-life of 4.6 days for the MSC iron-oxide labels in the liver. Finally, our ex vivo MPI biodistribution measurements of iron in the liver, spleen, heart, and lungs after injection showed excellent agreement (R² = 0.943) with measurements from inductively coupled plasma spectrometry. These results demonstrate that MPI offers strong utility for noninvasively imaging and quantifying the systemic distribution of cell therapies and other therapeutic agents.
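
    As a quick sanity check on the reported kinetics: first-order clearance with a 4.6-day half-life implies the fraction of label remaining decays as 0.5^(t / 4.6). A one-liner makes the timescale concrete (a sketch, assuming simple exponential clearance):

        # Fraction of the MSC iron-oxide label remaining in the liver after
        # t days, assuming first-order (exponential) clearance.
        HALF_LIFE_DAYS = 4.6

        def fraction_remaining(t_days: float) -> float:
            return 0.5 ** (t_days / HALF_LIFE_DAYS)

        print(fraction_remaining(4.6))   # 0.5 after one half-life
        print(fraction_remaining(14.0))  # ~0.12 after two weeks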

    Improving Data-Efficiency and Robustness of Medical Imaging Segmentation Using Inpainting-Based Self-Supervised Learning

    No full text
    We systematically evaluate the training methodology and efficacy of two inpainting-based pretext tasks, context prediction and context restoration, for medical image segmentation using self-supervised learning (SSL). Multiple versions of self-supervised U-Net models were trained to segment MRI and CT datasets, each using a different combination of design choices and pretext tasks to determine the effect of these design choices on segmentation performance. The optimal design choices were used to train SSL models that were then compared with baseline supervised models on clinically-relevant metrics in label-limited scenarios. We observed that SSL pretraining with context restoration using 32 × 32 patches and Poisson-disc sampling, transferring only the pretrained encoder weights, and fine-tuning immediately with an initial learning rate of 1 × 10⁻³ provided the most benefit over supervised learning for MRI and CT tissue segmentation accuracy (p < 0.001). For both datasets and most label-limited scenarios, scaling up the size of the unlabeled pretraining data improved segmentation performance. SSL models pretrained on the largest unlabeled dataset outperformed baseline supervised models on clinically-relevant metrics, especially when the performance of supervised learning was low. Our results demonstrate that SSL pretraining using inpainting-based pretext tasks can help increase the robustness of models in label-limited scenarios and reduce the worst-case errors that occur with supervised learning.
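
    The winning recipe is concrete enough to outline. The PyTorch sketch below is an illustration rather than the authors' code: the U-Net objects, their `encoder` attribute, the patch count, and the uniform random patch placement (standing in for true Poisson-disc sampling) are all assumptions.

        import torch
        import torch.nn.functional as F

        def corrupt(images, patch=32, n_patches=8):
            """Context-restoration corruption: zero out `patch` x `patch`
            regions of each image. (The paper places patches with Poisson-disc
            sampling; uniform random placement is a simplifying stand-in.)"""
            x = images.clone()
            _, _, h, w = x.shape
            for img in x:
                for _ in range(n_patches):
                    i = int(torch.randint(0, h - patch + 1, (1,)))
                    j = int(torch.randint(0, w - patch + 1, (1,)))
                    img[:, i:i + patch, j:j + patch] = 0.0
            return x

        def pretrain_context_restoration(unet, unlabeled_loader, epochs=1):
            """Pretext task: restore the original slice from its corrupted
            version; no segmentation labels are required."""
            opt = torch.optim.Adam(unet.parameters())
            for _ in range(epochs):
                for images in unlabeled_loader:
                    loss = F.mse_loss(unet(corrupt(images)), images)
                    opt.zero_grad()
                    loss.backward()
                    opt.step()

        def transfer_and_finetune_optimizer(unet, seg_model):
            """Transfer only the pretrained encoder weights, then fine-tune the
            full segmentation model immediately at the reported initial
            learning rate of 1e-3."""
            seg_model.encoder.load_state_dict(unet.encoder.state_dict())
            return torch.optim.Adam(seg_model.parameters(), lr=1e-3)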
