
    A Learnable Variational Model for Joint Multimodal MRI Reconstruction and Synthesis

    Generating multi-contrast/multi-modal MRI of the same anatomy enriches diagnostic information but is limited in practice by excessive data acquisition time. In this paper, we propose a novel deep-learning model for joint reconstruction and synthesis of multi-modal MRI using incomplete k-space data of several source modalities as inputs. The output of our model includes reconstructed images of the source modalities and a high-quality image synthesized in the target modality. Our proposed model is formulated as a variational problem that leverages several learnable modality-specific feature extractors and a multimodal synthesis module. We propose a learnable optimization algorithm to solve this model, which induces a multi-phase network whose parameters can be trained using multi-modal MRI data. Moreover, a bilevel-optimization framework is employed for robust parameter training. We demonstrate the effectiveness of our approach with extensive numerical experiments.
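    To make the induced multi-phase network concrete, here is a minimal sketch, assuming PyTorch; the proximal networks, the FFT-based data-consistency step, and all layer sizes are illustrative assumptions on our part, not the authors' released model.

```python
# Hypothetical sketch of an unrolled network for joint multi-modal MRI
# reconstruction and synthesis; architecture details are assumptions.
import torch
import torch.nn as nn

class ProxNet(nn.Module):
    """Learned modality-specific refinement (proximal-style) step."""
    def __init__(self, ch=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(ch, 1, 3, padding=1),
        )
    def forward(self, x):
        return x + self.net(x)  # residual refinement

def data_consistency(x, k0, mask):
    """Replace sampled k-space locations with the measured data."""
    k = torch.fft.fft2(x.to(torch.complex64))
    k = torch.where(mask.bool(), k0, k)
    return torch.fft.ifft2(k).real

class JointReconSynth(nn.Module):
    def __init__(self, n_sources=2, n_phases=5):
        super().__init__()
        self.phases = n_phases
        self.prox = nn.ModuleList(ProxNet() for _ in range(n_sources))
        # synthesis module: fuse reconstructed sources -> target modality
        self.synth = nn.Sequential(
            nn.Conv2d(n_sources, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1),
        )
    def forward(self, k0s, masks):
        # k0s: list of undersampled complex k-space tensors (B, 1, H, W)
        xs = [torch.fft.ifft2(k).real for k in k0s]  # zero-filled init
        for _ in range(self.phases):
            xs = [data_consistency(self.prox[i](x), k0s[i], masks[i])
                  for i, x in enumerate(xs)]
        target = self.synth(torch.cat(xs, dim=1))
        return xs, target
```

    Each phase alternates a learned refinement with a hard projection onto the measured k-space samples, and a final synthesis head fuses the reconstructed source images into the target modality.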

    Local Implicit Neural Representations for Multi-Sequence MRI Translation

    In radiological practice, multi-sequence MRI is routinely acquired to characterize anatomy and tissue. However, due to the heterogeneity of imaging protocols and contraindications to contrast agents, some MRI sequences, e.g. the contrast-enhanced T1-weighted image (T1ce), may not be acquired. This creates difficulties for large-scale clinical studies in which heterogeneous datasets are aggregated. Modern deep learning techniques have demonstrated the capability of synthesizing missing sequences from existing ones by learning from extensive multi-sequence MRI datasets. In this paper, we propose a novel MR image translation solution based on local implicit neural representations. We split the available MRI sequences into local patches and assign to each patch a local multi-layer perceptron (MLP) that represents the corresponding patch in the T1ce image. The parameters of these local MLPs are generated by a hypernetwork based on image features. Experimental results and ablation studies on the BraTS challenge dataset showed that the local MLPs are critical for recovering fine image and tumor details, as they allow for the local specialization that accurate image translation requires. Compared to a classical pix2pix model, the proposed method demonstrated visual improvement and significantly better quantitative scores (MSE 0.86 x 10^-3 vs. 1.02 x 10^-3 and SSIM 94.9 vs. 94.3).
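    The core mechanism, a hypernetwork that emits the weights of a small per-patch MLP, can be sketched as follows; this assumes PyTorch, and the two-layer MLP, feature dimension, and weight layout are hypothetical choices rather than the paper's exact configuration.

```python
# Hypothetical sketch of hypernetwork-generated local MLPs for
# patch-wise MR image translation; all sizes are illustrative.
import torch
import torch.nn as nn

class LocalINR(nn.Module):
    def __init__(self, feat_dim=64, hidden=32):
        super().__init__()
        self.hidden = hidden
        # parameter count of a 2-layer MLP: coords(2) -> hidden -> 1
        n_params = (2 * hidden + hidden) + (hidden * 1 + 1)
        self.hyper = nn.Linear(feat_dim, n_params)

    def forward(self, patch_feat, coords):
        # patch_feat: (B, feat_dim) one feature vector per local patch
        # coords:     (B, N, 2) normalized pixel coordinates in the patch
        h = self.hidden
        p = self.hyper(patch_feat)              # per-patch MLP weights
        w1 = p[:, : 2 * h].view(-1, 2, h)
        b1 = p[:, 2 * h : 3 * h].view(-1, 1, h)
        w2 = p[:, 3 * h : 4 * h].view(-1, h, 1)
        b2 = p[:, 4 * h :].view(-1, 1, 1)
        x = torch.relu(coords @ w1 + b1)        # batched per-patch layer
        return x @ w2 + b2                      # predicted T1ce intensities
```

    Because every patch gets its own weights, each local MLP can specialize to local anatomy, which is the property the ablations identify as critical for recovering fine detail.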

    ResViT: Residual vision transformers for multi-modal medical image synthesis

    Multi-modal imaging is a key healthcare technology that is often underutilized due to the costs of multiple separate scans. This limitation motivates the synthesis of unacquired modalities from the subset of available ones. In recent years, generative adversarial network (GAN) models with superior depiction of structural details have been established as state of the art in numerous medical image synthesis tasks. GANs are characteristically based on convolutional neural network (CNN) backbones that perform local processing with compact filters. This inductive bias compromises the learning of contextual features. Here, we propose a novel generative adversarial approach for medical image synthesis, ResViT, that combines the local precision of convolution operators with the contextual sensitivity of vision transformers. ResViT employs a central bottleneck comprising novel aggregated residual transformer (ART) blocks that synergistically combine convolutional and transformer modules. Comprehensive demonstrations are performed for synthesizing missing sequences in multi-contrast MRI and for synthesizing CT images from MRI. Our results indicate the superiority of ResViT over competing methods in terms of qualitative observations and quantitative metrics.
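    A minimal sketch of such a hybrid bottleneck block, in the spirit of the ART blocks, is shown below; this assumes PyTorch, and the aggregation scheme, channel width, and head count are our own illustrative assumptions, not ResViT's exact design.

```python
# Hypothetical conv+transformer bottleneck block combining a local
# convolutional path with a global self-attention path.
import torch
import torch.nn as nn

class ConvTransformerBlock(nn.Module):
    def __init__(self, ch=256, heads=8):
        super().__init__()
        self.conv = nn.Sequential(                 # local detail path
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(ch, ch, 3, padding=1),
        )
        self.attn = nn.TransformerEncoderLayer(    # global context path
            d_model=ch, nhead=heads, dim_feedforward=2 * ch,
            batch_first=True)
        self.fuse = nn.Conv2d(2 * ch, ch, 1)       # aggregate both paths

    def forward(self, x):                          # x: (B, C, H, W)
        local = self.conv(x)
        b, c, hgt, wdt = x.shape
        tokens = x.flatten(2).transpose(1, 2)      # (B, H*W, C)
        ctx = self.attn(tokens).transpose(1, 2).view(b, c, hgt, wdt)
        return x + self.fuse(torch.cat([local, ctx], dim=1))
```

    The convolutional path preserves fine structure while the attention path captures long-range context; fusing the two at the bottleneck is what the paper's ART blocks are designed to do.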

    Brain MRI Tumor Segmentation with Adversarial Networks

    Deep learning is a promising approach to either automate or simplify several tasks in the healthcare domain. In this work, we introduce SegAN-CAT, an approach to brain tumor segmentation in Magnetic Resonance Images (MRI) based on adversarial networks. In particular, we extend SegAN, successfully applied to the same task in a previous work, in two respects: (i) we use a different model input and (ii) we employ a modified loss function to train the model. We tested our approach on two large datasets made available by the Brain Tumor Image Segmentation Benchmark (BraTS). First, we trained and tested some segmentation models assuming the availability of all the major MRI contrast modalities, i.e., T1-weighted, T1-weighted contrast-enhanced, T2-weighted, and T2-FLAIR. However, as these four modalities are not always available for each patient, we also trained and tested four segmentation models that take as input MRIs acquired with only a single contrast modality. Finally, we proposed applying transfer learning across different contrast modalities to improve the performance of these single-modality models. Our results are promising and show not only that SegAN-CAT is able to outperform SegAN when all four modalities are available, but also that transfer learning can actually lead to better performance when only a single modality is available.
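    For orientation, a SegAN-style adversarial training step can be sketched as follows; this is a generic sketch assuming PyTorch, where the critic compares hierarchical features of the image masked by the predicted versus the ground-truth segmentation. The network definitions and optimizer setup are hypothetical, and this does not include SegAN-CAT's modified input or loss.

```python
# Hypothetical SegAN-style adversarial segmentation training step.
import torch

def critic_feature_loss(critic, image, pred_mask, true_mask):
    """Multi-scale L1 loss between critic features (SegAN-style).
    `critic` is assumed to return a list of feature maps."""
    f_pred = critic(image * pred_mask)
    f_true = critic(image * true_mask)
    return sum(torch.mean(torch.abs(a - b))
               for a, b in zip(f_pred, f_true))

def train_step(segmentor, critic, opt_s, opt_c, image, true_mask):
    # 1) critic maximizes the feature discrepancy
    pred = segmentor(image).detach()
    loss_c = -critic_feature_loss(critic, image, pred, true_mask)
    opt_c.zero_grad(); loss_c.backward(); opt_c.step()
    # 2) segmentor minimizes it
    pred = segmentor(image)
    loss_s = critic_feature_loss(critic, image, pred, true_mask)
    opt_s.zero_grad(); loss_s.backward(); opt_s.step()
    return loss_s.item()
```

    Transfer across contrast modalities then amounts to initializing a single-modality segmentor with weights trained on another modality before fine-tuning with this loop.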

    Enhanced Magnetic Resonance Image Synthesis with Contrast-Aware Generative Adversarial Networks

    A Magnetic Resonance Imaging (MRI) exam typically consists of the acquisition of multiple MR pulse sequences, which are required for a reliable diagnosis. Each sequence can be parameterized through multiple acquisition parameters that affect MR image contrast, signal-to-noise ratio, resolution, or scan time. With the rise of generative deep learning models, approaches for the synthesis of MR images have been developed to synthesize additional MR contrasts, generate synthetic data, or augment existing data for AI training. However, current generative approaches for the synthesis of MR images are trained only on images with a specific set of acquisition parameter values, limiting the clinical value of these methods, as various acquisition parameter settings are used in clinical practice. We therefore trained a generative adversarial network (GAN) to generate synthetic MR knee images conditioned on various acquisition parameters (repetition time, echo time, image orientation). This approach enables us to synthesize MR images with adjustable image contrast. In a visual Turing test, two experts mislabeled 40.5% of real and synthetic MR images, demonstrating that the image quality of the generated synthetic MR images is comparable to that of real ones. This work can support radiologists and technologists during the parameterization of MR sequences by previewing the resulting MR contrast, can serve as a valuable tool for radiology training, and can be used for customized data generation to support AI training.
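    Conditioning a generator on acquisition parameters can be sketched as follows; this assumes PyTorch, and the embedding scheme, latent size, and upsampling stack are illustrative assumptions, not the paper's architecture.

```python
# Hypothetical GAN generator conditioned on acquisition parameters
# (repetition time TR, echo time TE, image orientation).
import torch
import torch.nn as nn

class ParamConditionedGenerator(nn.Module):
    def __init__(self, z_dim=128, n_orient=3, emb_dim=32):
        super().__init__()
        # continuous TR/TE plus a learned orientation embedding
        self.orient_emb = nn.Embedding(n_orient, emb_dim)
        self.fc = nn.Linear(z_dim + 2 + emb_dim, 256 * 8 * 8)
        self.up = nn.Sequential(
            nn.ConvTranspose2d(256, 128, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 1, 4, stride=2, padding=1), nn.Tanh(),
        )
    def forward(self, z, tr, te, orient):
        # z: (B, z_dim); tr, te: (B, 1) normalized; orient: (B,) index
        cond = torch.cat([z, tr, te, self.orient_emb(orient)], dim=1)
        x = self.fc(cond).view(-1, 256, 8, 8)
        return self.up(x)   # (B, 1, 64, 64) synthetic knee image
```

    Varying tr and te at inference time while holding z fixed is what makes the synthesized contrast adjustable, which is the basis of the contrast-preview use case the abstract describes.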