
    Adversarial Inpainting of Medical Image Modalities

    Numerous factors can lead to partial deterioration of medical images. For example, metallic implants cause localized perturbations in MRI scans, which affect downstream post-processing tasks such as attenuation correction in PET/MRI or radiation therapy planning. In this work, we propose the inpainting of medical images via Generative Adversarial Networks (GANs). The proposed framework incorporates two patch-based discriminator networks with additional style and perceptual losses to inpaint the missing information in a realistically detailed and contextually consistent manner. The proposed framework outperformed other natural-image inpainting techniques both qualitatively and quantitatively on two different medical modalities. Comment: To be submitted to ICASSP 201
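
    The loss combination above can be sketched concretely. The following PyTorch snippet is a minimal illustration, assuming two patch-based discriminators, a feature extractor for the perceptual term, and Gram matrices for the style term; all network definitions and loss weights are placeholders rather than the authors' published configuration.

```python
# Illustrative sketch (not the authors' code): generator objective combining adversarial
# feedback from two patch-based discriminators with perceptual and style (Gram-matrix) losses.
import torch
import torch.nn.functional as F

def gram_matrix(feat):
    # (B, C, H, W) -> (B, C, C): second-order feature statistics used by the style loss.
    b, c, h, w = feat.shape
    f = feat.view(b, c, h * w)
    return torch.bmm(f, f.transpose(1, 2)) / (c * h * w)

def generator_loss(fake, target, discriminators, feat_extractor,
                   w_adv=1.0, w_perc=10.0, w_style=100.0):
    # Adversarial term: each patch discriminator returns a map of real/fake logits.
    adv = sum(F.binary_cross_entropy_with_logits(d(fake), torch.ones_like(d(fake)))
              for d in discriminators)
    # Perceptual term: L1 distance between intermediate features of output and target.
    f_fake, f_real = feat_extractor(fake), feat_extractor(target)
    perc = F.l1_loss(f_fake, f_real)
    # Style term: match feature correlations (textures) via Gram matrices.
    style = F.l1_loss(gram_matrix(f_fake), gram_matrix(f_real))
    return w_adv * adv + w_perc * perc + w_style * style
```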

    Retrospective correction of Rigid and Non-Rigid MR motion artifacts using GANs

    Motion artifacts are a primary source of magnetic resonance (MR) image quality deterioration, with strong repercussions on diagnostic performance. Currently, MR motion correction is carried out either prospectively, with the help of motion-tracking systems, or retrospectively, mainly by means of computationally expensive iterative algorithms. In this paper, we utilize a new adversarial framework, titled MedGAN, for the joint retrospective correction of rigid and non-rigid motion artifacts in different body regions and without the need for a reference image. MedGAN utilizes a unique combination of non-adversarial losses and a new generator architecture to capture the textures and fine-detailed structures of the desired artifact-free MR images. Quantitative and qualitative comparisons with other adversarial techniques illustrate the performance of the proposed model. Comment: 5 pages, 2 figures, under review for the IEEE International Symposium on Biomedical Imaging
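
    As a concrete illustration of the reference-free, retrospective setting described above, the sketch below passes a motion-corrupted MR slice through a trained image-to-image generator at inference time. The `Generator` stand-in, its depth, and the random input are hypothetical placeholders, not the MedGAN architecture.

```python
# Illustrative sketch (not MedGAN itself): retrospective correction needs only the
# corrupted slice at inference time, no reference image or motion-tracking data.
import torch
import torch.nn as nn

class Generator(nn.Module):
    # Stand-in convolutional generator; the paper uses a more elaborate architecture.
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 1, 3, padding=1),
        )

    def forward(self, x):
        return self.net(x)

@torch.no_grad()
def correct_slice(generator, corrupted_slice):
    # corrupted_slice: (H, W) normalized MR magnitude image.
    x = corrupted_slice.unsqueeze(0).unsqueeze(0)  # add batch and channel dimensions
    return generator(x).squeeze()

generator = Generator().eval()                     # in practice: load trained weights here
corrected = correct_slice(generator, torch.rand(256, 256))
```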

    MedGAN: Medical Image Translation using GANs

    Image-to-image translation is considered a new frontier in the field of medical image analysis, with numerous potential applications. However, a large portion of recent approaches offers individualized solutions based on specialized, task-specific architectures or requires refinement through non-end-to-end training. In this paper, we propose a new framework, named MedGAN, for medical image-to-image translation which operates on the image level in an end-to-end manner. MedGAN builds upon recent advances in the field of generative adversarial networks (GANs) by merging the adversarial framework with a new combination of non-adversarial losses. We utilize a discriminator network as a trainable feature extractor which penalizes the discrepancy between the translated medical images and the desired modalities. Moreover, style-transfer losses are utilized to match the textures and fine structures of the desired target images to the translated images. Additionally, we present a new generator architecture, titled CasNet, which enhances the sharpness of the translated medical outputs through progressive refinement via encoder-decoder pairs. Without any application-specific modifications, we apply MedGAN to three different tasks: PET-CT translation, correction of MR motion artefacts, and PET image denoising. Perceptual analysis by radiologists and quantitative evaluations illustrate that MedGAN outperforms other existing translation approaches. Comment: 16 pages, 8 figures
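
    The progressive-refinement idea behind CasNet can be sketched as a chain of encoder-decoder blocks, each refining the output of the previous one. The block internals, the residual connection, and the number of blocks below are illustrative assumptions, not the published CasNet specification.

```python
# Illustrative sketch of cascaded encoder-decoder refinement (not the published CasNet).
import torch
import torch.nn as nn

class EncoderDecoderBlock(nn.Module):
    def __init__(self, ch=1, width=32):
        super().__init__()
        self.encode = nn.Sequential(nn.Conv2d(ch, width, 4, stride=2, padding=1), nn.ReLU())
        self.decode = nn.ConvTranspose2d(width, ch, 4, stride=2, padding=1)

    def forward(self, x):
        # Each block predicts a refinement that is added to its input.
        return x + self.decode(self.encode(x))

class CascadedGenerator(nn.Module):
    def __init__(self, n_blocks=6):
        super().__init__()
        self.blocks = nn.ModuleList([EncoderDecoderBlock() for _ in range(n_blocks)])

    def forward(self, x):
        for block in self.blocks:   # progressively sharpen the translated image
            x = block(x)
        return x

translated = CascadedGenerator()(torch.rand(1, 1, 256, 256))
```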

    ipA-MedGAN: Inpainting of Arbitrary Regions in Medical Imaging

    Local deformations in medical modalities are common phenomena due to a multitude of factors, such as metallic implants or limited fields of view in magnetic resonance imaging (MRI). Completion of the missing or distorted regions is of special interest for automatic image analysis frameworks, as it enhances post-processing tasks such as segmentation or classification. In this work, we propose a new generative framework for medical image inpainting, titled ipA-MedGAN. It bypasses the limitations of previous frameworks by enabling the inpainting of arbitrarily shaped regions without prior localization of the regions of interest. Thorough qualitative and quantitative comparisons with other inpainting and translation approaches illustrate the superior performance of the proposed framework for the task of brain MR inpainting. Comment: Submitted to IEEE ICIP 202
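
    The training setup implied by inpainting arbitrarily shaped regions without prior localization can be sketched as follows: holes of arbitrary shape are cut into a clean brain MR slice, and the generator receives only the corrupted image, with no mask channel. The rectangle-union mask below is an illustrative stand-in for arbitrary masks, not the authors' data pipeline.

```python
# Illustrative sketch: build (corrupted, clean) training pairs where the mask is never
# handed to the network, so no prior localization of the distorted region is required.
import numpy as np

def random_arbitrary_mask(shape, n_blobs=3, rng=None):
    # Union of a few random rectangles as a stand-in for arbitrarily shaped holes.
    rng = rng or np.random.default_rng()
    mask = np.zeros(shape, dtype=bool)
    for _ in range(n_blobs):
        h, w = rng.integers(10, 40, size=2)
        y, x = rng.integers(0, shape[0] - h), rng.integers(0, shape[1] - w)
        mask[y:y + h, x:x + w] = True
    return mask

def make_training_pair(clean_slice, rng=None):
    mask = random_arbitrary_mask(clean_slice.shape, rng=rng)
    corrupted = clean_slice.copy()
    corrupted[mask] = 0.0           # blank out the "distorted" region
    return corrupted, clean_slice   # generator input and target; the mask itself is discarded

corrupted, target = make_training_pair(np.random.rand(256, 256))
```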

    Effect of acquisition techniques, latest kernels, and advanced monoenergetic post-processing for stent visualization with third-generation dual-source CT

    PURPOSE: The purpose of this study is to systematically evaluate the effect of tube voltage, current kernels, and monoenergetic post-processing on stent visualization.
    METHODS: A 6 mm chrome-cobalt peripheral stent was placed in a dedicated phantom and scanned with the available tube voltage settings of a third-generation dual-source scanner in single-energy (SE) and dual-energy (DE) mode. Images were reconstructed using the latest convolution kernels and, for DE, monoenergetic reconstructions (40-190 keV). The sharpness of the stent struts (S), strut width (SW), contrast-to-noise ratio (CNR), and pseudoenhancement (PE) between the vessel with and without the stent were analyzed using an in-house automatic analysis tool. Measurements were standardized as z-scores, which were combined into stent (SQ = S + SW), luminal (LQ = CNR + SW + PE), and overall depiction quality (OQ = S + SW + CNR + PE). Two readers rated overall stent depiction on a 5-point Likert scale. Agreement was calculated using linearly weighted kappa, and correlations were calculated using the Spearman correlation coefficient.
    RESULTS: Maximum values of S and CNR were 169.1 HU/pixel for [DE; 100/Sn 150 kV; Qr59; 40 keV] and 50.0 for [SE; 70 kV; Bv36], respectively. Minimum values of SW and PE were 2.615 mm for [DE; 80 to 90/Sn 150 kV; Qr59; 140 to 190 keV] and 0.12 HU for [DE; 80/Sn 150 kV; Qr36; 190 keV]. The best combined z-scores for SQ, LQ, and OQ were 4.53 for [DE; 100/Sn 150 kV; Qr59; 40 keV], 1.23 for [DE; 100/Sn 150 kV; Qr59; 140 keV], and 2.95 for [DE; 90/Sn 150 kV; Qr59; 50 keV]. The best OQ for SE ranked third, at 2.89 for [SE; 90 kV; Bv59]. Subjective agreement was excellent (kappa = 0.86; P < .001) and correlated well with OQ (rs = 0.94, P < .001).
    CONCLUSION: Combining DE computed tomography (CT) acquisition with the latest kernels and monoenergetic post-processing allows for improved stent visualization compared with SE CT. The best overall results were obtained for 50 keV monoenergetic reconstructions from DE CT 90/Sn 150 kV acquisitions using kernel Qr59.
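
    The score combination can be made explicit with a short numeric sketch: each metric is standardized as a z-score across the tested settings, and the z-scores are summed into SQ, LQ, and OQ. The values below are made-up placeholders, and flipping the sign of the lower-is-better metrics (SW, PE) before summing is an assumption, not something stated in the study.

```python
# Numeric sketch of combining standardized metrics into SQ, LQ, and OQ (toy values only).
import numpy as np

def zscore(x):
    x = np.asarray(x, dtype=float)
    return (x - x.mean()) / x.std()

# One entry per acquisition/reconstruction setting.
S   = zscore([110.0, 150.0, 95.0])   # strut sharpness (HU/pixel), higher is better
SW  = -zscore([3.1, 2.8, 2.7])       # strut width (mm), lower is better -> sign flipped (assumption)
CNR = zscore([30.0, 42.0, 48.0])     # contrast-to-noise ratio, higher is better
PE  = -zscore([6.0, 1.5, 0.5])       # pseudoenhancement (HU), lower is better -> sign flipped (assumption)

SQ = S + SW                          # stent depiction quality
LQ = CNR + SW + PE                   # luminal depiction quality
OQ = S + SW + CNR + PE               # overall depiction quality
print("setting with best overall depiction quality:", int(OQ.argmax()))
```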

    The Spatial Relationship between Apparent Diffusion Coefficient and Standardized Uptake Value of 18F-FDG

    The minimum apparent diffusion coefficient (ADCmin) derived from diffusion-weighted MRI (DW-MRI) and the maximum standardized uptake value (SUVmax) from FDG-PET are markers of aggressiveness in lung cancer. The numeric correlation of the two parameters has been extensively studied, but their spatial interplay is not well understood. After coregistration of FDG-PET and DW-MRI, the values and locations of the ADCmin and SUVmax voxels were analyzed. The upper limit of the 95% confidence interval for the registration accuracy of sequential PET/MRI was 12 mm, and the mean distance (D) between the ADCmin and SUVmax voxels was 14.0 mm (average of two readers). A spatial mismatch (D > 12 mm) between ADCmin and SUVmax was found in 9/25 patients. A considerable number of mismatch cases (65%) was also seen in a control group that underwent simultaneous PET/MRI. In the entire patient cohort, no statistically significant correlation between SUVmax and ADCmin was seen, whereas a moderate negative linear relationship (r = -0.5) between SUVmax and ADCmin was observed in tumors with a spatial match (D ≤ 12 mm). In conclusion, a spatial mismatch between ADCmin and SUVmax is found in a considerable percentage of patients, and the spatial relationship of the two parameters has a crucial influence on their numeric correlation.
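
    The spatial analysis above can be sketched in a few lines: after coregistration, compute the physical distance D between the ADCmin voxel (DW-MRI) and the SUVmax voxel (PET), classify spatial match versus mismatch against the 12 mm threshold, and correlate SUVmax with ADCmin only in the matched cases. The voxel spacing, toy data, and use of Pearson's r for the reported linear correlation are assumptions for illustration.

```python
# Illustrative sketch of the match/mismatch analysis (toy data, not study data).
import numpy as np
from scipy.stats import pearsonr

def voxel_distance_mm(idx_a, idx_b, spacing_mm):
    # Euclidean distance between two voxel indices, converted to millimetres.
    return float(np.linalg.norm((np.asarray(idx_a) - np.asarray(idx_b)) * np.asarray(spacing_mm)))

rng = np.random.default_rng(0)
spacing = (2.0, 2.0, 3.0)                                # assumed voxel spacing in mm
adc_idx = rng.integers(0, 64, size=(25, 3))              # ADCmin voxel index per patient
suv_idx = adc_idx + rng.integers(-4, 5, size=(25, 3))    # SUVmax voxel index per patient
suv_max = rng.uniform(2.0, 15.0, size=25)
adc_min = rng.uniform(0.5, 1.5, size=25)

D = np.array([voxel_distance_mm(a, s, spacing) for a, s in zip(adc_idx, suv_idx)])
matched = D <= 12.0                                      # D > 12 mm counts as a spatial mismatch
print(f"mismatch cases: {(~matched).sum()}/25")
if matched.sum() >= 2:
    r, p = pearsonr(suv_max[matched], adc_min[matched])  # correlation within matched cases only
    print(f"r = {r:.2f} (p = {p:.3f})")
```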