
    TPSDicyc: Improved deformation invariant cross-domain medical image synthesis

    Cycle-consistent generative adversarial network (CycleGAN) has been widely used for cross-domain medical image synthesis tasks, particularly due to its ability to deal with unpaired data. However, most CycleGAN-based synthesis methods cannot achieve good alignment between the synthesized images and data from the source domain, even with additional image alignment losses. This is because the CycleGAN generator network can encode the relative deformations and noise associated with different domains. This can be detrimental to downstream applications that rely on the synthesized images, such as generating pseudo-CT for PET-MR attenuation correction. In this paper, we present a deformation-invariant model based on the deformation-invariant CycleGAN (DicycleGAN) architecture and a spatial transformation network (STN) using thin-plate splines (TPS). The proposed method can be trained with unpaired and unaligned data, and generates synthesized images aligned with the source data. Robustness to the presence of relative deformations between data from the source and target domains has been evaluated through experiments on multi-sequence brain MR data and multi-modality abdominal CT and MR data. Experimental results demonstrate that our method achieves better alignment between the source and target data while maintaining superior image quality compared to several state-of-the-art CycleGAN-based methods.
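    The central idea above is to separate appearance translation from spatial deformation: a thin-plate-spline (TPS) spatial transformer absorbs the relative deformation between domains so that the generator only has to learn the intensity mapping. The sketch below illustrates just the TPS warping component, assuming SciPy's RBFInterpolator as the spline solver; the control points and image are synthetic placeholders, and this is not the authors' trained STN.

```python
# Minimal sketch of thin-plate-spline (TPS) warping: control-point
# displacements are interpolated into a dense deformation field, which is
# then used to resample an image (backward warping). Illustrative only.
import numpy as np
from scipy.interpolate import RBFInterpolator
from scipy.ndimage import map_coordinates

def tps_warp(image, src_pts, dst_pts):
    """Warp `image` so that points at `src_pts` move towards `dst_pts`.

    src_pts, dst_pts: (N, 2) arrays of (row, col) control points.
    """
    h, w = image.shape
    # TPS interpolator mapping output coordinates back to input coordinates.
    tps = RBFInterpolator(dst_pts, src_pts, kernel="thin_plate_spline")
    rows, cols = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    grid = np.stack([rows.ravel(), cols.ravel()], axis=1).astype(float)
    sample_coords = tps(grid).T.reshape(2, h, w)  # (2, H, W) sampling grid
    return map_coordinates(image, sample_coords, order=1, mode="nearest")

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    img = rng.random((128, 128))                    # placeholder "MR slice"
    src = np.array([[32, 32], [32, 96], [96, 32], [96, 96], [64, 64]], float)
    dst = src + rng.normal(0, 4, src.shape)         # small random displacements
    warped = tps_warp(img, src, dst)
    print(warped.shape)
```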

    Three-dimensional Segmentation of the Scoliotic Spine from MRI using Unsupervised Volume-based MR-CT Synthesis

    Vertebral bone segmentation from magnetic resonance (MR) images is a challenging task. Due to the inherent nature of the modality to emphasize soft tissues of the body, common thresholding algorithms are ineffective in detecting bones in MR images. On the other hand, it is relatively easier to segment bones from CT images because of the high contrast between bones and the surrounding regions. For this reason, we perform a cross-modality synthesis between MR and CT domains for simple thresholding-based segmentation of the vertebral bones. However, this implicitly assumes the availability of paired MR-CT data, which is rare, especially in the case of scoliotic patients. In this paper, we present a completely unsupervised, fully three-dimensional (3D) cross-modality synthesis method for segmenting scoliotic spines. A 3D CycleGAN model is trained for an unpaired volume-to-volume translation across MR and CT domains. Then, the Otsu thresholding algorithm is applied to the synthesized CT volumes for easy segmentation of the vertebral bones. The resulting segmentation is used to reconstruct a 3D model of the spine. We validate our method on 28 scoliotic vertebrae in 3 patients by computing the point-to-surface mean distance between the landmark points for each vertebra obtained from pre-operative X-rays and the surface of the segmented vertebra. Our study results in a mean error of 3.41 ± 1.06 mm. Based on qualitative and quantitative results, we conclude that our method is able to obtain a good segmentation and 3D reconstruction of scoliotic spines, all after training from unpaired data in an unsupervised manner. Comment: To appear in the Proceedings of the SPIE Medical Imaging Conference 2021, San Diego, CA. 9 pages, 4 figures in total.
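    The segmentation step described above relies on the high bone contrast of the synthesized CT rather than on a learned segmentation model. The sketch below illustrates that step, assuming scikit-image for Otsu thresholding and marching cubes for the 3D surface reconstruction; the random volume stands in for the output of the 3D CycleGAN, which is not reproduced here.

```python
# Minimal sketch: threshold a synthesized CT volume with Otsu's method, keep
# the largest connected component, and extract a surface mesh for 3D
# reconstruction. The input volume is a random placeholder.
import numpy as np
from skimage.filters import threshold_otsu
from skimage.measure import label, marching_cubes

def segment_bone(synth_ct):
    """Binary bone mask from a synthesized CT volume via Otsu thresholding."""
    thresh = threshold_otsu(synth_ct)
    mask = synth_ct > thresh
    # Keep the largest connected component to suppress isolated bright voxels.
    labels = label(mask)
    largest = np.argmax(np.bincount(labels.ravel())[1:]) + 1
    return labels == largest

if __name__ == "__main__":
    volume = np.random.rand(64, 64, 64).astype(np.float32)  # placeholder volume
    bone = segment_bone(volume)
    # Surface mesh that could feed a 3D model of the segmented spine.
    verts, faces, _, _ = marching_cubes(bone.astype(np.uint8), level=0.5)
    print(bone.sum(), verts.shape, faces.shape)
```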

    Deep learning based synthetic CT from cone beam CT generation for abdominal paediatric radiotherapy

    Objective: Adaptive radiotherapy workflows require images with the quality of computed tomography (CT) for re-calculation and re-optimisation of radiation doses. In this work we aim to improve the quality of cone beam CT (CBCT) images for dose calculation using deep learning. / Approach: We propose a novel framework for CBCT-to-CT synthesis using cycle-consistent Generative Adversarial Networks (cycleGANs). The framework was tailored for paediatric abdominal patients, a challenging application due to the inter-fractional variability in bowel filling and smaller patient numbers. We introduced the concept of global residuals only learning to the networks and modified the cycleGAN loss function to explicitly promote structural consistency between source and synthetic images. Finally, to compensate for the anatomical variability and address the difficulties in collecting large datasets in the paediatric population, we applied a smart 2D slice selection based on the common field-of-view across the dataset (abdomen). This acted as a weakly paired data approach that allowed us to take advantage of scans from patients treated for a variety of malignancies (thoracic-abdominal-pelvic) for training purposes. We first optimised the proposed framework and benchmarked its performance on a development dataset. Later, a comprehensive quantitative evaluation was performed on an unseen dataset, which included calculating global image similarity metrics, segmentation-based measures and proton therapy-specific metrics. / Main results: We found improved performance, compared to a baseline implementation, on image similarity metrics such as Mean Absolute Error calculated for a matched virtual CT (55.0±16.6 proposed vs 58.9±16.8 baseline). There was also a higher level of structural agreement for gastrointestinal gas between source and synthetic images, measured through Dice similarity overlap (0.872±0.053 proposed vs 0.846±0.052 baseline). Differences found in water-equivalent thickness metrics were also smaller for our method (3.3±2.4% proposed vs 3.7±2.8% baseline). / Significance: Our findings indicate that our innovations to the cycleGAN framework improved the quality and structural consistency of the synthetic CTs generated.
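    Two of the framework's modifications lend themselves to a short illustration: global-residuals-only learning, where the generator predicts a residual that is added back to the input CBCT, and an additional structure-consistency term alongside the usual cycleGAN losses. The PyTorch sketch below is a simplified stand-in: the tiny generator and the gradient-based structure loss are assumptions for illustration, not the authors' architecture or exact loss function.

```python
# Sketch of (i) global residual learning: output = input + predicted residual,
# and (ii) a structure-consistency term comparing spatial gradients of the
# source and synthetic images. Illustrative stand-ins only.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ResidualGenerator(nn.Module):
    """CBCT -> synthetic CT as input + predicted residual."""
    def __init__(self, ch=32):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(1, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, 1, 3, padding=1),
        )

    def forward(self, x):
        return x + self.body(x)          # the network only learns the residual

def structure_loss(src, synth):
    """Encourage matching spatial gradients between source and synthetic images."""
    def grads(im):
        return im[..., :, 1:] - im[..., :, :-1], im[..., 1:, :] - im[..., :-1, :]
    gx_s, gy_s = grads(src)
    gx_t, gy_t = grads(synth)
    return F.l1_loss(gx_s, gx_t) + F.l1_loss(gy_s, gy_t)

if __name__ == "__main__":
    G = ResidualGenerator()
    cbct = torch.rand(2, 1, 64, 64)      # placeholder CBCT slices
    synth_ct = G(cbct)
    # This term would be added to the adversarial and cycle-consistency losses.
    loss = structure_loss(cbct, synth_ct)
    loss.backward()
    print(float(loss))
```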

    CycleGAN Models for MRI Image Translation

    Image-to-image translation has gained popularity in the medical field to transform images from one domain to another. Medical image synthesis via domain transformation is advantageous in its ability to augment an image dataset where images for a given class are limited. From the learning perspective, this process contributes to the data-oriented robustness of the model by inherently broadening the model's exposure to more diverse visual data and enabling it to learn more generalized features. In the case of generating additional neuroimages, it is advantageous to obtain unidentifiable medical data and augment smaller annotated datasets. This study proposes the development of a CycleGAN model for translating neuroimages from one field strength to another (e.g., 3 Tesla to 1.5 Tesla). This model was compared to a model based on the DCGAN architecture. CycleGAN was able to generate the synthetic and reconstructed images with reasonable accuracy. The mapping function from the source (3 Tesla) to the target domain (1.5 Tesla) performed optimally, with an average PSNR value of 25.69 ± 2.49 dB and an MAE value of 2106.27 ± 1218.37. Comment: Accepted and presented at the ACML PRHA 2023 workshop.
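    For reference, the reported PSNR and MAE values correspond to their standard definitions, which can be computed as in the short sketch below; the arrays are random placeholders, and the intensity range passed as `data_range` is an assumption that would need to match the actual MR data being compared.

```python
# Minimal sketch of the evaluation metrics: mean absolute error (MAE) and
# peak signal-to-noise ratio (PSNR, in dB) between a real target-domain image
# and its synthesized counterpart. Placeholder data only.
import numpy as np

def mae(target, synth):
    return float(np.mean(np.abs(target.astype(float) - synth.astype(float))))

def psnr(target, synth, data_range):
    mse = np.mean((target.astype(float) - synth.astype(float)) ** 2)
    return float(10.0 * np.log10((data_range ** 2) / mse))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    real_15t = rng.integers(0, 4096, (256, 256))        # placeholder 1.5 T image
    fake_15t = real_15t + rng.normal(0, 50, real_15t.shape)
    print("MAE :", mae(real_15t, fake_15t))
    print("PSNR:", psnr(real_15t, fake_15t, data_range=4095))
```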