91 research outputs found

    CONE-BEAM COMPUTED TOMOGRAPHY (CBCT) SEGMENTATION BY ADVERSARIAL LEARNING DOMAIN ADAPTATION

    Cone-beam computed tomography (CBCT) is increasingly used in radiotherapy for patient alignment and adaptive therapy, where organ segmentation and target delineation are often required. However, due to the poor image quality, low soft-tissue contrast, and the difficulty of acquiring segmentation labels on CBCT images, developing effective segmentation methods for CBCT has been a challenge. In this thesis, we propose a deep model for segmenting organs in CBCT images without requiring labelled training CBCT images. By taking advantage of available segmented computed tomography (CT) images, our adversarial learning domain adaptation method synthesizes CBCT images from CT images. The segmentation labels of the CT images can then help train a deep segmentation network for CBCT images, using both CTs with labels and CBCTs without labels. Our adversarial learning domain adaptation is integrated with the CBCT segmentation network training through the designed loss functions. The CBCT images synthesized by pixel-level domain adaptation capture the critical image features that help achieve accurate CBCT segmentation. Our experiments on bladder images from the Radiation Oncology clinics at the University of Texas Southwestern Medical School (UTSW) have shown that our CBCT segmentation with adversarial learning domain adaptation significantly improves segmentation accuracy compared to existing methods that do not perform domain adaptation from CT to CBCT.
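The training objective described above, supervised segmentation on labelled CTs and their synthesized CBCT counterparts plus an adversarial term, can be sketched roughly as follows. The loss names, the least-squares adversarial form, and the weighting `lam` are illustrative assumptions, not the thesis's actual loss definitions:

```python
import numpy as np

def lsgan_adv_loss(d_real, d_fake):
    # least-squares adversarial loss over discriminator scores
    # (an assumed GAN formulation, for illustration only)
    return np.mean((d_real - 1.0) ** 2) + np.mean(d_fake ** 2)

def dice_loss(pred, target, eps=1e-6):
    # soft Dice loss between a predicted probability map and a label mask
    inter = np.sum(pred * target)
    return 1.0 - (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)

def total_loss(seg_ct, labels_ct, seg_syn, d_real, d_fake, lam=0.1):
    # segmentation loss on labelled CT, plus segmentation loss on the
    # synthesized CBCT (reusing the CT labels), plus an adversarial term
    return (dice_loss(seg_ct, labels_ct)
            + dice_loss(seg_syn, labels_ct)
            + lam * lsgan_adv_loss(d_real, d_fake))
```

In this sketch the synthesized CBCT inherits its label from the source CT, which is what lets the segmenter train on CBCT-like appearance without any CBCT annotations.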

    Unsupervised domain adaptation for bladder segmentation by U-net in Cone Beam CT

    The main goal of this project is to accomplish automatic segmentation of CBCT radiotherapy images using deep learning. Why? Because these images are used during cancer treatment to analyze where the radiation has to be applied, and by segmenting them the treatment could be delivered more precisely, so the healthy tissues and organs around the tumor area would be less affected. This project is supervised by Benoît Macq and Eliott Brion, from the ICTEAM research group at UCLouvain.

    Introduction: Radiotherapy is a medical treatment used to control or kill cancerous cells in cancer patients. At the beginning of the treatment planning process, the patient takes a CT scan to plan the radiation dose and, sometimes, a cone beam CT scan some days later, right before receiving radiation, to adjust the couch position for delivery. The main differences between CT and CBCT scans are that the first has higher quality and contrast, while the second is taken directly at the isocenter. As treatment planning takes several days, when the patient receives radiation the organs might not be in the same position as they were at the beginning, so healthy tissues around the tumor area can receive more radiation than planned and get damaged. Our aim is to implement automatic segmentation of the bladder in CBCT 3D images using deep learning, in order to get a clearer idea of the position of those organs.

    Materials and Methods: To implement the segmentation we performed unsupervised domain adaptation between CT (the source) and CBCT 3D images (the target), as we didn't have a large labeled CBCT dataset but we did for CT, since CT segmentation is already part of the treatment planning process. We used a dataset of 120 patients: 60 CT and 60 CBCT scans of the male pelvic region. We implemented a deep learning network using U-net as a segmenter and a regular CNN as a domain discriminator (an adversarial network), which also includes a gradient reversal layer. We used the Dice score coefficient and the Hausdorff distance to evaluate the performance of our network and compare it with previous works in the same field.

    Results: We performed three main experiments, for which we obtained the following DSC and Hausdorff distance (in voxels): (i) lower boundary: 0.383±0.260 and 36.47, (ii) upper boundary: 0.717±0.177 and 27.44, (iii) unsupervised domain adaptation: 0.623±0.149 and 39.18. With this implementation we closed the gap between training the network only on CT and only on CBCT by 72%.

    Conclusions: Cone beam CT image segmentation using unsupervised domain adaptation proves to be a promising methodology in radiotherapy and has potential applications in other fields.
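The two evaluation metrics named above have direct definitions; a minimal NumPy sketch (the binary-mask and voxel-coordinate conventions are our assumptions):

```python
import numpy as np

def dice_score(a, b):
    # Dice similarity coefficient between two binary masks:
    # 2|A ∩ B| / (|A| + |B|)
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

def hausdorff(a_pts, b_pts):
    # symmetric Hausdorff distance (in voxels) between two point sets,
    # each given as an (N, d) array of surface-voxel coordinates
    d = np.linalg.norm(a_pts[:, None, :] - b_pts[None, :, :], axis=-1)
    return max(d.min(axis=1).max(), d.min(axis=0).max())
```

The brute-force pairwise distance matrix is fine for small contours; for full 3D organ surfaces a KD-tree or distance-transform implementation would normally be used instead.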

    Deep learning based synthetic CT from cone beam CT generation for abdominal paediatric radiotherapy

    Objective: Adaptive radiotherapy workflows require images with the quality of computed tomography (CT) for re-calculation and re-optimisation of radiation doses. In this work we aim to improve the quality of cone beam CT (CBCT) images for dose calculation using deep learning.

    Approach: We propose a novel framework for CBCT-to-CT synthesis using cycle-consistent Generative Adversarial Networks (cycleGANs). The framework was tailored for paediatric abdominal patients, a challenging application due to the inter-fractional variability in bowel filling and smaller patient numbers. We introduced the concept of global-residuals-only learning to the networks and modified the cycleGAN loss function to explicitly promote structural consistency between source and synthetic images. Finally, to compensate for the anatomical variability and address the difficulties in collecting large datasets in the paediatric population, we applied a smart 2D slice selection based on the common field-of-view across the dataset (abdomen). This acted as a weakly paired data approach that allowed us to take advantage of scans from patients treated for a variety of malignancies (thoracic-abdominal-pelvic) for training purposes. We first optimised the proposed framework and benchmarked its performance on a development dataset. Later, a comprehensive quantitative evaluation was performed on an unseen dataset, which included calculating global image similarity metrics, segmentation-based measures and proton therapy-specific metrics.

    Main results: We found improved performance, compared to a baseline implementation, on image similarity metrics such as Mean Absolute Error calculated for a matched virtual CT (55.0±16.6 proposed vs 58.9±16.8 baseline). There was also a higher level of structural agreement for gastrointestinal gas between source and synthetic images measured through Dice similarity overlap (0.872±0.053 proposed vs 0.846±0.052 baseline). Differences found in water-equivalent thickness metrics were also smaller for our method (3.3±2.4% proposed vs 3.7±2.8% baseline).

    Significance: Our findings indicate that our innovations to the cycleGAN framework improved the quality and structural consistency of the synthetic CTs generated.
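The two modifications highlighted in the approach, global-residuals-only learning and a structural-consistency term, can be illustrated roughly as follows. The gradient-matching loss here is an assumed stand-in for the paper's actual loss modification, and `net` is any image-to-image model:

```python
import numpy as np

def residual_generator(cbct, net):
    # global-residuals-only learning: the network predicts a correction
    # that is added back to the input, so the synthetic CT stays
    # anchored to the source CBCT anatomy
    return cbct + net(cbct)

def structure_loss(src, synth):
    # illustrative structural-consistency term: penalise mismatch of
    # finite-difference image gradients between source and synthetic
    # images (an assumption, not the paper's exact formulation)
    gx = np.diff(src, axis=0) - np.diff(synth, axis=0)
    gy = np.diff(src, axis=1) - np.diff(synth, axis=1)
    return np.mean(np.abs(gx)) + np.mean(np.abs(gy))
```

The residual formulation makes the identity mapping trivial to represent (the network only has to output zeros), which is one reason it tends to preserve anatomy better than synthesising the whole image from scratch.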

    Feasibility of CycleGAN enhanced low dose CBCT imaging for prostate radiotherapy dose calculation

    Daily cone beam computed tomography (CBCT) imaging during the course of fractionated radiotherapy treatment can enable online adaptive radiotherapy but also exposes patients to a non-negligible amount of radiation dose. This work investigates the feasibility of low dose CBCT imaging capable of enabling accurate prostate radiotherapy dose calculation with only 25% of the projections, by overcoming under-sampling artifacts and correcting CT numbers using cycle-consistent generative adversarial networks (cycleGAN). Uncorrected CBCTs of 41 prostate cancer patients, acquired with ∼350 projections (CBCTorg), were retrospectively under-sampled to 25% dose images (CBCTLD) with only ∼90 projections and reconstructed using Feldkamp–Davis–Kress. We adapted a cycleGAN including shape loss to translate CBCTLD into planning CT (pCT) equivalent images (CBCTLD_GAN). An alternative cycleGAN with a generator residual connection was implemented to improve anatomical fidelity (CBCTLD_ResGAN). Unpaired 4-fold cross-validation (33 patients) was performed to allow using the median of 4 models as output. Deformable image registration was used to generate virtual CTs (vCT) for Hounsfield units (HU) accuracy evaluation on 8 additional test patients. Volumetric modulated arc therapy plans were optimized on vCT, and recalculated on CBCTLD_GAN and CBCTLD_ResGAN to determine dose calculation accuracy. CBCTLD_GAN, CBCTLD_ResGAN and CBCTorg were registered to pCT and residual shifts were analyzed. Bladder and rectum were manually contoured on CBCTLD_GAN, CBCTLD_ResGAN and CBCTorg and compared in terms of Dice similarity coefficient (DSC), average and 95th percentile Hausdorff distance (HDavg, HD95). The mean absolute error decreased from 126 HU for CBCTLD to 55 HU for CBCTLD_GAN and 44 HU for CBCTLD_ResGAN. For the PTV, the median differences of D98%, D50% and D2% comparing CBCTLD_GAN to vCT were 0.3%, 0.3% and 0.3%, and comparing CBCTLD_ResGAN to vCT were 0.4%, 0.3% and 0.4%. Dose accuracy was high, with 2% dose difference pass rates of 99% for both models (10% dose threshold). Compared to the CBCTorg-to-pCT registration, the majority of mean absolute differences of rigid transformation parameters were less than 0.20 mm/0.20°. For bladder and rectum, the DSC were 0.88 and 0.77 for CBCTLD_GAN and 0.92 and 0.87 for CBCTLD_ResGAN compared to CBCTorg, and HDavg were 1.34 mm and 1.93 mm for CBCTLD_GAN, and 0.90 mm and 1.05 mm for CBCTLD_ResGAN. The computational time was ∼2 s per patient. This study investigated the feasibility of adapting two cycleGAN models to simultaneously remove under-sampling artifacts and correct image intensities of 25% dose CBCT images. High accuracy in dose calculation, HU and patient alignment was achieved, with CBCTLD_ResGAN achieving better anatomical fidelity.
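The HU and dose metrics reported above (mean absolute error, and a 2% dose-difference pass rate evaluated above a 10% dose threshold) can be computed as sketched below. Normalising both the tolerance and the threshold to the reference dose maximum is our assumption about the evaluation convention:

```python
import numpy as np

def mae_hu(img, ref, mask=None):
    # mean absolute error in Hounsfield units between an image and a
    # reference (e.g. synthetic CBCT vs matched virtual CT), optionally
    # restricted to a body mask
    diff = np.abs(img.astype(float) - ref.astype(float))
    return diff[mask].mean() if mask is not None else diff.mean()

def pass_rate(dose, ref, tol=0.02, threshold=0.10):
    # fraction of voxels whose dose agrees with the reference within
    # `tol` of the reference maximum, evaluated only where the
    # reference dose exceeds `threshold` of that maximum
    dmax = ref.max()
    sel = ref > threshold * dmax
    ok = np.abs(dose[sel] - ref[sel]) <= tol * dmax
    return ok.mean()
```

A clinical evaluation would typically use a gamma analysis rather than a plain dose-difference test, but the pass-rate idea is the same: count agreeing voxels over the evaluable region.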

    DEEP LEARNING IN COMPUTER-ASSISTED MAXILLOFACIAL SURGERY


    Artificial Intelligence in Radiation Therapy

    Artificial intelligence (AI) has great potential to transform the clinical workflow of radiotherapy. Since the introduction of deep neural networks, many AI-based methods have been proposed to address challenges in different aspects of radiotherapy. Commercial vendors have started to release AI-based tools that can be readily integrated into the established clinical workflow. To show the recent progress in AI-aided radiotherapy, we have reviewed AI-based studies in five major aspects of radiotherapy: image reconstruction, image registration, image segmentation, image synthesis, and automatic treatment planning. In each section, we summarize and categorize the recently published methods, followed by a discussion of the challenges, concerns, and future development. Given the rapid development of AI-aided radiotherapy, the efficiency and effectiveness of radiotherapy in the future could be substantially improved through intelligent automation of its various aspects.