    Image Registration to Map Endoscopic Video to Computed Tomography for Head and Neck Radiotherapy Patients

    The purpose of this work was to explore the feasibility of registering endoscopic video to radiotherapy treatment plans for patients with head and neck cancer without physical tracking of the endoscope during the examination. Endoscopy-CT registration would provide a clinical tool that could be used to enhance the treatment planning process and would allow for new methods to study the incidence of radiation-related toxicity. Endoscopic video frames were registered to CT by optimizing virtual endoscope placement to maximize the similarity between the frame and the virtual image. Virtual endoscopic images were rendered using a polygonal mesh created by segmenting the airways of the head and neck with a density threshold. The optical properties of the virtual endoscope were matched to a calibrated model of the real endoscope. A novel registration algorithm was developed that takes advantage of physical constraints on the endoscope to effectively search the airways of the head and neck for the desired virtual endoscope coordinates. This algorithm was tested on rigid phantoms with embedded point markers and protruding bolus material. In these tests, the median registration accuracy was 3.0 mm for point measurements and 3.5 mm for surface measurements. The algorithm was also tested on four endoscopic examinations of three patients, in which it achieved a median registration accuracy of 9.9 mm. The uncertainties caused by the non-rigid anatomy of the head and neck and by differences in patient positioning between endoscopic examinations and CT scans were examined by taking repeated measurements after placing the virtual endoscope in surface meshes created from different CT scans. Non-rigid anatomy introduced errors on the order of 1-3 mm. Patient positioning had a larger impact, introducing errors on the order of 3.5-4.5 mm. Endoscopy-CT registration in the head and neck is possible, but large registration errors were found in patients. The uncertainty analyses suggest a lower limit of 3-5 mm on the achievable registration accuracy. Further development is required to achieve an accuracy suitable for clinical use.
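
    As a rough, hedged illustration of the registration strategy described above (optimizing virtual endoscope placement to maximize frame-to-rendering similarity), the Python sketch below frames it as a 6-DoF pose search. The renderer render_virtual_view, the pose parameterization, and the use of normalized cross-correlation are assumptions for illustration; the thesis's actual similarity metric, physical constraints, and search strategy are not reproduced.

```python
import numpy as np
from scipy.optimize import minimize

def normalized_cross_correlation(a, b):
    """Similarity between two grayscale images of equal shape."""
    a = (a - a.mean()) / (a.std() + 1e-8)
    b = (b - b.mean()) / (b.std() + 1e-8)
    return float((a * b).mean())

def register_frame(frame, mesh, initial_pose, render_virtual_view):
    """Search for the virtual endoscope pose whose rendering best matches `frame`.

    `render_virtual_view(mesh, pose)` is an assumed renderer returning a
    grayscale image for a 6-DoF pose (x, y, z, yaw, pitch, roll).
    """
    def cost(pose):
        return -normalized_cross_correlation(frame, render_virtual_view(mesh, pose))
    # Nelder-Mead tolerates the non-smooth, rendering-based cost surface.
    result = minimize(cost, np.asarray(initial_pose, dtype=float), method="Nelder-Mead")
    return result.x  # registered 6-DoF pose
```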

    Cohort-based T-SSIM Visual Computing for Radiation Therapy Prediction and Exploration

    We describe a visual computing approach to radiation therapy (RT) planning based on spatial similarity within a patient cohort. In radiotherapy for head and neck cancer treatment, dose to organs at risk surrounding a tumor is a major cause of treatment toxicity. Along with the availability of patient repositories, this situation has led to clinician interest in understanding and predicting RT outcomes based on previously treated similar patients. To enable this type of analysis, we introduce a novel topology-based spatial similarity measure, T-SSIM, and a predictive algorithm based on this similarity measure. We couple the algorithm with a visual steering interface that intertwines visual encodings for the spatial data and statistical results, including a novel parallel-marker encoding that is spatially aware. We report quantitative results on a cohort of 165 patients, as well as a qualitative evaluation with domain experts in radiation oncology, data management, biostatistics, and medical imaging, who are collaborating remotely. Comment: IEEE VIS (SciVis) 201
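
    As a simplified stand-in for T-SSIM (whose topology-based component is not reproduced here), the sketch below scores spatial similarity between co-registered 3D dose grids with plain SSIM from scikit-image and ranks cohort members against a query patient. The function names, data layout, and dose range are illustrative assumptions, not the paper's method.

```python
import numpy as np
from skimage.metrics import structural_similarity

def dose_similarity(dose_a, dose_b, max_dose=70.0):
    """SSIM between two co-registered 3D dose grids (values in Gy)."""
    return structural_similarity(dose_a, dose_b, data_range=max_dose)

def most_similar_patients(query_dose, cohort_doses, k=5):
    """Rank cohort members by spatial dose similarity to the query patient."""
    scores = [dose_similarity(query_dose, d) for d in cohort_doses]
    order = np.argsort(scores)[::-1]  # highest similarity first
    return list(order[:k]), [scores[i] for i in order[:k]]
```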

    Zero-shot Medical Image Translation via Frequency-Guided Diffusion Models

    Recently, the diffusion model has emerged as a superior generative model that can produce high-quality, realistic images. However, for medical image translation, existing diffusion models are deficient in accurately retaining structural information, since the structural details of source-domain images are lost during the forward diffusion process and cannot be fully recovered through the learned reverse diffusion, while the integrity of anatomical structures is extremely important in medical images. For instance, errors in image translation may distort, shift, or even remove structures and tumors, leading to incorrect diagnoses and inadequate treatment. Training and conditioning diffusion models using paired source and target images with matching anatomy can help. However, such paired data are very difficult and costly to obtain, and may also reduce the robustness of the developed model to out-of-distribution test data. We propose a frequency-guided diffusion model (FGDM) that employs frequency-domain filters to guide the diffusion model toward structure-preserving image translation. By design, FGDM allows zero-shot learning: it can be trained solely on data from the target domain and used directly for source-to-target domain translation without any exposure to source-domain data during training. We evaluated it on three cone-beam CT (CBCT)-to-CT translation tasks for different anatomical sites and on a cross-institutional MR imaging translation task. FGDM outperformed the state-of-the-art methods (GAN-based, VAE-based, and diffusion-based) on Fréchet Inception Distance (FID), Peak Signal-to-Noise Ratio (PSNR), and Structural Similarity Index Measure (SSIM), showing its significant advantages in zero-shot medical image translation.
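
    The abstract's core mechanism, frequency-domain filtering that preserves source structure, might be sketched as below: low-frequency (structural) content of the source image is merged into an intermediate diffusion estimate, leaving the high frequencies to the model. The cutoff value, mask shape, and the point in the reverse process where the merge would be applied are assumptions, not the paper's configuration.

```python
import numpy as np

def low_pass_mask(shape, cutoff=0.1):
    """Boolean mask selecting frequencies below `cutoff` (fraction of Nyquist)."""
    grids = np.meshgrid(*[np.fft.fftfreq(n) for n in shape], indexing="ij")
    radius = np.sqrt(sum(g ** 2 for g in grids))
    return radius <= cutoff * 0.5  # fftfreq spans [-0.5, 0.5)

def frequency_guided_merge(estimate, source, cutoff=0.1):
    """Keep the source's low-frequency structure and the estimate's high frequencies."""
    mask = low_pass_mask(estimate.shape, cutoff)
    merged = np.where(mask, np.fft.fftn(source), np.fft.fftn(estimate))
    return np.fft.ifftn(merged).real
```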

    Deep learning based synthetic CT from cone beam CT generation for abdominal paediatric radiotherapy

    Objective: Adaptive radiotherapy workflows require images with the quality of computed tomography (CT) for re-calculation and re-optimisation of radiation doses. In this work we aim to improve the quality of cone beam CT (CBCT) images for dose calculation using deep learning. Approach: We propose a novel framework for CBCT-to-CT synthesis using cycle-consistent Generative Adversarial Networks (cycleGANs). The framework was tailored for paediatric abdominal patients, a challenging application due to the inter-fractional variability in bowel filling and smaller patient numbers. We introduced the concept of global residuals only learning to the networks and modified the cycleGAN loss function to explicitly promote structural consistency between source and synthetic images. Finally, to compensate for the anatomical variability and address the difficulties in collecting large datasets in the paediatric population, we applied a smart 2D slice selection based on the common field-of-view across the dataset (abdomen). This acted as a weakly paired data approach that allowed us to take advantage of scans from patients treated for a variety of malignancies (thoracic-abdominal-pelvic) for training purposes. We first optimised the proposed framework and benchmarked its performance on a development dataset. Later, a comprehensive quantitative evaluation was performed on an unseen dataset, which included calculating global image similarity metrics, segmentation-based measures and proton therapy-specific metrics. Main results: We found improved performance, compared to a baseline implementation, on image similarity metrics such as Mean Absolute Error calculated for a matched virtual CT (55.0±16.6 proposed vs 58.9±16.8 baseline). There was also a higher level of structural agreement for gastrointestinal gas between source and synthetic images measured through Dice similarity overlap (0.872±0.053 proposed vs 0.846±0.052 baseline). Differences found in water-equivalent thickness metrics were also smaller for our method (3.3±2.4% proposed vs 3.7±2.8% baseline). Significance: Our findings indicate that our innovations to the cycleGAN framework improved the quality and structural consistency of the synthetic CTs generated.
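
    Two of the framework's stated modifications, global-residuals-only learning and an explicit structural-consistency term, might look like the PyTorch sketch below. The backbone network, the gradient-based structure penalty, and any loss weighting are placeholder assumptions rather than the authors' implementation.

```python
import torch
import torch.nn as nn

class ResidualGenerator(nn.Module):
    """Generator that predicts only a global residual added to the input CBCT."""
    def __init__(self, backbone: nn.Module):
        super().__init__()
        self.backbone = backbone  # any shape-preserving image-to-image network

    def forward(self, x):
        return x + self.backbone(x)  # learn the correction, not the whole image

def spatial_gradients(img):
    """Finite-difference gradients of a (B, C, H, W) batch."""
    dx = img[..., :, 1:] - img[..., :, :-1]
    dy = img[..., 1:, :] - img[..., :-1, :]
    return dx, dy

def structure_consistency_loss(source, synthetic):
    """Penalize edge disagreement between source and synthetic images."""
    sdx, sdy = spatial_gradients(source)
    tdx, tdy = spatial_gradients(synthetic)
    return (sdx - tdx).abs().mean() + (sdy - tdy).abs().mean()
```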

    Feasibility of CycleGAN enhanced low dose CBCT imaging for prostate radiotherapy dose calculation

    Daily cone beam computed tomography (CBCT) imaging during the course of fractionated radiotherapy treatment can enable online adaptive radiotherapy, but also exposes patients to a non-negligible amount of radiation dose. This work investigates the feasibility of low dose CBCT imaging capable of enabling accurate prostate radiotherapy dose calculation with only 25% of the projections, by overcoming under-sampling artifacts and correcting CT numbers using cycle-consistent generative adversarial networks (cycleGAN). Uncorrected CBCTs of 41 prostate cancer patients, acquired with ∼350 projections (CBCTorg), were retrospectively under-sampled to 25% dose images (CBCTLD) with only ∼90 projections and reconstructed using the Feldkamp–Davis–Kress algorithm. We adapted a cycleGAN including a shape loss to translate CBCTLD into planning CT (pCT) equivalent images (CBCTLD_GAN). An alternative cycleGAN with a generator residual connection was implemented to improve anatomical fidelity (CBCTLD_ResGAN). Unpaired 4-fold cross-validation (33 patients) was performed, allowing the median of the 4 models to be used as output. Deformable image registration was used to generate virtual CTs (vCT) for Hounsfield unit (HU) accuracy evaluation on 8 additional test patients. Volumetric modulated arc therapy plans were optimized on vCT and recalculated on CBCTLD_GAN and CBCTLD_ResGAN to determine dose calculation accuracy. CBCTLD_GAN, CBCTLD_ResGAN and CBCTorg were registered to pCT and residual shifts were analyzed. Bladder and rectum were manually contoured on CBCTLD_GAN, CBCTLD_ResGAN and CBCTorg and compared in terms of Dice similarity coefficient (DSC) and average and 95th percentile Hausdorff distance (HDavg, HD95). The mean absolute error decreased from 126 HU for CBCTLD to 55 HU for CBCTLD_GAN and 44 HU for CBCTLD_ResGAN. For the PTV, the median differences of D98%, D50% and D2% were 0.3%, 0.3% and 0.3% comparing CBCTLD_GAN to vCT, and 0.4%, 0.3% and 0.4% comparing CBCTLD_ResGAN to vCT. Dose calculation accuracy was high, with 2% dose difference pass rates of 99% for both models (10% dose threshold). Compared to the CBCTorg-to-pCT registration, the majority of mean absolute differences of rigid transformation parameters were less than 0.20 mm/0.20°. For bladder and rectum, the DSC were 0.88 and 0.77 for CBCTLD_GAN and 0.92 and 0.87 for CBCTLD_ResGAN compared to CBCTorg, and HDavg were 1.34 mm and 1.93 mm for CBCTLD_GAN, and 0.90 mm and 1.05 mm for CBCTLD_ResGAN. The computational time was ∼2 s per patient. This study demonstrated the feasibility of adapting two cycleGAN models to simultaneously remove under-sampling artifacts and correct image intensities of 25% dose CBCT images. High accuracy in dose calculation, HU and patient alignment was achieved, and CBCTLD_ResGAN achieved better anatomical fidelity.
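
    One concrete detail from this study, using the median of the 4 cross-validation models as the final output, is easy to sketch. The snippet below assumes a list of trained per-fold generators and is illustrative only.

```python
import torch

@torch.no_grad()
def median_ensemble(fold_models, cbct_ld):
    """Voxel-wise median over the outputs of the per-fold generators."""
    outputs = torch.stack([m(cbct_ld) for m in fold_models], dim=0)
    return outputs.median(dim=0).values
```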

    Applications of a Biomechanical Patient Model for Adaptive Radiation Therapy

    Biomechanical patient modeling incorporates physical knowledge of the human anatomy into the image processing required for tracking anatomical deformations during adaptive radiation therapy, especially particle therapy. In contrast to standard image registration, this enforces a bio-fidelic image transformation. In this thesis, the potential of a kinematic skeleton model and soft-tissue motion propagation is investigated for crucial image analysis steps in adaptive radiation therapy. The first application is the integration of the kinematic model into a deformable image registration process (KinematicDIR). For monomodal CT scan pairs, the median target registration error, based on skeleton landmarks, is smaller than (1.6 ± 0.2) mm. In addition, the successful transferability of this concept to otherwise challenging multimodal registration between CT and CBCT, as well as CT and MRI scan pairs, is shown to result in median target registration errors on the order of 2 mm. This meets the accuracy requirement for adaptive radiation therapy and is especially interesting for MR-guided approaches. Another emerging aspect in radiotherapy is the utilization of deep-learning-based organ segmentation. As radiotherapy-specific labeled data is scarce, the training of such methods relies heavily on augmentation techniques. In this work, the generation of synthetically but realistically deformed scans, used as Bionic Augmentation in the training phase, improved the predicted segmentations by up to 15% in the Dice similarity coefficient, depending on the training strategy. Finally, it is shown that the biomechanical model can be built up from automatic segmentations without deterioration of the KinematicDIR application. This is essential for use in a clinical workflow.
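
    For context, the landmark-based evaluation quoted above (median target registration error over skeleton landmarks) reduces to a short computation, sketched below. The per-bone rigid transform helper is an illustrative assumption and does not reproduce the thesis's kinematic model.

```python
import numpy as np

def target_registration_error(fixed_pts, warped_pts):
    """Median Euclidean distance (mm) between corresponding Nx3 landmark sets."""
    diffs = np.asarray(fixed_pts) - np.asarray(warped_pts)
    return float(np.median(np.linalg.norm(diffs, axis=1)))

def apply_rigid(points, rotation, translation):
    """Apply one bone's rigid transform (3x3 R, 3-vector t) to Nx3 points."""
    return np.asarray(points) @ np.asarray(rotation).T + np.asarray(translation)
```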

    Multiparametric Magnetic Resonance Imaging Artificial Intelligence Pipeline for Oropharyngeal Cancer Radiotherapy Treatment Guidance

    Oropharyngeal cancer (OPC) is a widespread disease and one of the few domestic cancers that is rising in incidence. Radiographic images are crucial for the assessment of OPC and aid in radiotherapy (RT) treatment. However, RT planning with conventional imaging approaches requires operator-dependent tumor segmentation, which is the primary source of treatment error. Further, OPC expresses differential tumor/node mid-RT response (rapid response) rates, resulting in significant differences between planned and delivered RT dose. Finally, clinical outcomes for OPC patients can also be variable, which warrants the investigation of prognostic models. Multiparametric MRI (mpMRI) techniques that incorporate simultaneous anatomical and functional information, coupled to artificial intelligence (AI) approaches, could improve clinical decision support for OPC by providing immediately actionable clinical rationale for adaptive RT planning. If tumors could be reproducibly segmented, rapid response could be classified, and prognosis could be reliably determined, overall patient outcomes would be optimized to improve the therapeutic index as a function of more risk-adapted RT volumes. Consequently, there is an unmet need for automated and reproducible imaging which can simultaneously segment tumors and provide predictive value for actionable RT adaptation. This dissertation primarily seeks to explore and optimize image processing, tumor segmentation, and patient outcomes in OPC through a combination of advanced imaging techniques and AI algorithms. In the first specific aim of this dissertation, we develop and evaluate mpMRI pre-processing techniques for use in downstream segmentation, response prediction, and outcome prediction pipelines. Various MRI intensity standardization and registration approaches were systematically compared and benchmarked. Moreover, synthetic image algorithms were developed to decrease MRI scan time in an effort to optimize our AI pipelines. We demonstrated that proper intensity standardization and image registration can improve mpMRI quality for use in AI algorithms, and developed a novel method to decrease mpMRI acquisition time. Subsequently, in the second specific aim of this dissertation, we investigated underlying questions regarding the implementation of RT-related auto-segmentation. Firstly, we quantified interobserver variability for an unprecedentedly large number of observers for various radiotherapy structures in several disease sites (with a particular emphasis on OPC) using a novel crowdsourcing platform. We then trained an AI algorithm on a series of extant matched mpMRI datasets to segment OPC primary tumors. Moreover, we validated and compared our best model's performance to clinical expert observers. We demonstrated that AI-based mpMRI OPC tumor auto-segmentation offers decreased variability and comparable accuracy to clinical experts, and that certain mpMRI input channel combinations could further improve performance. Finally, in the third specific aim of this dissertation, we predicted OPC primary tumor mid-therapy (rapid) treatment response and prognostic outcomes. Using co-registered pre-therapy and mid-therapy primary tumor manual segmentations of OPC patients, we generated and characterized treatment-sensitive and treatment-resistant pre-RT sub-volumes. These sub-volumes were used to train an AI algorithm to predict individual voxel-wise treatment resistance.
Additionally, we developed an AI algorithm to predict OPC patient progression-free survival using pre-therapy imaging from an international data science competition (ranking 1st place), and then translated these approaches to mpMRI data. We demonstrated that AI models could be used to predict rapid response and prognostic outcomes using pre-therapy imaging, which could help guide treatment adaptation, though further work is needed. In summary, the completion of these aims facilitates the development of an image-guided, fully automated OPC clinical decision support tool. The resultant deliverables from this project will positively impact patients by enabling optimized therapeutic interventions in OPC. Future work should consider investigating additional imaging timepoints, imaging modalities, uncertainty quantification, perceptual and ethical considerations, and prospective studies for eventual clinical implementation. A dynamic version of this dissertation is publicly available and assigned a digital object identifier through Figshare (doi: 10.6084/m9.figshare.22141871).
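
    As one example of the intensity-standardization steps compared in the first aim, a z-score normalization restricted to a foreground mask is sketched below. The dissertation benchmarks several approaches; this particular choice and its parameters are assumptions for illustration only.

```python
import numpy as np

def zscore_standardize(volume, mask):
    """Standardize MRI intensities using statistics from voxels inside `mask`."""
    vals = volume[mask > 0]
    return (volume - vals.mean()) / (vals.std() + 1e-8)
```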