"Dose of the day" based on cone beam computed tomography and deformable image registration for lung cancer radiotherapy.
PURPOSE: Adaptive radiotherapy (ART) has the potential to reduce toxicity and facilitate safe dose escalation. Dose calculations with the planning CT deformed to the cone beam CT (CBCT) have shown promise for estimating the "dose of the day". The purpose of this study is to investigate the accuracy of the "dose of the day" calculation based on CBCT and deformable image registration (DIR) for lung cancer radiotherapy.
METHODS: A total of 12 lung cancer patients were identified, for whom daily CBCT imaging was performed for treatment positioning. A re-planning CT (rCT) was acquired after 20 Gy for all patients. A virtual CT (vCT) was created by deforming the initial planning CT (pCT) to a simulated CBCT, which was in turn generated by deforming the CBCT to the rCT acquired on the same day. Treatment beams from the initial plan were copied to the vCT and rCT for dose calculation. Dosimetric agreement between vCT-based and rCT-based accumulated doses was evaluated using Bland-Altman analysis.
RESULTS: Mean differences in dose-volume metrics between vCT and rCT were smaller than 1.5%, and most discrepancies fell within ±5% for the target volume, lung, esophagus, and heart. For spinal cord Dmax, a large mean difference of -5.55% was observed, which was largely attributed to very limited CBCT image quality (e.g., truncation artifacts).
CONCLUSION: This study demonstrated reasonable agreement in dose-volume metrics between dose accumulation based on vCT and rCT, with the exception of cases with poor CBCT image quality. These findings suggest the potential utility of vCT for providing a reasonable estimate of the "dose of the day", and thus facilitating the process of ART for lung cancer radiotherapy.
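The agreement analysis used above, Bland-Altman on paired dose-volume metrics, can be sketched in a few lines. The D95 values below are hypothetical and only illustrate the computation; NumPy is the only dependency:

```python
import numpy as np

def bland_altman(metric_vct, metric_rct):
    """Bland-Altman agreement statistics for paired dose-volume metrics.

    Returns the mean difference (bias) and the 95% limits of agreement.
    """
    vct = np.asarray(metric_vct, dtype=float)
    rct = np.asarray(metric_rct, dtype=float)
    diff = vct - rct                      # per-patient difference (e.g., in %)
    bias = diff.mean()                    # systematic offset between methods
    sd = diff.std(ddof=1)                 # spread of the differences
    loa = (bias - 1.96 * sd, bias + 1.96 * sd)  # 95% limits of agreement
    return bias, loa

# Hypothetical per-patient target D95 values (% of prescription) for 5 patients
vct_d95 = [97.1, 96.4, 98.0, 95.8, 97.5]
rct_d95 = [96.8, 96.9, 97.6, 96.2, 97.1]
bias, (lo, hi) = bland_altman(vct_d95, rct_d95)
print(f"bias = {bias:.2f}%, limits of agreement = [{lo:.2f}%, {hi:.2f}%]")
```

A mean difference near zero with narrow limits of agreement indicates the vCT-based dose can stand in for the rCT-based dose.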
Artificial Intelligence-based Motion Tracking in Cancer Radiotherapy: A Review
Radiotherapy aims to deliver a prescribed dose to the tumor while sparing
neighboring organs at risk (OARs). Increasingly complex treatment techniques
such as volumetric modulated arc therapy (VMAT), stereotactic radiosurgery
(SRS), stereotactic body radiotherapy (SBRT), and proton therapy have been
developed to deliver doses more precisely to the target. While such
technologies have improved dose delivery, the implementation of intra-fraction
motion management to verify tumor position at the time of treatment has become
increasingly relevant. Recently, artificial intelligence (AI) has demonstrated
great potential for real-time tracking of tumors during treatment. However,
AI-based motion management faces several challenges including bias in training
data, poor transparency, difficult data collection, complex workflows and
quality assurance, and limited sample sizes. This review serves to present the
AI algorithms used for chest, abdomen, and pelvic tumor motion
management/tracking for radiotherapy and provide a literature summary on the
topic. We will also discuss the limitations of these algorithms and propose
potential improvements.
Optimization of Decision Making in Personalized Radiation Therapy using Deformable Image Registration
Cancer has become one of the dominant diseases worldwide, especially in western countries, and radiation therapy is one of the primary treatment options for 50% of all patients diagnosed. Radiation therapy involves dose delivery and planning based on radiobiological models derived primarily from clinical trials. Since 2015, improvements in information technologies and data storage have allowed new models to be created from the large volumes of treatment data already available, correlating the actually delivered treatment with outcomes. The goals of this thesis are to 1) construct models of patient outcomes after radiation therapy using available treatment and patient parameters, and 2) provide a method to determine the actual accumulated radiation dose, including the impact of registration uncertainties.
In Chapter 2, a model was developed to predict overall survival for patients with hepatocellular carcinoma or liver metastases receiving radiation therapy. These models show which patients benefit from curative radiation therapy based on liver function, and quantify the survival benefit of increased radiation dose.
In Chapter 3, a method was developed to routinely evaluate deformable image registration (DIR) with computer-generated landmark pairs using the scale-invariant feature transform. The method presented in this chapter created landmark sets for comparing lung 4DCT images and provided the same evaluation of DIR as manual landmark sets.
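The evaluation side of such a workflow, scoring a DIR displacement field against corresponding landmark pairs via the target registration error (TRE), can be sketched as follows. This is a minimal illustration with a nearest-voxel lookup and hypothetical array layouts, not the chapter's actual method:

```python
import numpy as np

def target_registration_error(landmarks_fixed, landmarks_moving, dvf, spacing):
    """Target registration error (TRE) at each landmark pair.

    landmarks_fixed/moving : (N, 3) voxel coordinates of corresponding points
    dvf                    : (Z, Y, X, 3) DIR displacement field in voxels
    spacing                : (3,) voxel size in mm, ordered (z, y, x)
    """
    fixed = np.asarray(landmarks_fixed, dtype=float)
    moving = np.asarray(landmarks_moving, dtype=float)
    idx = np.round(fixed).astype(int)
    # Displacement the DIR predicts at each fixed landmark (nearest voxel)
    predicted = dvf[idx[:, 0], idx[:, 1], idx[:, 2]]
    # True displacement defined by the landmark pair
    true_disp = moving - fixed
    # Residual error converted to millimetres
    residual_mm = (predicted - true_disp) * np.asarray(spacing)
    return np.linalg.norm(residual_mm, axis=1)

# Toy example: one landmark, DIR recovers the motion exactly -> TRE = 0
dvf = np.zeros((4, 4, 4, 3))
dvf[1, 2, 3] = [0.0, 1.0, -1.0]
tre = target_registration_error([[1, 2, 3]], [[1, 3, 2]], dvf, spacing=(2.5, 1.0, 1.0))
print(tre)  # [0.]
```

Whether the landmark pairs come from manual annotation or from an automated detector such as the scale-invariant feature transform, the same TRE statistic scores the registration.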
In Chapter 4, an investigation was performed on the impact of DIR error on dose accumulation using landmarked 4DCT images as the ground truth. The study demonstrated the relationship between dose gradient, DIR error and dose accumulation error, and presented a method to determine error bars on the dose accumulation process.
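The relationship described, a DIR error mapping into a dose error roughly in proportion to the local dose gradient, can be sketched to first order as follows. This is a simplified illustration of the idea, not the chapter's actual error-bar method:

```python
import numpy as np

def dose_uncertainty(dose, dir_error_mm, spacing_mm):
    """First-order dose-accumulation error bars from DIR uncertainty.

    Approximates the per-voxel dose error as |grad D| * registration error:
    a steep dose gradient amplifies a given DIR error into a larger
    dose-mapping error, while flat dose regions are forgiving.
    """
    dose = np.asarray(dose, dtype=float)
    # Dose gradient in Gy/mm along each axis
    grads = np.gradient(dose, *spacing_mm)
    grad_mag = np.sqrt(sum(g ** 2 for g in grads))
    return grad_mag * dir_error_mm  # per-voxel dose error bar in Gy

# Toy example: linear dose ramp of 1 Gy/mm along x, 2 mm DIR error
dose = np.tile(np.arange(10.0), (10, 10, 1))   # dose rises 1 Gy per voxel
err = dose_uncertainty(dose, dir_error_mm=2.0, spacing_mm=(1.0, 1.0, 1.0))
print(err[5, 5, 5])  # 2.0
```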
In Chapter 5, a method was presented to determine quantitatively when to update a treatment plan during the course of a multi-fraction radiation treatment of head and neck cancer. This method investigated the ability to use only the planned dose with deformable image registration to predict dose changes caused by anatomical deformations.
This thesis presents the fundamental elements of a decision support system, combining patient pre-treatment parameters with the actual delivered dose computed using DIR while accounting for registration uncertainties.
Quantitative Analysis of Radiation-Associated Parenchymal Lung Change
Radiation-induced lung damage (RILD) is a common consequence of thoracic radiotherapy (RT). We present here a novel classification of the parenchymal features of RILD. We developed a deep learning algorithm (DLA) to automate the delineation of 5 classes of parenchymal texture of increasing density.
200 scans were used to train and validate the network, and the remaining 30 scans were used as a hold-out test set. The DLA automatically labelled the data with Dice scores of 0.98, 0.43, 0.26, 0.47 and 0.92 for the 5 respective classes.
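The Dice score used to compare automated and manual labels can be computed as follows (a generic sketch; the mask values are illustrative, not data from the study):

```python
import numpy as np

def dice(pred, ref):
    """Dice overlap between two binary masks (1.0 = perfect agreement)."""
    pred = np.asarray(pred, dtype=bool)
    ref = np.asarray(ref, dtype=bool)
    denom = pred.sum() + ref.sum()
    if denom == 0:
        return 1.0  # both masks empty: define as perfect agreement
    return 2.0 * np.logical_and(pred, ref).sum() / denom

# Toy slice: automated vs. manual labelling of tissue class 2
auto = np.array([[2, 2, 0], [0, 2, 0], [0, 0, 0]])
manual = np.array([[2, 0, 0], [0, 2, 2], [0, 0, 0]])
print(dice(auto == 2, manual == 2))  # 2*2/(3+3) = 0.666...
```

For a multi-class segmentation, the score is computed once per class against the corresponding binary mask, which is why the five classes above each receive their own Dice value.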
Qualitative evaluation showed that the automated labels were acceptable in over 80% of cases for all tissue classes, and achieved similar ratings to the manual labels. Lung registration was performed, and the effect of radiation dose on each tissue class and its correlation with respiratory outcomes were assessed. The change in volume of each tissue class over time, generated by manual and automated segmentation, was calculated. The 5 parenchymal classes showed distinct temporal patterns.
We quantified the volumetric change in textures after radiotherapy and correlated these changes with radiotherapy dose and respiratory outcomes.
The effect of local dose on tissue class revealed a strong dose-dependent relationship.
We have developed a novel classification of parenchymal changes associated with RILD that shows a convincing dose relationship. The tissue classes are related to both global and local dose metrics, and have a distinct evolution over time. Although less strong, there is a relationship between the radiological texture changes we can measure and respiratory outcomes, particularly the MRC score, which directly represents a patient's functional status. We have demonstrated the potential of our approach to analyse and understand the morphological and functional evolution of RILD in greater detail than previously possible.
Deep-Learning-based Fast and Accurate 3D CT Deformable Image Registration in Lung Cancer
Purpose: In some proton therapy facilities, patient alignment relies on two
2D orthogonal kV images, taken at fixed, oblique angles, as no 3D on-the-bed
imaging is available. The visibility of the tumor in kV images is limited since
the patient's 3D anatomy is projected onto a 2D plane, especially when the
tumor is behind high-density structures such as bones. This can lead to large
patient setup errors. A solution is to reconstruct the 3D CT image from the kV
images obtained at the treatment isocenter in the treatment position.
Methods: An asymmetric autoencoder-like network built with vision-transformer
blocks was developed. The data were collected from 1 head and neck patient: 2
orthogonal kV images (1024x1024 pixels), 1 3D CT with padding (512x512x512
voxels) acquired from the in-room CT-on-rails before the kV images were taken,
and 2 digitally-reconstructed-radiograph (DRR) images (512x512 pixels) based
on the CT. We resampled the kV images every 8 voxels and the DRR and CT images
every 4 voxels, forming a dataset of 262,144 samples in which each image has a
dimension of 128 in each direction. In training, both kV and DRR images were
utilized, and the encoder was encouraged to learn a joint feature map from
both. In testing, only independent kV images were used. The full-size
synthetic CT (sCT) was obtained by concatenating the sCTs generated by the
model according to their spatial information. The image quality of the sCT was
evaluated using the mean absolute error (MAE) and the
per-voxel-absolute-CT-number-difference volume histogram (CDVH).
Results: The model achieved a reconstruction time of 2.1 s and an MAE of
<40 HU. The CDVH showed that <5% of the voxels had a per-voxel absolute
CT-number difference larger than 185 HU.
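The two image-quality metrics, MAE and CDVH, can be sketched in NumPy. The toy volumes below are synthetic and serve only to illustrate the definitions, not to reproduce the reported numbers:

```python
import numpy as np

def mae(sct, ct):
    """Mean absolute error in HU between synthetic and reference CT."""
    return float(np.abs(np.asarray(sct, float) - np.asarray(ct, float)).mean())

def cdvh(sct, ct, thresholds_hu):
    """Per-voxel absolute CT-number-difference volume histogram (CDVH).

    For each threshold t, returns the fraction of voxels whose absolute HU
    difference |sCT - CT| exceeds t (analogous to a dose-volume histogram).
    """
    diff = np.abs(np.asarray(sct, float) - np.asarray(ct, float))
    return {t: float((diff > t).mean()) for t in thresholds_hu}

# Toy volumes with a small region of large disagreement
rng = np.random.default_rng(0)
ct = rng.normal(0.0, 20.0, size=(32, 32, 32))
sct = ct + rng.normal(0.0, 10.0, size=ct.shape)
sct[:4, :4, :4] += 300.0  # simulate a locally poor reconstruction
print(mae(sct, ct), cdvh(sct, ct, thresholds_hu=[40, 185]))
```

A CDVH result of the form {185: 0.002} would read as "0.2% of voxels differ by more than 185 HU", the same style of statement as in the results above.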
Conclusion: A patient-specific vision-transformer-based network was developed
and shown to reconstruct 3D CT images from kV images accurately and
efficiently.
Integration of Spatial Distortion Effects in a 4D Computational Phantom for Simulation Studies in Extra-Cranial MRI-guided Radiation Therapy: Initial Results.
Purpose: Spatial distortions in magnetic resonance imaging (MRI) are mainly caused by inhomogeneities of the static magnetic field, nonlinearities in the applied gradients, and tissue-specific magnetic susceptibility variations. These factors may significantly alter the geometrical accuracy of the reconstructed MR image, thus questioning the reliability of MRI for guidance in image-guided radiation therapy. In this work, we quantified MRI spatial distortions and created a quantitative model where different sources of distortion can be separated. The generated model was then integrated into a four-dimensional (4D) computational phantom for simulation studies in MRI-guided radiation therapy at extra-cranial sites.
Methods: A geometrical spatial distortion phantom was designed in four modules embedding laser-cut PMMA grids, providing 3520 landmarks in a field of view of (345 × 260 × 480) mm³. The construction accuracy of the phantom was verified experimentally. Two fast MRI sequences for extra-cranial imaging at 1.5 T were investigated, considering axial slices acquired with online distortion correction, in order to mimic practical use in MRI-guided radiotherapy. Distortions were separated into their sources by acquisition of images with gradient polarity reversal and dedicated susceptibility calculations. This separation yielded a quantitative spatial distortion model to be used for MR imaging simulations. Finally, the obtained spatial distortion model was embedded into an anthropomorphic 4D computational phantom, providing registered virtual CT/MR images in which spatial distortions in MRI acquisition can be simulated.
Results: The manufacturing accuracy of the geometrical distortion phantom was quantified to be within 0.2 mm in the grid planes and 0.5 mm in depth, including thickness variations and bending effects of individual grids.
Residual spatial distortions after MRI distortion correction were strongly influenced by the applied correction mode, with larger effects in the trans-axial direction. In the axial plane, gradient nonlinearities caused the main distortions, with values up to 3 mm in a 1.5 T magnet, whereas static field and susceptibility effects were below 1 mm. The integration in the 4D anthropomorphic computational phantom highlighted that deformations can be severe in the region of the thoracic diaphragm, especially when using axial imaging with 2D distortion correction. Adaptation of the phantom based on patient-specific measurements was also verified, aiming at increased realism in the simulation.
Conclusions: The implemented framework provides an integrated approach to MRI spatial distortion modeling, in which different sources of distortion can be quantified in time-dependent geometries. The computational phantom represents a valuable platform to study motion management strategies in extra-cranial MRI-guided radiotherapy, where the effects of spatial distortions can be modeled on synthetic images in a virtual environment.
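The gradient-polarity-reversal separation rests on the fact that, along the readout direction, B0-inhomogeneity and susceptibility distortions flip sign when the gradient polarity is reversed, while gradient-nonlinearity distortions do not. A minimal sketch of this decomposition (displacement values are illustrative, not measured data):

```python
import numpy as np

def separate_distortions(disp_forward, disp_reversed):
    """Separate MRI spatial distortions via gradient-polarity reversal.

    Averaging the displacement maps measured with forward and reversed
    readout polarity isolates the polarity-invariant part (gradient
    nonlinearity); differencing isolates the polarity-flipping part
    (static-field inhomogeneity and susceptibility).
    """
    d_fwd = np.asarray(disp_forward, dtype=float)
    d_rev = np.asarray(disp_reversed, dtype=float)
    gradient_nonlinearity = 0.5 * (d_fwd + d_rev)   # same sign in both scans
    b0_and_susceptibility = 0.5 * (d_fwd - d_rev)   # sign flips with polarity
    return gradient_nonlinearity, b0_and_susceptibility

# Toy 1D landmark displacements (mm): 2 mm nonlinearity + 0.5 mm B0 shift
d_fwd = np.array([2.5, 2.5, 2.5])
d_rev = np.array([1.5, 1.5, 1.5])
gnl, b0 = separate_distortions(d_fwd, d_rev)
print(gnl, b0)  # [2. 2. 2.] [0.5 0.5 0.5]
```

Applied per landmark of the grid phantom, this yields the source-separated distortion maps that feed the simulation model.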