
    IMAGE-BASED RESPIRATORY MOTION EXTRACTION AND RESPIRATION-CORRELATED CONE BEAM CT (4D-CBCT) RECONSTRUCTION

    Accounting for respiratory motion during imaging helps improve targeting precision in radiation therapy. Respiratory motion can be a major source of error in determining the position of thoracic and upper abdominal tumor targets during radiotherapy, so extracting the respiratory signal is a key task in radiation therapy planning. Respiration-correlated or four-dimensional CT (4DCT) imaging techniques have recently been integrated into imaging systems for verifying tumor position during treatment and managing respiration-induced tissue motion. The quality of the 4D reconstructed volumes depends heavily on the extracted respiratory signal and the phase sorting method used. This thesis is divided into two parts. In the first part, two image-based respiratory signal extraction methods are proposed and evaluated. These methods extract the respiratory signal from CBCT projections without external sources, implanted markers, or reliance on any particular structure in the images, such as the diaphragm. The first method, called Local Intensity Feature Tracking (LIFT), extracts the respiratory signal from feature points detected and tracked through the sequence of projections. The second method, called Intensity Flow Dimensionality Reduction (IFDR), detects the respiratory signal by computing the optical flow of every pixel between each pair of adjacent projections. The motion variance in the optical-flow dataset is then extracted using linear and non-linear dimensionality reduction techniques to yield a respiratory signal. Experiments on clinical datasets showed that both proposed methods successfully extracted respiratory signals that correlate well with standard respiratory signals such as diaphragm position and internal-marker traces. In the second part of this thesis, 4D-CBCT reconstruction based on different phase sorting techniques is studied.
The quality of the 4D reconstructed images is evaluated and compared for different phase sorting methods such as internal markers, external markers, and image-based methods (LIFT and IFDR). Also, a method for generating additional projections to be used in 4D-CBCT reconstruction is proposed to reduce the artifacts that result from reconstructing with an insufficient number of projections. Experimental results demonstrated the feasibility of the proposed method in recovering edges and reducing streak artifacts.
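The core of the IFDR idea described above — reducing per-pixel optical-flow motion across adjacent projections to a one-dimensional respiratory trace via linear dimensionality reduction — can be sketched briefly. This is a minimal illustration, assuming the flow magnitudes have already been computed; the function name, array shapes, and synthetic data are illustrative, not the thesis implementation.

```python
import numpy as np

def extract_respiratory_signal(flow_fields):
    """PCA-style sketch: flow_fields is (n_projections, n_pixels), each row
    the flattened optical-flow magnitude between adjacent projections.
    The first principal component of the motion variance is taken as the
    respiratory trace (shapes and name are illustrative)."""
    X = flow_fields - flow_fields.mean(axis=0)      # center each pixel's motion
    U, S, Vt = np.linalg.svd(X, full_matrices=False)
    signal = X @ Vt[0]                              # score on 1st principal axis
    return signal / (np.abs(signal).max() + 1e-12)  # normalized trace

# Toy demo: 100 "projections" whose pixel motion is driven by a sinusoid.
t = np.linspace(0, 4 * np.pi, 100)
true_breath = np.sin(t)
rng = np.random.default_rng(0)
weights = rng.random(50)                            # per-pixel motion sensitivity
flows = np.outer(true_breath, weights) + 0.01 * rng.standard_normal((100, 50))
est = extract_respiratory_signal(flows)
corr = abs(np.corrcoef(est, true_breath)[0, 1])     # strong correlation expected
```

On this rank-one synthetic data the recovered trace matches the driving sinusoid up to sign and scale, which is why a correlation check (rather than a direct comparison) is the natural validation.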

    A biomechanical approach for real-time tracking of lung tumors during External Beam Radiation Therapy (EBRT)

    Lung cancer is the most common cause of cancer-related death in both men and women. Radiation therapy is widely used for lung cancer treatment, but it can be challenging due to respiratory motion. Motion modeling is a popular method for respiratory motion compensation, and biomechanics-based motion models are believed to be more robust and accurate because they are grounded in the physics of motion. In this study, we aim to develop a biomechanics-based lung tumor tracking algorithm that can be used during External Beam Radiation Therapy (EBRT). An accelerated biomechanical lung model can be used during EBRT only if its boundary conditions (BCs) are defined in a way that allows them to be updated in real time. As such, we developed a lung finite element (FE) model in conjunction with a neural network (NN)-based method for predicting the BCs of the lung model from chest surface motion data. To develop the lung FE model for tumor motion prediction, thoracic 4D CT images of lung cancer patients were processed to capture the lung and diaphragm geometry, trans-pulmonary pressure, and diaphragm motion. Next, chest surface motion was obtained by tracking the motion of the ribcage in the 4D CT images, simulating the surface motion data that can be acquired with optical tracking systems. Finally, two feedforward NNs were developed: one for estimating the trans-pulmonary pressure and another for estimating the diaphragm motion from chest surface motion data. The algorithm development consists of four steps: 1) automatic segmentation of the lungs and diaphragm; 2) diaphragm motion modelling using Principal Component Analysis (PCA); 3) development of the lung FE model; and 4) use of the two NNs to estimate the trans-pulmonary pressure values and diaphragm motion from chest surface motion data. The results indicate that the Dice similarity coefficient between actual and simulated tumor volumes ranges from 0.76±0.04 to 0.91±0.01, which is favorable.
As such, real-time lung tumor tracking during EBRT using the proposed algorithm is feasible. Hence, further clinical studies involving lung cancer patients to assess the algorithm's performance are justified.
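A toy version of the surrogate-to-boundary-condition mapping above — a small feedforward network regressing a scalar stand-in for trans-pulmonary pressure from chest-surface displacements — might look like the following. The feature count, network size, synthetic data, and plain gradient-descent training are all illustrative assumptions, not the thesis implementation.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-in: 6 chest-surface displacement features -> one scalar
# boundary condition (e.g. a pressure value); the smooth nonlinear target
# mimics a learnable surrogate relationship.
X = rng.standard_normal((200, 6))
true_w = rng.standard_normal(6)
y = np.tanh(X @ true_w)[:, None]

# One-hidden-layer feedforward network, trained by full-batch gradient descent.
W1 = 0.5 * rng.standard_normal((6, 16)); b1 = np.zeros(16)
W2 = 0.5 * rng.standard_normal((16, 1)); b2 = np.zeros(1)

def forward(X):
    h = np.tanh(X @ W1 + b1)        # hidden activations
    return h, h @ W2 + b2           # prediction

_, pred0 = forward(X)
loss0 = np.mean((pred0 - y) ** 2)   # loss before training

lr = 0.05
for _ in range(500):
    h, pred = forward(X)
    g = 2 * (pred - y) / len(X)     # dLoss/dpred
    gW2 = h.T @ g; gb2 = g.sum(0)
    gh = g @ W2.T * (1 - h ** 2)    # backprop through tanh
    gW1 = X.T @ gh; gb1 = gh.sum(0)
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2

_, pred = forward(X)
loss = np.mean((pred - y) ** 2)     # should fall well below loss0
```

In the thesis setting, one such trained network would run per boundary condition (pressure and diaphragm motion), with inference cheap enough for real-time use.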

    Assessing and Improving 4D-CT Imaging for Radiotherapy Applications

    Lung cancer has both a high incidence and a high death rate. A contributing factor to these high rates is the difficulty of treating lung cancers due to the inherent mobility of the lung tissue and the tumour. 4D-CT imaging has been developed to image lung tumours as they move during respiration. Most 4D-CT imaging methods rely on data from an external respiratory surrogate to sort the images according to respiratory phase. However, it has been shown that respiratory-surrogate 4D-CT methods can suffer from imaging artifacts that degrade the quality of the 4D-CT volumes used to plan a patient's radiation therapy. In Chapter 2 of this thesis, a method was developed to investigate the correlation between an external respiratory surrogate and the internal anatomy. The studies were performed on ventilated pigs with an induced inconsistent breathing amplitude. The effect of inconsistent breathing on the correlation between the external marker and the internal anatomy was tested using linear regression. In 10 of the 12 studies performed, there were significant changes in the slope of the regression line as a result of inconsistent breathing. From this study we conclude that the relationship between an external marker and the internal anatomy is not stable and can be perturbed by inconsistent breathing amplitudes. Chapter 3 describes the development of an image-based 4D-CT algorithm based on normalized cross correlation (NCC) between images. The volumes produced by the image-based algorithm were compared to volumes produced using a clinical external-marker 4D-CT algorithm. The image-based method produced 4D-CT volumes with fewer imaging artifacts than the external-marker volumes, showing that an image-based 4D-CT method can be developed and perform as well as or better than the external-marker methods currently in clinical use.
In Chapter 4, a method was developed to assess the uncertainties in the locations of anatomical structures in the volumes produced by the image-based 4D-CT algorithm developed in Chapter 3. The uncertainties introduced by using NCC to match a pair of images according to respiratory phase were modeled and experimentally determined. Additionally, the assumption that two subvolumes can be matched in respiratory phase using a single pair of 2D overlapping images was experimentally validated. When the image-based 4D-CT algorithm developed in Chapter 3 was applied to data acquired from a ventilated pig with induced inconsistent breathing, the displacement uncertainties were on the order of 1.0 mm. The results of this thesis show that a miscorrelation between the motion of a respiratory surrogate (marker) and the internal anatomy is possible under inconsistent breathing amplitudes. Additionally, it was shown that an image-based 4D-CT method operating without external respiratory surrogates can produce artifact-free volumes synchronous with respiratory phase. The spatial uncertainties of the volumes produced by the image-based 4D-CT method were quantified and shown to be small (~1 mm), an acceptable accuracy for radiation treatment planning. Eliminating the external respiratory surrogates also simplifies the implementation and increases the throughput of the image-based 4D-CT method.
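The NCC matching metric at the heart of the Chapter 3 algorithm is compact enough to sketch directly. This is a generic illustration of normalized cross correlation used to pick the best respiratory-phase match, not the thesis code; the function names and toy data are assumptions.

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation between two equally sized images,
    in [-1, 1]; 1 means identical up to brightness/contrast scaling."""
    a = a.astype(float).ravel(); b = b.astype(float).ravel()
    a -= a.mean(); b -= b.mean()
    denom = np.sqrt((a @ a) * (b @ b))
    return float(a @ b / denom) if denom else 0.0

def best_phase_match(query, candidates):
    """Pick the candidate image whose NCC with `query` is highest --
    the core step of image-based phase sorting."""
    scores = [ncc(query, c) for c in candidates]
    return int(np.argmax(scores)), scores

# Toy demo: the query is a noisy copy of candidate 2, so NCC should
# single it out with a score near 1.
rng = np.random.default_rng(2)
candidates = [rng.standard_normal((16, 16)) for _ in range(5)]
query = candidates[2] + 0.05 * rng.standard_normal((16, 16))
idx, scores = best_phase_match(query, candidates)
```

Because NCC is invariant to global brightness and contrast changes, it tolerates the exposure variations between CT image pairs better than a raw intensity difference would.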

    Integration of Enhanced Optical Tracking Techniques and Imaging in IGRT

    Patient setup/Optical tracking/IGRT/Treatment surveillance. In external beam radiotherapy, modern technologies for dynamic dose delivery and beam conformation provide high selectivity in administering the radiation dose to the pathological volume. A comparable accuracy level is needed in the 3-D localization of the tumor and organs at risk (OARs) in order to realize the planned dose distribution in each irradiation session. In-room imaging techniques for patient setup verification and tumor targeting may benefit from the combined daily use of optical tracking technologies, supported by techniques for detecting and compensating organ motion. Multiple solutions to enhance the use of optical tracking for on-line correction of target localization uncertainties are described, with specific emphasis on the compensation of setup errors, breathing movements, and non-rigid deformations. The final goal is the implementation of customized protocols in which appropriate external landmarks, tracked in real time by noninvasive optical devices, are selected as a function of inner target localization. The presented methodology achieves high accuracy in patient setup optimization and also provides a valuable tool for on-line patient surveillance, taking both breathing and deformation effects into account. The systematic application of optical tracking is put forward as a reliable and low-cost procedure for reducing safety margins, once the patient-specific correlation between external landmarks and inner structures has been established. Therefore, the integration of optical tracking with in-room imaging devices is proposed as a way to gain higher confidence in the framework of Image Guided Radiation Therapy (IGRT) treatments.
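The patient-specific external/internal correlation mentioned above is often modeled, in its simplest form, as a linear fit between an external marker trace and the internal target position. The sketch below illustrates that idea with synthetic numbers; the traces, units, and fit quality are assumptions for illustration only.

```python
import numpy as np

# Synthetic session: an external optical-marker trace (arbitrary units)
# and an internal target position (mm) that follows it linearly plus noise.
rng = np.random.default_rng(3)
external = np.sin(np.linspace(0, 6 * np.pi, 300))
internal = 8.0 * external + 2.0 + 0.1 * rng.standard_normal(300)

# Fit the patient-specific correlation model once, then use it to infer
# the internal target from surface data alone during treatment.
slope, intercept = np.polyfit(external, internal, 1)
predicted = slope * external + intercept
rmse = np.sqrt(np.mean((predicted - internal) ** 2))   # residual error (mm)
```

The residual (here ~0.1 mm on noise-only data) is what such a protocol would monitor: if inconsistent breathing perturbs the slope, as the porcine study above reports, the residual grows and the model needs re-establishing.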

    Investigation Of The Microsoft Kinect V2 Sensor As A Multi-Purpose Device For A Radiation Oncology Clinic

    For a radiation oncology clinic, numerous devices are available to assist in the workflow for radiotherapy treatments. Processes such as patient verification, motion management, and respiratory motion tracking can all be improved by devices currently on the market. These three processes directly impact patient safety and treatment efficacy and, as such, are important to track and quantify. Most available products address only one of these processes and may be out of reach for a typical radiation oncology clinic because of difficult implementation and integration with existing hardware. This manuscript investigates the use of the Microsoft Kinect v2 sensor to provide solutions for all three processes while maintaining a relatively simple and easy-to-use implementation. To assist with patient verification, the Kinect system was programmed to provide a facial recognition and recall process. The facial recognition algorithm was built on a facial mapping library distributed by Microsoft within the Software Development Kit (SDK). The system extracts 31 fiducial points representing various facial landmarks, creates 3D vectors between each pair of the 31 points, and calculates the magnitude of each vector. This allows a face to be defined as a collection of 465 specific vector magnitudes. These 465 vector magnitudes are used both in the creation of a facial reference data set and in subsequent evaluations of real-time sensor data in the matching algorithm. To test the algorithm, a database of 39 faces was created, each with 465 vectors derived from the fiducial points, and a one-to-one matching procedure was performed to obtain sensitivity and specificity data for the facial identification system. In total, 5299 trials were performed and threshold parameters were created for match determination.
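The 465-magnitude face descriptor described above follows directly from the combinatorics: 31 fiducial points yield C(31, 2) = 465 point pairs. A minimal sketch of that signature and a threshold-based comparison is given below; the function names, tolerance value, and synthetic faces are illustrative assumptions, not the manuscript's tuned parameters.

```python
import itertools
import numpy as np

def face_signature(points):
    """Reduce N 3-D fiducial points to their C(N, 2) pairwise distances;
    with N = 31 this gives the 465-element descriptor described above."""
    pts = np.asarray(points, float)
    return np.array([np.linalg.norm(pts[i] - pts[j])
                     for i, j in itertools.combinations(range(len(pts)), 2)])

def matches(sig_a, sig_b, tol=2.0):
    """Toy match rule: mean absolute distance difference under a tolerance
    (tol here is a made-up stand-in for the ROC-tuned thresholds)."""
    return float(np.mean(np.abs(sig_a - sig_b))) < tol

# Toy demo: the same face with small measurement jitter should match;
# an unrelated face should not.
rng = np.random.default_rng(4)
face = 100 * rng.random((31, 3))                       # fiducials in mm
sig = face_signature(face)                             # 465 magnitudes
same = face_signature(face + 0.2 * rng.standard_normal((31, 3)))
other = face_signature(100 * rng.random((31, 3)))
```

Using pairwise distances rather than raw coordinates makes the descriptor invariant to head translation and rotation, which is presumably why a distance-based signature suits a sensor that sees the patient from varying poses.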
Optimization of these parameters in the matching algorithm by way of ROC curves indicated that the sensitivity of the system was 96.5% and the specificity was 96.7%. These results indicate a fairly robust methodology for verifying a specific face in real time against a pre-collected reference data set. In its current implementation, the process of data collection for each face and the subsequent matching session averaged approximately 30 seconds, which may be too onerous to provide a realistic supplement to patient identification in a clinical setting. Despite the time commitment, the data collection process was well tolerated by all participants. Ambient light was found to play a crucial role in the accuracy and reproducibility of the facial recognition system. Testing at various light levels found that ambient light greater than 200 lux produced the most accurate results; the acquisition process should therefore be set up to ensure consistent ambient light conditions across both the reference recording session and subsequent real-time identification sessions. In developing a motion management process with the Kinect, two separate but complementary processes were created. First, to track large-scale anatomical movements, the automatic skeletal tracking capabilities of the Kinect were utilized. 25 specific body joints (head, elbow, knee, etc.) make up the skeletal frame and are locked to relative positions on the body. Using code written in C#, these joints are tracked in 3D space and compared to an initial state of the patient, providing an indication of anatomical motion. Additionally, to track smaller, more subtle movements in a specific area of the body, a user-drawn ROI can be created, in which the depth values of all pixels associated with the body are compared to the initial state.
The system counts the number of live pixels with a depth difference greater than a specified threshold compared to the initial state and calculates the area of each of those pixels based on its depth. The percentage of area moved (PAM) relative to the ROI area then becomes an indication of gross movement within the ROI. In this study, 9 specific joints proved to be stable during data acquisition. When moved in orthogonal directions, each recorded coordinate showed a relatively linear trend of movement but not the expected 1:1 relationship to couch movement. Instead, the vector magnitude between the initial and current position proved a better indicator of movement. 5 of the 9 joints (Left/Right Elbow, Left/Right Hip, and Spine-Base) showed relatively consistent values for radial movements of 5 mm and 10 mm, achieving a 20%-25% coefficient of variation. For these 5 joints, threshold values for calculated radial distances of 3 mm and 7.5 mm were set for 5 mm and 10 mm of actual movement, respectively. When monitoring a drawn ROI, the depth sensor was found to have very little sensitivity to movement in the X (Left/Right) or Y (Superior/Inferior) direction, but exceptional sensitivity in the Z (Anterior/Posterior) direction. As such, the PAM values could only be correlated with motion in the Z direction. PAM values over 60% were shown to be indicative of movement in the Z direction equal to the threshold value set for movement as small as 3 mm. Lastly, the Kinect was utilized to create a marker-less respiratory motion tracking system. Code was written to access the Kinect's depth sensor and track the respiratory motion of a subject by recording the depth (distance) values obtained at several user-selected points on the subject, with each point representing one pixel on the depth image.
As a patient breathes, a specific anatomical point on the chest/abdomen moves slightly within the depth image across a number of pixels. By tracking how the depth values change for a specific pixel, instead of how the anatomical point moves throughout the image, a respiratory trace can be obtained from the changing depth values of the selected pixel, enabling a marker-less setup. Varian's RPM system and the Anzai belt system were used in tandem with the Kinect to compare the respiratory traces obtained by each on two different subjects. Analysis of the depth information from the Kinect for phase-based and amplitude-based binning proved to correlate well with the RPM and Anzai systems. IQR values were obtained comparing the times associated with specific amplitude and phase percentage values across products. The IQR spans indicated the Kinect would measure a specific percentage value within 0.077 s for Subject 1 and 0.164 s for Subject 2 of the values obtained with RPM or Anzai. For 4D-CT scans, these times correspond to less than 1 mm of couch movement and an offset of one half of an acquired slice. These minimal deviations between the traces created by the Kinect and those from RPM or Anzai indicate that by tracking the depth values of user-selected pixels within the depth image, rather than specific anatomical locations, respiratory motion can be tracked and visualized with the Kinect, with results comparable to those of commercially available products.
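The fixed-pixel depth-tracking idea above reduces to reading one depth value per frame. The sketch below illustrates it on a synthetic depth-frame stack; the frame shapes, depth units, and oscillation amplitude are assumptions for illustration, not sensor data.

```python
import numpy as np

def respiratory_trace(depth_frames, pixel):
    """Marker-less trace as described above: read the depth value of a
    single user-selected pixel across frames, rather than tracking the
    anatomical point itself. depth_frames: (n_frames, H, W) in mm."""
    r, c = pixel
    return depth_frames[:, r, c].astype(float)

# Toy demo: a flat "chest" surface whose depth oscillates sinusoidally
# around 800 mm with a 10 mm amplitude over one breathing cycle.
t = np.linspace(0, 2 * np.pi, 60)
frames = 800.0 + 10.0 * np.sin(t)[:, None, None] * np.ones((60, 4, 4))
trace = respiratory_trace(frames, (2, 2))
amplitude = trace.max() - trace.min()   # ~20 mm peak-to-peak
```

Reading a fixed pixel trades anatomical specificity for robustness: no registration or feature tracking is needed, only a stable camera pose, which matches the simplicity goal stated above.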

    Evaluating and Improving 4D-CT Image Segmentation for Lung Cancer Radiotherapy

    Lung cancer is a high-incidence disease with low survival despite surgical advances and concurrent chemo-radiotherapy strategies. Image-guided radiotherapy provides a means of treatment; however, significant challenges exist for imaging, treatment planning, and delivery of radiation due to the influence of respiratory motion. 4D-CT imaging can improve the image quality of thoracic target volumes influenced by respiratory motion. 4D-CT-based treatment planning strategies require highly accurate anatomical segmentation of tumour volumes for radiotherapy treatment plan optimization. Variable segmentation of tumour volumes contributes significantly to uncertainty in radiotherapy planning, because the exact shape of the lesion is not known and the variability is difficult to quantify. As image segmentation is one of the earliest tasks in the radiotherapy process, inherent geometric uncertainties propagate to subsequent stages, potentially jeopardizing patient outcomes. Thus, this work assesses and suggests strategies for mitigating segmentation-related geometric uncertainties in 4D-CT-based lung cancer radiotherapy at the pre- and post-treatment planning stages.

    Incorporating Cardiac Substructures Into Radiation Therapy For Improved Cardiac Sparing

    Growing evidence suggests that radiation therapy (RT) doses to the heart and cardiac substructures (CS) are strongly linked to cardiac toxicities, though only the whole heart is considered clinically. This work aimed to utilize the superior soft-tissue contrast of magnetic resonance (MR) imaging to segment the CS, quantify uncertainties in their position, and assess their effect on treatment planning in an MR-guided environment. Automatic segmentation of 12 CS was performed using a novel hybrid MR/computed tomography (CT) atlas method and was improved upon using a 3-dimensional deep learning neural network (U-Net). Intra-fraction motion due to respiration was then quantified, as were the inter-fraction setup uncertainties on a novel MR-linear accelerator. Treatment planning comparisons were performed with and without substructure inclusion, and methods to reduce radiation dose to sensitive CS were evaluated. Lastly, these technologies (the deep learning U-Net) were translated to an MR-linear accelerator and a segmentation pipeline was created. The hybrid MR/CT atlas generated accurate automatic segmentations for the chambers and great vessels (Dice similarity coefficient (DSC) > 0.75), but coronary artery segmentations were unsuccessful (DSC < 0.3). After implementing deep learning, the DSC for the chambers and great vessels was ≥0.85, with an improvement in the coronary arteries (DSC > 0.5). Similar accuracy was achieved when implementing deep learning for MR-guided RT. On average, atlas-based automatic segmentations required ~10 minutes to generate per patient, while deep learning required only 14 seconds. The inclusion of CS in the treatment planning process did not yield statistically significant changes in plan complexity, PTV dose, or OAR dose.
The automatic segmentation results from deep learning offer major efficiency and accuracy gains for CS segmentation, with high potential for rapid implementation into radiation therapy planning for improved cardiac sparing. Introducing CS into RT planning for MR-guided RT presented an opportunity for more effective sparing with a limited increase in plan complexity.
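The Dice similarity coefficient quoted throughout the segmentation results above is a simple overlap metric, DSC = 2|A ∩ B| / (|A| + |B|). A minimal reference implementation on binary masks is shown below; the toy masks are illustrative, not study data.

```python
import numpy as np

def dice(mask_a, mask_b):
    """Dice similarity coefficient between two binary masks:
    DSC = 2|A ∩ B| / (|A| + |B|), ranging 0 (no overlap) to 1 (identical)."""
    a = np.asarray(mask_a, bool)
    b = np.asarray(mask_b, bool)
    total = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / total if total else 1.0

# Toy demo: two 6x6 squares offset by one voxel in each axis.
a = np.zeros((10, 10), int); a[2:8, 2:8] = 1   # 36 voxels
b = np.zeros((10, 10), int); b[3:9, 3:9] = 1   # 36 voxels, shifted
# intersection is 5x5 = 25 voxels -> DSC = 2*25 / 72 = 50/72 ≈ 0.694
```

Because DSC divides overlap by total volume, it penalizes small structures (like coronary arteries) far more harshly than large chambers for the same absolute boundary error, which is consistent with the gap between chamber and coronary scores reported above.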
