25 research outputs found

    Towards real-time image-guided radiotherapy: 3D patient-specific breathing motion models driven by 2D cine-MRI

    Treating mobile tumors such as lung or liver tumors with radiation therapy is challenging because the motion induced by respiration has to be considered. Until recently, there was no online, non-invasive imaging solution to follow the tumor motion during treatment delivery. To ensure target coverage, safety margins are used to extend the irradiated area to encompass the motion, meaning healthy tissues are also hit, which can induce undesired secondary effects. In the last few years, a new hybrid device combining Magnetic Resonance Imaging with a radiotherapy treatment unit has been developed. It makes it possible to acquire images and deliver radiotherapy treatment simultaneously. Magnetic Resonance Imaging is the ideal imaging modality for this application because it provides good soft-tissue contrast, the orientation of the images can be chosen, and the image acquisition does not irradiate the patient. In such devices, 2D fast Magnetic Resonance images are well suited to capture the real-time motion of the tumor. To be used effectively in adaptive radiotherapy, these MR images have to be combined with 3D X-ray images such as CT, which are necessary to compute the radiation dose deposition. In this thesis, we developed a method combining both image modalities to track the motion on MR images and reproduce the tracked motion on a sequence of 3DCT images in real-time using a motion model. It can be used to drive a treatment delivery in real-time, follow breathing motion and detect irregularities in the breathing pattern. (FSA - Sciences de l'ingénieur) -- UCL, 202

    3DCT reconstruction from single X-ray projection

    The treatment of abdominal or thoracic tumors is challenging in particle therapy because the respiratory motion induces a movement of the tumor. One current option to follow the breathing motion is to regularly acquire one X-ray projection during the treatment, which does not give the full 3D anatomy. The purpose of this work is to reconstruct a 3DCT image based on a single X-ray projection using a neural network.

    Towards Real-Time PBS dose accumulation observation using dynamic MRI

    Context and objective: With the arrival of hybrid photon-MRI solutions in standard radiotherapy, more research is moving towards real-time adaptive treatment or real-time treatment verification applications. In proton therapy, the hardware solution does not exist yet but is under serious consideration. Proton treatments, with their particular dose distribution profile, should benefit even more than photons from such a hybrid device. In this context, we developed a tool to study the new treatment strategies that could emerge from such a hybrid machine and their possible gain. This particular work aims at visualizing a Pencil Beam Scanning (PBS) treatment plan on the continuous 2D MR images and at comparing a real-time recomputed range (RTR) and dose (RTD) based on those images with the planning range (PR) and dose (PD) computed on the planning CT. Method: To compute the proton energy loss with reasonable accuracy, CT images are still necessary to extract stopping-power information. First, the continuous 2D MRI acquisition is launched and several interfaces are tracked using a home-made tool (diaphragm, skin, target if visible, …). Based on the tracked positions and on the previously acquired 4DCT, a new CT image is interpolated or extrapolated using the deformation fields from the 4DCT phases to the midP phase. The same interfaces are then tracked on the newly created CT image to check whether the motion matches the real motion observed on the MRI. Finally, the RTR and RTD are recomputed on this new CT video frame, which imitates the real-time motion captured by the MRI better than the previously acquired 4DCT. To compute the RTR and RTD, an analytical method is used for now to achieve real-time performance. For the future, we are working on a real-time Monte Carlo dose calculation version. Results: Early results, in which the real-time range is computed only for proton spots passing through the MRI slice, show a significant mean difference between the RTR and PR, with per-spot differences from -9 to 54 mm (mean = 20 mm, std = 11 mm). The mean range difference was not much affected by the delivery starting time for the studied patients. Conclusion: By taking motion into account, the developed tool revealed significant deviations between the planned range and the recomputed range. Such a tool would therefore be very helpful to assess the robustness of the treatment plan to motion effects and to simulate other treatment strategies. The real-time dose observation is for now only a visual tool and not yet used as a quality check for the delivery. Full DVHs based on the RTD will be available for comparison in the next version of the tool.
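
    The CT interpolation step described above can be sketched in a few lines; this is a minimal 2D numpy illustration under simplifying assumptions (linear blending of two phase-to-midP displacement fields driven by a scalar tracked breathing state, nearest-neighbour warping), not the tool's actual implementation:

```python
import numpy as np

def interpolate_phase_field(field_a, field_b, alpha):
    """Linearly blend two precomputed phase->midP displacement fields
    according to the tracked breathing state alpha in [0, 1].
    (Illustrative simplification of the interpolation/extrapolation step.)"""
    return (1.0 - alpha) * field_a + alpha * field_b

def warp_nearest(image, field):
    """Warp a 2D CT slice with a (2, H, W) displacement field using
    nearest-neighbour sampling, clipping coordinates at the borders."""
    h, w = image.shape
    yy, xx = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    ys = np.clip(np.rint(yy + field[0]).astype(int), 0, h - 1)
    xs = np.clip(np.rint(xx + field[1]).astype(int), 0, w - 1)
    return image[ys, xs]
```

    In the real pipeline the blended field would deform the midP CT volume in 3D, with the tracked interface positions supplying alpha for each cine-MRI frame.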

    Towards Real-Time proton range verification using dynamic MRI

    Context and objective: With the arrival of hybrid photon-MRI solutions in standard radiotherapy, more research is moving towards real-time adaptive treatment or real-time treatment verification applications. In proton therapy, the hardware solution does not exist yet but is under serious consideration. Proton treatments, with their particular dose distribution profile, should benefit even more than photons from such a hybrid device. In this context, we developed a tool to study the new treatment strategies that could emerge from such a hybrid machine and their possible gain. This particular work aims at visualizing a PBS treatment plan on the continuous 2D MR images and at comparing a real-time recomputed range (RR) based on those images with the planning range (PR) computed on the average CT. Method: To compute the proton energy loss with reasonable accuracy, stopping-power information needs to be extracted from CT images. First, the continuous MRI acquisition is launched and the diaphragm position is tracked using a home-made tool. Based on the tracked position, the 4DCT slices and 4DCT phase corresponding to the current MRI slice are selected automatically. Then, a non-rigid registration between the current MRI slice and the selected 4DCT slice is performed to create a virtual continuous CT video. Finally, the RR is recomputed on this video, which imitates the real-time motion captured by the MRI better than the 4DCT. Results and conclusion: Early results show a significant mean difference between the RR and PR, with per-spot differences from -9 to 54 mm (mean = 20 mm, std = 11 mm). The mean difference was not much affected by the delivery starting time for the studied patient. By taking motion into account, the developed tool revealed significant deviations between the planned range and the recomputed range. Such a tool would therefore be very helpful to assess the robustness of the treatment plan to motion effects.

    Patient-specific three-dimensional image reconstruction from a single X-ray projection using a convolutional neural network for on-line radiotherapy applications

    Background and purpose: Radiotherapy is commonly chosen to treat thoracic and abdominal cancers. However, irradiating mobile tumors accurately is extremely complex due to the organs' breathing-related movements. Different methods have been studied and developed to treat mobile tumors properly. The combination of X-ray projection acquisition and implanted markers is used to locate the tumor in two dimensions (2D) but does not provide three-dimensional (3D) information. The aim of this work is to reconstruct a high-quality 3D computed tomography (3D-CT) image based on a single X-ray projection to locate the tumor in 3D without the need for implanted markers. Materials and methods: Nine patients treated with radiotherapy for lung or liver cancer were studied. For each patient, a data augmentation tool was used to create 500 new 3D-CT images from the planning four-dimensional computed tomography (4D-CT). For each 3D-CT, the corresponding digitally reconstructed radiograph was generated, and the 500 2D images were input into a convolutional neural network that then learned to reconstruct the 3D-CT. The Dice score coefficient, normalized root mean squared error and difference between the ground-truth and the predicted 3D-CT images were computed and used as metrics. Results: Averaged across all patients, the metrics were 85.5% and 96.2% for the gross target volume, and 0.04 and 0.45 Hounsfield units (HU), respectively. Conclusions: The proposed method allows reconstruction of a 3D-CT image from a single digitally reconstructed radiograph that could be used in real time for better tumor localization and improved treatment of mobile tumors without the need for implanted markers.
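
    The digitally reconstructed radiograph (DRR) generation mentioned above can be illustrated with a toy parallel-beam projector. This is a sketch under simplifying assumptions (parallel geometry, a single water attenuation coefficient, unit voxel spacing), not the study's actual DRR pipeline:

```python
import numpy as np

def drr_from_ct(ct_hu, axis=1, mu_water=0.02):
    """Toy parallel-beam DRR from a 3D CT volume in Hounsfield units.

    Converts HU to a linear attenuation coefficient, integrates it along
    the chosen beam axis, and applies Beer-Lambert attenuation.
    (mu_water and the geometry are illustrative assumptions.)"""
    mu = mu_water * (1.0 + ct_hu / 1000.0)  # HU -> attenuation coefficient
    mu = np.clip(mu, 0.0, None)             # air and below contribute ~0
    line_integral = mu.sum(axis=axis)       # integrate along parallel rays
    return np.exp(-line_integral)           # detected relative intensity
```

    In the study, one such 2D projection per augmented 3D-CT forms the network input, with the 3D-CT itself as the training target.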

    Proton therapy PBS plan real-time simulation and visualisation tool

    One of the possible next steps of improvement in proton therapy, and radiotherapy in general, is the use of real-time imaging during treatment. Recently, hybrid imaging-treatment solutions have been developed for standard radiotherapy, such as the MR-Linac technology, which was first used for online replanning. With such devices, we can expect in the future inter-fraction uses of continuous imaging or even intra-fraction real-time applications. A similar solution is not yet available for protons, mainly because the hardware challenges of building such a machine are more difficult to overcome. However, the possible gain in treatment quality from this kind of hybrid machine is expected to be even higher with protons than with photons, due to the difference in dose deposition profiles and the impact of motion on the dose distribution. MRI is probably the best choice of modality for such a continuous imaging application, due to its non-ionizing nature and its soft-tissue contrast. In this context, we developed a tool to visualize and evaluate in real-time proton therapy Pencil Beam Scanning (PBS) treatment plans on dynamic images. This will allow us to compare different treatment strategies, with or without simultaneous imaging and treatment, and estimate the gain in treatment quality that could be expected from a PBS-MRI hybrid machine.

    Locally tuned deformation fields combination for 2D cine-MRI-based driving of 3D motion models

    Purpose: To target mobile tumors in radiotherapy with the recent MR-Linac hardware solutions, research is being conducted to drive a 3D motion model with 2D cine-MRI to reproduce the breathing motion in 4D. This work presents a method to combine several deformation fields using local measures to better drive 3D motion models. Methods: The method uses weight maps, each representing the proximity to a specific area of interest. The breathing state is evaluated on cine-MRI frames in these areas, and a different deformation field is estimated for each using a 2D-to-3D motion model. The deformation fields are multiplied by their respective weight maps and combined to form the final field applied to a reference image. A global motion model is thus adjusted locally on the selected areas and creates a 3DCT for each cine-MRI frame. Results: On the 13 patients tested, the method improved the accuracy of our model by 0.71 mm on average for areas selected to drive the model and by 0.5 mm for other areas, compared to our previous method without local adjustment. The additional computation time for each region was around 40 ms on a modern laptop. Conclusion: The method improves the accuracy of the 2D-based driving of 3D motion models. It can be used on top of existing methods relying on deformation fields. It adds some computation time but, depending on the area to deform and the number of regions of interest, offers the potential for online use.
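
    The combination step can be sketched as a locally weighted average of displacement fields, where each weight map encodes proximity to its region of interest. This numpy sketch is ours; in particular, the sum-to-one normalization of the weights is an assumption, not necessarily the paper's exact scheme:

```python
import numpy as np

def combine_fields(fields, weight_maps):
    """Combine per-region deformation fields with local weight maps.

    fields      : list of (2, H, W) displacement fields, one per region
    weight_maps : list of (H, W) proximity weights for those regions
    Returns the locally weighted average field (assumed normalization).
    """
    num = np.zeros_like(fields[0])
    den = np.zeros(weight_maps[0].shape)
    for f, w in zip(fields, weight_maps):
        num += f * w[None]  # weight each field by its local proximity map
        den += w
    den = np.where(den > 0, den, 1.0)  # avoid dividing by zero outside all regions
    return num / den[None]
```

    The resulting field would then be applied to the reference image, as in the deformation step of the abstract.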

    Continuous real time 3D motion reproduction using dynamic MRI and precomputed 4DCT deformation fields

    Radiotherapy of mobile tumors requires specific imaging tools and models to reduce the impact of motion on the treatment. Online continuous non-ionizing imaging has become possible with the recent development of magnetic resonance imaging devices combined with linear accelerators. This opens the way to new guided treatment methods based on the real-time tracking of anatomical motion. In such devices, 2D fast MR images are well suited to capture and predict the real-time motion of the tumor. To be used effectively in adaptive radiotherapy, these MR images have to be combined with X-ray images such as CT, which are necessary to compute the irradiation dose deposition. We therefore developed a method combining both image modalities to track the motion on MR images and reproduce the tracked motion on a sequence of 3DCT images in real time. It uses manually placed navigators to track organ interfaces in the image, making it possible to select anatomical object borders that are visible on both MRI and CT modalities and giving the operator precise control of the motion-tracking quality. Precomputed deformation fields extracted from the 4DCT acquired in the planning phase are then used to deform existing 3DCT images to match the tracked object position, creating a new set of 3DCT images encompassing irregularities in the breathing pattern for the complete duration of the MRI acquisition. The final continuous reconstructed 4DCT image sequence reproduces the motion captured by the MRI sequence with high precision (difference below 2 mm).
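
    The idea of reusing a precomputed 4DCT deformation field driven by a tracked navigator can be sketched as follows, assuming a simple linear scaling of a phase field by the ratio of the tracked navigator displacement to the phase's navigator displacement (our simplification for illustration, not the paper's exact model):

```python
import numpy as np

def scaled_field(phase_field, phase_pos, ref_pos, tracked_pos):
    """Scale a precomputed phase->reference deformation field so that the
    navigator lands at the position tracked on the MRI frame.

    phase_field : (2, H, W) displacement field for one 4DCT phase
    phase_pos   : navigator position in that 4DCT phase
    ref_pos     : navigator position in the reference 3DCT
    tracked_pos : navigator position tracked on the current MRI frame
    (Linear scaling of the whole field is an illustrative assumption.)"""
    denom = phase_pos - ref_pos
    scale = 0.0 if denom == 0 else (tracked_pos - ref_pos) / denom
    return scale * phase_field
```

    Applying the scaled field to the reference 3DCT yields one reconstructed frame; repeating this per MRI frame produces the continuous 4DCT sequence described above.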

    Semantic segmentation of computed tomography for radiotherapy with deep learning: compensating insufficient annotation quality using contour augmentation

    In radiotherapy treatment planning, manual annotation of organs-at-risk and target volumes is a difficult and time-consuming task, prone to intra- and inter-observer variability. Deep learning networks (DLNs) are gaining worldwide attention for automating such annotation tasks because of their ability to capture data hierarchy. However, for better performance, DLNs require a large number of data samples, whereas annotated medical data are scarce. To remedy this, data augmentation is used to increase the training data for DLNs, enabling robust learning by incorporating spatial/translational invariance into the training phase. Importantly, the performance of DLNs is highly dependent on ground-truth (GT) quality: if the manual annotation is not accurate enough, the network cannot learn better than the annotated example. This highlights the need to compensate for possibly insufficient GT quality using augmentation, i.e., by providing more GTs per image, in order to improve the performance of DLNs. In this work, small random alterations were applied to the GT, and each altered GT was considered as an additional annotation. Contour augmentation was used to train a dilated U-Net in a multiple-GTs-per-image setting, which was tested on a pelvic CT dataset acquired from 67 patients to segment the bladder and rectum in a multi-class segmentation setting. By using contour augmentation (coupled with data augmentation), the network learnt better than with data augmentation only, as it was able to correct slightly offset contours in the GT. The segmentation results were quantified using spatial overlap, distance-based and probabilistic measures. The Dice scores for bladder and rectum are 0.88±0.19 and 0.89±0.04, and the average symmetric surface distances are 0.22±0.09 mm and 0.09±0.05 mm, respectively.
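
    One simple instance of such a "small random alteration" of a ground-truth contour is a random translation of the binary mask by a few pixels. This numpy sketch illustrates the idea; the shift-based perturbation and its range are our assumptions, not the exact alterations used in the paper:

```python
import numpy as np

def augment_contour(mask, max_shift=2, rng=None):
    """Create an altered GT mask by shifting it up to max_shift pixels
    in each direction (one illustrative contour alteration)."""
    rng = np.random.default_rng() if rng is None else rng
    dy, dx = rng.integers(-max_shift, max_shift + 1, size=2)
    out = np.zeros_like(mask)
    h, w = mask.shape
    # Destination and source windows for the shifted copy.
    ys = slice(max(dy, 0), h + min(dy, 0))
    xs = slice(max(dx, 0), w + min(dx, 0))
    src_ys = slice(max(-dy, 0), h + min(-dy, 0))
    src_xs = slice(max(-dx, 0), w + min(-dx, 0))
    out[ys, xs] = mask[src_ys, src_xs]
    return out
```

    Each altered mask is paired with the same image, giving the network several plausible GTs per training sample.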