Respiratory motion modelling for MR-guided lung cancer radiotherapy: model development and geometric accuracy evaluation
Respiratory motion of lung tumours and adjacent structures is challenging for radiotherapy. Online MR-imaging cannot currently provide real-time volumetric information of the moving patient anatomy, therefore limiting precise dose delivery, delivered dose reconstruction, and downstream adaptation methods. 

Approach: We tailor a respiratory motion modelling framework towards an MR-Linac workflow to estimate the time-resolved 4D motion from real-time data. We develop a multi-slice acquisition scheme which acquires thick, overlapping 2D motion-slices in different locations and orientations, interleaved with 2D surrogate-slices from a fixed location. The framework fits a motion model directly to the input data without the need for sorting or binning to account for inter- and intra-cycle variation of the breathing motion. The framework alternates between model fitting and motion-compensated super-resolution image reconstruction to recover a high-quality motion-free image and a motion model. The fitted model can then estimate the 4D motion from 2D surrogate-slices. The framework is applied to four simulated anthropomorphic datasets and evaluated against known ground truth anatomy and motion. Clinical applicability is demonstrated by applying our framework to eight datasets acquired on an MR-Linac from four lung cancer patients. 
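As a rough illustration of the alternating scheme described above, the Python sketch below interleaves surrogate-driven motion-model fitting with a motion-compensated image update. All data, function names, and the simple linear surrogate model are illustrative assumptions for this sketch, not the authors' implementation.

```python
# Hypothetical sketch of the alternating model-fitting / motion-compensated
# reconstruction loop. Placeholder data and functions only.
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for the acquired data: thick 2D motion-slices and the
# surrogate signals observed at the same time points.
n_slices, slice_shape = 40, (16, 16)
motion_slices = rng.normal(size=(n_slices, *slice_shape))
surrogates = rng.normal(size=(n_slices, 2))      # 2 surrogate signals per time point

volume = np.zeros(slice_shape)                   # motion-compensated image estimate
model = np.zeros((2, 2))                         # linear model: 2 motion components x 2 surrogates

def estimate_motion(volume, sl):
    """Placeholder registration: returns a crude 2-component motion estimate."""
    return np.array([np.mean(sl - volume), np.std(sl - volume)])

def fit_motion_model(surrogates, motions):
    """Least-squares fit of per-slice motion estimates against the surrogate signals."""
    coeffs, *_ = np.linalg.lstsq(surrogates, motions, rcond=None)
    return coeffs.T

def super_resolution_update(volume, motion_slices, step=0.1):
    """Crude motion-compensated update: average the (notionally warped) slices."""
    compensated = motion_slices.mean(axis=0)     # real code would warp each slice with the model
    return volume + step * (compensated - volume)

for iteration in range(10):
    # 1) Estimate per-slice motion against the current volume and refit the model.
    motions = np.stack([estimate_motion(volume, sl) for sl in motion_slices])
    model = fit_motion_model(surrogates, motions)
    # 2) Update the motion-compensated super-resolution volume.
    volume = super_resolution_update(volume, motion_slices)

# At application time, the fitted model maps a new surrogate measurement to a motion estimate.
new_surrogate = rng.normal(size=2)
estimated_motion = model @ new_surrogate
print(estimated_motion)
```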

Main results: The framework accurately reconstructs high-quality motion-compensated 3D images with 2 mm³ isotropic voxels. For the simulated case with the largest target motion, the motion model achieved a mean deformation field error of 1.13 mm. For the patient cases, residual-error registrations estimate the model error to be 1.07 mm (1.64 mm), 0.91 mm (1.32 mm), and 0.88 mm (1.33 mm) in the superior-inferior, anterior-posterior, and left-right directions respectively for the building (application) data.

Significance: The motion modelling framework estimates the patient motion with high accuracy and accurately reconstructs the anatomy. The image acquisition scheme can be flexibly integrated into an MR-Linac workflow whilst maintaining the capability of online motion-management strategies based on cine imaging such as target tracking and/or gating.
Initial Clinical Experience of MR-Guided Radiotherapy for Non-Small Cell Lung Cancer.
Curative-intent radiotherapy plays an integral role in the treatment of lung cancer, and improving its therapeutic index is therefore vital. MR-guided radiotherapy (MRgRT) systems are the latest technological advance which may help achieve this aim. The majority of MRgRT treatments delivered to date have been stereotactic body radiation therapy (SBRT) based and include the treatment of (ultra-)central tumors. However, there is a move to also implement MRgRT as curative-intent treatment for patients with inoperable locally advanced NSCLC. This paper presents the initial clinical experience of using the two commercially available systems to date: the ViewRay MRIdian and the Elekta Unity. The challenges and potential solutions associated with MRgRT in lung cancer will also be highlighted.
Rapid 4D-MRI reconstruction using a deep radial convolutional neural network: Dracula
Background and Purpose: 4D and midposition MRI could inform plan adaptation in lung and abdominal MR-guided radiotherapy. We present deep learning-based solutions to overcome long 4D-MRI reconstruction times while maintaining high image quality and short scan times.

Methods: Two 3D U-net deep convolutional neural networks were trained to accelerate the 4D joint MoCo-HDTV reconstruction. For the first network, gridded and joint MoCo-HDTV-reconstructed 4D-MRI were used as input and target data, respectively, whereas the second network was trained to directly calculate the midposition image. For both networks, input and target data had dimensions of 256 × 256 voxels (2D) and 16 respiratory phases. Deep learning-based MRI were verified against joint MoCo-HDTV-reconstructed MRI using the structural similarity index (SSIM) and the naturalness image quality evaluator (NIQE). Moreover, two experienced observers contoured the gross tumour volume and scored the images in a blinded study.

Results: For 12 subjects, previously unseen by the networks, high-quality 4D and midposition MRI (1.25 × 1.25 × 3.3 mm³) were each reconstructed from gridded images in only 28 seconds per subject. Excellent agreement was found between deep learning-based and joint MoCo-HDTV-reconstructed MRI (average SSIM ≥ 0.96; NIQE scores of 7.94 and 5.66). Deep learning-based 4D-MRI were clinically acceptable for target and organ-at-risk delineation. Tumour positions agreed within 0.7 mm on midposition images.

Conclusion: Our results suggest that the joint MoCo-HDTV and midposition algorithms can each be approximated by a deep convolutional neural network. This rapid reconstruction of 4D and midposition MRI facilitates online treatment adaptation in thoracic or abdominal MR-guided radiotherapy.
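As a loose illustration of the verification step described above, the following Python sketch compares a deep learning-based reconstruction against a reference reconstruction phase by phase using SSIM. The arrays are random placeholders and scikit-image's structural_similarity is used as a generic SSIM implementation; none of this reflects the authors' actual code or data.

```python
# Minimal SSIM verification sketch: deep learning-based vs. reference 4D-MRI.
# Placeholder data; scikit-image provides the SSIM metric.
import numpy as np
from skimage.metrics import structural_similarity

rng = np.random.default_rng(1)

n_phases, ny, nx = 16, 256, 256                        # 16 respiratory phases, 2D slices
reference_4d = rng.random((n_phases, ny, nx))          # reference reconstruction (placeholder)
deep_learning_4d = reference_4d + 0.01 * rng.normal(size=(n_phases, ny, nx))

# SSIM per respiratory phase, then averaged over the series.
ssim_per_phase = [
    structural_similarity(ref, dl, data_range=ref.max() - ref.min())
    for ref, dl in zip(reference_4d, deep_learning_4d)
]
print(f"mean SSIM over {n_phases} phases: {np.mean(ssim_per_phase):.3f}")
```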