21 research outputs found

    IMAGE-BASED RESPIRATORY MOTION EXTRACTION AND RESPIRATION-CORRELATED CONE BEAM CT (4D-CBCT) RECONSTRUCTION

    Accounting for respiratory motion during imaging helps improve targeting precision in radiation therapy. Respiratory motion can be a major source of error in determining the position of thoracic and upper abdominal tumor targets during radiotherapy, so extracting the respiratory signal is a key task in radiation therapy planning. Respiration-correlated, or four-dimensional, CT (4DCT) imaging techniques have recently been integrated into imaging systems for verifying tumor position during treatment and managing respiration-induced tissue motion. The quality of the 4D reconstructed volumes depends strongly on the extracted respiratory signal and the phase sorting method used. This thesis is divided into two parts. In the first part, two image-based respiratory signal extraction methods are proposed and evaluated. These methods extract the respiratory signal from CBCT projections without external sources, implanted markers, or dependence on any anatomical structure in the images such as the diaphragm. The first method, Local Intensity Feature Tracking (LIFT), extracts the respiratory signal from feature points detected and tracked through the sequence of projections. The second method, Intensity Flow Dimensionality Reduction (IFDR), computes the optical flow of every pixel between each pair of adjacent projections; the motion variance in the optical flow dataset is then extracted using linear and non-linear dimensionality reduction techniques to represent the respiratory signal. Experiments on clinical datasets showed that both proposed methods successfully extract the respiratory signal and that it correlates well with standard respiratory signals such as diaphragm position and internal marker signals.
    In the second part of the thesis, 4D-CBCT reconstruction based on different phase sorting techniques is studied. The quality of the 4D reconstructed images is evaluated and compared for different phase sorting methods, including internal markers, external markers, and the image-based methods (LIFT and IFDR). A method for generating additional projections for 4D-CBCT reconstruction is also proposed, to reduce the artifacts that arise when reconstructing from an insufficient number of projections. Experimental results demonstrated the feasibility of the proposed method in recovering edges and reducing streak artifacts.
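    The IFDR idea (per-pixel motion between adjacent projections, then dimensionality reduction to isolate the dominant temporal variance) can be sketched in a few lines. This is a toy numpy illustration, not the thesis implementation: frame differences stand in for dense optical flow, and PCA via SVD stands in for the linear/non-linear reduction step.

    ```python
    import numpy as np

    def extract_respiratory_signal(projections):
        """IFDR-style sketch: per-pixel motion between adjacent projections,
        then PCA (via SVD) to isolate the dominant temporal variance, taken
        as the respiratory signal. Frame differences are a crude stand-in
        for dense optical flow."""
        # Stack pairwise motion estimates as rows: (num_pairs, num_pixels).
        flows = np.diff(projections.reshape(len(projections), -1), axis=0)
        flows = flows - flows.mean(axis=0)      # center before PCA
        u, s, vt = np.linalg.svd(flows, full_matrices=False)
        score = u[:, 0] * s[0]                  # first principal score per pair
        # Integrate pairwise motion into a position-like breathing trace.
        return np.cumsum(score)
    ```

    On projections whose intensity is modulated by a sinusoidal breathing pattern, the recovered trace correlates (up to an arbitrary PCA sign) with the underlying signal.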

    Dynamic CBCT Imaging using Prior Model-Free Spatiotemporal Implicit Neural Representation (PMF-STINR)

    Dynamic cone-beam computed tomography (CBCT) can capture high-spatial-resolution, time-varying images for motion monitoring, patient setup, and adaptive planning of radiotherapy. However, dynamic CBCT reconstruction is an extremely ill-posed spatiotemporal inverse problem, as each CBCT volume in the dynamic sequence is captured by only one or a few X-ray projections. We developed a machine learning-based technique, prior-model-free spatiotemporal implicit neural representation (PMF-STINR), to reconstruct dynamic CBCTs from sequentially acquired X-ray projections. PMF-STINR employs a joint image reconstruction and registration approach to address the under-sampling challenge. Specifically, PMF-STINR uses a spatial implicit neural representation (INR) to reconstruct a reference CBCT volume, and it applies a temporal INR to represent the intra-scan dynamic motion with respect to the reference CBCT, yielding dynamic CBCTs. PMF-STINR couples the temporal INR with a learning-based B-spline motion model to capture time-varying deformable motion during the reconstruction. In contrast with previous methods, the spatial INR, the temporal INR, and the B-spline model of PMF-STINR are all learned on the fly during reconstruction in a one-shot fashion, without using any patient-specific prior knowledge or motion sorting/binning. PMF-STINR was evaluated via digital phantom simulations, physical phantom measurements, and a multi-institutional patient dataset featuring various imaging protocols (half-fan/full-fan, full sampling/sparse sampling, different energy and mAs settings, etc.). The results showed that the one-shot learning-based PMF-STINR can accurately and robustly reconstruct dynamic CBCTs and capture highly irregular motion with high temporal resolution (~0.1 s) and sub-millimeter accuracy. It can be a promising tool for motion management, offering richer motion information than traditional 4D-CBCT.
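    The temporal component above couples an INR with a B-spline motion model. A minimal sketch of the B-spline ingredient, assuming a uniform cubic B-spline in time with 1D control points (function names hypothetical, and far simpler than the learned model in the paper):

    ```python
    import numpy as np

    def cubic_bspline_weights(u):
        """Uniform cubic B-spline basis for local coordinate u in [0, 1).
        The four weights always sum to 1 (partition of unity)."""
        return np.array([
            (1 - u) ** 3 / 6.0,
            (3 * u**3 - 6 * u**2 + 4) / 6.0,
            (-3 * u**3 + 3 * u**2 + 3 * u + 1) / 6.0,
            u**3 / 6.0,
        ])

    def temporal_displacement(t, control_points, spacing):
        """Displacement at time t from uniformly spaced 1D control points,
        a toy stand-in for a learning-based B-spline motion model."""
        i = int(np.floor(t / spacing))     # index of the spline segment
        u = t / spacing - i                # local coordinate within segment
        w = cubic_bspline_weights(u)
        return sum(w[k] * control_points[i + k] for k in range(4))
    ```

    The partition-of-unity property is what makes the interpolated displacement reproduce constant motion exactly, which is a standard sanity check for B-spline deformation code.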

    On the investigation of a novel x-ray imaging technique in radiation oncology

    Radiation therapy is indicated for nearly 50% of cancer patients in Australia. Radiation therapy requires accurate delivery of ionising radiation to the neoplastic tissue, and pre-treatment in situ x-ray imaging plays an important role in meeting treatment accuracy requirements. Four-dimensional cone-beam computed tomography (4D CBCT) is one such pre-treatment imaging technique that can help to visualise tumour target motion due to breathing at the time of radiation treatment delivery. Measuring and characterising the target motion can help to ensure highly accurate therapeutic x-ray beam delivery. In this thesis, a novel pre-treatment x-ray imaging technique, called Respiratory Triggered 4D cone-beam Computed Tomography (RT 4D CBCT), is conceived and investigated. Specifically, the aim of this work is to progress 4D CBCT imaging technology by investigating the use of a patient's breathing signal to improve and optimise the use of imaging radiation in 4D CBCT, facilitating the accurate delivery of radiation therapy. These investigations are presented in three main studies:
    1. Introduction to the concept of respiratory-triggered four-dimensional cone-beam computed tomography.
    2. A simulation study exploring the behaviour of RT 4D CBCT using patient-measured respiratory data.
    3. The experimental realisation of RT 4D CBCT in a real-time acquisition setting.
    The major finding from this work is that RT 4D CBCT can provide target motion information with a 50% reduction in the x-ray imaging dose applied to the patient.
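    The respiratory-triggering idea can be illustrated with a toy scheduler: instead of pulsing the imager continuously, fire one projection per breathing-phase bin per cycle. This is a hypothetical sketch assuming a clean, peak-detectable breathing trace, not the acquisition logic used in the thesis.

    ```python
    import numpy as np

    def respiratory_triggers(signal, n_phases=10):
        """RT 4D CBCT-style sketch: emit one imaging trigger per phase bin
        per breathing cycle, rather than imaging at every sample. Phase is
        estimated from peak-to-peak cycles of the breathing signal."""
        # Simple local-maximum peak detection (assumes a smooth trace).
        peaks = [i for i in range(1, len(signal) - 1)
                 if signal[i] >= signal[i - 1] and signal[i] > signal[i + 1]]
        triggers = []
        for start, end in zip(peaks[:-1], peaks[1:]):
            for p in range(n_phases):           # one pulse per phase bin
                triggers.append(start + int(round(p * (end - start) / n_phases)))
        return sorted(set(triggers))
    ```

    For a sinusoidal trace sampled many times per cycle, the trigger count is far below the continuous-acquisition count, which is the mechanism behind the imaging-dose reduction reported above.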

    Time dependent cone-beam CT reconstruction via a motion model optimized with forward iterative projection matching

    The purpose of this work is to present the development and validation of a novel method for reconstructing time-dependent, or 4D, cone-beam CT (4DCBCT) images. 4DCBCT can have a variety of applications in the radiotherapy of moving targets, such as lung tumors, including treatment planning, dose verification, and real time treatment adaptation. However, in its current incarnation it suffers from poor reconstruction quality and limited temporal resolution that may restrict its efficacy. Our algorithm remedies these issues by deforming a previously acquired high quality reference fan-beam CT (FBCT) to match the projection data in the 4DCBCT data-set, essentially creating a 3D animation of the moving patient anatomy. This approach combines the high image quality of the FBCT with the fine temporal resolution of the raw 4DCBCT projection data-set. Deformation of the reference CT is accomplished via a patient specific motion model. The motion model is constrained spatially using eigenvectors generated by a principal component analysis (PCA) of patient motion data, and is regularized in time using parametric functions of a patient breathing surrogate recorded simultaneously with 4DCBCT acquisition. The parametric motion model is constrained using forward iterative projection matching (FIPM), a scheme which iteratively alters model parameters until digitally reconstructed radiographs (DRRs) cast through the deforming CT optimally match the projections in the raw 4DCBCT data-set. We term our method FIPM-PCA 4DCBCT. In developing our algorithm we proceed through three stages of development. In the first, we establish the mathematical groundwork for the algorithm and perform proof of concept testing on simulated data. In the second, we tune the algorithm for real world use; specifically we improve our DRR algorithm to achieve maximal realism by incorporating physical principles of image formation combined with empirical measurements of system properties. 
In the third stage, we test our algorithm on actual patient data and evaluate its performance against gold standard and ground truth data-sets. In this phase we use our method to track the motion of an implanted fiducial marker and observe agreement with our gold standard data that is typically within a millimeter.
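    The forward iterative projection matching loop can be reduced to its essence: render a DRR from a deformed reference volume, compare it with the measured projection, and adjust the model parameters to reduce the mismatch. The sketch below is a deliberately minimal, hypothetical version with a parallel-beam DRR and a single translation parameter searched by brute force, rather than the cone-beam, PCA-parameterised optimisation described in the work.

    ```python
    import numpy as np

    def drr_parallel(volume):
        """Toy digitally reconstructed radiograph: parallel-beam line
        integrals along one axis (the real DRR model uses cone-beam
        geometry and physical image-formation effects)."""
        return volume.sum(axis=0)

    def fipm_fit(reference, observed_drr, shifts):
        """FIPM reduced to a 1-parameter search: shift the reference
        volume, render a DRR, and keep the shift whose DRR best matches
        the measured projection (least-squares mismatch)."""
        best, best_err = None, np.inf
        for s in shifts:
            drr = drr_parallel(np.roll(reference, s, axis=1))
            err = np.sum((drr - observed_drr) ** 2)
            if err < best_err:
                best, best_err = s, err
        return best
    ```

    In the full method this scalar search is replaced by optimisation over PCA motion-model weights, but the render-compare-update structure is the same.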

    4D-Precise: learning-based 3D motion estimation and high temporal resolution 4DCT reconstruction from treatment 2D+t X-ray projections

    Background and Objective: In radiotherapy treatment planning, respiration-induced motion introduces uncertainty that, if not appropriately considered, could result in dose delivery problems. 4D cone-beam computed tomography (4D-CBCT) has been developed to provide imaging guidance by reconstructing a pseudo-motion sequence of CBCT volumes through binning projection data into breathing phases. However, it suffers from artefacts and erroneously characterises the averaged breathing motion. Furthermore, conventional 4D-CBCT can only be generated post hoc, using the full sequence of kV projections after the treatment is complete, which limits its utility. Hence, our purpose is to develop a deep-learning motion model for estimating 3D+t CT images from treatment kV projection series.
    Methods: We propose an end-to-end learning-based 3D motion modelling and 4DCT reconstruction model named 4D-Precise, abbreviated from Probabilistic reconstruction of image sequences from CBCT kV projections. The model estimates voxel-wise motion fields and simultaneously reconstructs a 3DCT volume at any arbitrary time point of the input projections by transforming a reference CT volume. A Torch-DRR module enables end-to-end training by computing Digitally Reconstructed Radiographs (DRRs) in PyTorch. During training, DRRs with projection angles matching the input kVs are automatically extracted from the reconstructed volumes, and their structural dissimilarity to the inputs is penalised. We introduce a novel loss function to regularise spatio-temporal motion field variations across the CT scan, leveraging the planning 4DCT for prior motion distribution estimation.
    Results: The model is trained patient-specifically using three kV scan series, each including over 1200 angular/temporal projections, and tested on three other scan series. Imaging data from five patients are analysed here. The model is also validated on a simulated paired 4DCT-DRR dataset created using Surrogate Parametrised Respiratory Motion Modelling (SuPReMo). The results demonstrate that the volumes reconstructed by 4D-Precise closely resemble the ground-truth volumes in terms of Dice score, volume similarity, mean contour distance, and Hausdorff distance, while 4D-Precise achieves smoother deformations and fewer negative Jacobian determinants than SuPReMo.
    Conclusions: Unlike conventional 4DCT reconstruction techniques that ignore inter-cycle breathing motion variations, the proposed model computes both intra-cycle and inter-cycle motions. It represents motion over an extended timeframe, covering several minutes of kV scan series.
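    The "negative Jacobian determinants" metric used in the comparison above measures where an estimated deformation folds space (a physically implausible motion). A minimal numpy sketch of that check for a 2D displacement field, using finite differences (not the evaluation code from the paper):

    ```python
    import numpy as np

    def negative_jacobian_fraction(disp):
        """Fraction of pixels where the mapping phi(x) = x + u(x) folds,
        i.e. where the Jacobian determinant of phi is negative.
        disp has shape (2, H, W): per-pixel displacement in y and x."""
        uy, ux = disp
        duy_dy, duy_dx = np.gradient(uy)   # finite-difference derivatives
        dux_dy, dux_dx = np.gradient(ux)
        # Jacobian of phi: [[1 + dux_dx, dux_dy], [duy_dx, 1 + duy_dy]]
        det = (1 + dux_dx) * (1 + duy_dy) - dux_dy * duy_dx
        return float(np.mean(det < 0))
    ```

    The identity map gives a fraction of 0, while a displacement that reverses orientation everywhere (e.g. u_x = -2x) gives 1; smoother deformations generally drive this fraction toward zero.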