    Dynamic Cone-beam CT Reconstruction using Spatial and Temporal Implicit Neural Representation Learning (STINR)

    Full text link
    Objective: Dynamic cone-beam CT (CBCT) imaging is highly desired in image-guided radiation therapy to provide volumetric images with high spatial and temporal resolution, enabling applications including tumor motion tracking/prediction and intra-delivery dose calculation/accumulation. However, dynamic CBCT reconstruction is a substantially challenging spatiotemporal inverse problem, due to the extremely limited projection samples available for each CBCT reconstruction (one projection per CBCT volume). Approach: We developed a simultaneous spatial and temporal implicit neural representation (STINR) method for dynamic CBCT reconstruction. STINR maps the unknown image and the evolution of its motion into spatial and temporal multi-layer perceptrons (MLPs), and iteratively optimizes the neuron weights of the MLPs against the acquired projections to represent the dynamic CBCT series. In addition to the MLPs, we introduced prior knowledge, in the form of principal component analysis (PCA)-based patient-specific motion models, to reduce the complexity of the temporal INRs and address the ill-conditioned dynamic CBCT reconstruction problem. We used the extended cardiac-torso (XCAT) phantom to simulate different lung motion/anatomy scenarios to evaluate STINR. The scenarios contain motion variations including motion baseline shifts, motion amplitude/frequency variations, and motion non-periodicity, as well as inter-scan anatomical variations including tumor shrinkage and tumor position change. Main results: STINR shows consistently higher image reconstruction and motion tracking accuracy than a traditional PCA-based method and a polynomial-fitting-based neural representation method. STINR tracks the lung tumor to an average center-of-mass error of <2 mm, with corresponding relative errors of the reconstructed dynamic CBCTs of <10%.
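The combination described above — a spatial MLP for the reference image plus a temporal MLP that drives a PCA motion model — can be sketched minimally in numpy. This is an illustrative toy, not the authors' implementation: the network sizes, the random weights, and the motion model contents are all hypothetical placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp_forward(x, layers):
    # Tiny fully connected network: linear layers with ReLU between them.
    h = x
    for i, (W, b) in enumerate(layers):
        h = h @ W + b
        if i < len(layers) - 1:
            h = np.maximum(h, 0.0)
    return h

# Spatial MLP: (x, y, z) coordinate -> image intensity of the reference volume.
spatial = [(rng.normal(size=(3, 16)), np.zeros(16)),
           (rng.normal(size=(16, 1)), np.zeros(1))]

# Temporal MLP: scan time t -> K coefficients of a PCA motion model.
K, n_pts = 3, 5
temporal = [(rng.normal(size=(1, 16)), np.zeros(16)),
            (rng.normal(size=(16, K)), np.zeros(K))]

# Hypothetical patient-specific motion model: a mean deformation plus K
# principal components, each a 3-vector displacement per sampled point.
mean_dvf = np.zeros((n_pts, 3))
components = 0.1 * rng.normal(size=(K, n_pts, 3))

def render_at_time(points, t):
    coeffs = mlp_forward(np.array([[t]]), temporal)[0]          # shape (K,)
    dvf = mean_dvf + np.tensordot(coeffs, components, axes=1)   # shape (n_pts, 3)
    return mlp_forward(points + dvf, spatial)                   # intensities (n_pts, 1)

points = rng.uniform(size=(n_pts, 3))
intensities = render_at_time(points, 0.5)
```

In the actual method the weights of both MLPs would be optimized so that forward projections of the rendered volumes match the acquired CBCT projections; the PCA prior keeps the temporal part low-dimensional.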

    Automated Image-Based Procedures for Adaptive Radiotherapy

    Get PDF

    Deformation analysis of surface and bronchial structures in intraoperative pneumothorax using deformable mesh registration

    Get PDF
    The positions of nodules can change because of intraoperative lung deflation, and the modeling of pneumothorax-associated deformation remains a challenging issue for intraoperative tumor localization. In this study, we introduce spatial and geometric analysis methods for inflated/deflated lungs and discuss heterogeneity in pneumothorax-associated lung deformation. Contrast-enhanced CT images simulating intraoperative conditions were acquired from live Beagle dogs. The images contain the overall shape of the lungs, including all lobes and internal bronchial structures, and were analyzed to provide a statistical deformation model that could be used as prior knowledge to predict pneumothorax. To address the difficulties of mapping pneumothorax CT images with topological changes and CT intensity shifts, we designed deformable mesh registration techniques for mixed data structures including the lobe surfaces and the bronchial centerlines. Three global-to-local registration steps were performed under the constraint that the deformation was spatially continuous and smooth, while matching visible bronchial tree structures as much as possible. The developed framework achieved stable registration with a Hausdorff distance of less than 1 mm and a target registration error of less than 5 mm, and visualized deformation fields that demonstrate per-lobe contractions and rotations with high variability between subjects. The deformation analysis results show that the strain of lung parenchyma was 35% higher than that of bronchi, and that deformation in the deflated lung is heterogeneous
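The abstract above reports registration accuracy as a Hausdorff distance between surfaces. For illustration only, a minimal numpy sketch of the symmetric Hausdorff distance between two point sets (the function name and example points are hypothetical):

```python
import numpy as np

def hausdorff(a, b):
    # Symmetric Hausdorff distance between point sets a (N, 3) and b (M, 3):
    # the largest distance from any point in one set to its nearest
    # neighbour in the other set.
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)  # pairwise distances
    return max(d.min(axis=1).max(), d.min(axis=0).max())

a = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
b = np.array([[0.0, 0.0, 0.0], [2.0, 0.0, 0.0]])
dist = hausdorff(a, b)
```

For dense meshes this brute-force pairwise computation is memory-heavy; practical implementations use spatial indexing (e.g. k-d trees) instead.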

    Respiratory organ motion in interventional MRI: tracking, guiding and modeling

    Get PDF
    Respiratory organ motion is one of the major challenges in interventional MRI, particularly in interventions with therapeutic ultrasound in the abdominal region. High-intensity focused ultrasound has found application in interventional MRI for the noninvasive treatment of different abnormalities. To guide surgical and treatment interventions, organ motion imaging and modeling is commonly required before treatment starts. Accurate tracking of organ motion during various interventional MRI procedures is a prerequisite for a successful outcome and safe therapy. In this thesis, an attempt has been made to develop approaches using focused ultrasound that could be used clinically in the future for the treatment of abdominal organs such as the liver and the kidney. Two distinct methods are presented with their ex vivo and in vivo treatment results. In the first method, an MR-based pencil-beam navigator is used to track organ motion and provide motion information for acoustic focal point steering; in the second, hybrid imaging combining ultrasound and magnetic resonance imaging is used for advanced guiding capabilities. Organ motion modeling and four-dimensional imaging of organ motion are increasingly required before surgical interventions. However, due to current safety limitations and hardware restrictions, MR acquisition of a time-resolved sequence of volumetric images is not possible with high temporal and spatial resolution. A novel multislice acquisition scheme, based on a two-dimensional navigator instead of the commonly used pencil-beam navigator, was devised to acquire the data slices and the corresponding navigator simultaneously using the CAIPIRINHA parallel imaging method. The acquisition duration for four-dimensional dataset sampling is reduced compared to existing approaches, while image contrast and quality are improved as well.
Tracking respiratory organ motion is required in interventional procedures and during MR imaging of moving organs. An MR-based navigator is commonly used; however, it is usually associated with image artifacts, such as signal voids. Spectrally selective navigators are useful in cases where the imaged organ is surrounded by adipose tissue, because they can provide an indirect measure of organ motion. A novel spectrally selective navigator based on a crossed-pair navigator has been developed. Experiments show the advantages of this novel navigator for volumetric imaging of the liver in vivo, where it was used to gate a gradient-recalled echo sequence.

    Improving Quantification in Lung PET/CT for the Evaluation of Disease Progression and Treatment Effectiveness

    Get PDF
    Positron Emission Tomography (PET) allows imaging of functional processes in vivo by measuring the distribution of an administered radiotracer. While one of its main uses is directed towards lung cancer, there is increasing interest in diffuse lung diseases, whose incidence rises every year, mainly due to environmental factors and population ageing. However, PET acquisitions in the lung are particularly challenging due to several effects, including the inevitable cardiac and respiratory motion and the loss of spatial resolution due to low tissue density, which increases the positron range. This thesis focuses on Idiopathic Pulmonary Fibrosis (IPF), a disease whose aetiology is poorly understood and for which patient survival is limited to only a few years. Contrary to lung tumours, this diffuse lung disease modifies the lung architecture more globally; the changes result in small structures with varying densities. Previous work has developed data analysis techniques addressing some of the challenges of imaging patients with IPF. However, robust reconstruction techniques are still necessary to obtain quantitative measures for such data, where it should be beneficial to exploit recent advances in PET scanner hardware such as Time of Flight (TOF) and respiratory motion monitoring. Firstly, positron range in the lung will be discussed, evaluating its effect in density-varying media such as fibrotic lung. Secondly, the general effect of using incorrect attenuation data in lung PET reconstructions will be assessed. The study will compare TOF and non-TOF reconstructions and quantify the local and global artefacts created by data inconsistencies and respiratory motion. Then, motion compensation will be addressed by proposing a method that takes into account the changes of density and activity in the lungs during respiration, via the estimation of volume changes using the deformation fields.
The method is evaluated on late-time-frame PET acquisitions using ¹⁸F-FDG, where the radiotracer distribution has stabilised. It is then used as the basis for a method for motion compensation of the early time frames (starting with the administration of the radiotracer), leading to a technique that could be used for motion compensation of kinetic measures. Preliminary results are provided for kinetic parameters extracted from short dynamic data using ¹⁸F-FDG.
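The idea of accounting for density and activity changes via volume changes of the deformation can be illustrated with the Jacobian determinant of the mapping: when tissue compresses, local activity concentration rises, and scaling by the Jacobian conserves the total tracer amount. A minimal 1-D numpy sketch under assumed (hypothetical) displacement and activity values:

```python
import numpy as np

def jacobian_det_1d(u, dx=1.0):
    # For a 1-D mapping x -> x + u(x), the Jacobian determinant is 1 + du/dx.
    return 1.0 + np.gradient(u, dx)

activity = np.ones(8)             # uniform tracer activity on the grid (toy values)
u = np.linspace(0.0, -0.7, 8)     # linear compression toward x = 0
J = jacobian_det_1d(u)            # < 1 everywhere: tissue is compressed

# Mass preservation: weighting the activity by the Jacobian keeps the total
# tracer amount constant even though the local concentration changes.
total = float(np.sum(activity * J))
```

In 3-D the same role is played by the determinant of the full deformation-field Jacobian matrix, evaluated per voxel.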

    Improving the Accuracy of CT-derived Attenuation Correction in Respiratory-Gated PET/CT Imaging

    Get PDF
    The effect of respiratory motion on attenuation correction in Fludeoxyglucose (¹⁸F) positron emission tomography (FDG-PET) was investigated. Improvements to the accuracy of computed tomography (CT)-derived attenuation correction were obtained through the alignment of the attenuation map to each emission image in a respiratory-gated PET scan. Attenuation misalignment leads to artefacts in the reconstructed PET image, and several methods were devised for evaluating the attenuation inaccuracies it causes. These evaluation methods were extended to finding the frame in the respiratory-gated PET which best matched the CT; this frame was then used as a reference frame in mono-modality compensation for misalignment. Attenuation correction was found to affect the quantification of tumour volumes; thus a regional analysis was used to evaluate the impact of mismatch and the benefits of compensating for misalignment. Deformable image registration was used to compensate for misalignment; however, there were inaccuracies caused by the poor signal-to-noise ratio (SNR) in PET images. Two models were developed that were robust to a poor SNR, allowing deformation to be estimated from very noisy images. Firstly, a cross-population model was developed by statistically analysing the respiratory motion in 10 4DCT scans. Secondly, a 1D model of respiration was developed based on the physiological function of respiration. The 1D approach correctly modelled the expansion and contraction of the lungs and the differences in the compressibility of lungs and surrounding tissues. Several additional models were considered but were ruled out based on their poor goodness of fit to 4DCT scans. Approaches to evaluating the developed models were also used to assist with optimising for the most accurate attenuation correction.
It was found that multimodality registration of the CT image to the PET image was the most accurate approach to compensating for attenuation correction mismatch. Mono-modality image registration was found to be the least accurate approach; however, incorporating a motion model improved its accuracy. The significance of these findings is twofold: firstly, motion models are required to improve accuracy when compensating for attenuation correction mismatch; secondly, a validation method was found for comparing approaches to compensating for attenuation mismatch.
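The step of selecting the gated PET frame that best matches the CT amounts to scoring each frame with an image-similarity measure and taking the maximum. As an illustration (not the thesis' actual similarity measure, which is not specified in the abstract), a minimal sketch using normalised cross-correlation:

```python
import numpy as np

def ncc(a, b):
    # Normalised cross-correlation between two images, in [-1, 1].
    a = a.ravel() - a.mean()
    b = b.ravel() - b.mean()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def best_matching_gate(ct, gates):
    # Return the index of the gated PET frame most similar to the CT.
    return int(np.argmax([ncc(ct, g) for g in gates]))

ct = np.arange(9.0).reshape(3, 3)      # toy "CT" image
gates = [-ct, ct + 5.0]                # the second frame correlates perfectly
best = best_matching_gate(ct, gates)
```

For genuinely multimodal CT/PET comparison, an intensity-robust measure such as mutual information would typically replace NCC.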

    A Survey on Deep Learning in Medical Image Registration: New Technologies, Uncertainty, Evaluation Metrics, and Beyond

    Full text link
    Over the past decade, deep learning technologies have greatly advanced the field of medical image registration. The initial developments, such as ResNet-based and U-Net-based networks, laid the groundwork for deep learning-driven image registration. Subsequent progress has been made in various aspects of deep learning-based registration, including similarity measures, deformation regularizations, and uncertainty estimation. These advancements have not only enriched the field of deformable image registration but have also facilitated its application in a wide range of tasks, including atlas construction, multi-atlas segmentation, motion estimation, and 2D-3D registration. In this paper, we present a comprehensive overview of the most recent advancements in deep learning-based image registration. We begin with a concise introduction to the core concepts of deep learning-based image registration. Then, we delve into innovative network architectures, loss functions specific to registration, and methods for estimating registration uncertainty. Additionally, this paper explores appropriate evaluation metrics for assessing the performance of deep learning models in registration tasks. Finally, we highlight the practical applications of these novel techniques in medical imaging and discuss the future prospects of deep learning-based image registration
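The similarity measures and deformation regularizations surveyed above typically combine into a single unsupervised training loss. A minimal numpy sketch of the common form (mean-squared-error similarity plus an L2 smoothness penalty on the displacement gradients; the function name and weight are illustrative, not from the survey):

```python
import numpy as np

def registration_loss(fixed, warped, dvf, lam=0.01):
    # Similarity term: mean squared intensity difference after warping the
    # moving image toward the fixed image.
    similarity = float(np.mean((fixed - warped) ** 2))
    # Regularisation term: L2 penalty on finite-difference gradients of one
    # displacement-field component, encouraging a smooth deformation.
    smoothness = float(sum(np.mean(g ** 2) for g in np.gradient(dvf)))
    return similarity + lam * smoothness

fixed = np.zeros((4, 4))
loss = registration_loss(fixed, fixed, np.zeros((4, 4)))
```

In a deep-learning framework the same loss would be written with differentiable tensor ops so gradients can flow back into the registration network; MSE is often swapped for normalised cross-correlation or mutual information depending on the modalities.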

    Computational methods for the analysis of functional 4D-CT chest images.

    Get PDF
    Medical imaging is an important emerging technology that has been used intensively over the last few decades for disease diagnosis and monitoring, as well as for assessing treatment effectiveness. Medical images provide a very large amount of valuable information, too much to be exploited by radiologists and physicians alone. Therefore, the design of computer-aided diagnostic (CAD) systems, which can be used as assistive tools for the medical community, is of great importance. This dissertation deals with the development of a complete CAD system for lung cancer patients; lung cancer remains the leading cause of cancer-related death in the USA. In 2014, there were approximately 224,210 new cases of lung cancer and 159,260 related deaths. The process begins with the detection of lung cancer through the diagnosis of lung nodules (a manifestation of lung cancer). These nodules are approximately spherical regions of primarily high-density tissue that are visible in computed tomography (CT) images of the lung. The treatment of these lung cancer nodules is complex; nearly 70% of lung cancer patients require radiation therapy as part of their treatment. Radiation-induced lung injury is a limiting toxicity that may decrease cure rates and increase treatment morbidity and mortality. Finding ways to accurately detect lung injury at an early stage, and hence prevent it, would have significant positive consequences for lung cancer patients. The ultimate goal of this dissertation is to develop a clinically usable CAD system that can improve the sensitivity and specificity of early detection of radiation-induced lung injury, based on the hypotheses that irradiated lung tissues may be affected and suffer decreased functionality as a side effect of radiation therapy.
These hypotheses have been validated by demonstrating that automatic segmentation of the lung regions and registration of consecutive respiratory phases can estimate elasticity, ventilation, and texture features that provide discriminatory descriptors for the early detection of radiation-induced lung injury. The proposed methodologies will lead to novel indexes for distinguishing normal/healthy and injured lung tissues in clinical decision-making. To achieve this goal, a CAD system for accurate detection of radiation-induced lung injury has been developed around three basic components: lung field segmentation, lung registration, and feature extraction with tissue classification. This dissertation starts with an exploration of the available medical imaging modalities to present the importance of medical imaging in today's clinical applications. Secondly, the methodologies, challenges, and limitations of recent CAD systems for lung cancer detection are covered. This is followed by an accurate segmentation methodology for the lung parenchyma, with a focus on pathological lungs, to extract the volume of interest (VOI) to be analyzed for the potential existence of lung injuries stemming from radiation therapy. After segmentation of the VOI, a lung registration framework is introduced to perform the crucial step of ensuring co-alignment of the intra-patient scans. This step eliminates the effects of orientation differences, motion, breathing, heartbeats, and differences in scanning parameters, so that the functionality features of the lung fields can be extracted accurately. The developed registration framework also helps in the evaluation and gated control of the radiotherapy through motion estimation analysis before and after the therapy dose.
Finally, the radiation-induced lung injury detection framework is introduced, combining the previous two medical image processing and analysis steps with feature estimation and classification. This framework estimates and combines both texture and functionality features. The texture features are modeled using a novel 7th-order Markov-Gibbs random field (MGRF) model that can accurately model the texture of healthy and injured lung tissues by simultaneously accounting for both vertical and horizontal relative dependencies between voxel-wise signals. The functionality feature calculations are based on the deformation fields obtained from the 4D-CT lung registration, which map lung voxels between successive CT scans in the respiratory cycle. These functionality features describe the ventilation (air flow rate) of the lung tissues using the Jacobian of the deformation field, and the tissues' elasticity using the strain components calculated from the gradient of the deformation field. Finally, these features are combined in the classification model to detect the injured parts of the lung at an early stage and enable earlier intervention.
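The ventilation and elasticity descriptors above both derive from the displacement-gradient tensor: the Jacobian determinant gives local volume change, and the symmetric part of the gradient gives the infinitesimal strain. A minimal 2-D numpy sketch (an illustration of the standard formulas, not the dissertation's code; grid and displacements are hypothetical):

```python
import numpy as np

def ventilation_and_strain(u, v):
    # u, v: x- and y-displacement components on a regular grid (unit spacing).
    du_dy, du_dx = np.gradient(u)
    dv_dy, dv_dx = np.gradient(v)
    # Jacobian determinant of x -> x + d(x): local volume change (ventilation).
    J = (1.0 + du_dx) * (1.0 + dv_dy) - du_dy * dv_dx
    # Infinitesimal strain components from the displacement gradient (elasticity).
    exx, eyy = du_dx, dv_dy
    exy = 0.5 * (du_dy + dv_dx)
    return J, (exx, eyy, exy)

# Uniform 10% dilation in both directions as a toy deformation field.
ys, xs = np.mgrid[0:4, 0:4].astype(float)
J, (exx, eyy, exy) = ventilation_and_strain(0.1 * xs, 0.1 * ys)
```

For the uniform dilation above, the Jacobian is 1.1 x 1.1 = 1.21 everywhere (a 21% volume increase) with zero shear strain; in 3-D the same construction uses the full 3 x 3 deformation-gradient matrix.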