8 research outputs found

    Evaluating the accuracy of 4D-CT ventilation imaging: First comparison with Technegas SPECT ventilation.

    PURPOSE: Computed tomography ventilation imaging (CTVI) is a highly accessible functional lung imaging modality that can unlock the potential for functional avoidance in lung cancer radiation therapy. Previous attempts to validate CTVI against clinical ventilation single-photon emission computed tomography (V-SPECT) have been hindered by radioaerosol clumping artifacts. This work builds on those studies by performing the first comparison of CTVI with 99mTc-carbon ('Technegas'), a clinical V-SPECT modality featuring smaller radioaerosol particles with less clumping. METHODS: Eleven lung cancer radiotherapy patients with early stage (T1/T2N0) disease received treatment planning four-dimensional CT (4DCT) scans paired with Technegas V/Q-SPECT/CT. For each patient, we applied three different CTVI methods. Two of these used deformable image registration (DIR) to quantify breathing-induced lung density changes (CTVI-DIR-HU) or breathing-induced lung volume changes (CTVI-DIR-Jac) between the 4DCT exhale/inhale phases. A third method calculated the regional product of air-tissue densities (CTVI-HU) and did not involve DIR. Corresponding CTVI and V-SPECT scans were compared using the Dice similarity coefficient (DSC) for functional defect and nondefect regions, as well as the Spearman correlation r computed over the whole lung. The DIR target registration error (TRE) was quantified using both manual and computer-selected anatomic landmarks. RESULTS: Interestingly, the overall best performing method (CTVI-HU) did not involve DIR. For nondefect regions, the CTVI-HU, CTVI-DIR-HU, and CTVI-DIR-Jac methods achieved mean DSC values of 0.69, 0.68, and 0.54, respectively. For defect regions, the respective DSC values were moderate: 0.39, 0.33, and 0.44. The Spearman r-values were generally weak: 0.26 for CTVI-HU, 0.18 for CTVI-DIR-HU, and -0.02 for CTVI-DIR-Jac.
The spatial accuracy of CTVI was not significantly correlated with TRE; however, the DIR accuracy itself was poor (TRE > 3.6 mm on average), potentially indicative of poor-quality 4DCT. Q-SPECT scans achieved good correlations with V-SPECT (mean r > 0.6), suggesting that the image quality of Technegas V-SPECT was not a limiting factor in this study. CONCLUSIONS: We performed a validation of CTVI using clinically available 4DCT and Technegas V/Q-SPECT for 11 lung cancer patients. The results reinforce earlier findings that the spatial accuracy of CTVI exhibits significant interpatient and intermethod variability. We propose that the most likely factor affecting CTVI accuracy was the poor image quality of clinical 4DCT.
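The evaluation pipeline described in the abstract (DSC over defect/nondefect regions plus whole-lung Spearman correlation) can be sketched as below. This is a minimal illustration, not the study's code; the 20th-percentile defect threshold is an assumed convention for the example.

```python
import numpy as np
from scipy.stats import spearmanr

def dice(mask_a, mask_b):
    """Dice similarity coefficient between two binary masks."""
    inter = np.logical_and(mask_a, mask_b).sum()
    return 2.0 * inter / (mask_a.sum() + mask_b.sum())

def compare_ventilation(ctvi, spect, lung_mask, defect_percentile=20):
    """Compare a CTVI map against a V-SPECT map inside the lung mask.

    Defect regions are taken here as the lowest `defect_percentile` percent
    of ventilation values within the lungs (an illustrative convention).
    """
    c = ctvi[lung_mask]
    s = spect[lung_mask]
    r, _ = spearmanr(c, s)  # voxel-wise rank correlation over the whole lung
    c_defect = c <= np.percentile(c, defect_percentile)
    s_defect = s <= np.percentile(s, defect_percentile)
    return {
        "spearman_r": r,
        "dsc_defect": dice(c_defect, s_defect),
        "dsc_nondefect": dice(~c_defect, ~s_defect),
    }
```

Comparing a map against itself returns perfect agreement, which is a useful sanity check before running real CTVI/SPECT pairs through it.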

    Image Registration for Quantitative Parametric Response Mapping of Cancer Treatment Response

    Imaging biomarkers capable of early quantification of tumor response to therapy would provide an opportunity to individualize patient care. Image registration of longitudinal scans provides a method of detecting treatment-associated changes within heterogeneous tumors by monitoring alterations in the quantitative value of individual voxels over time, which is unattainable by traditional volumetric-based histogram methods. The concepts involved in the use of image registration for tracking and quantifying breast cancer treatment response using parametric response mapping (PRM), a voxel-based analysis of diffusion-weighted magnetic resonance imaging (DW-MRI) scans, are presented. Application of PRM to breast tumor response detection is described, wherein robust registration solutions for tracking small changes in water diffusivity in breast tumors during therapy are required. Methodologies that employ simulations are presented for measuring the expected statistical accuracy of PRM for response assessment. Test-retest clinical scans are used to yield estimates of system noise to indicate significant voxel-based changes in water diffusivity. Overall, registration-based PRM image analysis provides significant opportunities for voxel-based image analysis to provide the required accuracy for early assessment of response to treatment in breast cancer patients receiving neoadjuvant chemotherapy.
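The core PRM step on registered scans is simple to state: each tumor voxel is classified by whether its diffusivity change exceeds the test-retest noise threshold. A minimal sketch, with illustrative variable names and a hypothetical threshold convention (the 95% confidence half-width of test-retest differences):

```python
import numpy as np

def prm_classify(adc_pre, adc_mid, tumor_mask, threshold):
    """Parametric response map: classify each registered tumor voxel by the
    change in ADC between baseline and mid-treatment DW-MRI.

    `threshold` is the half-width of the test-retest noise interval;
    changes within it are treated as noise (unchanged).
    """
    delta = adc_mid - adc_pre
    inc = np.logical_and(tumor_mask, delta > threshold)
    dec = np.logical_and(tumor_mask, delta < -threshold)
    n = tumor_mask.sum()
    return {
        "frac_increased": inc.sum() / n,  # rising ADC: candidate response
        "frac_decreased": dec.sum() / n,
        "frac_unchanged": 1.0 - (inc.sum() + dec.sum()) / n,
    }
```

The fraction of voxels with significantly increased diffusivity is the scalar biomarker typically reported; everything upstream (the registration) exists to make the voxel-wise subtraction meaningful.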

    Motion Calculations on Stent Grafts in AAA

    Endovascular aortic repair (EVAR) is a technique which uses stent grafts to treat aortic aneurysms in patients at risk of aneurysm rupture. Although this technique has been shown to be very successful in the short term, the long-term results are less optimistic due to failure of the stent graft. The pulsating blood flow applies stresses and forces to the stent graft, which can cause problems such as breakage, leakage, and migration. Therefore, it is important to gain more insight into the in vivo motion behavior of these devices. If we know more about the motion patterns in well-behaved stent grafts as well as in ill-behaving devices, we shall be better able to distinguish between these types of behavior. These insights will enable us to detect stent-related problems and might even be used to predict problems beforehand. Further, these insights will help in designing the next generation of stent grafts. Firstly, this work discusses the applicability of ECG-gated CT for measuring the motions of stent grafts in AAA. Secondly, multiple methods to segment the stent graft from these data are discussed. Thirdly, this work proposes a method that uses image registration to apply motion to the segmented stent model.

    Deep Learning-Based Regression and Classification for Automatic Landmark Localization in Medical Images

    In this study, we propose a fast and accurate method to automatically localize anatomical landmarks in medical images. We employ a global-to-local localization approach using fully convolutional neural networks (FCNNs). First, a global FCNN localizes multiple landmarks through the analysis of image patches, performing regression and classification simultaneously. In regression, displacement vectors pointing from the center of image patches towards landmark locations are determined. In classification, the presence of landmarks of interest in the patch is established. Global landmark locations are obtained by averaging the predicted displacement vectors, where the contribution of each displacement vector is weighted by the posterior classification probability of the patch that it is pointing from. Subsequently, for each landmark localized with global localization, local analysis is performed. Specialized FCNNs refine the global landmark locations by analyzing local sub-images in a similar manner, i.e. by performing regression and classification simultaneously and combining the results. Evaluation was performed through localization of 8 anatomical landmarks in CCTA scans, 2 landmarks in olfactory MR scans, and 19 landmarks in cephalometric X-rays. We demonstrate that the method performs similarly to a second observer and is able to localize landmarks in a diverse set of medical images, differing in image modality, image dimensionality, and anatomical coverage. Comment: 12 pages, accepted at IEEE Transactions on Medical Imaging.
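The fusion rule described above (displacement vectors averaged with posterior-probability weights) reduces to a few lines of NumPy. A sketch under illustrative names; the actual network outputs and patch geometry are of course more involved:

```python
import numpy as np

def global_landmark_estimate(patch_centers, displacements, probs):
    """Combine per-patch regression and classification outputs.

    Each patch predicts a displacement vector from its center towards the
    landmark, plus a posterior probability that the landmark is present.
    The global estimate is the probability-weighted mean of the locations
    the patches point to.
    """
    centers = np.asarray(patch_centers, float)
    disp = np.asarray(displacements, float)
    w = np.asarray(probs, float)
    pointed = centers + disp  # absolute location each patch votes for
    return (w[:, None] * pointed).sum(axis=0) / w.sum()
```

Patches that the classifier considers unlikely to contain the landmark contribute almost nothing, which is what makes the simple average robust to far-away patches with noisy regression outputs.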

    Validating and improving CT ventilation imaging by correlating with ventilation 4D-PET/CT using 68Ga-labeled nanoparticles.

    PURPOSE: CT ventilation imaging is a novel functional lung imaging modality based on deformable image registration. The authors present the first validation study of CT ventilation using positron emission tomography with 68Ga-labeled nanoparticles (PET-Galligas). The authors quantify this agreement for different CT ventilation metrics and PET reconstruction parameters. METHODS: PET-Galligas ventilation scans were acquired for 12 lung cancer patients using a four-dimensional (4D) PET/CT scanner. CT ventilation images were then produced by applying B-spline deformable image registration between the respiratory correlated phases of the 4D-CT. The authors test four ventilation metrics, two existing and two modified. The two existing metrics model mechanical ventilation (alveolar air-flow) based on Hounsfield unit (HU) change (VHU) or the Jacobian determinant of deformation (VJac). The two modified metrics incorporate a voxel-wise tissue-density scaling (ρVHU and ρVJac) and were hypothesized to better model the physiological ventilation. In order to assess the impact of PET image quality, comparisons were performed using both standard and respiratory-gated PET images, with the former exhibiting better signal. Different median filtering kernels (σm = 0 or 3 mm) were also applied to all images. As in previous studies, similarity metrics included the Spearman correlation coefficient r within the segmented lung volumes, and the Dice coefficient d20 for the (0-20)th functional percentile volumes. RESULTS: The best agreement between CT and PET ventilation was obtained comparing standard PET images to the density-scaled HU metric (ρVHU) with σm = 3 mm. This leads to correlation values in the ranges 0.22 ≤ r ≤ 0.76 and 0.38 ≤ d20 ≤ 0.68, with r = 0.42 ± 0.16 and d20 = 0.52 ± 0.09 averaged over the 12 patients.
Compared to Jacobian-based metrics, HU-based metrics led to statistically significant improvements in r and d20 (p < 0.05), with density-scaled metrics also showing higher r than their unscaled versions (p < 0.02). r and d20 were also sensitive to image quality, with statistically significant improvements using standard (as opposed to gated) PET images and with the application of median filtering. CONCLUSIONS: The use of modified CT ventilation metrics, in conjunction with PET-Galligas and careful application of image filtering, has resulted in improved correlation compared to earlier studies using nuclear medicine ventilation. However, CT ventilation and PET-Galligas do not always provide the same functional information. The authors have demonstrated that the agreement can improve for CT ventilation metrics incorporating a tissue density scaling, and also with increasing PET image quality. CT ventilation imaging has clear potential for imaging regional air volume change in the lung, and further development is warranted.
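The Jacobian-based metric (VJac) referenced in both ventilation studies measures local volume change as det(J) - 1 of the exhale-to-inhale transform. A generic finite-difference sketch for a dense displacement field, not the authors' implementation:

```python
import numpy as np

def jacobian_ventilation(dvf, spacing=(1.0, 1.0, 1.0)):
    """Jacobian-based ventilation V_Jac = det(J) - 1 for a displacement
    vector field `dvf` of shape (3, z, y, x), in voxel units scaled by
    `spacing` (mm). det(J) > 1 indicates local expansion (inhaled air).
    """
    # Partial derivatives du_i/dx_j via central finite differences.
    grads = [np.gradient(dvf[i], *spacing) for i in range(3)]
    # Jacobian of the transform x + u(x): J = I + grad(u).
    jac = np.zeros(dvf.shape[1:] + (3, 3))
    for i in range(3):
        for j in range(3):
            jac[..., i, j] = grads[i][j] + (1.0 if i == j else 0.0)
    return np.linalg.det(jac) - 1.0
```

For a uniform 10% dilation in every axis the metric returns 1.1^3 - 1 ≈ 0.331 everywhere, which is a convenient analytic check. The density-scaled variants (ρVJac) would multiply this field voxel-wise by a tissue-density term derived from the HU values.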

    Calculation of Inter- and Intra-Fraction Motion Errors at External Radiotherapy Using a Markerless Strategy Based on Image Registration Combined with Correlation Model

    Introduction: The present study aimed to propose a new method, based on an image registration technique and an intelligent correlation model, to calculate inter- and intra-fraction motion errors and thereby address the limitations of conventional patient positioning methods. Material and Methods: The configuration of the markerless method was accomplished using four-dimensional computed tomography (4DCT) datasets. First, the MeVisLab software package was used to extract a three-dimensional (3D) surface model of the patient and determine the tumor location. Then, the patient-specific 3D surface model, which also included the breathing phases, was imported into the MATLAB software package in order to define several control points on the thorax region as virtual external markers. Finally, based on the correlation of breathing signals with patient position and of breathing signals with tumor coordinates, an adaptive neuro-fuzzy inference system was proposed to verify and, if needed, align the inter- and intra-fraction motion errors in radiotherapy. To validate the proposed method, 4DCT data acquired from four real patients were considered. Results: The final results revealed that our hybrid method was capable of aligning the patient setup with lower uncertainty than other available methods. In addition, the 3D root-mean-square error was reduced from 5.26 to 1.5 mm for all patients. Conclusion: In this study, a markerless method based on an image registration technique combined with a correlation model was proposed to address the limitations of the available methods, including dependence on the operator's attention, the use of passive markers, and a rigid-only constraint for patient setup.
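The 3D root-mean-square error quoted in the results (5.26 mm reduced to 1.5 mm) is the standard setup-accuracy measure over predicted versus actual target positions. For completeness, a minimal sketch with illustrative array names:

```python
import numpy as np

def rmse_3d(predicted, actual):
    """3D root-mean-square error (mm) between predicted and actual target
    positions, given as N x 3 arrays of (x, y, z) coordinates."""
    d = np.asarray(predicted, float) - np.asarray(actual, float)
    return float(np.sqrt((np.linalg.norm(d, axis=1) ** 2).mean()))
```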

    Methods for three-dimensional Registration of Multimodal Abdominal Image Data

    Multimodal image registration benefits the diagnosis, treatment planning and the performance of image-guided procedures in the liver, since it enables the fusion of complementary information provided by pre- and intrainterventional data about tumor localization and access. Although various registration methods exist, approaches specifically optimized for the registration of multimodal abdominal scans are scarce. The work presented in this thesis aims to tackle this problem by focusing on the development, optimization and evaluation of registration methods specifically for multimodal liver scans. The contributions to the research field of medical image registration include the development of a registration evaluation methodology that enables the comparison and optimization of linear and non-linear registration algorithms using a point-based accuracy measure. This methodology has been used to benchmark standard registration methods as well as novel approaches that were developed within the frame of this thesis. The results showed that the similarity measure employed during registration has a major impact on the registration accuracy of the method. Due to this influence, two alternative similarity metrics with the potential to be used on multimodal image data are proposed and evaluated. The first metric relies on gradient information in the form of Histograms of Oriented Gradients (HOG), whereas the second metric employs a Siamese neural network to learn a similarity measure directly from the image data. The evaluation showed that both metrics can compete with state-of-the-art similarity measures in terms of registration accuracy. The HOG-metric offers the advantage that it does not require ground truth data to learn a similarity estimation, but instead is applicable to various data sets with the sole requirement of distinct gradients. However, the Siamese metric is characterized by a higher robustness for large rotations than the HOG-metric. To train such a network, registered ground truth data is required, which may be critical for multimodal image data. Yet, the results show that it is possible to apply models trained on registered synthetic data to real patient data. The last part of this thesis focuses on methods to learn an entire registration process using neural networks, thereby offering the advantage of replacing the traditional, time-consuming iterative registration procedure. Within the frame of this thesis, the so-called VoxelMorph network, originally proposed for monomodal, non-linear registration learning, is extended for affine and multimodal registration learning tasks. This extension includes the consideration of an image mask during metric evaluation as well as loss functions for multimodal data, such as the pretrained Siamese metric and a loss relying on the comparison of deformation fields. Based on the developed registration evaluation methodology, the performance of the original network as well as the extended variants is evaluated for monomodal and multimodal registration tasks using multiple data sets. With the extended network variants, it is possible to learn an entire multimodal registration process for the correction of large image displacements. As for the Siamese metric, the results imply a general transferability of models trained with synthetic data to registration tasks involving real patient data. Given the lack of multimodal ground truth data, this transfer represents an important step towards making deep-learning-based registration procedures clinically usable.
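The appeal of a HOG-style similarity for multimodal data can be illustrated with a deliberately simplified toy version: gradient orientations (taken modulo π, so that a modality-dependent contrast inversion maps edges to the same bins) are accumulated into a magnitude-weighted histogram and compared by cosine similarity. This sketch omits the cell/block structure of full HOG and is not the thesis implementation:

```python
import numpy as np

def hog_descriptor(img, n_bins=8):
    """Magnitude-weighted gradient-orientation histogram for a 2D patch."""
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.mod(np.arctan2(gy, gx), np.pi)  # orientation, modulo pi
    hist, _ = np.histogram(ang, bins=n_bins, range=(0, np.pi), weights=mag)
    norm = np.linalg.norm(hist)
    return hist / norm if norm > 0 else hist

def hog_similarity(fixed, moving):
    """Cosine similarity of the two descriptors; unchanged under a
    contrast inversion, as long as the edges coincide spatially."""
    return float(np.dot(hog_descriptor(fixed), hog_descriptor(moving)))
```

An image and its intensity-inverted copy score (near) perfect similarity, which is precisely the behavior one wants when the two modalities render the same anatomy with different intensity mappings.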

    Semi-automatic construction of reference standards for evaluation of image registration

    Quantitative evaluation of image registration algorithms is a difficult and under-addressed issue due to the lack of a reference standard in most registration problems. In this work, a method is presented whereby detailed reference standard data may be constructed in an efficient, semi-automatic fashion. A well-distributed set of n landmarks is detected fully automatically in one scan of a pair to be registered. Using a custom-designed interface, observers define corresponding anatomic locations in the second scan for a specified subset of s of these landmarks. The remaining n-s landmarks are matched fully automatically by a thin-plate-spline-based system that uses the s manual landmark correspondences to model the relationship between the scans. The method is applied to 47 pairs of temporal thoracic CT scans, three pairs of brain MR scans, and five thoracic CT datasets with synthetic deformations. Interobserver differences are used to demonstrate the accuracy of the matched points. The utility of the reference standard data as a tool in evaluating registration is shown by the comparison of six sets of registration results on the 47 pairs of thoracic CT data.
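The thin-plate-spline propagation step (using the s manual correspondences to place the remaining n-s landmarks) can be sketched with SciPy's `RBFInterpolator`. This generic implementation stands in for the paper's custom system; variable names are illustrative:

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

def propagate_landmarks(manual_src, manual_dst, auto_src):
    """Map automatically detected landmarks `auto_src` from scan 1 into
    scan 2 using a thin-plate spline fitted to the s manually matched
    pairs (manual_src -> manual_dst), each an (s, d) coordinate array.
    """
    tps = RBFInterpolator(manual_src, manual_dst,
                          kernel="thin_plate_spline", smoothing=0.0)
    return tps(auto_src)
```

Because the thin-plate spline includes an affine polynomial term, it exactly reproduces any affine relationship between the scans; the radial basis part only activates to absorb genuinely non-linear deformation around the manual correspondences.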