
    Mesh-to-raster based non-rigid registration of multi-modal images

    Region of interest (ROI) alignment in medical images plays a crucial role in diagnostics, procedure planning, treatment, and follow-up. Frequently, a model is represented as a triangulated mesh while the patient data are provided by CAT scanners as pixel or voxel data. Previously, we presented a 2D method for curve-to-pixel registration. This paper contributes (i) a general mesh-to-raster (M2R) framework to register ROIs in multi-modal images; (ii) a 3D surface-to-voxel application; and (iii) a comprehensive quantitative evaluation in 2D using ground truth provided by the simultaneous truth and performance level estimation (STAPLE) method. The registration is formulated as a minimization problem whose objective consists of a data term, which involves the signed distance function of the ROI from the reference image, and a higher-order elastic regularizer for the deformation. The evaluation is based on quantitative light-induced fluoroscopy (QLF) and digital photography (DP) of decalcified teeth. STAPLE is computed on 150 image pairs from 32 subjects, each showing one corresponding tooth in both modalities. The ROI in each image is manually marked by three experts (900 curves in total). In the QLF-DP setting, our approach significantly outperforms the mutual information-based registration algorithm implemented with the Insight Segmentation and Registration Toolkit (ITK) and Elastix.
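
    As a concrete illustration of the objective described above, the following is a minimal 2D sketch in Python, not the authors' implementation: the function names, the weight alpha, and the use of a discrete second-derivative penalty in place of the paper's higher-order elastic regularizer are all illustrative assumptions.

        import numpy as np

        def sdf_lookup(sdf, pts):
            # Nearest-neighbour sampling of the reference image's signed
            # distance map at the (row, col) positions of the contour points.
            ij = np.clip(np.rint(pts).astype(int), 0, np.array(sdf.shape) - 1)
            return sdf[ij[:, 0], ij[:, 1]]

        def registration_energy(pts, sdf, alpha=0.1):
            # Data term: deformed contour points should lie on the zero
            # level set of the reference ROI's signed distance function.
            data = np.sum(sdf_lookup(sdf, pts) ** 2)
            # Discrete second derivative as a simple stand-in for the
            # higher-order elastic regularizer on the deformation.
            d2 = pts[:-2] - 2.0 * pts[1:-1] + pts[2:]
            return data + alpha * np.sum(d2 ** 2)

    Minimizing this energy over the point positions (e.g. with a gradient-based optimizer) moves the contour onto the ROI boundary while keeping it smooth.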

    Evaluation of Motion Artifact Metrics for Coronary CT Angiography

    Purpose: This study quantified the performance of coronary artery motion artifact metrics relative to human observer ratings. Motion artifact metrics have been used as part of motion correction and best-phase selection algorithms for coronary computed tomography angiography (CCTA). However, the lack of ground truth makes it difficult to validate how well the metrics quantify the level of motion artifact. This study investigated five motion artifact metrics, including two novel metrics, using a dynamic phantom, clinical CCTA images, and an observer study that provided ground-truth motion artifact scores from a series of pairwise comparisons. Method: Five motion artifact metrics were calculated for the coronary artery regions on both phantom and clinical CCTA images: positivity, entropy, normalized circularity, Fold Overlap Ratio (FOR), and Low-Intensity Region Score (LIRS). CT images were acquired of a dynamic cardiac phantom that simulated cardiac motion and contained six iodine-filled vessels of varying diameter, with regions of soft plaque and calcifications. Scans were repeated with different gantry start angles. Images were reconstructed at five phases of the motion cycle. Clinical images were acquired from 14 CCTA exams with patient heart rates ranging from 52 to 82 bpm. The vessel and shading artifacts were manually segmented by three readers and combined to create ground-truth artifact regions. Motion artifact levels were also assessed by readers using a pairwise comparison method to establish a ground-truth reader score. Kendall's Tau coefficients were calculated to evaluate the statistical agreement in ranking between the motion artifact metrics and reader scores. Linear regression between the reader scores and the metrics was also performed. Results: On phantom images, the Kendall's Tau coefficients of the five motion artifact metrics were 0.50 (normalized circularity), 0.35 (entropy), 0.82 (positivity), 0.77 (FOR), and 0.77 (LIRS), where a higher Kendall's Tau signifies higher agreement. The FOR, LIRS, and transformed positivity (the fourth root of positivity) were further evaluated on the clinical images. The Kendall's Tau coefficients of the selected metrics were 0.59 (FOR), 0.53 (LIRS), and 0.21 (transformed positivity). On the clinical data, a Motion Artifact Score, defined as the product of the FOR and LIRS metrics, further improved agreement with reader scores, with a Kendall's Tau coefficient of 0.65. Conclusion: The metrics of FOR, LIRS, and the product of the two provided the highest agreement in motion artifact ranking relative to the readers, and the highest linear correlation to the reader scores. The validated motion artifact metrics may be useful for developing and evaluating methods to reduce motion artifacts in CCTA images.
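
    The combined metric and the agreement analysis are straightforward to express. The sketch below is illustrative only: the per-image values are hypothetical, and the study's pairwise-comparison scoring is reduced to a ready-made vector of reader scores.

        import numpy as np
        from scipy.stats import kendalltau

        def motion_artifact_score(for_value, lirs_value):
            # The study's combined metric: the product of FOR and LIRS.
            return for_value * lirs_value

        # Hypothetical per-image metric values and reader scores.
        for_vals = np.array([0.9, 0.6, 0.3, 0.8])
        lirs_vals = np.array([0.8, 0.5, 0.4, 0.7])
        reader_scores = np.array([3.1, 1.8, 0.9, 2.6])

        mas = motion_artifact_score(for_vals, lirs_vals)
        tau, p_value = kendalltau(mas, reader_scores)  # rank agreement with readers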

    Evaluating and Improving 4D-CT Image Segmentation for Lung Cancer Radiotherapy

    Lung cancer is a high-incidence disease with low survival despite surgical advances and concurrent chemo-radiotherapy strategies. Image-guided radiotherapy provides effective treatment, but respiratory motion poses significant challenges for imaging, treatment planning, and delivery of radiation. 4D-CT imaging can improve the image quality of thoracic target volumes influenced by respiratory motion. 4D-CT-based treatment planning strategies require highly accurate anatomical segmentation of tumour volumes for radiotherapy treatment plan optimization. Variable segmentation of tumour volumes contributes significantly to uncertainty in radiotherapy planning, owing to a lack of knowledge regarding the exact shape of the lesion and the difficulty of quantifying this variability. As image segmentation is one of the earliest tasks in the radiotherapy process, its inherent geometric uncertainties affect subsequent stages, potentially jeopardizing patient outcomes. This work therefore assesses segmentation-related geometric uncertainties in 4D-CT-based lung cancer radiotherapy and suggests strategies for their mitigation at the pre- and post-treatment planning stages.

    Automated quantification and evaluation of motion artifact on coronary CT angiography images

    Purpose: This study developed and validated a Motion Artifact Quantification algorithm to automatically quantify the severity of motion artifacts on coronary computed tomography angiography (CCTA) images. The algorithm was then used to develop a Motion IQ Decision method to automatically identify whether a CCTA dataset is of sufficient diagnostic image quality or requires further correction. Method: The developed Motion Artifact Quantification algorithm includes steps to identify the right coronary artery (RCA) regions of interest (ROIs), segment the vessel and shading artifacts, and calculate the motion artifact score (MAS) metric. The segmentation algorithms were verified against ground-truth manual segmentations, and further validated by comparing the MAS calculated from ground-truth segmentations with that calculated from the algorithm-generated segmentations. The Motion IQ Decision algorithm first identifies slices with unsatisfactory image quality using a MAS threshold. The algorithm then uses an artifact-length threshold to determine whether the degraded vessel segment is large enough to render the dataset nondiagnostic. An observer study on 30 clinical CCTA datasets was performed to obtain ground-truth decisions on whether the datasets were of sufficient image quality. A five-fold cross-validation was used to identify the thresholds and to evaluate the Motion IQ Decision algorithm. Results: The automated segmentation algorithms in the Motion Artifact Quantification algorithm yielded Dice coefficients of 0.84 for the segmented vessel regions and 0.75 for the segmented shading artifact regions. The MAS calculated using the automated algorithm was within 10% of the values obtained using ground-truth segmentations. The MAS and artifact-length thresholds were determined by ROC analysis to be 0.6 and 6.25 mm, respectively, in all folds. The Motion IQ Decision algorithm demonstrated 100% sensitivity, 66.7% ± 27.9% specificity, and a total accuracy of 86.7% ± 12.5% for identifying datasets in which the RCA required correction, and 91.3% sensitivity, 71.4% specificity, and a total accuracy of 86.7% for identifying CCTA datasets that need correction for any of the three main vessels. Conclusion: The Motion Artifact Quantification algorithm calculated accurate motion artifact scores, and the Motion IQ Decision algorithm reliably identified CCTA datasets requiring motion correction.
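
    The two reported thresholds (MAS 0.6, artifact length 6.25 mm) suggest a simple decision rule. The sketch below is an assumption-laden reading of the method, not the published code; the slice-spacing parameter and the run-length test are illustrative.

        import numpy as np

        def dice(a, b):
            # Dice coefficient between two boolean masks, the overlap
            # measure used to verify the automated segmentations.
            a, b = np.asarray(a, bool), np.asarray(b, bool)
            return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

        def needs_correction(slice_mas, slice_spacing_mm, mas_thr=0.6, len_thr_mm=6.25):
            # Flag slices whose MAS exceeds the threshold, then test whether
            # the longest degraded run is long enough to be nondiagnostic.
            run = longest = 0
            for degraded in np.asarray(slice_mas) > mas_thr:
                run = run + 1 if degraded else 0
                longest = max(longest, run)
            return longest * slice_spacing_mm >= len_thr_mm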

    A proposed framework for consensus-based lung tumour volume auto-segmentation in 4D computed tomography imaging.

    This work aims to propose and validate a framework for tumour volume auto-segmentation, based on ground-truth estimates derived from multi-physician input contours, to expedite 4D-CT-based lung tumour volume delineation. 4D-CT datasets of ten non-small cell lung cancer (NSCLC) patients were manually segmented by six physicians. Multi-expert ground truth (GT) estimates were constructed using the STAPLE algorithm for the gross tumour volume (GTV) on all respiratory phases. Next, using a deformable model-based method, the multi-expert GT on each individual phase of the 4D-CT dataset was propagated to all other phases, providing auto-segmented GTVs and motion-encompassing internal gross target volumes (IGTVs) based on GT estimates (STAPLE) from each respiratory phase of the 4D-CT dataset. Accuracy assessment of the auto-segmentation employed graph cuts for 3D shape reconstruction and point-set registration-based analysis, yielding volumetric and distance-based measures. STAPLE-based auto-segmented GTV accuracy ranged from (81.51 ± 1.92)% to (97.27 ± 0.28)% volumetric overlap with the estimated ground truth. IGTV auto-segmentation showed significantly improved accuracy with reduced variance for all patients, ranging from 90.87% to 98.57% volumetric overlap with the ground-truth volume. Additional metrics supported these observations with statistical significance. The accuracy of the auto-segmentation was shown to be largely independent of the choice of initial propagation phase. IGTV construction based on auto-segmented GTVs within the 4D-CT dataset provided accurate and reliable target volumes compared with manual segmentation-based GT estimates. While inter-/intra-observer effects were largely mitigated, the proposed segmentation workflow is more complex than current clinical practice and requires further development.
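
    Since STAPLE-based ground-truth estimation recurs throughout these works, a toy version of its EM iteration may help fix ideas. This is a bare-bones sketch for binary masks, not the reference implementation; the prior, the initial rater performances, and the iteration count are arbitrary assumptions.

        import numpy as np

        def staple(D, prior=0.5, iters=50):
            # D: (raters, voxels) binary masks. Returns the posterior
            # probability that each voxel belongs to the true ROI.
            R, V = D.shape
            p = np.full(R, 0.9)  # per-rater sensitivity (initial guess)
            q = np.full(R, 0.9)  # per-rater specificity (initial guess)
            for _ in range(iters):
                # E-step: posterior foreground probability per voxel.
                a = prior * np.prod(np.where(D == 1, p[:, None], 1 - p[:, None]), axis=0)
                b = (1 - prior) * np.prod(np.where(D == 1, 1 - q[:, None], q[:, None]), axis=0)
                W = a / (a + b + 1e-12)
                # M-step: re-estimate rater performance from the soft consensus.
                p = (D * W).sum(axis=1) / (W.sum() + 1e-12)
                q = ((1 - D) * (1 - W)).sum(axis=1) / ((1 - W).sum() + 1e-12)
            return W  # threshold at 0.5 for a binary consensus mask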

    Bayesian Spatial Binary Regression for Label Fusion in Structural Neuroimaging

    Many analyses of neuroimaging data involve studying one or more regions of interest (ROIs) in a brain image. In order to do so, each ROI must first be identified. Since every brain is unique, the location, size, and shape of each ROI varies across subjects. Thus, each ROI in a brain image must either be manually identified or (semi-) automatically delineated, a task referred to as segmentation. Automatic segmentation often involves mapping a previously manually segmented image to a new brain image and propagating the labels to obtain an estimate of where each ROI is located in the new image. A more recent approach to this problem is to propagate labels from multiple manually segmented atlases and combine the results using a process known as label fusion. To date, most label fusion algorithms either employ voting procedures or impose prior structure and subsequently find the maximum a posteriori estimator (i.e., the posterior mode) through optimization. We propose using a fully Bayesian spatial regression model for label fusion that facilitates direct incorporation of covariate information while making accessible the entire posterior distribution. We discuss the implementation of our model via Markov chain Monte Carlo and illustrate the procedure through both simulation and application to segmentation of the hippocampus, an anatomical structure known to be associated with Alzheimer's disease.
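
    For contrast with the Bayesian model proposed here, the voting baseline mentioned above can be sketched in a few lines. The array layout is a hypothetical choice; real pipelines operate on registered label images.

        import numpy as np

        def majority_vote(labels):
            # labels: (num_atlases, num_voxels) integer label array.
            # Each voxel takes the plurality label across propagated atlases.
            fused = np.empty(labels.shape[1], dtype=labels.dtype)
            for v in range(labels.shape[1]):
                vals, counts = np.unique(labels[:, v], return_counts=True)
                fused[v] = vals[np.argmax(counts)]
            return fused

    Unlike this point estimate, the proposed Bayesian spatial regression yields a full posterior over labels, so segmentation uncertainty can be quantified directly.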

    An open, multi-vendor, multi-field-strength brain MR dataset and analysis of publicly available skull stripping methods agreement

    This paper presents an open, multi-vendor, multi-field-strength magnetic resonance (MR) T1-weighted volumetric brain imaging dataset, named Calgary-Campinas-359 (CC-359). The dataset is composed of images of older healthy adults (29-80 years) acquired on scanners from three vendors (Siemens, Philips, and General Electric) at both 1.5 T and 3 T. CC-359 comprises 359 datasets, approximately 60 subjects per vendor and magnetic field strength. The dataset is approximately age- and gender-balanced, subject to the constraints of the available images. It provides consensus brain extraction masks for all volumes, generated using supervised classification. Manual segmentation results for twelve randomly selected subjects, performed by an expert, are also provided. The CC-359 dataset allows investigation of (1) the influence of both vendor and magnetic field strength on quantitative analysis of brain MR; (2) parameter optimization for automatic segmentation methods; and potentially (3) machine learning classifiers trained on big data, specifically those based on deep learning methods, as these approaches require a large amount of data. To illustrate the utility of this dataset, we compared the results of eight publicly available skull stripping methods and one publicly available consensus algorithm against the results of a supervised classifier. A linear mixed effects model analysis indicated that vendor (p < 0.001) and magnetic field strength (p < 0.001) have statistically significant impacts on skull stripping results.
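
    A linear mixed effects analysis of this kind can be set up concisely with statsmodels. The file name and column names below are assumptions about how the per-method Dice scores might be tabulated, not part of the released dataset.

        import pandas as pd
        import statsmodels.formula.api as smf

        # Hypothetical table: one skull-stripping result per row, with a Dice
        # score against the consensus mask plus scanner metadata.
        df = pd.read_csv("cc359_skullstrip_results.csv")  # columns: dice, vendor, field, subject

        # Vendor and field strength as fixed effects, subject as the grouping
        # (random) effect, mirroring the analysis described above.
        model = smf.mixedlm("dice ~ vendor + field", df, groups=df["subject"])
        result = model.fit()
        print(result.summary())  # p-values for vendor and field-strength effects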

    Deep learning segmentation of triple-negative breast cancer (TNBC) patient derived tumor xenograft (PDX) and sensitivity of radiomic pipeline to tumor probability boundary

    Preclinical magnetic resonance imaging (MRI) is a critical component of a co-clinical research pipeline. Importantly, segmentation of tumors in MRI is a necessary step in tumor phenotyping and in assessing response to therapy. However, manual segmentation is time-intensive and suffers from inter- and intra-observer variability and lack of reproducibility. This study aimed to develop an automated pipeline for accurate localization and delineation of TNBC PDX tumors from preclinical T1w and T2w MR images using a deep learning (DL) algorithm, and to assess the sensitivity of radiomic features to tumor boundaries. We tested five network architectures, including U-Net, dense U-Net, Res-Net, recurrent residual U-Net (R2U-Net), and dense R2U-Net (D-R2U-Net), which were compared against manual delineation by experts. To mitigate bias among multiple experts, the simultaneous truth and performance level estimation (STAPLE) algorithm was applied to create consensus maps. Performance metrics (F1-score, recall, precision, and AUC) were used to assess the performance of the networks. The multi-contrast D-R2U-Net performed best with an F1-score of 0.948; however, all networks scored within 1-3% of each other. Radiomic features extracted from D-R2U-Net segmentations were highly correlated with STAPLE-derived features, with 67.13% of T1w and 53.15% of T2w features exhibiting correlation ρ ≥ 0.9.
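
    The reported F1-scores and feature correlations correspond to standard computations, sketched below under stated assumptions: the feature pairs are randomly generated stand-ins, and Spearman rank correlation is assumed for ρ (the abstract does not specify the correlation type).

        import numpy as np
        from scipy.stats import spearmanr

        def f1_masks(pred, truth):
            # F1-score (equivalently Dice) between binary masks.
            pred, truth = np.asarray(pred, bool), np.asarray(truth, bool)
            tp = np.logical_and(pred, truth).sum()
            return 2.0 * tp / (pred.sum() + truth.sum())

        # Hypothetical paired radiomic features: network-derived vs. STAPLE-derived.
        feature_pairs = [(np.random.rand(20), np.random.rand(20)) for _ in range(5)]
        rhos = [spearmanr(net, stp)[0] for net, stp in feature_pairs]
        fraction_strong = np.mean(np.array(rhos) >= 0.9)  # share with rho >= 0.9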