    Automated quantification and evaluation of motion artifact on coronary CT angiography images

    Abstract

    PURPOSE: This study developed and validated a Motion Artifact Quantification algorithm to automatically quantify the severity of motion artifacts on coronary computed tomography angiography (CCTA) images. The algorithm was then used to develop a Motion IQ Decision method that automatically identifies whether a CCTA dataset is of sufficient diagnostic image quality or requires further correction.

    METHOD: The Motion Artifact Quantification algorithm identifies right coronary artery (RCA) regions of interest (ROIs), segments the vessel and shading artifacts, and calculates a motion artifact score (MAS) metric. The segmentation algorithms were verified against ground-truth manual segmentations, and additionally by comparing the MAS calculated from ground-truth segmentations with the MAS calculated from algorithm-generated segmentations. The Motion IQ Decision algorithm first identifies slices with unsatisfactory image quality using a MAS threshold, then applies an artifact-length threshold to determine whether the degraded vessel segment is long enough to render the dataset nondiagnostic. An observer study on 30 clinical CCTA datasets provided ground-truth decisions on whether each dataset was of sufficient image quality, and five-fold cross-validation was used to select the thresholds and evaluate the Motion IQ Decision algorithm.

    RESULTS: The automated segmentation algorithms achieved Dice coefficients of 0.84 for the segmented vessel regions and 0.75 for the segmented shading-artifact regions. The MAS calculated by the automated algorithm was within 10% of the values obtained from ground-truth segmentations. ROC analysis selected a MAS threshold of 0.6 and an artifact-length threshold of 6.25 mm in all folds. The Motion IQ Decision algorithm demonstrated 100% sensitivity, 66.7% ± 27.9% specificity, and 86.7% ± 12.5% total accuracy for identifying datasets in which the RCA required correction, and 91.3% sensitivity, 71.4% specificity, and 86.7% total accuracy for identifying CCTA datasets that needed correction for any of the three main vessels.

    CONCLUSION: The Motion Artifact Quantification algorithm calculated accurate
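    The two-threshold decision described in the Method section can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function name, the per-slice MAS input, and the use of slice spacing to convert a run of degraded slices into a length are assumptions; only the two threshold values (MAS > 0.6, artifact length > 6.25 mm) come from the reported ROC analysis.

    ```python
    # Hypothetical sketch of the Motion IQ Decision thresholding step.
    MAS_THRESHOLD = 0.6         # per-slice motion artifact score cutoff (from ROC analysis)
    LENGTH_THRESHOLD_MM = 6.25  # degraded-segment length above which the dataset is nondiagnostic

    def needs_correction(mas_per_slice, slice_spacing_mm):
        """Return True if any contiguous run of degraded slices (MAS above
        threshold) spans more than the artifact-length threshold."""
        run_mm = 0.0
        for mas in mas_per_slice:
            if mas > MAS_THRESHOLD:
                run_mm += slice_spacing_mm
                if run_mm > LENGTH_THRESHOLD_MM:
                    return True
            else:
                run_mm = 0.0  # degraded run broken by a good slice
        return False

    # Example: 0.5 mm slice spacing; a 14-slice degraded run spans 7.0 mm > 6.25 mm
    scores = [0.2] * 10 + [0.8] * 14 + [0.3] * 10
    print(needs_correction(scores, 0.5))  # True
    ```

    Requiring a contiguous run, rather than a total count of degraded slices, matches the abstract's framing that the degraded vessel *segment* must be long enough to make the dataset nondiagnostic.
    
    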

    Measurement of Endotracheal Tube Positioning on Chest X-Ray Using Object Detection.

    Patients who are intubated with endotracheal tubes often receive chest x-ray (CXR) imaging to determine whether the tube is correctly positioned. When these CXRs are interpreted by a radiologist, the radiologist evaluates whether the tube needs to be repositioned and typically provides a measurement in centimeters between the endotracheal tube tip and the carina. In this project, a large dataset of endotracheal tube and carina bounding boxes was annotated on CXRs, and a machine-learning model was trained to generate these boxes on new CXRs and to calculate a distance measurement between the tube and the carina. This model was applied to a gold-standard annotated dataset, as well as to all prospective data passing through our radiology system for two weeks. Inter-radiologist variability was also measured on a test dataset. The distance measurements for both the gold-standard dataset (mean error = 0.70 cm) and the prospective dataset (mean error = 0.68 cm) were noninferior to inter-radiologist variability (mean error = 0.70 cm) within an equivalence bound of 0.1 cm. This suggests that the model performs at an accuracy similar to human measurements, and these distance calculations can be used for clinical report auto-population and/or worklist prioritization of severely malpositioned tubes.
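    The distance step described above, converting detected bounding boxes into a centimeter measurement, can be sketched as below. This is an illustrative assumption, not the paper's method: the box format, the choice of the tube box's bottom edge as the tube tip and the carina box's top edge as the carina landmark, and the pixel-spacing input are all hypothetical.

    ```python
    # Hypothetical sketch: tube-to-carina distance from two detected bounding boxes.
    def tube_carina_distance_cm(tube_box, carina_box, pixel_spacing_mm):
        """Boxes are (x_min, y_min, x_max, y_max) in pixels; y increases downward.
        The tube tip is taken as the bottom edge of the tube box and the carina
        as the top edge of the carina box (assumed landmarks)."""
        tube_tip_y = tube_box[3]
        carina_y = carina_box[1]
        return (carina_y - tube_tip_y) * pixel_spacing_mm / 10.0  # mm -> cm

    # Example: a 200-pixel vertical gap at 0.2 mm/pixel spacing is 4.0 cm
    print(tube_carina_distance_cm((100, 50, 120, 300), (90, 500, 150, 560), 0.2))  # 4.0
    ```

    In practice the pixel spacing would come from the image's DICOM metadata, so the reported measurement is in physical units rather than pixels.
    
    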

    Detection of Critical Spinal Epidural Lesions on CT Using Machine Learning.

    BACKGROUND: Critical spinal epidural pathologies can cause paralysis or death if untreated. Although magnetic resonance imaging is the preferred modality for visualizing these pathologies, computed tomography (CT) occurs far more commonly than magnetic resonance imaging in the clinical setting.

    OBJECTIVE: A machine learning model was developed to screen for critical epidural lesions on CT images at a large-scale teleradiology practice. This model has utility both for worklist prioritization of emergent studies and for identifying missed findings.

    MATERIALS AND METHODS: There were 153 studies with epidural lesions available for training. These lesions were segmented and used to train a machine learning model. A test dataset was also created from previously missed epidural lesions. The trained model was then integrated into a teleradiology workflow for 90 days. Studies were sent for secondary manual review if the model detected an epidural lesion but none was mentioned in the clinical report.

    RESULTS: The model correctly identified 50.0% of epidural lesions in the test dataset with 99.0% specificity. On prospective data, the model correctly prioritized 66.7% of the 18 epidural lesions diagnosed on the initial read, with 98.9% specificity. An average of 2.0 studies per day were flagged for potential missed findings, and 17 missed epidural lesions were found during the 90-day period. These results suggest that almost half of the critical spinal epidural lesions visible on CT imaging are missed on initial diagnosis.

    CONCLUSION: A machine learning model for identifying spinal epidural hematomas and abscesses on CT can be implemented in a clinical workflow.
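    The secondary-review routing described in the Materials and Methods can be sketched as follows: a study is flagged for manual review when the model detects an epidural lesion but the clinical report does not mention one. The keyword list and the substring-based report matching here are illustrative assumptions; the practice's actual report-checking logic is not described in the abstract.

    ```python
    # Hypothetical sketch of routing model-positive studies to secondary review.
    EPIDURAL_TERMS = ("epidural hematoma", "epidural abscess", "epidural lesion")

    def flag_for_review(model_positive, report_text):
        """Flag a study when the model is positive but the report makes no
        mention of an epidural lesion (assumed keyword matching)."""
        report = report_text.lower()
        mentioned = any(term in report for term in EPIDURAL_TERMS)
        return model_positive and not mentioned

    print(flag_for_review(True, "No acute osseous abnormality."))               # True
    print(flag_for_review(True, "Findings suspicious for epidural hematoma."))  # False
    ```

    Flagging only the disagreement cases keeps the manual-review volume low (about 2.0 studies per day in the reported deployment) while still surfacing findings missed on the initial read.
    
    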