
    Machine Learning by Ultrasonography for Risk Stratification of Axillary Breast Lymph Nodes

    Introduction: Breast cancer is the most common cancer among women worldwide, and ultrasonography is an essential tool in its management. To improve upon the efficacy of ultrasonography, establishing a machine learning image analysis model would provide additional prognostic and diagnostic factors in the evaluation, monitoring, and treatment of breast cancer. Methods: This retrospective study uses axillary lymph node ultrasound images of patients at Thomas Jefferson University Hospital who have had axillary lymph node biopsies. Automated machine learning of the images was performed with AutoML Vision (Google LLC), which generated custom models for classification. About 80% of the images were used to train the model, about 10% for validation, and about 10% to test accuracy. The model classifies lymph nodes as high or low risk, and model accuracy was compared to biopsy and histopathology results. Results: Results are incomplete as the model is still being trained, but the model is anticipated to demonstrate high specificity when classifying lymph nodes as high or low risk for metastasis. Accuracy is anticipated to be fair. Discussion: A machine learning model with the anticipated high specificity and accuracy would demonstrate the ability to provide additional factors for determining the management of invasive breast cancer using ultrasonography. A reliable model could assist in determining the need for invasive or noninvasive procedures such as core needle lymph node biopsies, sentinel lymph node biopsies, lymph node resection, or radiation therapy. Future work can address limitations, particularly variation among ultrasound operators and the study's retrospective nature.
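    The ~80/10/10 split described above corresponds to the TRAIN/VALIDATION/TEST assignments that AutoML Vision accepts in its import CSV. The sketch below is a minimal illustration of preparing such a CSV, not the authors' pipeline; the bucket name, file names, and the high_risk/low_risk label names are assumptions for illustration.

```python
# Minimal sketch (not the authors' pipeline): build an AutoML Vision import CSV
# with an ~80/10/10 train/validation/test split of labeled ultrasound images.
# The bucket, file names, and label names below are hypothetical.
import csv
import random

BUCKET = "gs://example-axillary-ln-ultrasound"  # hypothetical Cloud Storage bucket

# hypothetical (image filename, biopsy-derived label) pairs
images = [("node_0001.png", "high_risk"), ("node_0002.png", "low_risk")]  # ...

random.seed(42)
random.shuffle(images)
n_train = int(0.8 * len(images))
n_val = int(0.1 * len(images))

with open("import.csv", "w", newline="") as f:
    writer = csv.writer(f)
    for i, (name, label) in enumerate(images):
        # Each row: SET,gcs_uri,label — the format AutoML Vision reads on import.
        split = "TRAIN" if i < n_train else ("VALIDATION" if i < n_train + n_val else "TEST")
        writer.writerow([split, f"{BUCKET}/{name}", label])
```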

    Ultrasonographic risk stratification of indeterminate thyroid nodules: a comparison of an artificial intelligence algorithm with radiologist performance

    Background, Motivation and Objective: Thyroid nodules with indeterminate or suspicious cytology are commonly encountered in clinical practice, and their clinical management is controversial. Recently, genetic analysis of thyroid fine needle aspirations (FNAs) was implemented at some institutions to stratify thyroid nodules as high or low risk based on the presence of certain oncogenes commonly associated with aggressive tumor behavior and poor patient outcomes. Our group recently detailed the performance of a machine-learning model based on ultrasonography images of thyroid nodules for the prediction of high- and low-risk mutations. This study evaluated the performance of a second-generation machine-learning algorithm incorporating both object detection and image classification, and compared its performance against blinded radiologists. Statement of Contribution/Methods: This retrospective study was conducted at Thomas Jefferson University and included 262 thyroid nodules that underwent ultrasound imaging, ultrasound-guided FNA, and next-generation sequencing (NGS) or surgical pathology after resection. An object detection model and an image classification model were employed, first to identify the location of nodules and then to assess malignancy risk. A Google Cloud platform (AutoML Vision; Google LLC) was used for this purpose. Either NGS or surgical pathology, as available, was considered the reference standard. 211 nodules were used for model development and the remaining 51 nodules for model testing. Diagnostic performance in the 47 nodules for which pathology or NGS was available was compared to blinded reads by 3 radiologists, with performance expressed as mean ± standard deviation %. Results/Discussion: The algorithm achieved a positive predictive value (PPV) of 68.31% and a sensitivity of 86.81% within the training model. The model was tested on images of the 51 held-out nodules, and all 51 nodules were correctly located (100%). For risk stratification, the model demonstrated a sensitivity of 73.9%, specificity of 70.8%, PPV of 70.8%, negative predictive value (NPV) of 73.9%, and overall accuracy of 66.7% in the 47 nodules. For comparison, the 3 radiologists' performance in this same dataset demonstrated a sensitivity of, specificity of, PPV of, NPV of, and overall accuracy of . This work demonstrates that a machine-learning algorithm using image classification performed similarly to, if not slightly better than, 3 experienced radiologists. Future research will focus on incorporating machine-learning findings within radiologist interpretation to potentially improve diagnostic accuracy.
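    The sensitivity, specificity, PPV, NPV, and accuracy figures reported above follow the standard confusion-matrix definitions. Below is a minimal sketch of those definitions (not the study's evaluation code); labels are assumed to be encoded as 1 = high risk and 0 = low risk against the NGS/pathology reference standard.

```python
# Sketch of the standard diagnostic-performance definitions behind the reported
# metrics; toy data only, not the study's evaluation code.
def diagnostic_performance(y_pred, y_ref):
    tp = sum(p == 1 and r == 1 for p, r in zip(y_pred, y_ref))
    tn = sum(p == 0 and r == 0 for p, r in zip(y_pred, y_ref))
    fp = sum(p == 1 and r == 0 for p, r in zip(y_pred, y_ref))
    fn = sum(p == 0 and r == 1 for p, r in zip(y_pred, y_ref))
    return {
        "sensitivity": tp / (tp + fn),      # true positive rate
        "specificity": tn / (tn + fp),      # true negative rate
        "ppv": tp / (tp + fp),              # positive predictive value
        "npv": tn / (tn + fn),              # negative predictive value
        "accuracy": (tp + tn) / len(y_ref),
    }

# toy example: 5 nodules scored against an NGS/pathology reference
print(diagnostic_performance([1, 0, 1, 1, 0], [1, 0, 0, 1, 1]))
```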

    Incorporation of a Machine Learning Algorithm With Object Detection Within the Thyroid Imaging Reporting and Data System Improves the Diagnosis of Genetic Risk.

    Background: The role of next-generation sequencing (NGS) for identifying high-risk mutations in thyroid nodules following fine needle aspiration (FNA) biopsy continues to grow. However, ultrasound diagnosis, even using the American College of Radiology's Thyroid Imaging Reporting and Data System (TI-RADS), has limited ability to stratify genetic risk. The purpose of this study was to incorporate an artificial intelligence (AI) algorithm for thyroid ultrasound with object detection within the TI-RADS scoring system to improve prediction of genetic risk in these nodules. Methods: Two hundred fifty-two nodules from 249 patients that underwent ultrasound imaging and ultrasound-guided FNA with NGS, with or without resection, were retrospectively selected for this study. A machine learning program (Google AutoML) was employed for both automated nodule identification and risk stratification. Two hundred one nodules were used for model training and 51 were reserved for testing. Three blinded radiologists scored the images of the test-set nodules using TI-RADS and assigned each nodule as high or low risk based on the presence of highly suspicious imaging features on TI-RADS (very hypoechoic, taller-than-wide, extra-thyroidal extension, punctate echogenic foci). Subsequently, the TI-RADS classification was modified to incorporate AI for T4 nodules while treating T1-3 as low risk and T5 as high risk. All diagnostic predictions were compared to the presence of a high-risk mutation and to pathology when available. Results: The AI algorithm correctly located all nodules in the test dataset (100% object detection). The model predicted malignancy risk with a sensitivity of 73.9%, specificity of 70.8%, positive predictive value (PPV) of 70.8%, negative predictive value (NPV) of 73.9%, and accuracy of 72.4% during testing. The radiologists performed with a sensitivity of 52.1 ± 4.4%, specificity of 65.2 ± 6.4%, PPV of 59.1 ± 3.5%, NPV of 58.7 ± 1.8%, and accuracy of 58.8 ± 2.5% when using TI-RADS, and a sensitivity of 53.6 ± 17.6% (p=0.87), specificity of 83.3 ± 7.2% (p=0.06), PPV of 75.7 ± 8.5% (p=0.13), NPV of 66.0 ± 8.8% (p=0.31), and accuracy of 68.7 ± 7.4% (p=0.21) when using AI-modified TI-RADS. Conclusions: Incorporation of AI into TI-RADS improved radiologist performance and showed better malignancy risk prediction than AI alone when classifying thyroid nodules. Employing AI in existing thyroid nodule classification systems may help more accurately identify high-risk nodules.
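    The AI-modified TI-RADS approach described in the Methods reduces to a simple decision rule: TI-RADS 1-3 nodules are treated as low risk, TI-RADS 5 as high risk, and only the indeterminate TI-RADS 4 nodules are deferred to the image-classification model. A minimal sketch of that rule is below; the `ai_predicts_high_risk` callable is a hypothetical stand-in for the AutoML model's output, not an actual API.

```python
# Sketch of the AI-modified TI-RADS rule from the abstract: T1-3 low risk,
# T5 high risk, and the AI classifier decides the indeterminate T4 nodules.
# `ai_predicts_high_risk` is a hypothetical stand-in for the model's prediction.
from typing import Callable

def modified_tirads_risk(tirads_level: int, ai_predicts_high_risk: Callable[[], bool]) -> str:
    if tirads_level <= 3:
        return "low risk"
    if tirads_level == 4:
        # defer only the indeterminate category to the image-classification model
        return "high risk" if ai_predicts_high_risk() else "low risk"
    return "high risk"  # TI-RADS 5

# toy usage with stubbed model predictions
print(modified_tirads_risk(4, lambda: True))   # -> high risk
print(modified_tirads_risk(2, lambda: True))   # -> low risk
```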