Detecting Anterior Cruciate Ligament Tears and Posterolateral Corner Injuries on Magnetic Resonance Imaging
Introduction: Anterior cruciate ligament (ACL) tears are an extremely common orthopedic injury, with an incidence of 39 to 52 per 100,000. Knee magnetic resonance imaging (MRI) is the gold standard for diagnosing ACL tears and their comorbidities, such as posterolateral corner injuries; the results of these scans determine the appropriate treatment for patients. There is evidence that machine learning can automate the detection of pathology on MRI, and we hypothesize that we can train a neural network model to accurately detect ACL tears and posterolateral corner injuries.
Methods: We will analyze over 1,000 knee MRIs, including normal knees, knees with ACL tears, and knees with ACL tears plus posterolateral corner injuries. First, we will manually annotate the knee MRIs to classify them appropriately. We will then train a convolutional neural network (CNN) machine learning model on ~80% of the data and use the remaining ~20% to test its accuracy, as sketched below. We will compare the accuracy of our model to that of musculoskeletal radiologists.
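As an illustration of the planned workflow, the following sketch shows an ~80/20 train/test split and a small three-class CNN in Python with TensorFlow and scikit-learn. The file names, image size, and layer choices are placeholder assumptions, not details of our protocol.

```python
# Minimal sketch of the planned 80/20 split and 3-class CNN training loop.
# File names, image size, and architecture are illustrative assumptions.
import numpy as np
import tensorflow as tf
from sklearn.model_selection import train_test_split

NUM_CLASSES = 3            # normal, ACL tear, ACL tear + posterolateral corner injury
IMG_SHAPE = (256, 256, 1)  # assumed single-channel MRI slice size

def build_model():
    return tf.keras.Sequential([
        tf.keras.Input(shape=IMG_SHAPE),
        tf.keras.layers.Conv2D(32, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Conv2D(64, 3, activation="relu"),
        tf.keras.layers.GlobalAveragePooling2D(),
        tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
    ])

# Hypothetical pre-annotated arrays: images (N, 256, 256, 1), labels (N,)
images = np.load("knee_mri_images.npy")
labels = np.load("knee_mri_labels.npy")

# ~80% for training, ~20% held out to test accuracy, as described above
x_train, x_test, y_train, y_test = train_test_split(
    images, labels, test_size=0.2, stratify=labels, random_state=0)

model = build_model()
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(x_train, y_train, epochs=20, validation_split=0.1)
print(model.evaluate(x_test, y_test))
```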
Results: We anticipate that our model will detect ACL tears and posterolateral corner injuries with accuracy comparable to that of musculoskeletal radiologists. With access to our model’s predictions, we expect radiologists will be able to detect ACL tears with posterolateral corner injuries with improved accuracy and speed.
Discussion: While we do not yet have results, we anticipate that our model will be an early step toward developing useful tools that aid radiologists. Our model will be trained on a large dataset, which will increase its generalizability for future implementation. Radiologists can use our model’s predictions to aid in the diagnosis of pathology on knee MRI. We expect that improved diagnosis will improve patient treatment outcomes.
Effects of Vitamin D Supplementation on a Deep Learning-Based Mammographic Evaluation in SWOG S0812
Deep learning-based mammographic evaluations could noninvasively assess response to breast cancer chemoprevention. We evaluated change in a convolutional neural network (CNN)-based breast cancer risk model applied to mammograms among women enrolled in SWOG S0812, which randomly assigned 208 premenopausal high-risk women to receive oral vitamin D3 20,000 IU weekly or placebo for 12 months. We applied the CNN model to mammograms collected at baseline (n = 109), 12 months (n = 97), and 24 months (n = 67) and compared changes in CNN-based risk score between treatment groups. Change in CNN-based risk score was not statistically significantly different between the vitamin D and placebo groups at 12 months (0.005 vs 0.002, P = .875) or at 24 months (0.020 vs 0.001, P = .563). The findings are consistent with the primary analysis of S0812, which did not demonstrate statistically significant changes in mammographic density with vitamin D supplementation compared with placebo. There is an ongoing need to evaluate biomarkers of response to novel breast cancer chemopreventive agents.
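For readers unfamiliar with this type of endpoint, the sketch below illustrates one way to compare baseline-to-follow-up changes in a CNN-based risk score between two arms in Python; the scores and the choice of a plain two-sample t-test are illustrative assumptions, not the trial's actual data or statistical plan.

```python
# Illustrative comparison of CNN risk-score changes between treatment arms.
# Scores are made up; the test shown is a simple two-sample t-test.
import numpy as np
from scipy import stats

def score_change(baseline, followup):
    """Per-patient change in CNN-based risk score from baseline to follow-up."""
    return np.asarray(followup) - np.asarray(baseline)

# Hypothetical per-patient scores at baseline and 12 months for each arm
vitd_change = score_change([0.31, 0.28, 0.40], [0.32, 0.29, 0.40])
placebo_change = score_change([0.35, 0.27, 0.33], [0.35, 0.28, 0.33])

t_stat, p_value = stats.ttest_ind(vitd_change, placebo_change)
print(f"mean change (vitamin D): {vitd_change.mean():.3f}, "
      f"mean change (placebo): {placebo_change.mean():.3f}, P = {p_value:.3f}")
```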
A novel CNN algorithm for pathological complete response prediction using an I-SPY TRIAL breast MRI database
Purpose: To apply our convolutional neural network (CNN) algorithm to predict neoadjuvant chemotherapy (NAC) response using the I-SPY TRIAL breast MRI dataset. Methods: From the I-SPY TRIAL breast MRI database, studies of 131 patients from 9 institutions were successfully downloaded for analysis. First post-contrast MRI images were used for 3D segmentation in 3D Slicer. Our CNN was implemented entirely with 3 × 3 convolutional kernels and linear layers. The convolutional kernels were arranged in 6 residual layers, totaling 12 convolutional layers. Dropout with a 0.5 keep probability and L2 normalization were used. Training used the Adam optimizer. Five-fold cross-validation was used for performance evaluation. Software code was written in Python using the TensorFlow module on a Linux workstation with one NVIDIA Titan X GPU. Results: Of 131 patients, 40 achieved pCR following NAC (group 1) and 91 did not (group 2). Diagnostic accuracy of our two-class CNN model distinguishing patients with pCR vs non-pCR was 72.5% (SD ± 8.4), with sensitivity of 65.5% (SD ± 28.1) and specificity of 78.9% (SD ± 15.2). The area under the ROC curve (AUC) was 0.72 (SD ± 0.08). Conclusion: It is feasible to use our CNN algorithm to predict NAC response in patients using a multi-institution dataset.
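A minimal sketch of an architecture matching this description is shown below, written with TensorFlow/Keras: all 3 × 3 convolutional kernels arranged in 6 residual blocks (12 convolutional layers), dropout, L2 regularization, and the Adam optimizer. Filter counts, input size, and regularization strength are assumptions; the segmentation and 5-fold cross-validation steps are omitted.

```python
# Sketch of the described CNN: 6 residual blocks of two 3x3 convolutions each,
# dropout, L2 regularization, Adam. Filter counts and input size are assumed.
import tensorflow as tf

L2_REG = tf.keras.regularizers.l2(1e-4)  # assumed L2 strength

def residual_block(x, filters):
    shortcut = x
    y = tf.keras.layers.Conv2D(filters, 3, padding="same", activation="relu",
                               kernel_regularizer=L2_REG)(x)
    y = tf.keras.layers.Conv2D(filters, 3, padding="same",
                               kernel_regularizer=L2_REG)(y)
    if shortcut.shape[-1] != filters:
        # 1x1 projection so the skip connection matches the channel count
        shortcut = tf.keras.layers.Conv2D(filters, 1, padding="same")(shortcut)
    y = tf.keras.layers.Add()([y, shortcut])
    return tf.keras.layers.Activation("relu")(y)

def build_pcr_model(input_shape=(64, 64, 1)):
    inputs = tf.keras.Input(shape=input_shape)
    x = inputs
    # 6 residual blocks of two 3x3 convolutions each = 12 convolutional layers
    for filters in (16, 16, 32, 32, 64, 64):
        x = residual_block(x, filters)
        x = tf.keras.layers.Dropout(0.5)(x)  # 0.5 keep probability = 0.5 drop rate
    x = tf.keras.layers.GlobalAveragePooling2D()(x)
    outputs = tf.keras.layers.Dense(1, activation="sigmoid")(x)  # pCR vs non-pCR
    model = tf.keras.Model(inputs, outputs)
    model.compile(optimizer=tf.keras.optimizers.Adam(),
                  loss="binary_crossentropy",
                  metrics=[tf.keras.metrics.AUC()])
    return model

model = build_pcr_model()
model.summary()
```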
Dynamic Changes of Convolutional Neural Network-based Mammographic Breast Cancer Risk Score Among Women Undergoing Chemoprevention Treatment.
INTRODUCTION: We investigated whether our convolutional neural network (CNN)-based breast cancer risk model is modifiable by testing it on women who had undergone risk-reducing chemoprevention treatment. MATERIALS AND METHODS: We conducted a retrospective cohort study of patients diagnosed with atypical hyperplasia, lobular carcinoma in situ, or ductal carcinoma in situ at our institution from 2007 to 2015. Clinical characteristics, chemoprevention use, and mammography images were extracted from the electronic health records. We classified patients into two groups according to chemoprevention use. Mammograms performed at baseline and at subsequent follow-up evaluations were used as input to our CNN risk model. The two chemoprevention groups were compared for the change in risk score from baseline to follow-up. The change categories were stayed high risk, stayed low risk, increased from low to high risk, and decreased from high to low risk. Unordered polytomous regression models were used for statistical analysis, with P < .05 considered statistically significant. RESULTS: Of 541 patients, 184 (34%) had undergone chemoprevention treatment (group 1) and 357 (66%) had not (group 2). Using our CNN breast cancer risk score, significantly more women in group 1 showed a decrease in breast cancer risk compared with group 2 (33.7% vs. 22.9%; P < .01). Significantly fewer women in group 1 had an increase in breast cancer risk compared with group 2 (11.4% vs. 20.2%; P < .01). On multivariate analysis, an increase in breast cancer risk predicted by our model correlated negatively with the use of chemoprevention treatment (P = .02). CONCLUSIONS: Our CNN-based breast cancer risk score is modifiable, with potential utility in assessing the efficacy of known chemoprevention agents and testing new chemoprevention strategies.
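As a sketch of the four change categories described above, the Python snippet below assigns each patient a category from baseline and follow-up CNN risk scores and cross-tabulates the categories by chemoprevention use. The 0.5 high/low cutoff and the patient values are illustrative assumptions, not the study's threshold or data.

```python
# Hedged sketch of the four risk-change categories; 0.5 is an assumed cutoff
# separating "high" from "low" CNN risk scores, and the patients are made up.
import pandas as pd

THRESHOLD = 0.5

def change_category(baseline_score, followup_score, cutoff=THRESHOLD):
    base_high = baseline_score >= cutoff
    follow_high = followup_score >= cutoff
    if base_high and follow_high:
        return "stayed high risk"
    if not base_high and not follow_high:
        return "stayed low risk"
    if not base_high and follow_high:
        return "increased from low to high risk"
    return "decreased from high to low risk"

# Hypothetical patients: baseline score, follow-up score, chemoprevention use
patients = pd.DataFrame({
    "baseline": [0.62, 0.41, 0.55, 0.30],
    "followup": [0.44, 0.39, 0.66, 0.58],
    "chemoprevention": [True, True, False, False],
})
patients["category"] = [
    change_category(b, f) for b, f in zip(patients.baseline, patients.followup)
]
print(pd.crosstab(patients.chemoprevention, patients.category))
```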
Deep Learning-Assisted Identification of Femoroacetabular Impingement (FAI) on Routine Pelvic Radiographs
To use a novel deep learning system to localize the hip joints and detect findings of cam-type femoroacetabular impingement (FAI). A retrospective search of hip/pelvis radiographs obtained in patients evaluated for FAI yielded 3050 studies. Each hip was classified separately by the original interpreting radiologist as follows: 724 hips had severe cam-type FAI morphology, 962 moderate cam-type FAI morphology, 846 mild cam-type FAI morphology, and 518 hips were normal. The anteroposterior (AP) view from each study was anonymized and extracted. After localization of the hip joints by a novel convolutional neural network (CNN) based on the focal loss principle, a second CNN classified each hip image as cam-positive or normal (no FAI). Accuracy was 74% for distinguishing normal from abnormal cam-type FAI morphology, with aggregate sensitivity and specificity of 0.821 and 0.669, respectively, at the chosen operating point. The aggregate AUC was 0.736. A deep learning system can be applied to detect FAI-related changes on single-view pelvic radiographs. Deep learning is useful for quickly identifying and categorizing pathology on imaging, which may aid the interpreting radiologist.
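The localization network is described as being based on the focal loss principle. The sketch below shows a standard binary focal loss in TensorFlow, following the widely used formulation of Lin et al.; the gamma and alpha values are common defaults, not parameters reported in this study.

```python
# Standard binary focal loss: down-weights easy examples so training focuses
# on hard ones (e.g., small hip-joint regions against a large background).
# gamma/alpha are common defaults, not values reported in the study.
import tensorflow as tf

def focal_loss(gamma=2.0, alpha=0.25):
    def loss_fn(y_true, y_pred):
        y_true = tf.cast(y_true, tf.float32)
        eps = tf.keras.backend.epsilon()
        y_pred = tf.clip_by_value(y_pred, eps, 1.0 - eps)
        # p_t is the model's probability for the true class
        p_t = y_true * y_pred + (1.0 - y_true) * (1.0 - y_pred)
        alpha_t = y_true * alpha + (1.0 - y_true) * (1.0 - alpha)
        return -tf.reduce_mean(alpha_t * tf.pow(1.0 - p_t, gamma) * tf.math.log(p_t))
    return loss_fn

# Usage sketch: compile a localization model with this loss, e.g.
# model.compile(optimizer="adam", loss=focal_loss(), metrics=["accuracy"])
```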
Prospective Analysis Using a Novel CNN Algorithm to Distinguish Atypical Ductal Hyperplasia From Ductal Carcinoma in Situ in Breast.
INTRODUCTION: We previously developed a convolutional neural network (CNN)-based algorithm to distinguish atypical ductal hyperplasia (ADH) from ductal carcinoma in situ (DCIS) using a mammographic dataset. The purpose of this study was to further validate our CNN algorithm by prospectively analyzing an unseen new dataset to evaluate its diagnostic performance. MATERIALS AND METHODS: In this institutional review board-approved study, a new dataset composed of 280 unique mammographic images from 140 patients was used to test our CNN algorithm. All patients underwent stereotactic-guided biopsy of calcifications and surgical excision with available final pathology. The ADH group consisted of 122 images from 61 patients with a highest pathology diagnosis of ADH. The DCIS group consisted of 158 images from 79 patients with a highest pathology diagnosis of DCIS. Two standard mammographic magnification views (craniocaudal and mediolateral/lateromedial) of the calcifications were used for analysis. Calcifications were segmented using the open-source software platform 3D Slicer and resized to fit a 128 × 128 pixel bounding box. Our previously developed CNN algorithm was used. Briefly, the network used a 15-hidden-layer topology containing 5 residual layers, with dropout of 0.25 after each convolution. Diagnostic performance metrics were analyzed, including sensitivity, specificity, accuracy, and area under the receiver operating characteristic curve (AUC). The positive class was defined as the pure ADH group; thus, specificity represents minimizing the number of falsely labeled pure ADH cases. RESULTS: The AUC was 0.90 (95% confidence interval, ± 0.04). Diagnostic accuracy, sensitivity, and specificity were 80.7%, 63.9%, and 93.7%, respectively. CONCLUSION: Prospectively tested on new unseen data, our CNN algorithm distinguished pure ADH from DCIS using mammographic images with high specificity.
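As an illustration of the preprocessing step described above, the sketch below pads a segmented calcification crop to a square and resizes it into a 128 × 128 pixel bounding box in Python; the cropping source and interpolation settings are assumptions rather than the study's exact pipeline.

```python
# Illustrative preprocessing: pad a segmented calcification crop to a square,
# then resize it into the 128 x 128 pixel bounding box described above.
import numpy as np
from skimage.transform import resize

def to_128_box(crop: np.ndarray) -> np.ndarray:
    h, w = crop.shape
    side = max(h, w)
    padded = np.zeros((side, side), dtype=crop.dtype)
    padded[(side - h) // 2:(side - h) // 2 + h,
           (side - w) // 2:(side - w) // 2 + w] = crop
    return resize(padded, (128, 128), preserve_range=True, anti_aliasing=True)

# crop = mammogram[y0:y1, x0:x1]   # region exported from the 3D Slicer segmentation
# patch = to_128_box(crop)         # input patch for the previously trained CNN
```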