    A stronger baseline for automatic Pfirrmann grading of lumbar spine MRI using deep learning

    Abstract This paper addresses the challenge of grading visual features in lumbar spine MRI using deep learning. Such a method is essential for the automatic quantification of structural changes in the spine, which is valuable for understanding low back pain. Several recent studies have investigated different architecture designs, and the most recent successes have been attributed to transformer architectures. In this work, we argue that with a well-tuned three-stage pipeline comprising semantic segmentation, localization, and classification, convolutional networks outperform the state-of-the-art approaches. We conducted an ablation study of existing methods on a population cohort and report how performance generalizes across various subgroups. Our code is publicly available to advance research on disc degeneration and low back pain.
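    The three-stage pipeline the abstract describes can be sketched as follows. This is a minimal illustration of the stage composition only; the function names, the thresholding "segmentation", and the intensity-based "classification" heuristic are placeholder assumptions, not the authors' actual models.

    ```python
    # Hypothetical sketch of a three-stage Pfirrmann grading pipeline:
    # segmentation -> localization -> classification. All stages are toy
    # placeholders standing in for trained convolutional networks.

    def segment_spine(mri_volume):
        # Stage 1 (semantic segmentation): label each voxel as disc (1) or
        # background (0). Placeholder: simple intensity threshold.
        return [[1 if v > 0.5 else 0 for v in row] for row in mri_volume]

    def localize_discs(mask):
        # Stage 2 (localization): extract per-disc regions of interest.
        # Placeholder: return row indices that contain any disc voxels.
        return [i for i, row in enumerate(mask) if any(row)]

    def classify_pfirrmann(mri_volume, roi_rows):
        # Stage 3 (classification): map each ROI to a Pfirrmann grade (1-5).
        # Placeholder heuristic: grade from the mean intensity of the ROI row.
        grades = []
        for i in roi_rows:
            mean = sum(mri_volume[i]) / len(mri_volume[i])
            grades.append(min(5, max(1, round(mean * 5))))
        return grades

    # Toy 3x2 "sagittal slice"; real inputs would be full MRI volumes.
    volume = [[0.1, 0.2], [0.75, 0.75], [1.0, 0.5]]
    mask = segment_spine(volume)
    rois = localize_discs(mask)
    grades = classify_pfirrmann(volume, rois)
    print(grades)
    ```

    In a real system each placeholder would be a trained CNN, but the orchestration pattern is the same: each stage narrows the input for the next, so the classifier only ever sees localized disc regions.
    
    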

    External validation of SpineNet, an open-source deep learning model for grading lumbar disk degeneration MRI features, using the Northern Finland Birth Cohort 1966

    Abstract Study Design. This is a retrospective observational study to externally validate a deep learning image classification model.
    Objective. Deep learning models such as SpineNet offer the possibility of automating the classification of disk degeneration (DD) from magnetic resonance imaging (MRI). External validation is an essential step in their development. The aim of this study was to externally validate SpineNet predictions for DD, using the Pfirrmann classification and Modic changes (MCs), on data from the Northern Finland Birth Cohort 1966 (NFBC1966).
    Summary of Background Data. We validated SpineNet using data from 1331 NFBC1966 participants for whom both lumbar spine MRI data and consensus DD gradings were available.
    Materials and Methods. SpineNet returned Pfirrmann grade and MC presence from T2-weighted sagittal lumbar MRI sequences from NFBC1966, a data set geographically and temporally separated from its training data. A range of agreement and reliability metrics was used to compare predictions with expert radiologists. Subsets of the data that match the SpineNet training data more closely were also tested.
    Results. Balanced accuracy was 78% (77%–79%) for DD and 86% (85%–86%) for MC. Interrater reliability for Pfirrmann grading was Lin concordance correlation coefficient = 0.86 (0.85–0.87) and Cohen κ = 0.68 (0.67–0.69). In a low back pain subset, these reliability metrics remained largely unchanged. In total, 20.83% of disks were rated differently by SpineNet compared with the human raters, but only 0.85% of disks had a grade difference >1. Interrater reliability for MC detection was κ = 0.74 (0.72–0.75). In the low back pain subset, this metric was almost unchanged at κ = 0.76 (0.73–0.79).
    Conclusions. In this study, SpineNet has been benchmarked against expert human raters in the research setting. It matched human reliability and demonstrated robust performance despite the multiple challenges facing model generalizability.
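    The agreement statistics reported above (unweighted Cohen's κ, the fraction of disks rated differently, and the fraction with a grade gap >1) can be computed as sketched below. The example ratings are made up for illustration and are not the study's data.

    ```python
    # Sketch of the interrater agreement metrics: unweighted Cohen's kappa
    # between two raters, plus disagreement rates on ordinal grades.
    from collections import Counter

    def cohen_kappa(rater_a, rater_b):
        """Unweighted Cohen's kappa for two equal-length label sequences."""
        n = len(rater_a)
        # Observed agreement: fraction of items both raters labeled identically.
        observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
        # Expected agreement under independence of the raters' marginals.
        freq_a, freq_b = Counter(rater_a), Counter(rater_b)
        expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
        return (observed - expected) / (1 - expected)

    model  = [1, 2, 3, 3, 2]   # hypothetical model-predicted Pfirrmann grades
    expert = [1, 2, 3, 2, 2]   # hypothetical consensus radiologist grades

    kappa = cohen_kappa(model, expert)
    diffs = [abs(m - e) for m, e in zip(model, expert)]
    rated_differently = sum(d > 0 for d in diffs) / len(diffs)
    gap_over_one = sum(d > 1 for d in diffs) / len(diffs)
    print(kappa, rated_differently, gap_over_one)
    ```

    On the toy data the κ of 0.6875 sits in the "substantial agreement" band, mirroring the study's reported κ = 0.68 for Pfirrmann grading; the study additionally used Lin's concordance correlation coefficient, which this sketch does not implement.
    
    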