    A Deep Learning-Based Method for Automatic Segmentation of Proximal Femur from Quantitative Computed Tomography Images

    Purpose: Proximal femur image analyses based on quantitative computed tomography (QCT) provide a method to quantify bone density and evaluate osteoporosis and fracture risk. We aim to develop a deep-learning-based method for automatic proximal femur segmentation. Methods and Materials: We developed a 3D image segmentation method based on V-Net, an end-to-end fully convolutional neural network (CNN), to extract the proximal femur from QCT images automatically. The proposed V-Net method adopts a compound loss function, which includes a Dice loss and an L2 regularizer. We performed experiments to evaluate the effectiveness of the proposed segmentation method, using a QCT dataset of 397 subjects. For the QCT image of each subject, the ground truth for the proximal femur was delineated by a well-trained scientist. In the experiments, run for the entire cohort and then for male and female subjects separately, 90% of the subjects were used in 10-fold cross-validation for training, internal validation, and selection of the optimal parameters of the proposed models; the remaining subjects were used to evaluate model performance. Results: Visual comparison demonstrated high agreement between the model predictions and the ground-truth contours of the proximal femur in the QCT images. In the entire cohort, the proposed model achieved a Dice score of 0.9815, a sensitivity of 0.9852 and a specificity of 0.9992. In addition, an R2 score of 0.9956 (p < 0.001) was obtained when comparing the volumes measured from the model predictions with the ground truth. Conclusion: This method shows great promise for clinical application to QCT and QCT-based finite element analysis of the proximal femur for evaluating osteoporosis and hip fracture risk.
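
    As a rough illustration of the compound loss described above, here is a minimal PyTorch sketch combining a soft Dice term with an L2 penalty over the network weights. The smoothing constant `smooth` and the regularization weight `l2_weight` are illustrative assumptions; the paper does not report its exact values.

```python
import torch
import torch.nn as nn

def dice_l2_loss(probs: torch.Tensor, target: torch.Tensor,
                 model: nn.Module, l2_weight: float = 1e-5,
                 smooth: float = 1e-6) -> torch.Tensor:
    """Soft Dice loss plus an L2 penalty on the model weights.

    `probs` holds sigmoid outputs in [0, 1]; `target` is the binary
    ground-truth mask. `l2_weight` and `smooth` are assumed values,
    not taken from the paper.
    """
    p = probs.flatten(1)   # (batch, voxels)
    t = target.flatten(1)
    intersection = (p * t).sum(dim=1)
    dice = (2 * intersection + smooth) / (p.sum(dim=1) + t.sum(dim=1) + smooth)
    dice_loss = 1.0 - dice.mean()
    # L2 regularizer over all trainable weights (weight-decay style).
    l2 = sum(w.pow(2).sum() for w in model.parameters() if w.requires_grad)
    return dice_loss + l2_weight * l2
```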

    ST-V-Net: incorporating shape prior into convolutional neural networks for proximal femur segmentation

    We aim to develop a deep-learning-based method for automatic proximal femur segmentation in quantitative computed tomography (QCT) images. We propose a spatial transformation V-Net (ST-V-Net), which combines a V-Net with a spatial transform network (STN) to extract the proximal femur from QCT images. The STN incorporates a shape prior into the segmentation network as a constraint and guidance for model training, which improves model performance and accelerates model convergence. In addition, a multi-stage training strategy is adopted to fine-tune the weights of the ST-V-Net. We performed experiments on a QCT dataset of 397 subjects. In the experiments, run for the entire cohort and then for male and female subjects separately, 90% of the subjects were used in ten-fold stratified cross-validation for training, and the rest were used to evaluate model performance. In the entire cohort, the proposed model achieved a Dice similarity coefficient (DSC) of 0.9888, a sensitivity of 0.9966 and a specificity of 0.9988. Compared with V-Net, the proposed ST-V-Net reduced the Hausdorff distance from 9.144 to 5.917 mm and the average surface distance from 0.012 to 0.009 mm. Quantitative evaluation demonstrated excellent performance of the proposed ST-V-Net for automatic proximal femur segmentation in QCT images. The results also suggest that incorporating a shape prior into segmentation can further improve model performance.
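
    For context on the surface metrics reported above, the sketch below computes a symmetric Hausdorff distance and average surface distance between two binary masks using SciPy distance transforms. The erosion-based surface definition and the `spacing` default are assumptions; the authors' evaluation code is not described.

```python
import numpy as np
from scipy import ndimage

def _surface(mask: np.ndarray) -> np.ndarray:
    # Surface voxels: foreground voxels removed by a one-voxel erosion.
    mask = mask.astype(bool)
    return mask & ~ndimage.binary_erosion(mask)

def hausdorff_and_asd(pred: np.ndarray, gt: np.ndarray,
                      spacing=(1.0, 1.0, 1.0)):
    """Symmetric Hausdorff distance and average surface distance
    between two binary masks (assumes both have non-empty surfaces)."""
    pred_surf, gt_surf = _surface(pred), _surface(gt)
    # Euclidean distance of every voxel to the nearest surface voxel of
    # the other mask (distance transform of the surface's complement).
    dt_gt = ndimage.distance_transform_edt(~gt_surf, sampling=spacing)
    dt_pred = ndimage.distance_transform_edt(~pred_surf, sampling=spacing)
    d_pred_to_gt = dt_gt[pred_surf]    # pred surface -> gt surface
    d_gt_to_pred = dt_pred[gt_surf]    # gt surface -> pred surface
    hausdorff = max(d_pred_to_gt.max(), d_gt_to_pred.max())
    asd = ((d_pred_to_gt.sum() + d_gt_to_pred.sum())
           / (d_pred_to_gt.size + d_gt_to_pred.size))
    return hausdorff, asd
```

    Passing the scan's voxel size (in mm) as `spacing` yields distances in millimetres, matching the units reported above.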

    Fast and Robust Femur Segmentation from Computed Tomography Images for Patient-Specific Hip Fracture Risk Screening

    Osteoporosis is a common bone disease that increases the risk of bone fracture. Hip-fracture risk screening methods based on finite element analysis depend on segmented computed tomography (CT) images; however, current femur segmentation methods require manual delineation of large data sets. Here we propose a deep neural network for fully automated, accurate, and fast segmentation of the proximal femur from CT. Evaluation on a set of 1147 proximal femurs with ground-truth segmentations demonstrates that our method is apt for hip-fracture risk screening, bringing us one step closer to a clinically viable option for screening at-risk patients for hip-fracture susceptibility. Comment: This article has been accepted for publication in Computer Methods in Biomechanics and Biomedical Engineering: Imaging & Visualization, published by Taylor & Francis.
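
    Several entries in this listing evaluate predictions against ground-truth segmentations using overlap metrics (Dice, sensitivity, specificity). As background, here is a minimal NumPy sketch of those metrics for binary masks; it is not any of the authors' evaluation code.

```python
import numpy as np

def overlap_metrics(pred: np.ndarray, gt: np.ndarray):
    """Dice score, sensitivity, and specificity for two binary masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    tp = np.count_nonzero(pred & gt)     # true positives
    fp = np.count_nonzero(pred & ~gt)    # false positives
    fn = np.count_nonzero(~pred & gt)    # false negatives
    tn = np.count_nonzero(~pred & ~gt)   # true negatives
    dice = 2 * tp / (2 * tp + fp + fn)
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return dice, sensitivity, specificity
```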

    Advances in FAI Imaging: a Focused Review

    Purpose of review: Femoroacetabular impingement (FAI) is one of the main causes of hip pain in young adults and poses clinical challenges which have placed it at the forefront of imaging and orthopedics. Diagnostic hip imaging has changed dramatically in recent years, with the arrival of new imaging techniques and the development of magnetic resonance imaging (MRI). This article reviews the current state-of-the-art clinical routine for individuals with suspected FAI, its limitations, and future directions that show promise in the field of musculoskeletal research and are likely to reshape hip imaging in the coming years. Recent findings: The largely unknown natural disease course, especially in hips with FAI syndrome and those with asymptomatic abnormal morphologies, continues to be a problem for diagnosis, treatment, and prognosis. There has been a paradigm shift in recent years from bone and soft-tissue morphological analysis towards the tentative development of quantitative approaches, biochemical cartilage evaluation, dynamic assessment techniques and, finally, integration of artificial intelligence (AI)/deep learning systems. Imaging, AI, and hip-preserving care will continue to evolve with new problems and greater challenges. The increasing number of analytic parameters describing the hip joint, as well as new sophisticated MRI and imaging analysis, have carried practitioners beyond simplistic classifications. Reliable evidence-based guidelines, beyond differentiation into pure instability or impingement, are paramount to refine the diagnostic algorithm and define treatment indications and prognosis. Nevertheless, the boundaries of morphological, functional, and AI-aided hip assessment are gradually being pushed to new frontiers as the role of musculoskeletal imaging rapidly evolves.