26 research outputs found
A Deep Learning-Based Method for Automatic Segmentation of Proximal Femur from Quantitative Computed Tomography Images
Purpose: Proximal femur image analyses based on quantitative computed
tomography (QCT) provide a method to quantify the bone density and evaluate
osteoporosis and risk of fracture. We aim to develop a deep-learning-based
method for automatic proximal femur segmentation. Methods and Materials: We
developed a 3D image segmentation method based on V-Net, an end-to-end fully
convolutional neural network (CNN), to extract the proximal femur QCT images
automatically. The proposed V-Net methodology adopts a compound loss function
that combines a Dice loss and an L2 regularizer. We performed experiments to
evaluate the effectiveness of the proposed segmentation method. In the
experiments, a QCT dataset comprising 397 subjects was used. For the
QCT image of each subject, the ground truth for the proximal femur was
delineated by a well-trained scientist. In experiments for the entire cohort
and then for male and female subjects separately, 90% of the subjects were
used in 10-fold cross-validation for training and internal validation, and to
select the optimal parameters of the proposed models; the rest of the subjects
were used to evaluate the performance of models. Results: Visual comparison
demonstrated high agreement between the model prediction and ground truth
contours of the proximal femur portion of the QCT images. In the entire cohort,
the proposed model achieved a Dice score of 0.9815, a sensitivity of 0.9852 and
a specificity of 0.9992. In addition, an R2 score of 0.9956 (p<0.001) was
obtained when comparing the volumes measured by our model prediction with the
ground truth. Conclusion: This method shows great promise for clinical
application to QCT and QCT-based finite element analysis of the proximal femur
for evaluating osteoporosis and hip fracture risk.
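The compound loss described above (a Dice loss plus an L2 regularizer) can be sketched as follows. This is a minimal NumPy illustration, not the paper's implementation; the regularization strength `lam` and the flattened-array interface are assumptions.

```python
import numpy as np

def dice_loss(pred, target, eps=1e-6):
    """Soft Dice loss between a predicted probability map and a binary mask."""
    inter = np.sum(pred * target)
    return 1.0 - (2.0 * inter + eps) / (np.sum(pred) + np.sum(target) + eps)

def compound_loss(pred, target, weights, lam=1e-4):
    """Dice loss plus an L2 penalty over the model weights.

    `lam` is a hypothetical regularization strength; the abstract does not
    state the value used.
    """
    l2 = sum(np.sum(w ** 2) for w in weights)
    return dice_loss(pred, target) + lam * l2
```

A perfect prediction drives the Dice term to zero, leaving only the weight penalty, which discourages overfitting.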
The International Workshop on Osteoarthritis Imaging Knee MRI Segmentation Challenge: A Multi-Institute Evaluation and Analysis Framework on a Standardized Dataset
Purpose: To organize a knee MRI segmentation challenge for characterizing the
semantic and clinical efficacy of automatic segmentation methods relevant for
monitoring osteoarthritis progression.
Methods: A dataset partition consisting of 3D knee MRI from 88 subjects at
two timepoints with ground-truth articular (femoral, tibial, patellar)
cartilage and meniscus segmentations was standardized. Challenge submissions
and a majority-vote ensemble were evaluated using Dice score, average symmetric
surface distance, volumetric overlap error, and coefficient of variation on a
hold-out test set. Similarities in network segmentations were evaluated using
pairwise Dice correlations. Articular cartilage thickness was computed per-scan
and longitudinally. Correlation between thickness error and segmentation
metrics was measured using Pearson's coefficient. Two empirical upper bounds
for ensemble performance were computed using combinations of model outputs that
consolidated true positives and true negatives.
Results: Six teams (T1-T6) submitted entries for the challenge. No
significant differences were observed across all segmentation metrics for all
tissues (p=1.0) among the four top-performing networks (T2, T3, T4, T6). Dice
correlations between network pairs were high (>0.85). Per-scan thickness errors
were negligible among T1-T4 (p=0.99) and longitudinal changes showed minimal
bias (<0.03mm). Low correlations (<0.41) were observed between segmentation
metrics and thickness error. The majority-vote ensemble was comparable to top
performing networks (p=1.0). Empirical upper bound performances were similar
for both combinations (p=1.0).
Conclusion: Diverse networks learned to segment the knee similarly where high
segmentation accuracy did not correlate to cartilage thickness accuracy. Voting
ensembles did not outperform individual networks but may help regularize
individual models.
Comment: Submitted to Radiology: Artificial Intelligence
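The majority-vote ensemble and the pairwise Dice overlap used to compare networks in this challenge can be sketched as below. This is a minimal illustration under assumed binary-mask inputs, not the challenge's evaluation code.

```python
import numpy as np

def majority_vote(masks):
    """Keep voxels predicted foreground by more than half of the models."""
    votes = np.stack([m.astype(np.uint8) for m in masks]).sum(axis=0)
    return (votes > len(masks) / 2).astype(np.uint8)

def dice(a, b, eps=1e-6):
    """Dice overlap between two binary masks."""
    inter = np.logical_and(a, b).sum()
    return (2.0 * inter + eps) / (a.sum() + b.sum() + eps)
```

With highly similar networks (pairwise Dice > 0.85, as reported), voting mostly reproduces any single member's prediction, which is consistent with the ensemble not outperforming individual networks.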
Semi-Automated Labelling of Cystoid Macular Edema in OCT Scans
The analysis of retinal Spectral-Domain Optical Coherence Tomography (SD-OCT)
images by trained medical professionals can be used to provide useful insights into
various diseases. It is the most popular method of retinal imaging due to its non-invasive
nature and the useful information it provides for making an accurate diagnosis. A deep
learning approach for automating the segmentation of Cystoid Macular Edema (fluid)
in retinal OCT B-Scan images was developed that is consequently used for volumetric
analysis of OCT scans. This solution is a fast and accurate semantic segmentation network
that makes use of a shortened encoder-decoder UNet like architecture with an integrated
Dense ASPP module and Attention Gate for producing an accurate and refined retinal
fluid segmentation map. Our system was evaluated against both publicly and privately
available datasets; on the former the network achieved a Dice coefficient of 0.804, thus
making it the current best performing approach on this dataset, and on the very small
and challenging private dataset, it achieved a score of 0.691. Due to the lack of publicly
available data in this domain, a Graphical User Interface that aims to semi-automate the
labelling of OCT images was also created, greatly simplifying dataset creation and
potentially increasing the production of labelled data.
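The volumetric analysis step, converting a binary fluid segmentation into a physical volume, can be sketched as below. The voxel spacing values in the usage note are hypothetical, since OCT spacing varies by device.

```python
import numpy as np

def fluid_volume_mm3(mask, spacing_mm):
    """Fluid volume as voxel count times per-voxel volume.

    `mask` is a 3D binary array (B-scans, rows, cols); `spacing_mm` gives
    the voxel size along each axis in millimetres (device-dependent).
    """
    return float(mask.sum()) * float(np.prod(spacing_mm))
```

For example, 1,000 segmented voxels at an assumed spacing of 0.1 x 0.01 x 0.01 mm would correspond to 0.01 mm^3 of fluid.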
ST-V-Net: Incorporating Shape Prior into Convolutional Neural Networks for Proximal Femur Segmentation
We aim to develop a deep-learning-based method for automatic proximal femur segmentation in quantitative computed tomography (QCT) images. We propose a spatial transformation V-Net (ST-V-Net), which contains a V-Net and a spatial transform network (STN) to extract the proximal femur from QCT images. The STN incorporates a shape prior into the segmentation network as a constraint and guidance for model training, which improves model performance and accelerates model convergence. Meanwhile, a multi-stage training strategy is adopted to fine-tune the weights of the ST-V-Net. We performed experiments using a QCT dataset comprising 397 subjects. In experiments for the entire cohort and then for male and female subjects separately, 90% of the subjects were used in ten-fold stratified cross-validation for training, and the rest of the subjects were used to evaluate the performance of models. In the entire cohort, the proposed model achieved a Dice similarity coefficient (DSC) of 0.9888, a sensitivity of 0.9966 and a specificity of 0.9988. Compared with V-Net, the Hausdorff distance was reduced from 9.144 to 5.917 mm, and the average surface distance was reduced from 0.012 to 0.009 mm using the proposed ST-V-Net. Quantitative evaluation demonstrated excellent performance of the proposed ST-V-Net for automatic proximal femur segmentation in QCT images. In addition, the proposed ST-V-Net sheds light on incorporating a shape prior into segmentation to further improve model performance.
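The Hausdorff and average surface distance metrics reported above can be computed from two sets of surface points roughly as follows. This is a brute-force sketch; production pipelines would first extract surface voxels from the masks and typically use spatial indexing for speed.

```python
import numpy as np

def _nearest_distances(a, b):
    """For each point in a, the Euclidean distance to its nearest point in b."""
    d = np.sqrt(((a[:, None, :] - b[None, :, :]) ** 2).sum(axis=-1))
    return d.min(axis=1)

def hausdorff_distance(a, b):
    """Symmetric Hausdorff distance: the worst-case nearest-neighbour gap."""
    return max(_nearest_distances(a, b).max(), _nearest_distances(b, a).max())

def average_surface_distance(a, b):
    """Mean of all nearest-neighbour distances, taken in both directions."""
    da, db = _nearest_distances(a, b), _nearest_distances(b, a)
    return (da.sum() + db.sum()) / (len(da) + len(db))
```

The Hausdorff distance is sensitive to single outlier points, while the average surface distance summarizes overall boundary agreement, which is why papers such as this one report both.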