Automating Cobb Angle Measurement for Adolescent Idiopathic Scoliosis using Instance Segmentation
Scoliosis is a three-dimensional deformity of the spine, most often diagnosed
in childhood. It affects 2-3% of the population, which is approximately seven
million people in North America. Currently, the reference standard for
assessing scoliosis is based on the manual assignment of Cobb angles at the
site of the curvature. This manual process is time-consuming and unreliable,
as it is subject to inter- and intra-observer variance. To overcome
these inaccuracies, machine learning (ML) methods can be used to automate the
Cobb angle measurement process. This paper proposes to address the Cobb angle
measurement task with YOLACT, an instance segmentation model. The proposed
method first segments the vertebrae in an X-ray image using YOLACT, then
locates the key landmarks using a minimum-bounding-box approach. Lastly,
the extracted landmarks are used to calculate the corresponding Cobb angles.
The model achieved a Symmetric Mean Absolute Percentage Error (SMAPE) score of
10.76%, demonstrating the reliability of this approach for both vertebra
localization and Cobb angle measurement.
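The pipeline described above (corner landmarks from bounding boxes, endplate slopes, angle difference, SMAPE evaluation) can be sketched as follows. This is an illustration, not the paper's implementation: the landmark layout, the use of upper endplates only, and the SMAPE variant are assumptions, and the exact Cobb definition in the paper may differ.

```python
import numpy as np

def endplate_angles(landmarks):
    """Slope (degrees) of each vertebra's upper endplate.

    `landmarks` is an (N, 4, 2) array of corner points per vertebra in
    the order (top-left, top-right, bottom-left, bottom-right), e.g.
    taken from minimum bounding boxes of the segmentation masks.
    (Hypothetical layout for illustration.)
    """
    top_left, top_right = landmarks[:, 0], landmarks[:, 1]
    dx = top_right[:, 0] - top_left[:, 0]
    dy = top_right[:, 1] - top_left[:, 1]
    return np.degrees(np.arctan2(dy, dx))

def cobb_angle(landmarks):
    """Largest angle between any two endplate slopes: a simplified
    proxy for the Cobb angle of the main curve."""
    angles = endplate_angles(landmarks)
    return float(np.max(angles) - np.min(angles))

def smape(pred, true):
    """One common SMAPE formulation; the challenge metric may use a
    slightly different normalization."""
    pred = np.asarray(pred, dtype=float)
    true = np.asarray(true, dtype=float)
    return 100.0 * np.mean(np.abs(pred - true) /
                           ((np.abs(pred) + np.abs(true)) / 2.0))
```

For example, a curve whose most tilted vertebrae have endplate slopes of 0° and 45° yields a 45° angle under this simplified definition.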
Unsupervised domain adaptation for vertebrae detection and identification in 3D CT volumes using a domain sanity loss
A variety of medical computer vision applications analyze 2D slices of computed tomography (CT) scans, whereas axial slices from the body trunk region are usually identified based on their relative position to the spine. A limitation of such systems is that either the correct slices must be extracted manually or labels of the vertebrae are required for each CT scan to develop an automated extraction system. In this paper, we propose an unsupervised domain adaptation (UDA) approach for vertebrae detection and identification based on a novel Domain Sanity Loss (DSL) function. With UDA, the model’s knowledge learned on a publicly available (source) data set can be transferred to the target domain without using target labels, where the target domain is defined by the specific setup (CT modality, study protocols, applied pre- and post-processing) at the point of use (e.g., a specific clinic with its specific CT study protocols). With our approach, a model is trained on the source and target data sets in parallel. The model optimizes a supervised loss for labeled samples from the source domain and the DSL function based on domain-specific “sanity checks” for samples from the unlabeled target domain. Without using labels from the target domain, we are able to identify vertebra centroids with an accuracy of 72.8%. By adding only ten target labels during training, the accuracy increases to 89.2%, which is on par with the current state of the art for fully supervised learning while using about 20 times fewer labels. Thus, our model can be used to extract 2D slices from 3D CT scans on arbitrary data sets fully automatically without requiring an extensive labeling effort, contributing to the clinical adoption of medical imaging in hospitals.
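The idea of a loss built from domain-specific "sanity checks" can be sketched as below. This is a hypothetical illustration of the concept, not the paper's actual DSL: the particular checks (anatomical ordering of vertebra centroids along the cranio-caudal axis and plausible inter-vertebral spacing), the threshold values, and the weighting scheme are all invented for this sketch.

```python
import numpy as np

def domain_sanity_loss(centroids_z, min_gap=10.0, max_gap=40.0):
    """Penalize implausible vertebra centroid predictions on unlabeled
    target samples (illustrative checks; thresholds in mm are assumed):
    centroids must be ordered along the cranio-caudal axis, and the
    spacing between consecutive centroids must be anatomically plausible.
    """
    gaps = np.diff(np.asarray(centroids_z, dtype=float))
    # Ordering check: negative gaps mean out-of-order centroids.
    order_penalty = np.sum(np.maximum(0.0, -gaps))
    # Spacing check: gaps outside [min_gap, max_gap] are penalized linearly.
    spacing_penalty = (np.sum(np.maximum(0.0, min_gap - gaps)) +
                       np.sum(np.maximum(0.0, gaps - max_gap)))
    return float(order_penalty + spacing_penalty)

def total_loss(supervised_loss, target_centroids_z, lam=1.0):
    """Joint objective in the spirit of the paper: a supervised loss on
    labeled source samples plus the sanity loss on unlabeled target
    samples, weighted by a hyperparameter `lam` (assumed)."""
    return supervised_loss + lam * domain_sanity_loss(target_centroids_z)
```

In an actual training loop, both terms would be computed per batch and backpropagated through the detection model; here plain NumPy stands in for a differentiable framework to keep the sketch self-contained.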