Fully Automatic Segmentation of Lumbar Vertebrae from CT Images using Cascaded 3D Fully Convolutional Networks
We present a method to address the challenging problem of segmentation of
lumbar vertebrae from CT images acquired with varying fields of view. Our
method is based on cascaded 3D Fully Convolutional Networks (FCNs) consisting
of a localization FCN and a segmentation FCN. More specifically, in the first
step we train a regression 3D FCN (we call it "LocalizationNet") to find the
bounding box of the lumbar region. In the second step, a 3D U-Net-like FCN (we call it "SegmentationNet") is trained to perform pixel-wise multi-class segmentation, mapping the cropped lumbar-region volume to its volume-wise labels. Evaluated on publicly available datasets, our method achieved an average Dice coefficient of 95.77 ± 0.81% and an average symmetric surface distance of 0.37 ± 0.06 mm.
Comment: 5 pages and 5 figures
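As a concrete reference for the headline metric above, the Dice coefficient compares two binary masks as 2|A ∩ B| / (|A| + |B|). A minimal NumPy sketch (not the authors' evaluation code):

```python
import numpy as np

def dice_coefficient(pred, target):
    """Dice = 2|A ∩ B| / (|A| + |B|) for two binary volumetric masks."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    denom = pred.sum() + target.sum()
    if denom == 0:
        return 1.0  # both masks empty: perfect agreement by convention
    return 2.0 * intersection / denom

# Toy example: two overlapping binary "vertebra" masks
pred = np.zeros((4, 4, 4), dtype=np.uint8)
target = np.zeros((4, 4, 4), dtype=np.uint8)
pred[1:3, 1:3, 1:3] = 1    # 8 voxels
target[1:4, 1:3, 1:3] = 1  # 12 voxels, 8 of them overlapping
print(dice_coefficient(pred, target))  # 2*8 / (8+12) = 0.8
```

In multi-class vertebra segmentation this is typically computed per label and then averaged, which is how a figure like 95.77% is usually obtained.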
Automatic Segmentation, Localization, and Identification of Vertebrae in 3D CT Images Using Cascaded Convolutional Neural Networks
This paper presents a method for automatic segmentation, localization, and
identification of vertebrae in arbitrary 3D CT images. Many previous works do not perform the three tasks simultaneously and still require a priori knowledge of which part of the anatomy is visible in the 3D CT images. Our method tackles all three tasks in a single multi-stage framework without any such assumptions. In the first stage, we train a 3D Fully Convolutional Network (FCN) to find the bounding boxes of the cervical, thoracic, and lumbar vertebrae. In the second stage, we train an iterative 3D FCN to segment individual vertebrae within the bounding box. The input to the second network has an auxiliary channel in addition to the 3D CT image: given the previously segmented vertebra region in the auxiliary channel, the network outputs the next vertebra. The proposed method is evaluated in terms of segmentation,
localization, and identification accuracy with two public datasets of 15 3D CT
images from the MICCAI CSI 2014 workshop challenge and 302 3D CT images with
various pathologies introduced in [1]. Our method achieved a mean Dice score of
96%, a mean localization error of 8.3 mm, and a mean identification rate of
84%. In summary, our method achieved better performance than all existing works in all three metrics.
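The auxiliary-channel iteration described above can be sketched as a loop: each step feeds the CT plus the mask of the previously segmented vertebra, and the network returns the next one. A toy NumPy illustration, where a trivial stub stands in for the trained FCN (the shift-by-one-slice behavior is purely illustrative, not the paper's network):

```python
import numpy as np

def segment_next_vertebra(ct, prev_mask):
    """Stub standing in for the iterative FCN: given the CT and the mask of
    the previously segmented vertebra (the auxiliary channel), return the
    mask of the next vertebra. Here we just place/shift a toy mask by one
    axial slice to make the iteration concrete."""
    if prev_mask.sum() == 0:
        out = np.zeros_like(ct)
        out[0] = 1  # first vertebra: topmost slice
        return out
    return np.roll(prev_mask, 1, axis=0)  # toy: next vertebra one slice down

ct = np.ones((5, 2, 2))        # toy CT volume with 5 axial slices
prev = np.zeros_like(ct)       # empty auxiliary channel to start
labels = np.zeros_like(ct, dtype=int)
for k in range(1, 4):          # segment three vertebrae in sequence
    mask = segment_next_vertebra(ct, prev)
    labels[mask.astype(bool)] = k  # per-vertebra instance label
    prev = mask                    # becomes the next auxiliary channel
```

The key design choice this mirrors is that the auxiliary channel carries anatomical context forward, so the same network can be applied repeatedly down the spine without knowing in advance how many vertebrae are visible.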
SpineCLUE: Automatic Vertebrae Identification Using Contrastive Learning and Uncertainty Estimation
Vertebrae identification in arbitrary fields-of-view plays a crucial role in
diagnosing spine disease. Most spine CT scans cover only a local region, such as the neck, chest, or abdomen. Therefore, identification should not depend on specific vertebrae or a particular number of vertebrae being visible. Existing spine-level methods are unable to meet this challenge. In this paper, we
propose a three-stage method to address the challenges in 3D CT vertebrae
identification at vertebrae-level. By sequentially performing the tasks of
vertebrae localization, segmentation, and identification, the anatomical prior
information of the vertebrae is effectively utilized throughout the process.
Specifically, we introduce a dual-factor density clustering algorithm to acquire localization information for each individual vertebra, thereby facilitating the subsequent segmentation and identification processes. In addition, to tackle the issue of inter-class similarity and intra-class variability, we pre-train our identification network using a supervised contrastive learning method. To further optimize the identification results, we estimate the uncertainty of the classification network and utilize a message fusion module to combine the uncertainty scores while aggregating global information about the spine.
Our method achieves state-of-the-art results on the VerSe19 and VerSe20
challenge benchmarks. Additionally, our approach demonstrates outstanding
generalization performance on a collected dataset containing a wide range of abnormal cases.
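Supervised contrastive pre-training of the kind mentioned above pulls embeddings of same-class vertebrae together and pushes different classes apart. A minimal NumPy sketch of the supervised contrastive loss (Khosla et al.), assumed here as the loss family the paper uses rather than its exact implementation:

```python
import numpy as np

def supcon_loss(features, labels, tau=0.1):
    """Supervised contrastive loss over L2-normalized embeddings.
    For each anchor, positives are all other samples with the same label."""
    z = features / np.linalg.norm(features, axis=1, keepdims=True)
    sim = z @ z.T / tau                         # temperature-scaled similarities
    n = len(labels)
    mask_self = np.eye(n, dtype=bool)
    logits = np.where(mask_self, -np.inf, sim)  # exclude self-comparisons
    log_den = np.log(np.exp(logits).sum(axis=1))
    pos = (labels[:, None] == labels[None, :]) & ~mask_self
    losses = []
    for i in range(n):
        if pos[i].any():  # skip anchors with no positive in the batch
            losses.append(-np.mean(sim[i, pos[i]] - log_den[i]))
    return float(np.mean(losses))
```

With embeddings clustered by class the loss is near zero; with labels scrambled against the geometry it grows large, which is exactly the signal that helps separate visually similar adjacent vertebrae.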
Three-dimensional Segmentation of the Scoliotic Spine from MRI using Unsupervised Volume-based MR-CT Synthesis
Vertebral bone segmentation from magnetic resonance (MR) images is a
challenging task. Because the modality inherently emphasizes the soft tissues of the body, common thresholding algorithms are ineffective at detecting bones in MR images. On the other hand, it is much easier to
segment bones from CT images because of the high contrast between bones and the
surrounding regions. For this reason, we perform a cross-modality synthesis
between MR and CT domains for simple thresholding-based segmentation of the
vertebral bones. However, this implicitly assumes the availability of paired
MR-CT data, which is rare, especially in the case of scoliotic patients. In
this paper, we present a completely unsupervised, fully three-dimensional (3D)
cross-modality synthesis method for segmenting scoliotic spines. A 3D CycleGAN
model is trained for an unpaired volume-to-volume translation across MR and CT
domains. Then, the Otsu thresholding algorithm is applied to the synthesized CT
volumes for easy segmentation of the vertebral bones. The resulting
segmentation is used to reconstruct a 3D model of the spine. We validate our
method on 28 scoliotic vertebrae in 3 patients by computing the
point-to-surface mean distance between the landmark points for each vertebra
obtained from pre-operative X-rays and the surface of the segmented vertebra.
Our study results in a mean error of 3.41 ± 1.06 mm. Based on qualitative
and quantitative results, we conclude that our method is able to obtain a good
segmentation and 3D reconstruction of scoliotic spines, all after training from
unpaired data in an unsupervised manner.
Comment: To appear in the Proceedings of the SPIE Medical Imaging Conference 2021, San Diego, CA. 9 pages, 4 figures in total
Multi-View Vertebra Localization and Identification from CT Images
Accurately localizing and identifying vertebrae from CT images is crucial for
various clinical applications. However, most existing efforts operate on 3D volumes with a patch-cropping strategy, suffering from large computation costs and limited global information. In this paper, we propose a multi-view vertebra localization and identification method for CT images, converting the 3D problem into a 2D localization and identification task on different views. Without the
limitation of the 3D cropped patch, our method can learn the multi-view global
information naturally. Moreover, to better capture the anatomical structure
information from different view perspectives, a multi-view contrastive learning
strategy is developed to pre-train the backbone. Additionally, we further
propose a Sequence Loss to maintain the sequential structure embedded along the
vertebrae. Evaluation results demonstrate that, with only two 2D networks, our
method can localize and identify vertebrae in CT images accurately, and
outperforms the state-of-the-art methods consistently. Our code is available at
https://github.com/ShanghaiTech-IMPACT/Multi-View-Vertebra-Localization-and-Identification-from-CT-Images.
Comment: MICCAI 2023
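The core idea of converting the 3D problem into 2D views can be illustrated by collapsing a CT volume along different horizontal axes. A crude NumPy stand-in for the paper's view generation (the actual method presumably renders DRR-like projections at many angles; the mean-intensity orthogonal projections below are an illustrative assumption):

```python
import numpy as np

def orthogonal_projections(volume):
    """Collapse a 3D CT volume indexed (z, y, x) into two 2D views by
    averaging intensities along each horizontal axis: a coronal view
    (average over y) and a sagittal view (average over x)."""
    coronal = volume.mean(axis=1)   # shape (z, x)
    sagittal = volume.mean(axis=2)  # shape (z, y)
    return coronal, sagittal

vol = np.zeros((6, 5, 4))
vol[2, :, :] = 1.0  # a bright "vertebra" slab at height z = 2
cor, sag = orthogonal_projections(vol)
```

Each 2D view preserves the full craniocaudal extent of the spine, which is what lets 2D networks see global context that a cropped 3D patch cannot.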