
    Lumbar spine segmentation in MR images: a dataset and a public benchmark

    This paper presents a large, publicly available multi-center lumbar spine magnetic resonance imaging (MRI) dataset with reference segmentations of vertebrae, intervertebral discs (IVDs), and spinal canal. The dataset includes 447 sagittal T1 and T2 MRI series from 218 patients with a history of low back pain. It was collected from four different hospitals and divided into a training set (179 patients) and a validation set (39 patients). An iterative data annotation approach was used: a segmentation algorithm was trained on a small part of the dataset, enabling semi-automatic segmentation of the remaining images. The algorithm provided an initial segmentation, which was subsequently reviewed, manually corrected, and added to the training data. We provide reference performance values for this baseline algorithm and for nnU-Net, which performed comparably. We set up a continuous segmentation challenge to allow a fair comparison of different segmentation algorithms. This study may encourage wider collaboration in the field of spine segmentation and improve the diagnostic value of lumbar spine MRI.
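Segmentation challenges like this one typically rank entries by volumetric overlap. As a minimal sketch (illustrative code, not the challenge's actual evaluation script), the Dice score between a predicted and a reference binary mask can be computed as:

```python
import numpy as np

def dice_score(pred: np.ndarray, ref: np.ndarray) -> float:
    """Dice overlap between two binary masks (1.0 = perfect agreement)."""
    pred = pred.astype(bool)
    ref = ref.astype(bool)
    denom = pred.sum() + ref.sum()
    if denom == 0:
        return 1.0  # both masks empty: define as perfect agreement
    return 2.0 * np.logical_and(pred, ref).sum() / denom

# Toy example: two overlapping square "structure" masks on a small grid
a = np.zeros((10, 10), dtype=bool); a[2:6, 2:6] = True  # 16 pixels
b = np.zeros((10, 10), dtype=bool); b[3:7, 3:7] = True  # 16 pixels
print(round(dice_score(a, b), 4))  # 2*9/32 = 0.5625
```

The same computation extends unchanged to 3D volumes; per-structure scores (vertebrae, IVDs, spinal canal) follow from binarizing one label at a time.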

    Benchmarking Encoder-Decoder Architectures for Biplanar X-ray to 3D Shape Reconstruction

    Various deep learning models have been proposed for 3D bone shape reconstruction from two orthogonal (biplanar) X-ray images. However, it is unclear how these models compare against each other, since they are evaluated on different anatomies, cohorts, and (often privately held) datasets. Moreover, the impact of commonly optimized image-based segmentation metrics, such as the Dice score, on the estimation of clinical parameters relevant to 2D-3D bone shape reconstruction is not well known. To move closer toward clinical translation, we propose a benchmarking framework that evaluates tasks relevant to real-world clinical scenarios, including reconstruction of fractured bones, bones with implants, robustness to population shift, and error in estimating clinical parameters. Our open-source platform provides reference implementations of 8 models (many of whose implementations were not publicly available), APIs to easily collect and preprocess 6 public datasets, and implementations of automatic clinical parameter and landmark extraction methods. We present an extensive evaluation of the 8 2D-3D models on equal footing using the 6 public datasets, comprising images of four different anatomies. Our results show that attention-based methods capturing global spatial relationships tend to perform better across all anatomies and datasets; that performance on clinically relevant subgroups may be overestimated without disaggregated reporting; that ribs are substantially more difficult to reconstruct than femur, hip, and spine; and that a Dice score improvement does not always bring a corresponding improvement in the automatic estimation of clinically relevant parameters. Comment: accepted to NeurIPS 202
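The subgroup finding above comes down to how metrics are aggregated: a single pooled mean can hide weak performance on a small clinical subgroup. A toy sketch of disaggregated reporting (all scores and group labels are hypothetical):

```python
import numpy as np

# Hypothetical per-case reconstruction scores with a clinical subgroup label
scores = np.array([0.92, 0.90, 0.91, 0.70, 0.68, 0.72])
groups = np.array(["intact", "intact", "intact",
                   "fractured", "fractured", "fractured"])

overall = scores.mean()
per_group = {g: scores[groups == g].mean()
             for g in np.unique(groups).tolist()}

print(round(overall, 3))  # 0.805 -- looks fine in aggregate
print({g: round(m, 2) for g, m in sorted(per_group.items())})
# {'fractured': 0.7, 'intact': 0.91} -- the fractured subgroup is much weaker
```

Reporting only the pooled 0.805 would overstate performance on exactly the cases (fractures, implants) that matter clinically.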

    Differentiation of benign and malignant vertebral fractures using a convolutional neural network to extract CT-based texture features.

    PURPOSE To assess the diagnostic performance of three-dimensional (3D) CT-based texture features (TFs), extracted using a convolutional neural network (CNN)-based framework, for differentiating benign (osteoporotic) and malignant vertebral fractures (VFs). METHODS A total of 409 patients who underwent routine thoracolumbar spine CT at two institutions were included. VFs were categorized as benign or malignant using either biopsy or imaging follow-up of at least three months as the standard of reference. Automated detection, labelling, and segmentation of the vertebrae were performed using a CNN-based framework ( https://anduin.bonescreen.de ). Eight TFs were extracted: Variance_global, Skewness_global, energy, entropy, short-run emphasis (SRE), long-run emphasis (LRE), run-length non-uniformity (RLN), and run percentage (RP). Multivariate regression models adjusted for age and sex were used to compare TFs between benign and malignant VFs. RESULTS Skewness_global showed a significant difference between the two groups when analyzing fractured vertebrae from T1 to L6 (benign fracture group: 0.70 [0.64-0.76]; malignant fracture group: 0.59 [0.56-0.63]; p = 0.017), indicating higher skewness in benign VFs than in malignant VFs. CONCLUSION Three-dimensional CT-based global TF skewness, assessed using a CNN-based framework, showed a significant difference between benign and malignant thoracolumbar VFs and may therefore contribute to the clinical diagnostic work-up of patients with VFs.
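Of the texture features listed, skewness is the third standardized moment of the intensity distribution inside the segmented vertebra. A minimal sketch of its computation (toy intensity values, not the study's pipeline):

```python
import numpy as np

def skewness(x: np.ndarray) -> float:
    """Fisher skewness (third standardized moment) of an intensity sample."""
    x = x.astype(float).ravel()
    mu, sigma = x.mean(), x.std()
    return float(((x - mu) ** 3).mean() / sigma ** 3)

# Toy HU-like samples: one right-skewed, one perfectly symmetric distribution
right_skewed = np.array([100, 110, 120, 130, 400], dtype=float)
symmetric = np.array([100, 150, 200, 250, 300], dtype=float)
print(skewness(right_skewed) > 0)       # True: long tail toward high values
print(abs(skewness(symmetric)) < 1e-9)  # True: evenly spaced, zero skew
```

In practice this would be evaluated over the voxels of each segmented vertebral body; the run-length features (SRE, LRE, RLN, RP) instead require building a grey-level run-length matrix first.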

    Multiclass Bone Segmentation of PET/CT Scans for Automatic SUV Extraction

    In this thesis I present an automated framework for segmentation of bone structures from dual-modality PET/CT scans and subsequent extraction of SUV measurements. The first stage of this framework consists of a variant of the 3D U-Net architecture for segmentation of three bone structures: vertebral body, pelvis, and sternum. The dataset for this model consists of annotated slices from CT scans retrieved from a study of post-HSCT patients imaged with the 18F-FLT radiotracer; these are undersampled volumes due to the low-dose radiation used during scanning. The mean Dice scores obtained by the proposed model are 0.9162, 0.9163, and 0.8721 for the vertebral body, pelvis, and sternum classes, respectively. The next step of the proposed framework is identifying individual vertebrae, a particularly difficult task due to the low resolution of the CT scans in the axial dimension. To address this issue, I present an iterative algorithm for instance segmentation of vertebral bodies that uses anatomical priors of the spine to detect the starting point of each vertebra. The spatial information contained in the CT and PET scans is used to translate the resulting masks to the PET image space and extract SUV measurements. I then present a CNN model based on the DenseNet architecture that, for the first time, classifies the spatial distribution of SUV within the marrow cavities of the vertebral bodies as normal engraftment or possible relapse. With an AUC of 0.931 and an accuracy of 92% obtained on real patient data, this method shows good potential as a future automated tool to assist in monitoring the recovery of HSCT patients.
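Once the segmentation masks are resampled into the PET image space, SUV extraction reduces to indexing the PET volume with the binary mask. A minimal sketch (hypothetical function name and toy data, not the thesis code):

```python
import numpy as np

def extract_suv_stats(pet: np.ndarray, mask: np.ndarray) -> dict:
    """Mean and max SUV inside a binary region mask (assumes the PET
    volume and mask are already in the same image space)."""
    values = pet[mask.astype(bool)]
    return {"SUVmean": float(values.mean()), "SUVmax": float(values.max())}

# Toy volume: background activity 1.0, a hot "vertebral body" region at 4.0
pet = np.ones((4, 4, 4))
mask = np.zeros((4, 4, 4), dtype=bool)
mask[1:3, 1:3, 1:3] = True
pet[mask] = 4.0

stats = extract_suv_stats(pet, mask)
print(stats)  # {'SUVmean': 4.0, 'SUVmax': 4.0}
```

Per-vertebra statistics follow by applying the same indexing with each instance mask in turn.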

    Med-Query: Steerable Parsing of 9-DoF Medical Anatomies with Query Embedding

    Automatic parsing of human anatomies at instance level from 3D computed tomography (CT) scans is a prerequisite step for many clinical applications. The presence of pathologies, broken structures, or a limited field of view (FOV) can all make anatomy parsing algorithms vulnerable. In this work, we explore how to apply the successful detection-then-segmentation paradigm to 3D medical data, and propose a steerable, robust, and efficient computing framework for detection, identification, and segmentation of anatomies in CT scans. Considering the complicated shapes, sizes, and orientations of anatomies, and without loss of generality, we present a nine degrees-of-freedom (9-DoF) pose estimation solution in full 3D space using a novel single-stage, non-hierarchical forward representation. Our whole framework is executed in a steerable manner, where any anatomy of interest can be directly retrieved to further boost inference efficiency. We have validated the proposed method on three medical imaging parsing tasks: ribs, spine, and abdominal organs. For rib parsing, CT scans have been annotated at the rib instance level for quantitative evaluation, and similarly for spine vertebrae and abdominal organs. Extensive experiments on 9-DoF box detection and rib instance segmentation demonstrate the effectiveness of our framework (with an identification rate of 97.0% and a segmentation Dice score of 90.9%) at high efficiency, comparing favorably against several strong baselines (e.g., CenterNet, FCOS, and nnU-Net). For spine identification and segmentation, our method achieves a new state-of-the-art result on the public CTSpine1K dataset. Lastly, we report highly competitive results in multi-organ segmentation at the FLARE22 competition. Our annotations, code, and models will be made publicly available at: https://github.com/alibaba-damo-academy/Med_Query. Comment: updated version
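A 9-DoF box couples three translations, three rotations, and three per-axis sizes. One common parameterization is sketched below with NumPy (an illustration of the representation in general, not the Med-Query implementation):

```python
import numpy as np

def box_9dof_corners(center, euler_xyz, size):
    """Corners of a 9-DoF oriented box: 3 translations, 3 rotations
    (x-y-z Euler angles in radians, composed as Rz @ Ry @ Rx), 3 sizes."""
    rx, ry, rz = euler_xyz
    Rx = np.array([[1, 0, 0],
                   [0, np.cos(rx), -np.sin(rx)],
                   [0, np.sin(rx),  np.cos(rx)]])
    Ry = np.array([[ np.cos(ry), 0, np.sin(ry)],
                   [0, 1, 0],
                   [-np.sin(ry), 0, np.cos(ry)]])
    Rz = np.array([[np.cos(rz), -np.sin(rz), 0],
                   [np.sin(rz),  np.cos(rz), 0],
                   [0, 0, 1]])
    R = Rz @ Ry @ Rx
    half = np.asarray(size, float) / 2.0
    signs = np.array([[sx, sy, sz] for sx in (-1, 1)
                      for sy in (-1, 1) for sz in (-1, 1)])
    # Rotate the 8 local corners, then translate to the box center
    return np.asarray(center, float) + (signs * half) @ R.T

# A box rotated 90 degrees about z: its long (x) axis ends up along y
corners = box_9dof_corners(center=(10, 20, 30),
                           euler_xyz=(0, 0, np.pi / 2),
                           size=(4, 2, 2))
print(corners.shape)  # (8, 3)
```

A detector regresses these nine parameters per anatomy instance; segmentation then operates inside the resulting oriented crop.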

    Deformable Multisurface Segmentation of the Spine for Orthopedic Surgery Planning and Simulation

    Purpose: We describe a shape-aware multisurface simplex deformable model for the segmentation of healthy as well as pathological lumbar spine in medical image data. Approach: This model provides an accurate and robust segmentation scheme for the identification of intervertebral disc pathologies, enabling minimally supervised planning and patient-specific simulation of spine surgery; it combines multisurface and shape statistics-based variants of the deformable simplex model. Statistical shape variation within the dataset is captured by principal component analysis and incorporated during the segmentation process to refine results. In cases where the shape statistics hinder detection of the pathological region, user assistance is allowed to disable the prior shape influence during deformation. Results: Results demonstrate validation against user-assisted expert segmentation, showing excellent boundary agreement and prevention of spatial overlap between neighboring surfaces. We also characterize the statistical shape model, reporting compactness, generalizability, and specificity as a function of the number of modes used to represent the family of shapes. Final results demonstrate a proof-of-concept deformation application based on the open-source Simulation Open Framework Architecture (SOFA) surgery simulation toolkit. Conclusions: To summarize, we present a deformable multisurface model that embeds a shape statistics force, with applications to surgery planning and simulation.
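Compactness of a statistical shape model is usually reported as the cumulative fraction of shape variance captured by the first k principal modes. A self-contained sketch on synthetic shape vectors (random data, illustrative only):

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical training set: 20 shapes, each a flattened landmark vector;
# scaling by a decaying ramp mimics a few dominant modes of variation.
shapes = rng.normal(size=(20, 30)) * np.linspace(3.0, 0.1, 30)

mean_shape = shapes.mean(axis=0)
centered = shapes - mean_shape

# Eigen-decomposition of the sample covariance via SVD of the centered data
_, s, vt = np.linalg.svd(centered, full_matrices=False)
eigvals = s ** 2 / (shapes.shape[0] - 1)  # per-mode shape variance

# Compactness: cumulative variance fraction as a function of mode count
compactness = np.cumsum(eigvals) / eigvals.sum()
k = int(np.searchsorted(compactness, 0.95) + 1)
print(k, "modes capture >= 95% of the shape variance")
```

Generalizability and specificity, by contrast, require leave-one-out reconstruction errors and sampling from the model, respectively, so they are costlier to plot.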

    Deep Reinforcement Learning in Medical Object Detection and Segmentation

    Medical object detection and segmentation are crucial pre-processing steps in the clinical workflow for diagnosis and therapy planning. Although deep learning methods have achieved considerable performance in this field, they suffer from several shortcomings, such as computational limitations, sub-optimal parameter optimization, and weak generalization. Deep reinforcement learning, as one of the newest artificial intelligence approaches, has great potential to address these limitations of traditional deep learning methods while obtaining accurate detection and segmentation results. Deep reinforcement learning uses a cognition-like process to propose the region of a desired object, thereby facilitating accurate object detection and segmentation. In this thesis, we deploy deep reinforcement learning in two challenging and representative medical object detection and segmentation tasks: 1) Sequential-Conditional Reinforcement Learning (SCRL) for vertebral body detection and segmentation, which models the spine anatomy with deep reinforcement learning; 2) a Weakly-Supervised Teacher-Student network (WSTS) for liver tumor segmentation from non-enhanced images, which transfers tumor knowledge from enhanced images with deep reinforcement learning. Experiments indicate that our methods are effective and outperform state-of-the-art deep learning methods. This thesis thus improves object detection and segmentation accuracy and offers researchers a novel approach based on deep reinforcement learning in medical image analysis.
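The cognition-like search such agents perform can be caricatured in one dimension: an agent repeatedly shifts or rescales an interval to increase its overlap with a target region. The greedy policy below stands in for a learned Q-function (a toy illustration of the action-based localization idea, not SCRL or WSTS):

```python
import numpy as np

def iou_1d(a, b):
    """Intersection-over-union of two [start, end] intervals (the reward)."""
    inter = max(0.0, min(a[1], b[1]) - max(a[0], b[0]))
    union = (a[1] - a[0]) + (b[1] - b[0]) - inter
    return inter / union

# Actions shift the interval's center or change its half-width
ACTIONS = {"left": (-1.0, 0.0), "right": (1.0, 0.0),
           "grow": (0.0, 1.0), "shrink": (0.0, -1.0)}

def step(box, action):
    shift, grow = ACTIONS[action]
    c = (box[0] + box[1]) / 2 + shift
    h = max(1.0, (box[1] - box[0]) / 2 + grow / 2)
    return (c - h, c + h)

target, box = (10.0, 20.0), (5.0, 11.0)
for _ in range(30):  # greedy one-step lookahead stands in for a trained agent
    best = max(ACTIONS, key=lambda a: iou_1d(step(box, a), target))
    if iou_1d(step(box, best), target) <= iou_1d(box, target):
        break  # no action improves the reward: stop searching
    box = step(box, best)

print(round(iou_1d(box, target), 2))  # 1.0 -- interval converged onto target
```

A real agent replaces the one-step lookahead with values learned from rewards over many episodes, which is what lets it handle ambiguous image evidence.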