
    Uncertainty-driven Forest Predictors for Vertebra Localization and Segmentation

    Accurate localization, identification and segmentation of vertebrae is an important task in medical and biological image analysis. The prevailing approach to solve such a task is to first generate pixel-independent features for each vertebra, e.g. via a random forest predictor, which are then fed into an MRF-based objective to infer the optimal MAP solution of a constellation model. We abandon this static, two-stage approach and mix feature generation with model-based inference in a new, more flexible way. We evaluate our method on two data sets with different objectives. The first is semantic segmentation of a 21-part body plan of zebrafish embryos in microscopy images, and the second is localization and identification of vertebrae in benchmark human CT scans.
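    The following is a minimal sketch, not the authors' implementation, of the two-stage baseline this abstract argues against: a random forest produces per-pixel class probabilities, which are then passed to a toy MAP inference step over a chain-structured constellation model (here a simple Viterbi pass enforcing monotone vertebra labels). All feature dimensions, label counts, penalties, and data are illustrative assumptions.

import numpy as np
from sklearn.ensemble import RandomForestClassifier

N_LABELS = 5  # hypothetical number of vertebra classes
rng = np.random.default_rng(0)

# Stage 1: random forest predictor on hand-crafted per-pixel features (toy data).
X_train = rng.normal(size=(1000, 16))
y_train = rng.integers(0, N_LABELS, size=1000)
forest = RandomForestClassifier(n_estimators=50).fit(X_train, y_train)

# Per-position class probabilities along the spine axis.
X_test = rng.normal(size=(30, 16))
unary = forest.predict_proba(X_test)  # shape: (positions, labels)

# Stage 2: MAP inference in a chain model where labels may only stay the same
# or advance to the next vertebra (Viterbi-style dynamic programming).
def chain_map(unary, switch_penalty=1.0):
    n, k = unary.shape
    log_u = np.log(unary + 1e-9)
    best = log_u[0].copy()
    back = np.zeros((n, k), dtype=int)
    for i in range(1, n):
        new_best = np.empty(k)
        for lbl in range(k):
            stay = best[lbl]
            step = best[lbl - 1] - switch_penalty if lbl > 0 else -np.inf
            if stay >= step:
                back[i, lbl], new_best[lbl] = lbl, stay
            else:
                back[i, lbl], new_best[lbl] = lbl - 1, step
        best = new_best + log_u[i]
    # Backtrack the MAP labelling from the best final state.
    path = [int(np.argmax(best))]
    for i in range(n - 1, 0, -1):
        path.append(int(back[i, path[-1]]))
    return path[::-1]

print(chain_map(unary))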

    Unsupervised domain adaptation for vertebrae detection and identification in 3D CT volumes using a domain sanity loss

    A variety of medical computer vision applications analyze 2D slices of computed tomography (CT) scans, where axial slices from the body trunk region are usually identified based on their relative position to the spine. A limitation of such systems is that either the correct slices must be extracted manually or labels of the vertebrae are required for each CT scan to develop an automated extraction system. In this paper, we propose an unsupervised domain adaptation (UDA) approach for vertebrae detection and identification based on a novel Domain Sanity Loss (DSL) function. With UDA, the model’s knowledge learned on a publicly available (source) data set can be transferred to the target domain without using target labels, where the target domain is defined by the specific setup (CT modality, study protocols, applied pre- and post-processing) at the point of use (e.g., a specific clinic with its specific CT study protocols). With our approach, a model is trained on the source and target data sets in parallel. The model optimizes a supervised loss for labeled samples from the source domain and the DSL function, based on domain-specific “sanity checks”, for samples from the unlabeled target domain. Without using labels from the target domain, we are able to identify vertebra centroids with an accuracy of 72.8%. By adding only ten target labels during training, the accuracy increases to 89.2%, which is on par with the current state of the art for fully supervised learning while using about 20 times fewer labels. Thus, our model can be used to extract 2D slices from 3D CT scans on arbitrary data sets fully automatically without requiring an extensive labeling effort, contributing to the clinical adoption of medical imaging by hospitals.
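    Below is a hedged PyTorch sketch of the parallel source/target training scheme the abstract describes: a supervised loss on labeled source batches plus an unsupervised "sanity check" term on unlabeled target batches. The network, the particular sanity check (predicted vertebra centroid z-coordinates should be ordered along the cranio-caudal axis), the loss weight, and all data are illustrative assumptions; the paper's actual Domain Sanity Loss may be defined differently.

import torch
import torch.nn as nn

N_VERT = 24  # hypothetical number of vertebra centroids regressed per volume

class CentroidNet(nn.Module):
    """Toy regressor: volume features -> z-coordinate of each vertebra centroid."""
    def __init__(self, in_dim=128):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, 256), nn.ReLU(),
                                 nn.Linear(256, N_VERT))
    def forward(self, x):
        return self.net(x)

def sanity_loss(pred_z):
    # Example sanity check: consecutive centroids must appear in order,
    # i.e. z_{i+1} > z_i; violations are penalized with a hinge term.
    diffs = pred_z[:, 1:] - pred_z[:, :-1]
    return torch.relu(-diffs).mean()

model = CentroidNet()
opt = torch.optim.Adam(model.parameters(), lr=1e-4)
mse = nn.MSELoss()

for step in range(100):
    # One labeled batch from the source domain, one unlabeled batch from the target.
    x_src = torch.randn(8, 128)
    y_src = torch.sort(torch.rand(8, N_VERT), dim=1).values  # toy ordered targets
    x_tgt = torch.randn(8, 128)

    loss = mse(model(x_src), y_src) + 0.1 * sanity_loss(model(x_tgt))
    opt.zero_grad()
    loss.backward()
    opt.step()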