Fully automatic cervical vertebrae segmentation framework for X-ray images
This is the author accepted manuscript; the final version is available from Elsevier via the DOI in this record. The cervical spine is a highly flexible anatomy and is therefore vulnerable to injury. Unfortunately, a large number of injuries in lateral cervical X-ray images remain undiagnosed due to human error. Computer-aided injury detection has the potential to reduce the risk of misdiagnosis. Towards building an automatic injury detection system, in this paper we propose a deep learning-based fully automatic framework for segmentation of cervical vertebrae in X-ray images. The framework first localizes the spinal region in the image using a deep fully convolutional neural network. Vertebra centers are then localized using a novel deep probabilistic spatial regression network. Finally, a novel shape-aware deep segmentation network is used to segment the vertebrae in the image. The framework can take an X-ray image and produce a vertebrae segmentation result without any manual intervention. Each block of the fully automatic framework has been trained on a set of 124 X-ray images and tested on another 172 images, all collected from real-life hospital emergency rooms. A Dice similarity coefficient of 0.84 and a shape error of 1.69 mm have been achieved.
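The Dice similarity coefficient quoted above measures overlap between a predicted segmentation mask and the ground truth. A minimal NumPy sketch of the metric (not the authors' code):

```python
import numpy as np

def dice_coefficient(pred, target):
    """Dice similarity coefficient between two binary masks.

    DSC = 2|A intersect B| / (|A| + |B|); 1.0 means perfect overlap.
    """
    pred = np.asarray(pred, dtype=bool)
    target = np.asarray(target, dtype=bool)
    intersection = np.logical_and(pred, target).sum()
    denom = pred.sum() + target.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * intersection / denom

# Toy example: two overlapping 4x4 masks
a = np.zeros((4, 4), dtype=bool); a[1:3, 1:3] = True  # 4 foreground pixels
b = np.zeros((4, 4), dtype=bool); b[1:3, 1:4] = True  # 6 foreground pixels
print(dice_coefficient(a, b))  # 2*4 / (4+6) = 0.8
```

A score of 0.84, as reported, therefore indicates predicted vertebra masks that share most of their area with the expert annotations.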
Probabilistic Spatial Regression using a Deep Fully Convolutional Neural Network
Probabilistic predictions are often preferred in computer vision problems because they can provide a confidence estimate for the predicted value. The recently dominant model for computer vision problems, the convolutional neural network, produces probabilistic output for classification and segmentation problems, but probabilistic regression using neural networks is not well defined. In this work, we present a novel fully convolutional neural network capable of producing a spatial probability distribution for localizing image landmarks. We have introduced a new network layer and a novel loss function for the network to produce a two-dimensional probability map. The proposed network has been used in a novel framework to localize vertebral corners in lateral cervical X-ray images. The framework has been evaluated on a dataset of 172 images consisting of 797 vertebrae and 3,188 vertebral corners. The proposed framework has demonstrated promising performance in localizing vertebral corners, with a relative improvement of 38% over the previous state-of-the-art.
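The abstract does not spell out the new layer or loss. One common way to realize a spatially distributed probability output is a spatial softmax over the network's final score map, trained with cross-entropy against a Gaussian target centered on the annotated landmark; the landmark estimate is then the probability-weighted mean of pixel coordinates. A sketch under that assumption (not the paper's actual formulation):

```python
import numpy as np

def spatial_softmax(logits):
    """Normalize a 2-D score map into a probability distribution over pixels."""
    z = logits - logits.max()  # subtract max for numerical stability
    p = np.exp(z)
    return p / p.sum()

def gaussian_target(shape, center, sigma=2.0):
    """Target probability map: an isotropic Gaussian at the true landmark."""
    ys, xs = np.mgrid[0:shape[0], 0:shape[1]]
    g = np.exp(-((ys - center[0]) ** 2 + (xs - center[1]) ** 2) / (2 * sigma ** 2))
    return g / g.sum()

def cross_entropy_loss(pred_map, target_map, eps=1e-12):
    """Cross-entropy between the target and predicted spatial distributions."""
    return -np.sum(target_map * np.log(pred_map + eps))

# Landmark estimate = expectation of pixel coordinates under the map
t = gaussian_target((32, 32), (10, 20), sigma=2.0)
ys, xs = np.mgrid[0:32, 0:32]
estimate = (np.sum(t * ys), np.sum(t * xs))  # close to (10, 20)
```

Because the output is a full distribution rather than a single point, its spread doubles as a per-landmark confidence measure.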
A Survey on Deep Learning in Medical Image Analysis
Deep learning algorithms, in particular convolutional networks, have rapidly become a methodology of choice for analyzing medical images. This paper reviews the major deep learning concepts pertinent to medical image analysis and summarizes over 300 contributions to the field, most of which appeared in the last year. We survey the use of deep learning for image classification, object detection, segmentation, registration, and other tasks and provide concise overviews of studies per application area. Open challenges and directions for future research are discussed.

Comment: Revised survey includes an expanded discussion section and a reworked introductory section on common deep architectures. Added missed papers from before Feb 1st 201
Patch-based Corner Detection for Cervical Vertebrae in X-ray Images
Corners hold vital information about the size, shape and morphology of a vertebra in an X-ray image, and recent literature [1, 2] has shown promising performance for detecting vertebral corners using a Hough forest-based architecture. To provide spatial context, this method generates a set of 12 patches around a vertebra and uses a machine learning approach to predict the corners of a vertebral body through a voting process. In this paper, we extend this framework in terms of patch generation and prediction methods. During patch generation, the square region of interest has been replaced with data-driven rectangular and trapezoidal regions of interest, which better align the patches to the vertebral body geometry, resulting in more discriminative feature vectors. The corner estimation, or prediction, stage has been improved by utilising a more efficient voting process based on a single kernel density estimation. In addition, advanced and more complex feature vectors are introduced. We also present a thorough evaluation of the framework with different patch generation methods, forest training mechanisms and prediction methods. In order to compare the performance of this framework with a more general method, a novel multi-scale Harris corner detector-based approach is introduced that incorporates a spatial prior through a naive Bayes method. These methods have been tested on a dataset of 90 X-ray images and achieved an average corner localisation error of 2.01 mm, representing a 33% improvement in localisation accuracy compared to the previous state-of-the-art method [2].
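The single kernel density estimation voting step can be illustrated as follows: each patch casts a 2-D vote for a corner location, the votes are smoothed with a Gaussian kernel, and the densest location is taken as the predicted corner. This is a generic hypothetical sketch (Gaussian kernel and grid search are assumptions, not the paper's implementation):

```python
import numpy as np

def kde_mode(votes, bandwidth=1.5, grid_step=0.5):
    """Aggregate 2-D corner votes with a Gaussian kernel density estimate
    and return the densest grid location (the predicted corner)."""
    votes = np.asarray(votes, dtype=float)          # shape (n_votes, 2)
    lo = votes.min(axis=0) - 3 * bandwidth
    hi = votes.max(axis=0) + 3 * bandwidth
    xs = np.arange(lo[0], hi[0], grid_step)
    ys = np.arange(lo[1], hi[1], grid_step)
    gx, gy = np.meshgrid(xs, ys)
    grid = np.stack([gx.ravel(), gy.ravel()], axis=1)   # (n_grid, 2)
    # Sum a Gaussian kernel contribution from every vote at every grid point
    d2 = ((grid[:, None, :] - votes[None, :, :]) ** 2).sum(-1)
    density = np.exp(-d2 / (2 * bandwidth ** 2)).sum(axis=1)
    return grid[density.argmax()]

# Three consistent votes near (10, 20) outvote a single outlier
votes = [(9.8, 20.1), (10.2, 19.9), (10.0, 20.0), (25.0, 5.0)]
corner = kde_mode(votes)  # lands near (10, 20)
```

Compared with per-patch Hough voting into an accumulator image, a single KDE over all votes is robust to isolated outlier patches, which is plausibly why it makes the voting more efficient.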
Fully automatic image analysis framework for cervical vertebra in X-ray images
Despite the advancement of imaging technologies, a fifth of cervical spine injuries remain unnoticed in X-ray radiological exams, and about two-thirds of the subjects with unnoticed injuries suffer tragic consequences. Based on the success of computer-aided systems in several medical imaging modalities in enhancing clinical interpretation, we have proposed a fully automatic image analysis framework for cervical vertebrae in X-ray images. The framework takes an X-ray image as input and highlights different vertebral features at the output. To the best of our knowledge, this is the first fully automatic system in the literature for the analysis of the cervical vertebrae.
The complete framework has been built by cascading specialized modules, each of which addresses a specific computer vision problem. This dissertation explores data-driven supervised machine learning solutions to these problems. Given an input X-ray image, the first module localizes the spinal region. The second module predicts vertebral centers from the spinal region which are then used to generate vertebral image patches. These patches are then passed through machine learning modules that detect vertebral corners, highlight vertebral boundaries, segment vertebral body and predict vertebral shapes.
In the process of building the complete framework, we have proposed and compared different solutions to the problems addressed by each of the modules. A novel region-aware dense classification deep neural network has been proposed for the first module to address the spine localization problem. The proposed network outperformed the standard dense classification network and random forest-based methods.
The locations of vertebral centers and corners vary with human interpretation and are thus better represented by probability maps than by single points. To learn the mapping between the vertebral image patches and the probability maps, a novel neural network capable of predicting a spatially distributed probability distribution has been proposed. The network achieved expert-level performance in localizing vertebral centers and outperformed the Harris corner detector and Hough forest-based methods for corner localization. The proposed network has also shown its capability for detecting vertebral boundaries and produced visually better results than the dense classification network-based boundary detectors.
Segmentation of the vertebral body is a crucial part of the proposed framework. A new shape-aware loss function has been proposed for training a segmentation network to encourage the prediction of vertebra-like structures. The segmentation performance improved significantly; however, the pixel-wise nature of the proposed loss function could not constrain the predictions adequately. To solve this problem, a novel neural network was proposed that predicts vertebral shapes and trains on a loss function defined in the shape space. The proposed shape predictor network was capable of learning better topological information about the vertebra than the shape-aware segmentation network.
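The dissertation abstract does not give the form of the shape-aware loss. One common way to make a pixel-wise segmentation loss shape-aware is to weight the per-pixel cross-entropy by distance to the ground-truth contour, so that mistakes far from the true vertebra boundary cost more than mistakes hugging it. A hypothetical sketch under that assumption:

```python
import numpy as np

def boundary_distance_map(mask):
    """Per-pixel Euclidean distance to the nearest boundary pixel of a
    binary mask (brute force; adequate for small patches)."""
    mask = mask.astype(bool)
    padded = np.pad(mask, 1)
    # Interior pixels: foreground with all four 4-neighbours foreground
    interior = (padded[:-2, 1:-1] & padded[2:, 1:-1]
                & padded[1:-1, :-2] & padded[1:-1, 2:]) & mask
    boundary = mask & ~interior
    by, bx = np.nonzero(boundary)
    if by.size == 0:
        return np.zeros(mask.shape)  # empty mask: no boundary to measure from
    ys, xs = np.mgrid[0:mask.shape[0], 0:mask.shape[1]]
    d2 = (ys[..., None] - by) ** 2 + (xs[..., None] - bx) ** 2
    return np.sqrt(d2.min(-1))

def shape_aware_bce(pred, target, eps=1e-7):
    """Binary cross-entropy weighted by distance to the target contour."""
    target = target.astype(float)
    w = 1.0 + boundary_distance_map(target)
    bce = -(target * np.log(pred + eps) + (1 - target) * np.log(1 - pred + eps))
    return float((w * bce).mean())
```

With this weighting, a false-positive pixel adjacent to the true contour is penalized less than one in a distant corner of the patch, which is exactly the shape-like bias the pixel-wise loss is meant to add; as the abstract notes, though, a purely pixel-wise term still cannot enforce global topology, motivating the shape-space loss.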
The methods proposed in this dissertation have been trained and tested on a challenging dataset of X-ray images collected from medical emergency rooms. The proposed, first-of-its-kind, fully automatic framework produces state-of-the-art results both quantitatively and qualitatively.
Deformable Multisurface Segmentation of the Spine for Orthopedic Surgery Planning and Simulation
Purpose: We describe a shape-aware multisurface simplex deformable model for the segmentation of healthy as well as pathological lumbar spine in medical image data.
Approach: This model provides an accurate and robust segmentation scheme for the identification of intervertebral disc pathologies to enable the minimally supervised planning and patient-specific simulation of spine surgery, in a manner that combines multisurface and shape statistics-based variants of the deformable simplex model. Statistical shape variation within the dataset has been captured by application of principal component analysis and incorporated during the segmentation process to refine results. In the case where shape statistics hinder detection of the pathological region, user assistance is allowed to disable the prior shape influence during deformation.
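The statistical shape component described here follows the standard point distribution model recipe: apply principal component analysis to aligned training shapes, then constrain candidate shapes to plausible combinations of the leading modes. A generic NumPy sketch of that recipe (function names and the 3-standard-deviation clamp are illustrative assumptions, not this paper's code):

```python
import numpy as np

def build_shape_model(shapes, n_modes=2):
    """PCA statistical shape model from training shapes.

    shapes: (n_samples, 2*n_landmarks) flattened landmark coordinates,
    assumed already rigidly aligned (e.g. by Procrustes analysis).
    """
    mean = shapes.mean(axis=0)
    _, s, vt = np.linalg.svd(shapes - mean, full_matrices=False)
    modes = vt[:n_modes]                          # rows: principal modes
    var = (s ** 2) / (len(shapes) - 1)            # variance per mode
    return mean, modes, var[:n_modes]

def constrain_shape(shape, mean, modes, var, k=3.0):
    """Project a candidate shape onto the model and clamp each mode
    coefficient to +/- k standard deviations (the prior shape influence)."""
    b = modes @ (shape - mean)
    b = np.clip(b, -k * np.sqrt(var), k * np.sqrt(var))
    return mean + modes.T @ b
```

Disabling the prior shape influence for a pathological region, as the paper allows, corresponds to skipping the `constrain_shape` projection for the affected surface during deformation.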
Results: Results demonstrate validation against user-assisted expert segmentation, showing excellent boundary agreement and prevention of spatial overlap between neighboring surfaces. This section also plots the characteristics of the statistical shape model, such as compactness, generalizability and specificity, as a function of the number of modes used to represent the family of shapes. Final results demonstrate a proof-of-concept deformation application based on the open-source Simulation Open Framework Architecture (SOFA) surgery-simulation toolkit.
Conclusions: To summarize, we present a deformable multisurface model that embeds a shape statistics force, with applications to surgery planning and simulation.
Benchmarking Encoder-Decoder Architectures for Biplanar X-ray to 3D Shape Reconstruction
Various deep learning models have been proposed for 3D bone shape reconstruction from two orthogonal (biplanar) X-ray images. However, it is unclear how these models compare against each other, since they are evaluated on different anatomies, cohorts, and (often privately held) datasets. Moreover, the impact of commonly optimized image-based segmentation metrics, such as the Dice score, on the estimation of clinical parameters relevant in 2D-3D bone shape reconstruction is not well known. To move closer toward clinical translation, we propose a benchmarking framework that evaluates tasks relevant to real-world clinical scenarios, including reconstruction of fractured bones, bones with implants, robustness to population shift, and error in estimating clinical parameters. Our open-source platform provides reference implementations of 8 models (many of whose implementations were not publicly available), APIs to easily collect and preprocess 6 public datasets, and implementations of automatic clinical parameter and landmark extraction methods. We present an extensive evaluation of the 8 2D-3D models on equal footing using the 6 public datasets, comprising images of four different anatomies. Our results show that attention-based methods that capture global spatial relationships tend to perform better across all anatomies and datasets; that performance on clinically relevant subgroups may be overestimated without disaggregated reporting; that ribs are substantially more difficult to reconstruct than the femur, hip and spine; and that a Dice score improvement does not always bring a corresponding improvement in the automatic estimation of clinically relevant parameters.

Comment: accepted to NeurIPS 202