
    A fresh look at spinal alignment and deformities: Automated analysis of a large database of 9832 biplanar radiographs

    We developed and used a deep learning tool to process biplanar radiographs of 9,832 non-surgical patients suffering from spinal deformities, with the aim of reporting the statistical distribution of radiological parameters describing the spinal shape and the correlations and interdependencies between them. An existing tool able to automatically perform a three-dimensional reconstruction of the thoracolumbar spine was improved and used to analyze a large set of biplanar radiographs of the trunk. For all patients, the following parameters were calculated: spinopelvic parameters; lumbar lordosis; mismatch between pelvic incidence and lumbar lordosis; thoracic kyphosis; maximal coronal Cobb angle; sagittal vertical axis; T1-pelvic angle; and maximal vertebral rotation in the transverse plane. The radiological parameters describing the sagittal alignment were found to be highly interrelated, as well as dependent on age, while sex had a relatively minor but statistically significant influence. Lumbar lordosis was associated with thoracic kyphosis, pelvic incidence and sagittal vertical axis. The pelvic incidence-lumbar lordosis mismatch was found to depend on pelvic incidence and on age. Scoliosis had a distinct association with the sagittal alignment in adolescent and adult subjects. The deep learning-based tool allowed for the analysis of a large imaging database, which would not be reasonably feasible if performed by human operators. The large set of results will be valuable to trigger new research questions in the field of spinal deformities, as well as to challenge current knowledge.
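    The kind of parameter interdependency reported above can be probed with simple correlation analysis. The sketch below is purely illustrative: the distributions, coefficients and sample size are invented, not taken from the study's database; only the parameter names (pelvic incidence, lumbar lordosis, PI-LL mismatch) follow the abstract.

```python
import numpy as np

# Hypothetical illustration: synthetic spinopelvic parameters, not study data.
rng = np.random.default_rng(0)
n = 500
pelvic_incidence = rng.normal(52, 10, n)      # degrees (invented distribution)
age = rng.uniform(10, 80, n)                  # years

# Lumbar lordosis sketched as depending on PI and age, plus noise.
lumbar_lordosis = 0.6 * pelvic_incidence + 25 - 0.1 * age + rng.normal(0, 5, n)

# PI-LL mismatch, one of the sagittal parameters reported in the abstract.
pi_ll_mismatch = pelvic_incidence - lumbar_lordosis

# Pearson correlations of the kind reported between parameters.
r_pi = np.corrcoef(pi_ll_mismatch, pelvic_incidence)[0, 1]
r_age = np.corrcoef(pi_ll_mismatch, age)[0, 1]
print(f"corr(PI-LL, PI) = {r_pi:.2f}, corr(PI-LL, age) = {r_age:.2f}")
```

    With synthetic data generated this way, the mismatch correlates positively with both pelvic incidence and age, mirroring the dependency structure the abstract describes.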

    Benchmarking Encoder-Decoder Architectures for Biplanar X-ray to 3D Shape Reconstruction

    Various deep learning models have been proposed for 3D bone shape reconstruction from two orthogonal (biplanar) X-ray images. However, it is unclear how these models compare against each other, since they are evaluated on different anatomies, cohorts and (often privately held) datasets. Moreover, the impact of commonly optimized image-based segmentation metrics, such as the Dice score, on the estimation of clinical parameters relevant to 2D-3D bone shape reconstruction is not well known. To move closer toward clinical translation, we propose a benchmarking framework that evaluates tasks relevant to real-world clinical scenarios, including reconstruction of fractured bones, bones with implants, robustness to population shift, and error in estimating clinical parameters. Our open-source platform provides reference implementations of 8 models (many of whose implementations were not publicly available), APIs to easily collect and preprocess 6 public datasets, and implementations of automatic clinical parameter and landmark extraction methods. We present an extensive evaluation of the 8 2D-3D models on an equal footing using the 6 public datasets, comprising images of four different anatomies. Our results show that attention-based methods that capture global spatial relationships tend to perform better across all anatomies and datasets; performance on clinically relevant subgroups may be overestimated without disaggregated reporting; ribs are substantially more difficult to reconstruct than the femur, hip and spine; and an improvement in Dice score does not always bring a corresponding improvement in the automatic estimation of clinically relevant parameters. (Comment: accepted to NeurIPS 202)
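    The Dice score discussed above is the standard overlap metric for binary segmentation masks; the toy example below (shapes and the one-voxel shift are invented) shows how a small geometric error changes it. This is a generic sketch of the metric, not the benchmark's own evaluation code.

```python
import numpy as np

def dice(a: np.ndarray, b: np.ndarray) -> float:
    """Dice overlap of two binary masks (1 = bone voxel)."""
    a, b = a.astype(bool), b.astype(bool)
    inter = np.logical_and(a, b).sum()
    denom = a.sum() + b.sum()
    return 2.0 * inter / denom if denom else 1.0

# Toy example: two slightly different "reconstructions" of the same cube.
gt = np.zeros((10, 10, 10), dtype=bool)
gt[2:8, 2:8, 2:8] = True
pred = np.zeros_like(gt)
pred[3:9, 2:8, 2:8] = True   # shifted by one voxel along the first axis

print(f"Dice = {dice(gt, pred):.3f}")   # → Dice = 0.833
```

    A one-voxel shift already costs about 17% overlap here, yet a shift can leave a clinically relevant angle or length nearly unchanged, which is consistent with the abstract's point that Dice improvements and clinical-parameter accuracy do not always move together.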

    A convolutional neural network to detect scoliosis treatment in radiographs

    Purpose: The aim of this work is to propose a classification algorithm to automatically detect treatment for scoliosis (brace, implant or no treatment) in postero-anterior radiographs. Such automatic labelling of radiographs could represent a step towards global automatic radiological analysis. Methods: Seven hundred and ninety-six frontal radiographs of adolescents were collected (84 patients wearing a brace, 325 with a spinal implant and 387 reference images with no treatment). The dataset was augmented to a total of 2096 images. A classification model was built, composed of a forward convolutional neural network (CNN) followed by a discriminant analysis; the output was the probability for a given image to contain a brace, a spinal implant or neither. The model was validated with a stratified tenfold cross-validation procedure. Performance was estimated by calculating the average accuracy. Results: 98.3% of the radiographs were correctly classified as either reference, brace or implant, excluding 2.0% unclassified images. 99.7% of brace radiographs were correctly detected, while most of the errors occurred in the reference group (i.e. 2.1% of reference images were wrongly classified). Conclusion: The proposed classification model, the originality of which is the coupling of a CNN with discriminant analysis, can be used to automatically label radiographs for the presence of scoliosis treatment. This information is usually missing from DICOM metadata, so such a method could facilitate the use of large databases. Furthermore, the same model architecture could potentially be applied to other radiograph classifications, such as sex and presence of scoliotic deformity. Acknowledgements: The authors are grateful to the ParisTech BiomecAM chair program on subject-specific musculoskeletal modelling (with the support of ParisTech and Yves Cotrel Foundations, Société Générale, Proteor and Covea).
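    The distinctive step above is the discriminant-analysis head applied to CNN features. The sketch below implements only that head, as plain linear discriminant analysis in NumPy (class means plus a pooled covariance); the 64-dimensional "features" are synthetic stand-ins for CNN outputs, and everything else (dimensions, class structure) is invented for illustration.

```python
import numpy as np

# Synthetic "CNN features" for the three classes: reference / brace / implant.
rng = np.random.default_rng(1)
classes = ["reference", "brace", "implant"]
X = np.vstack([rng.normal(loc=c, scale=1.0, size=(100, 64)) for c in range(3)])
y = np.repeat(np.arange(3), 100)

# Linear discriminant analysis: per-class means + shared (pooled) covariance.
means = np.stack([X[y == c].mean(axis=0) for c in range(3)])
Xc = X - means[y]                        # center each sample by its class mean
cov = Xc.T @ Xc / (len(X) - 3)           # pooled covariance estimate
cov_inv = np.linalg.inv(cov + 1e-6 * np.eye(64))   # regularized inverse

def predict(x: np.ndarray) -> str:
    """Assign x to the class with the highest linear discriminant score
    (equal class priors assumed, as with balanced training data)."""
    scores = [m @ cov_inv @ x - 0.5 * m @ cov_inv @ m for m in means]
    return classes[int(np.argmax(scores))]

print(predict(rng.normal(loc=1, scale=1.0, size=64)))
```

    In the paper's pipeline the feature vectors would come from the trained CNN rather than a random generator; the discriminant head itself stays this simple.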

    Anatomy-Aware Inference of the 3D Standing Spine Posture from 2D Radiographs

    An important factor for the development of spinal degeneration, pain and the outcome of spinal surgery is known to be the balance of the spine. It must be analyzed in an upright, standing position to ensure physiological loading conditions and visualize load-dependent deformations. Despite the complex 3D shape of the spine, this analysis is currently performed using 2D radiographs, as all frequently used 3D imaging techniques require the patient to be scanned in a prone position. To overcome this limitation, we propose a deep neural network to reconstruct the 3D spinal pose in an upright standing position, under natural loading. Specifically, we propose a novel neural network architecture, which takes orthogonal 2D radiographs and infers the spine's 3D posture using vertebral shape priors. In this work, we define vertebral shape priors using an atlas and a spine shape prior, incorporating both into our proposed network architecture. We validate our architecture on digitally reconstructed radiographs, achieving a 3D reconstruction Dice of 0.95, indicating an almost perfect 2D-to-3D domain translation. Validating the reconstruction accuracy of a 3D standing spine on real data is infeasible due to the lack of a valid ground truth. Hence, we design a novel experiment for this purpose, using an orientation-invariant distance metric, to evaluate our model's ability to synthesize full-3D, upright, and patient-specific spine models. We compare the synthesized spine shapes from clinical upright standing radiographs to the same patient's 3D spinal posture in the prone position from CT.

    X23D: Intraoperative 3D Lumbar Spine Shape Reconstruction Based on Sparse Multi-View X-ray Data

    Visual assessment based on intraoperative 2D X-rays remains the predominant aid for intraoperative decision-making, surgical guidance, and error prevention. However, correctly assessing the 3D shape of complex anatomies, such as the spine, based on planar fluoroscopic images remains a challenge even for experienced surgeons. This work proposes a novel deep learning-based method to intraoperatively estimate the 3D shape of patients' lumbar vertebrae directly from sparse, multi-view X-ray data. High-quality and accurate 3D reconstructions were achieved with a learned multi-view stereo machine approach capable of incorporating the X-ray calibration parameters in the neural network. This strategy allowed a priori knowledge of the spinal shape to be acquired while preserving patient specificity and achieving a higher accuracy compared to the state of the art. Our method was trained and evaluated on 17,420 fluoroscopy images that were digitally reconstructed from the public CTSpine1K dataset. Evaluated on unseen data, we achieved an 88% average F1 score and a 71% surface score. Furthermore, by utilizing the calibration parameters of the input X-rays, our method outperformed a counterpart method in the state of the art by 22% in terms of surface score. This increase in accuracy opens new possibilities for surgical navigation and intraoperative decision-making solely based on intraoperative data, especially in surgical applications where the acquisition of 3D image data is not part of the standard clinical workflow.
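    A "surface score" of the kind reported above is commonly defined as the fraction of predicted surface points lying within some tolerance of the ground-truth surface; the exact definition and tolerance used in this work may differ, so the sketch below is a generic, brute-force version with invented point sets.

```python
import numpy as np

def surface_score(pred_pts: np.ndarray, gt_pts: np.ndarray, tol: float = 1.0) -> float:
    """Fraction of predicted points within `tol` of the ground-truth surface
    (brute-force nearest-neighbour distances; fine for small point sets)."""
    d = np.linalg.norm(pred_pts[:, None, :] - gt_pts[None, :, :], axis=-1)
    return float((d.min(axis=1) <= tol).mean())

# Toy surfaces: four ground-truth points, with one predicted point far off.
gt = np.array([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.], [1., 1., 0.]])
pred = gt + np.array([[0.1, 0., 0.],
                      [0., 0.2, 0.],
                      [0., 0., 3.0],    # outlier beyond the tolerance
                      [0., 0., 0.5]])

print(surface_score(pred, gt, tol=1.0))   # → 0.75
```

    Real evaluations would use dense mesh vertices and a spatial index (e.g. a k-d tree) instead of the quadratic distance matrix, but the metric itself is this simple.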

    Current and emerging artificial intelligence applications for pediatric musculoskeletal radiology

    Artificial intelligence (AI) is playing an ever-increasing role in radiology (more so in the adult world than in pediatrics), to the extent that there are unfounded fears it will completely take over the role of the radiologist. In relation to musculoskeletal applications of AI in pediatric radiology, we are far from the time when AI will replace radiologists; even for the commonest application (bone age assessment), AI is more often employed in an AI-assist mode rather than an AI-replace or AI-extend mode. AI for bone age assessment has been in clinical use for more than a decade and is the area in which most research has been conducted. Most other potential indications in children (such as appendicular and vertebral fracture detection) remain largely in the research domain. This article reviews the areas in which AI is most prominent in relation to the pediatric musculoskeletal system, briefly summarizing the current literature and highlighting areas for future research. Pediatric radiologists are encouraged to participate as members of the research teams conducting pediatric radiology artificial intelligence research.

    A deep learning algorithm for contour detection in synthetic 2D biplanar X-ray images of the scapula: towards improved 3D reconstruction of the scapula

    Three-dimensional (3D) reconstruction from X-ray images using statistical shape models (SSM) provides a cost-effective way of increasing the diagnostic utility of two-dimensional (2D) X-ray images, especially in low-resource settings. The landmark-constrained model fitting approach is one way to obtain patient-specific models from a statistical model. This approach requires an accurate selection of corresponding features, usually landmarks, from the bi-planar X-ray images. However, X-ray images are 2D representations of 3D anatomy with super-positioned structures, which confounds this approach. The literature shows that detection and use of contours to locate corresponding landmarks within bi-planar X-ray images can address this limitation. The aim of this research project was to train and validate a deep learning algorithm for detecting the contour of the scapula in synthetic 2D bi-planar X-ray images. Synthetic bi-planar X-ray images were obtained from scapula mesh samples with annotated landmarks generated from a validated SSM obtained from the Division of Biomedical Engineering, University of Cape Town. This was followed by the training of two convolutional neural network models as the first objective of the project; the first model was trained to predict the lateral (LAT) scapula image given the anterior-posterior (AP) image. The second model was trained to predict the AP image given the LAT image. The trained models had an average Dice coefficient value of 0.926 and 0.964 for the predicted LAT and AP images, respectively. However, the trained models did not generalise to the segmented real X-ray images of the scapula. The second objective was to perform landmark-constrained model fitting using the corresponding landmarks embedded in the predicted images. To achieve this objective, the 2D landmark locations were transformed into 3D coordinates using the direct linear transformation.
The 3D point localization yielded average errors of (0.35, 0.64, 0.72) mm in the X, Y and Z directions, respectively, and a combined coordinate error of 1.16 mm. The reconstructed landmarks were used to reconstruct meshes that had average surface-to-surface distances of 3.22 mm and 1.72 mm for 3 and 6 landmarks, respectively. The third objective was to reconstruct the scapula mesh using matching points on the scapula contour in the bi-planar images. The average surface-to-surface distances of the reconstructed meshes with 8 matching contour points and 6 corresponding landmarks of the same meshes were 1.40 and 1.91 mm, respectively. In summary, the deep learning models were able to learn the mapping between the bi-planar images of the scapula. Increasing the number of corresponding landmarks from the bi-planar images resulted in better 3D reconstructions. However, obtaining these corresponding landmarks was non-trivial, necessitating the use of matching points selected from the scapula contours. The results from the latter approach signal a need to explore contour-matching methods to obtain more corresponding points in order to improve scapula 3D reconstruction using landmark-constrained model fitting.
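    The direct linear transformation (DLT) step described above recovers a 3D point from its two 2D projections by solving a homogeneous linear system. The sketch below uses the textbook SVD formulation with idealized, invented orthogonal projection matrices standing in for calibrated AP/LAT views; the project's actual calibration would supply real matrices.

```python
import numpy as np

def triangulate_dlt(P1, P2, uv1, uv2):
    """Recover a 3D point from its 2D projections in two calibrated views:
    stack the DLT constraints into A and take the null vector via SVD."""
    u1, v1 = uv1
    u2, v2 = uv2
    A = np.stack([
        u1 * P1[2] - P1[0],
        v1 * P1[2] - P1[1],
        u2 * P2[2] - P2[0],
        v2 * P2[2] - P2[1],
    ])
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]                 # null vector = homogeneous 3D point
    return X[:3] / X[3]        # de-homogenize

# Toy orthogonal setup standing in for calibrated AP / LAT projections.
P_ap = np.array([[1., 0., 0., 0.],
                 [0., 1., 0., 0.],
                 [0., 0., 0., 1.]])    # AP view images (x, y)
P_lat = np.array([[0., 0., 1., 0.],
                  [0., 1., 0., 0.],
                  [0., 0., 0., 1.]])   # LAT view images (z, y)

point = np.array([10.0, 20.0, 30.0])   # invented landmark position, mm
uv_ap = (point[0], point[1])           # ideal, noise-free projections
uv_lat = (point[2], point[1])
print(triangulate_dlt(P_ap, P_lat, uv_ap, uv_lat))   # → [10. 20. 30.]
```

    With noisy landmark detections the null vector becomes a least-squares solution, which is where localization errors like the (0.35, 0.64, 0.72) mm figures above originate.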

    Applied AI/ML for automatic customisation of medical implants

    Most knee replacement surgeries are performed using 'off-the-shelf' implants, supplied in a set number of standardised sizes. X-rays are taken during pre-operative assessment and used by clinicians to estimate the best options for patients. Manual templating and implant size selection have, however, been shown to be inaccurate, and frequently the generically shaped products do not adequately fit patients' unique anatomies. Furthermore, off-the-shelf implants are typically made from solid metal and do not exhibit mechanical properties like the native bone. Consequently, the combination of these factors often leads to poor outcomes for patients. Various solutions have been outlined in the literature for customising the size, shape, and stiffness of implants for the specific needs of individuals. Such designs can be fabricated via additive manufacturing, which enables bespoke and intricate geometries to be produced in biocompatible materials. Despite this, all customisation solutions identified required some level of manual input to segment image files, identify anatomical features, and/or drive design software. These tasks are time-consuming, expensive, and require trained personnel. Almost all currently available solutions also require CT imaging, which adds further expense, incurs high levels of potentially harmful radiation, and is not as commonly accessible as X-ray imaging. This thesis explores how various levels of knee replacement customisation can be completed automatically by applying artificial intelligence, machine learning and statistical methods. The principal output is a software application, believed to be the first true 'mass-customisation' solution. The software is compatible with both 2D X-ray and 3D CT data and enables fully automatic and accurate implant size prediction, shape customisation and stiffness matching.
It is therefore seen to address the key limitations associated with current implant customisation solutions and will hopefully enable the benefits of customisation to be more widely accessible.
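    The size-prediction task described above ultimately reduces to matching patient measurements against a discrete catalogue of standard sizes. The sketch below is entirely hypothetical: the size names, dimensions and measurements are invented, and the thesis's actual predictor is a learned model rather than this nearest-match rule.

```python
import numpy as np

# Invented catalogue: size -> (AP depth, ML width) in mm. Not a real product line.
catalogue = {
    "size 2": (55.0, 62.0),
    "size 3": (59.0, 66.0),
    "size 4": (63.0, 70.0),
    "size 5": (67.0, 74.0),
}

def best_size(ap_mm: float, ml_mm: float) -> str:
    """Return the catalogue size minimising Euclidean dimensional mismatch
    to the measured (or image-predicted) AP/ML dimensions."""
    return min(catalogue,
               key=lambda s: np.hypot(catalogue[s][0] - ap_mm,
                                      catalogue[s][1] - ml_mm))

# Measurements as might be extracted automatically from an X-ray (invented).
print(best_size(60.5, 67.0))   # → size 3
```

    A learned pipeline replaces the hand-measured inputs with dimensions regressed from the image, but the final discrete selection step looks much like this.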

    Machine learning in orthopedics: a literature review

    In this paper we present the findings of a systematic literature review covering the articles published in the last two decades in which the authors described the application of a machine learning technique and method to an orthopedic problem or purpose. By searching both in the Scopus and Medline databases, we retrieved, screened and analyzed the content of 70 journal articles, and coded these resources following an iterative method within a Grounded Theory approach. We report the survey findings by outlining the articles' content in terms of the main machine learning techniques mentioned therein, the orthopedic application domains, the source data and the quality of their predictive performance
