
    Fully automatic reconstruction of personalized 3D volumes of the proximal femur from 2D X-ray images

    PURPOSE: Accurate preoperative planning is crucial to the outcome of total hip arthroplasty. Recently, 2D pelvic X-ray radiographs have been replaced by 3D CT; however, CT involves relatively high radiation dose and cost. An alternative is to reconstruct patient-specific 3D volume data from 2D X-ray images. METHODS: In this paper, building on a fully automatic image segmentation algorithm, we propose a new control-point-based 2D-3D registration approach for deformable registration of a 3D volumetric template to a limited number of calibrated 2D X-ray images, and show its application to personalized reconstruction of 3D volumes of the proximal femur. The 2D-3D registration follows a hierarchical two-stage strategy: a scaled-rigid 2D-3D registration stage followed by a regularized deformable B-spline 2D-3D registration stage. In both stages, a set of uniformly spaced control points is first placed over the domain of the 3D volumetric template. The registration is then driven by computing updated positions of these control points via intensity-based 2D-2D registrations of the input X-ray images with the associated digitally reconstructed radiographs (DRRs), from which the registration transformation at each stage is computed. RESULTS: Evaluated on datasets of 44 patients, our method achieved an overall surface reconstruction accuracy of [Formula: see text] and an average Dice coefficient of [Formula: see text]. We further investigated the reconstruction accuracy of the cortical bone region, which is important for planning cementless total hip arthroplasty; an average cortical bone region Dice coefficient of [Formula: see text] and an inner cortical bone surface reconstruction accuracy of [Formula: see text] were found. CONCLUSIONS: In summary, we developed a new approach for reconstructing personalized 3D volumes of the proximal femur from 2D X-ray images. Comprehensive experiments demonstrated the efficacy of the present approach.
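The registration above is driven by comparing simulated projections (DRRs) of the volumetric template with the input X-rays, and the results are reported as Dice coefficients. As a minimal illustration only (not the authors' implementation, which uses calibrated perspective ray casting), a parallel-ray DRR can be approximated by integrating a volume along one axis, and the Dice overlap of two binary masks computed directly:

```python
import numpy as np

def drr_parallel(volume, axis=0):
    """Toy digitally reconstructed radiograph (DRR): a parallel-ray
    projection obtained by integrating voxel intensities along one
    axis. Real systems cast perspective rays through a CT volume
    using the imaging geometry."""
    return volume.sum(axis=axis)

def dice(mask_a, mask_b):
    """Dice coefficient between two binary masks:
    2|A ∩ B| / (|A| + |B|)."""
    inter = np.logical_and(mask_a, mask_b).sum()
    return 2.0 * inter / (mask_a.sum() + mask_b.sum())
```

A Dice value of 1.0 indicates perfect overlap between the reconstructed and ground-truth regions; the paper reports Dice over the full femur and over the cortical bone region separately.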

    3D Shape Reconstruction of Knee Bones from Low Radiation X-ray Images Using Deep Learning

    Understanding the bone kinematics of the human knee during dynamic motion is necessary to evaluate pathological conditions and to design knee prostheses, orthoses, and surgical treatments such as knee arthroplasty. Knee bone kinematics is also essential for assessing the biofidelity of computational models. Kinematics of the human knee has been reported in the literature using either in vitro or in vivo methodologies; in vivo methods are widely preferred for their biomechanical fidelity. However, obtaining kinematic data in vivo is challenging due to limitations in existing methods. One such method is X-ray fluoroscopy imaging, which allows for non-invasive quantification of bone kinematics. Owing to its procedural simplicity and low radiation exposure, single-plane fluoroscopy (SF) is the preferred fluoroscopic tool for studying the in vivo kinematics of the knee joint. Evaluating three-dimensional (3D) kinematics from SF imagery is possible only if prior knowledge of the shape of the knee bones is available. The standard technique for acquiring the knee shape is to segment either Magnetic Resonance (MR) images, which are expensive to procure, or Computed Tomography (CT) images, which expose the subjects to a heavy dose of ionizing radiation. Additionally, both segmentation procedures are time-consuming and labour-intensive. An alternative, rarely used technique is to reconstruct the knee shape from the SF images themselves. It is less expensive than MR imaging, exposes the subjects to relatively lower radiation than CT imaging, and, since the kinematic study and the shape reconstruction can be carried out on the same device, could save a considerable amount of time for researchers and subjects.
However, due to low exposure levels, SF images are often characterized by a low signal-to-noise ratio, making it difficult to extract the information required to reconstruct the shape accurately. Compared to conventional X-ray images, SF images are of lower quality and contain less detail. Additionally, existing methods for reconstructing the shape of the knee remain generally inconvenient because they require a highly controlled system: images must be captured with a calibrated device, care must be taken while positioning the subject's knee in the X-ray field to ensure image consistency, and user intervention and expert knowledge are required for 3D reconstruction. In an attempt to simplify the existing process, this thesis proposes a new methodology to reconstruct the 3D shape of the knee bones from multiple uncalibrated SF images using deep learning. During image acquisition, the subjects in this approach can freely rotate their leg (in a fully extended, knee-locked position), resulting in several images captured in arbitrary poses. Relevant features are extracted from these images using a novel feature extraction technique before being fed to a custom-built Convolutional Neural Network (CNN). The network, without further optimization, directly outputs a meshed 3D surface model of the subject's knee joint. The whole procedure can be completed in a few minutes. The robust feature extraction technique can effectively extract relevant information from a range of image qualities. When tested on eight unseen sets of SF images with known true geometry, the network reconstructed knee shape models with a shape error (RMSE) of 1.91 ± 0.30 mm for the femur, 2.3 ± 0.36 mm for the tibia, and 3.3 ± 0.53 mm for the patella. The error was calculated after rigidly aligning (scale, rotation, and translation) each reconstructed shape model with the corresponding known true geometry (obtained through MRI segmentation).
Based on a previous study that examined the influence of reconstructed shape accuracy on the precision of tibiofemoral kinematics evaluation, the shape accuracy of the proposed methodology might be adequate to precisely track the bone kinematics, although further investigation is required.
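The shape errors quoted above are RMS distances computed after aligning each reconstruction (scale, rotation, translation) with the MRI-derived ground truth. For corresponding point sets, that alignment has a closed-form solution; the sketch below uses the Umeyama similarity transform, which is an assumption on our part since the thesis does not specify the solver:

```python
import numpy as np

def similarity_align(src, dst):
    """Align src (N x 3) to dst (N x 3) with scale, rotation, and
    translation (Umeyama's closed-form similarity transform, assuming
    row-wise point correspondence). Returns the transformed src."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    s, d = src - mu_s, dst - mu_d
    U, S, Vt = np.linalg.svd(d.T @ s)          # cross-covariance SVD
    D = np.eye(3)
    D[2, 2] = np.sign(np.linalg.det(U @ Vt))   # guard against reflection
    R = U @ D @ Vt
    scale = np.trace(np.diag(S) @ D) / (s ** 2).sum()
    return (scale * (R @ s.T)).T + mu_d

def rmse(a, b):
    """Root-mean-square point-to-point distance between two point sets."""
    return float(np.sqrt(((a - b) ** 2).sum(axis=1).mean()))
```

For full surface meshes, correspondence is usually established first (e.g. via closest-point search, as in ICP); with known correspondences the closed form above suffices.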

    A deep learning algorithm for contour detection in synthetic 2D biplanar X-ray images of the scapula: towards improved 3D reconstruction of the scapula

    Three-dimensional (3D) reconstruction from X-ray images using statistical shape models (SSM) provides a cost-effective way of increasing the diagnostic utility of two-dimensional (2D) X-ray images, especially in low-resource settings. The landmark-constrained model fitting approach is one way to obtain patient-specific models from a statistical model. This approach requires an accurate selection of corresponding features, usually landmarks, from the bi-planar X-ray images. However, X-ray images are 2D representations of 3D anatomy with superimposed structures, which confounds this approach. The literature shows that detecting and using contours to locate corresponding landmarks within bi-planar X-ray images can address this limitation. The aim of this research project was to train and validate a deep learning algorithm for detecting the contour of the scapula in synthetic 2D bi-planar X-ray images. Synthetic bi-planar X-ray images were obtained from scapula mesh samples with annotated landmarks generated from a validated SSM obtained from the Division of Biomedical Engineering, University of Cape Town. This was followed by the training of two convolutional neural network models as the first objective of the project: the first model was trained to predict the lateral (LAT) scapula image given the anterior-posterior (AP) image, and the second model was trained to predict the AP image given the LAT image. The trained models had average Dice coefficient values of 0.926 and 0.964 for the predicted LAT and AP images, respectively. However, the trained models did not generalise to segmented real X-ray images of the scapula. The second objective was to perform landmark-constrained model fitting using the corresponding landmarks embedded in the predicted images. To achieve this objective, the 2D landmark locations were transformed into 3D coordinates using the direct linear transformation.
The 3D point localization yielded average errors of (0.35, 0.64, 0.72) mm in the X, Y, and Z directions, respectively, and a combined coordinate error of 1.16 mm. The reconstructed landmarks were used to reconstruct meshes with average surface-to-surface distances of 3.22 mm and 1.72 mm for 3 and 6 landmarks, respectively. The third objective was to reconstruct the scapula mesh using matching points on the scapula contour in the bi-planar images. The average surface-to-surface distances of the meshes reconstructed from 8 matching contour points and from 6 corresponding landmarks of the same meshes were 1.40 mm and 1.91 mm, respectively. In summary, the deep learning models were able to learn the mapping between the bi-planar images of the scapula. Increasing the number of corresponding landmarks from the bi-planar images resulted in better 3D reconstructions. However, obtaining these corresponding landmarks was non-trivial, necessitating the use of matching points selected from the scapula contours. The results from the latter approach signal a need to explore contour matching methods to obtain more corresponding points in order to improve scapula 3D reconstruction using landmark-constrained model fitting.
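The direct linear transformation step above converts corresponding 2D landmarks from the two calibrated views into 3D coordinates by solving a small homogeneous linear system. A minimal two-view DLT triangulation sketch (the projection matrices and points are illustrative, not from the project):

```python
import numpy as np

def triangulate_dlt(P1, P2, x1, x2):
    """Triangulate one 3D point from two calibrated views using the
    direct linear transformation (DLT). P1, P2 are 3x4 projection
    matrices; x1, x2 are the (u, v) observations of the point in
    each image. Each view contributes two linear constraints on the
    homogeneous 3D point X; the least-squares solution is the right
    singular vector of the smallest singular value."""
    A = np.array([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]   # dehomogenize
```

With more than two views, the same construction simply stacks two rows per view into a taller matrix before the SVD.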