
    3D shape reconstruction of the femur from planar X-ray images using statistical shape and appearance models

    Major trauma is a condition that can result in severe bone damage. Customised orthopaedic reconstruction allows for limb salvage surgery and helps to restore joint alignment. For the best possible outcome, three-dimensional (3D) medical imaging is necessary, but its availability and access, especially in developing countries, can be challenging. In this study, 3D bone shapes of the femur reconstructed from planar radiographs representing bone defects were evaluated for use in orthopaedic surgery. Statistical shape and appearance models generated from 40 cadaveric X-ray computed tomography (CT) images were used to reconstruct 3D bone shapes. The reconstruction simulated bone defects of between 0% and 50% of the whole bone, and the prediction accuracy using anterior–posterior (AP) and anterior–posterior/medial–lateral (AP/ML) X-rays was compared. As error metrics for the comparison, measures evaluating the distance between the contour lines of the projections, as well as a measure comparing similarities in image intensities, were used. The results were evaluated using the root-mean-square distance for surface error, as well as differences in commonly used anatomical measures, including bow, femoral neck, diaphyseal–condylar and version angles, between surfaces reconstructed from the shape model and the intact shape reconstructed from the CT image. The reconstructions had average surface errors between 1.59 and 3.59 mm, with reconstructions using the contour error metric from the AP/ML directions being the most accurate. Predictions of bow and femoral neck angles were well below the clinical threshold accuracy of 3°, diaphyseal–condylar angles were around the threshold of 3°, and only version angle predictions, at between 5.3° and 9.3°, were above the clinical threshold, but below the range reported in clinical practice using computer navigation (i.e., 17° internal to 15° external rotation).
This study shows that reconstructions from partly available planar images using statistical shape and appearance models achieve an accuracy that would support their potential use in orthopaedic reconstruction.
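The root-mean-square surface error reported above can be illustrated with a minimal sketch (illustrative only; the point sets and function names are not from the study) that measures nearest-neighbour distances between two surfaces sampled as point clouds:

```python
import numpy as np
from scipy.spatial import cKDTree

def rms_surface_distance(reconstructed, ground_truth):
    """For each reconstructed vertex, find the nearest ground-truth vertex
    and return the root-mean-square of those distances (same units as the
    input coordinates, e.g. mm)."""
    tree = cKDTree(ground_truth)
    dists, _ = tree.query(reconstructed)  # nearest-neighbour distances
    return float(np.sqrt(np.mean(dists ** 2)))

# Toy example: a unit square of points shifted by one unit along x.
gt = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
rec = gt + np.array([1.0, 0.0])
err = rms_surface_distance(rec, gt)
```

In practice both surfaces would be dense mesh vertex sets, and the metric is often symmetrized by averaging both query directions.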

    Automatic image analysis of C-arm Computed Tomography images for ankle joint surgeries

    Open reduction and internal fixation is a standard procedure in ankle surgery for treating a fractured fibula. Since fibula fractures are often accompanied by an injury of the syndesmosis complex, it is essential to restore the correct pose of the fibula relative to the adjoining tibia for the ligaments to heal. Otherwise, the patient may experience instability of the ankle, leading to arthritis, ankle pain and ultimately revision surgery. Incorrect positioning, referred to as malreduction of the fibula, is assumed to be one of the major causes of unsuccessful ankle surgery. 3D C-arm imaging is the current standard procedure for revealing malreduction of fractures in the operating room. However, intra-operative visual inspection of the reduction result is complicated by high inter-individual variation of the ankle anatomy and is rather based on the subjective experience of the surgeon. A contralateral side comparison with the patient's uninjured ankle is recommended but has not been integrated into clinical routine due to the high level of radiation exposure it incurs. This thesis presents the first approach towards a computer-assisted intra-operative contralateral side comparison of the ankle joint. The focus of this thesis was the design, development and validation of a software-based prototype for a fully automatic intra-operative assistance system for orthopedic surgeons. The implementation does not require an additional 3D C-arm scan of the uninjured ankle, thus reducing time consumption and cumulative radiation dose. A 3D statistical shape model (SSM) is used to reconstruct a 3D surface model from three 2D fluoroscopic projections representing the uninjured ankle. To this end, a 3D SSM segmentation is performed on the 3D image of the injured ankle to gain prior knowledge of the ankle anatomy. A 3D convolutional neural network (CNN) based initialization method was developed and its outcome was incorporated into the SSM adaptation step.
Segmentation quality was shown to be improved in terms of accuracy and robustness compared to the purely intensity-based SSM. This overcomes the limitations of previously proposed methods, namely inaccuracy due to metal artifacts and the lack of device-to-patient orientation of the C-arm. A 2D CNN is employed to extract semantic knowledge from all fluoroscopic projection images. This step of the pipeline both creates features for the subsequent reconstruction and helps to pre-initialize the 3D SSM without user interaction. A 2D-3D multi-bone reconstruction method was developed which uses distance maps of the 2D features for fast and accurate correspondence optimization and SSM adaptation. This is the central and most crucial component of the workflow. It is the first time a bone reconstruction method has been applied to the complex ankle joint, and the first reconstruction method to use CNN-based segmentations as features. The reconstructed 3D SSM of the uninjured ankle can be back-projected and visualized in a workflow-oriented manner to provide a clear visualization of the region of interest, which is essential for the evaluation of the reduction result. The surgeon can thus directly compare an overlay of the contralateral ankle with the injured ankle. The developed methods were evaluated individually using data sets acquired during a cadaver study and representative clinical data acquired during fibular reduction. A hierarchical evaluation was designed to assess the inaccuracies of the system at different levels and to identify major sources of error. The overall evaluation, performed on eleven challenging clinical datasets acquired for manual contralateral side comparison, showed that the system is capable of accurately reconstructing 3D surface models of the uninjured ankle solely from three projection images.
A mean Hausdorff distance of 1.72 mm was measured when comparing the reconstruction result to the ground-truth segmentation, almost achieving the required clinical accuracy of 1-2 mm. The overall error of the pipeline was mainly attributed to inaccuracies in the 2D CNN segmentation. The consistency of these results requires further validation on a larger dataset. The workflow proposed in this thesis establishes the first approach to enable automatic computer-assisted contralateral side comparison in ankle surgery. The feasibility of the proposed approach was demonstrated on a limited number of clinical cases and has already yielded good results. The next important step is to alleviate the identified bottlenecks by providing more training data in order to further improve the accuracy. In conclusion, the new approach presented offers the chance to guide the surgeon during the reduction process, improve the surgical outcome while avoiding additional radiation exposure, and reduce the number of revision surgeries in the long term.
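The Hausdorff distance used in this evaluation can be computed directly with SciPy; a minimal sketch (toy 2D points for brevity, whereas the thesis compares 3D mesh vertices in mm):

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def hausdorff_distance(a, b):
    """Symmetric Hausdorff distance between point sets a (n,d) and b (m,d):
    the max over both directed distances."""
    return max(directed_hausdorff(a, b)[0], directed_hausdorff(b, a)[0])

# Toy example: the farthest point of b lies 2 units from its nearest
# neighbour in a, so the symmetric distance is 2.
a = np.array([[0.0, 0.0], [1.0, 0.0]])
b = np.array([[0.0, 0.0], [3.0, 0.0]])
d = hausdorff_distance(a, b)
```

The mean (rather than max) of the nearest-neighbour distances is a common, more outlier-robust variant.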

    Book of Abstracts 15th International Symposium on Computer Methods in Biomechanics and Biomedical Engineering and 3rd Conference on Imaging and Visualization

    In this edition, the two events will run together as a single conference, highlighting the strong connection with the Taylor & Francis journals: Computer Methods in Biomechanics and Biomedical Engineering (John Middleton and Christopher Jacobs, Eds.) and Computer Methods in Biomechanics and Biomedical Engineering: Imaging and Visualization (João Manuel R.S. Tavares, Ed.). The conference has become a major international meeting on computational biomechanics, imaging and visualization. In this edition, the main program includes 212 presentations. In addition, sixteen renowned researchers will give plenary keynotes, addressing current challenges in computational biomechanics and biomedical imaging. In Lisbon, for the first time, a session dedicated to awarding the winner of the Best Paper in CMBBE Journal will take place. We believe that CMBBE2018 will have a strong impact on the development of computational biomechanics and biomedical imaging and visualization, identifying emerging areas of research and promoting collaboration and networking between participants. This impact is evidenced by the well-known research groups, commercial companies and scientific organizations who continue to support and sponsor the CMBBE meeting series. In fact, the conference is enriched with five workshops on specific scientific topics and commercial software.

    A shape analysis approach to prediction of bone stiffness using FEXI

    The preferred method of assessing the risk of an osteoporosis-related fracture is currently a measure of bone mineral density (BMD) by dual-energy X-ray absorptiometry (DXA). However, other factors contribute to the overall risk of fracture, including anatomical geometry and the spatial distribution of bone. Finite element analysis can be performed in both two and three dimensions, and predicts the deformation or induced stress when a load is applied to a structure (such as a bone) of defined material composition and shape. The simulation of a mechanical compression test provides a measure of whole-bone stiffness (N·mm⁻¹). A simulation system was developed to study the sensitivity of BMD and of 3D and 2D finite element analysis to variations in geometric parameters of a virtual proximal femur model. This study demonstrated that 3D FE and 2D FE (FEXI) were significantly more sensitive to the anatomical shape and composition of the proximal femur than conventional BMD. The simulation approach helped to analyse and understand how variations in geometric parameters affect the stiffness, and hence strength, of a bone susceptible to osteoporotic fracture. Originally, the FEXI technique modelled the femur as a thin plate of assumed constant depth for finite element analysis (FEA). A better prediction of tissue depth across the bone, based on its geometry, was required to provide a more accurate model for FEA. A shape template was developed for the proximal femur to provide this information for the 3D FE analysis. Geometric morphometric techniques were used to extract and analyse shape information from a set of CT scans of excised human femora. Generalized Procrustes Analysis and Thin Plate Splines were employed to analyse the data and generate a shape template for the proximal femur. 2D offset and depth maps generated from the training set data were then combined to model the three-dimensional shape of the bone.
The template was used to predict the three-dimensional bone shape from a 2D image of the proximal femur obtained through a DXA scan. The error in the predicted 3D shape was measured as the difference between predicted and actual depths at each pixel. The mean error in predicted depths was found to be 1.7 mm, compared to an average bone depth of 34 mm. 3D FEXI analysis on the predicted 3D bone, along with 2D FEXI for a stance loading condition and BMD measurement, was performed based on 2D radiographic projections of the CT scans, and compared to bone stiffness results obtained from finite element analysis of the original 3D CT scans. 3D FEXI provided a significantly higher correlation (R² = 0.85) with conventional CT-derived 3D finite element analysis than achieved with either BMD (R² = 0.52) or 2D FEXI (R² = 0.44).
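Generalized Procrustes Analysis, as used to build the shape template, can be sketched by iteratively aligning each landmark set to the current mean shape and re-estimating the mean. A minimal illustration using SciPy's ordinary Procrustes routine (synthetic square shapes, not the femoral training data):

```python
import numpy as np
from scipy.spatial import procrustes

def generalized_procrustes(shapes, iters=5):
    """Iteratively align each landmark set to the current mean shape and
    update the mean, in the spirit of Generalized Procrustes Analysis.
    Each shape is an (n_points, dim) array with corresponding landmarks."""
    mean = shapes[0]
    aligned = shapes
    for _ in range(iters):
        # procrustes() standardizes the reference and returns the second
        # argument optimally translated, scaled and rotated onto it.
        aligned = [procrustes(mean, s)[1] for s in shapes]
        mean = np.mean(aligned, axis=0)
    return mean, aligned

# Toy example: the same square shape, once rotated by 30 degrees.
theta = np.deg2rad(30.0)
rot = np.array([[np.cos(theta), -np.sin(theta)],
                [np.sin(theta),  np.cos(theta)]])
square = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])
mean, aligned = generalized_procrustes([square, square @ rot.T])
```

After convergence the aligned shapes differ only by the residual (non-rigid) shape variation, which is what the Thin Plate Spline template then models.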

    High-Resolution Quantitative Cone-Beam Computed Tomography: Systems, Modeling, and Analysis for Improved Musculoskeletal Imaging

    This dissertation applies accurate models of imaging physics, new high-resolution imaging hardware, and novel image analysis techniques to benefit quantitative applications of x-ray CT in the in vivo assessment of bone health. We pursue three Aims: 1. Characterization of macroscopic joint space morphology, 2. Estimation of bone mineral density (BMD), and 3. Visualization of bone microstructure. This work contributes to the development of extremity cone-beam CT (CBCT), a compact system for musculoskeletal (MSK) imaging. Joint space morphology is characterized by a model which draws an analogy between the bones of a joint and the plates of a capacitor. Virtual electric field lines connecting the two surfaces of the joint are computed as a surrogate measure of joint space width, creating a rich, non-degenerate, adaptive map of the joint space. We showed that, using such maps, a classifier can outperform radiologist measurements at identifying osteoarthritic patients in a set of CBCT scans. Quantitative BMD accuracy is achieved by combining a polyenergetic model-based iterative reconstruction (MBIR) method with fast Monte Carlo (MC) scatter estimation. On a benchtop system emulating extremity CBCT, we validated BMD accuracy and reproducibility via a series of phantom studies involving inserts of known mineral concentrations and a cadaver specimen. High-resolution imaging is achieved using a complementary metal-oxide-semiconductor (CMOS)-based x-ray detector featuring small pixel size and low readout noise. A cascaded systems model was used to perform task-based optimization to determine the optimal detector scintillator thickness under nominal extremity CBCT imaging conditions. We validated the performance of a prototype scanner incorporating this optimization result. Strong correlation was found between bone microstructure metrics obtained from the prototype scanner and the µCT gold standard for trabecular bone samples from a cadaveric ulna.
Additionally, we devised a multiresolution reconstruction scheme that allows fast MBIR to be applied to large, high-resolution projection data. To model the full scanned volume in the reconstruction forward model, regions outside a finely sampled region of interest (ROI) are downsampled, reducing runtime and memory requirements while maintaining image quality in the ROI.
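Quantitative BMD calibration against inserts of known mineral concentration comes down to fitting a mapping from reconstructed voxel values to density. A minimal linear-fit sketch (the concentrations and voxel values below are synthetic, not from the dissertation's phantom studies):

```python
import numpy as np

# Synthetic calibration data (illustrative only): known insert
# concentrations in mg/mL and mean reconstructed voxel values.
known_conc = np.array([0.0, 50.0, 100.0, 200.0, 400.0])
mean_voxel = np.array([10.0, 60.0, 110.0, 210.0, 410.0])

# Least-squares linear calibration: voxel value -> mineral density.
slope, intercept = np.polyfit(mean_voxel, known_conc, 1)

def voxel_to_bmd(v):
    """Convert a reconstructed voxel value to estimated BMD (mg/mL)."""
    return slope * v + intercept
```

A polyenergetic MBIR pipeline aims to make this mapping stable across object sizes and scatter conditions, which is what the phantom reproducibility studies verify.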

    3D Shape Reconstruction of Knee Bones from Low Radiation X-ray Images Using Deep Learning

    Understanding the bone kinematics of the human knee during dynamic motion is necessary to evaluate pathological conditions and to design knee prostheses, orthoses and surgical treatments such as knee arthroplasty. Knee bone kinematics is also essential for assessing the biofidelity of computational models. Kinematics of the human knee has been reported in the literature using either in vitro or in vivo methodologies. In vivo methodology is widely preferred due to its biomechanical accuracy. However, it is challenging to obtain kinematic data in vivo due to limitations in existing methods. One such method is X-ray fluoroscopy imaging, which allows for the non-invasive quantification of bone kinematics. Within fluoroscopy imaging, owing to its procedural simplicity and low radiation exposure, single-plane fluoroscopy (SF) is the preferred tool for studying the in vivo kinematics of the knee joint. Evaluation of three-dimensional (3D) kinematics from SF imagery is possible only if prior knowledge of the shape of the knee bones is available. The standard technique for acquiring the knee shape is to segment either Magnetic Resonance (MR) images, which are expensive to procure, or Computed Tomography (CT) images, which expose the subjects to a heavy dose of ionizing radiation. Additionally, both segmentation procedures are time-consuming and labour-intensive. An alternative, rarely used technique is to reconstruct the knee shape from the SF images themselves. It is less expensive than MR imaging, exposes the subjects to relatively lower radiation than CT imaging, and, since the kinematic study and the shape reconstruction can be carried out using the same device, could save a considerable amount of time for researchers and subjects.
However, due to low exposure levels, SF images are often characterized by a low signal-to-noise ratio, making it difficult to extract the information required to reconstruct the shape accurately. In comparison to conventional X-ray images, SF images are of lower quality and contain less detail. Additionally, existing methods for reconstructing the shape of the knee remain generally inconvenient, since they need a highly controlled setup: images must be captured with a calibrated device, care must be taken while positioning the subject's knee in the X-ray field to ensure image consistency, and user intervention and expert knowledge are required for 3D reconstruction. In an attempt to simplify the existing process, this thesis proposes a new methodology to reconstruct the 3D shape of the knee bones from multiple uncalibrated SF images using deep learning. During image acquisition, subjects in this approach can freely rotate their leg (in a fully extended, knee-locked position), resulting in several images captured in arbitrary poses. Relevant features are extracted from these images using a novel feature extraction technique before being fed to a custom-built Convolutional Neural Network (CNN). The network, without further optimization, directly outputs a meshed 3D surface model of the subject's knee joint. The whole procedure can be completed in a few minutes. The robust feature extraction technique can effectively extract relevant information from a range of image qualities. When tested on eight unseen sets of SF images with known true geometry, the network reconstructed knee shape models with a shape error (RMSE) of 1.91 ± 0.30 mm for the femur, 2.3 ± 0.36 mm for the tibia and 3.3 ± 0.53 mm for the patella. The error was calculated after rigidly aligning (scale, rotation and translation) each reconstructed shape model with the corresponding known true geometry (obtained through MRI segmentation).
Based on a previous study that examined the influence of reconstructed shape accuracy on the precision of the evaluation of tibiofemoral kinematics, the shape accuracy of the proposed methodology might be adequate to precisely track bone kinematics, although further investigation is required.
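The alignment with scale, rotation and translation before computing the RMSE can be implemented with an Umeyama-style similarity fit over corresponding vertices. A hedged sketch with synthetic points (not the thesis's actual evaluation code):

```python
import numpy as np

def similarity_align(src, dst):
    """Umeyama-style similarity alignment (scale, rotation, translation)
    of src points (n,3) onto dst points (n,3) with known correspondence."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    sc, dc = src - mu_s, dst - mu_d
    H = sc.T @ dc / len(src)                    # cross-covariance
    U, S, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))      # guard against reflection
    D = np.diag([1.0, 1.0, d])
    R = Vt.T @ D @ U.T
    var_src = (sc ** 2).sum() / len(src)
    scale = np.trace(np.diag(S) @ D) / var_src
    t = mu_d - scale * R @ mu_s
    return scale, R, t

def rmse_after_alignment(src, dst):
    """Shape error after optimally aligning src onto dst."""
    s, R, t = similarity_align(src, dst)
    aligned = (s * (R @ src.T)).T + t
    return float(np.sqrt(np.mean(np.sum((aligned - dst) ** 2, axis=1))))

# Toy check: dst is an exact similarity transform of src, so the residual
# RMSE should be essentially zero.
src = np.array([[0.0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1], [1, 1, 1]])
R0 = np.array([[0.0, -1, 0], [1, 0, 0], [0, 0, 1]])  # 90° about z
dst = 2.0 * (R0 @ src.T).T + np.array([1.0, 2.0, 3.0])
err = rmse_after_alignment(src, dst)
```

For meshes without vertex correspondence the same fit is typically wrapped in an iterative-closest-point loop.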

    Augmented Reality and Artificial Intelligence in Image-Guided and Robot-Assisted Interventions

    In minimally invasive orthopedic procedures, the surgeon places wires, screws, and surgical implants through the muscles and bony structures under image guidance. These interventions require alignment of the pre- and intra-operative patient data, the intra-operative scanner, the surgical instruments, and the patient. Suboptimal interaction with patient data and the challenge of mastering 3D anatomy from ill-posed 2D interventional images are central concerns in image-guided therapies. State-of-the-art approaches often support the surgeon with external navigation systems or ill-conditioned image-based registration methods, both of which have drawbacks. Augmented reality (AR) has been introduced into operating rooms over the last decade; however, in image-guided interventions it has often been considered merely a visualization device improving traditional workflows. Consequently, the technology has yet to gain the maturity it requires to redefine procedures, user interfaces, and interactions. This dissertation investigates the applications of AR, artificial intelligence, and robotics in interventional medicine. Our solutions were applied to a broad spectrum of problems and tasks, namely improving imaging and acquisition, image computing and analytics for registration and image understanding, and enhancing interventional visualization. The benefits of these approaches were also demonstrated in robot-assisted interventions. We revealed how exemplary workflows can be redefined via AR by taking full advantage of head-mounted displays that are fully co-registered with the imaging systems and the environment at all times. The proposed AR landscape is enabled by co-localizing the users and the imaging devices via the operating room environment and exploiting all involved frustums to move spatial information between the different bodies.
The system's awareness of the geometric and physical characteristics of X-ray imaging allows the exploration of different human-machine interfaces. We also leveraged the principles governing image formation and combined them with deep learning and RGBD sensing to fuse images and reconstruct interventional data. We hope that our holistic approaches towards improving the interface of surgery and enhancing the usability of interventional imaging not only augment the surgeon's capabilities but also improve the surgical team's experience in carrying out an effective intervention with reduced complications.
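The co-registration of displays, imaging devices and frustums rests on the pinhole model of image formation; a minimal projection sketch (illustrative intrinsics and pose, not the dissertation's actual calibration):

```python
import numpy as np

def project_points(K, R, t, X):
    """Project world points X (n,3) into the image of a pinhole camera
    with intrinsics K (3x3) and world-to-camera pose R (3x3), t (3,)."""
    Xc = (R @ X.T).T + t          # world -> camera coordinates
    x = (K @ Xc.T).T              # homogeneous pixel coordinates
    return x[:, :2] / x[:, 2:3]   # perspective divide

# Illustrative camera: focal length 1000 px, principal point (320, 240).
K = np.array([[1000.0, 0.0, 320.0],
              [0.0, 1000.0, 240.0],
              [0.0, 0.0, 1.0]])
R = np.eye(3)
t = np.zeros(3)
# A point on the optical axis projects to the principal point.
pix = project_points(K, R, t, np.array([[0.0, 0.0, 2.0]]))
```

Chaining such transforms between the tracked bodies (display, C-arm, patient) is what moves spatial information between the frustums described above.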

    Subject-specific musculoskeletal model of the lower limb in a lying and standing position

    Accurate estimation of joint loads requires subject-specific musculoskeletal models. Moreover, as the lines of action of the muscles are dictated by the soft tissues, which are in turn influenced by gravitational forces, we developed a method to build subject-specific models of the lower limb in a functional standing position. Bones and the skin envelope were obtained in a standing position, whereas muscles and a set of bony landmarks were obtained from conventional magnetic resonance images in a lying position. These muscles were merged with the subject-specific skeletal model using a nonlinear transformation, taking into account soft tissue movements and gravitational effects. Seven asymptomatic lower limbs were modelled using this method, and the results showed realistic deformations. Comparing the subject-specific skeletal model to a scaled reference model revealed differences in muscle length of up to 4% and in moment arm for the adductor muscles of up to 30%. These preliminary findings highlight the importance of subject-specific modelling in a functional position.
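Moment arms like those compared above are commonly estimated with the tendon-excursion method, where the moment arm equals −dL/dθ. A hedged sketch with a synthetic muscle-length function (illustrative only, not the study's modelling pipeline):

```python
def moment_arm(muscle_length, theta, dtheta=1e-5):
    """Tendon-excursion method: moment arm = -dL/dtheta, estimated by a
    central finite difference. muscle_length maps joint angle (rad) to
    muscle-tendon length (m)."""
    return -(muscle_length(theta + dtheta)
             - muscle_length(theta - dtheta)) / (2.0 * dtheta)

# Synthetic muscle with a constant 30 mm moment arm (illustrative only).
length = lambda th: 0.40 - 0.03 * th
r = moment_arm(length, 0.5)  # expected ~0.03 m
```

In a subject-specific model, `muscle_length` would be evaluated by recomputing the muscle path over the deformed geometry at each joint angle.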

    Mobile Wound Assessment and 3D Modeling from a Single Image

    The prevalence of camera-enabled mobile phones has made mobile wound assessment a viable treatment option for millions of previously difficult-to-reach patients. We have designed a complete mobile wound assessment platform to ameliorate the many challenges related to chronic wound care. Chronic wounds and infections are the most severe, costly and fatal types of wounds, placing them at the center of mobile wound assessment. Wound physicians assess thousands of single-view wound images from all over the world, and it may be difficult to determine the location of the wound on the body, for example if the image is taken at close range. In our solution, end-users capture an image of the wound with their mobile camera. The wound image is segmented and classified using modern convolutional neural networks, and is stored securely in the cloud for remote tracking. We use an interactive, semi-automated approach to allow users to specify the location of the wound on the body. To accomplish this we have created, to the best of our knowledge, the first 3D human surface anatomy labeling system, based on the current NYU and Anatomy Mapper labeling systems. To interactively view wounds in 3D, we present an efficient projective texture mapping algorithm for texturing wounds onto a 3D human anatomy model. In so doing, we demonstrate an approach to 3D wound reconstruction that works even for a single wound image.
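At its core, projective texture mapping assigns each mesh vertex a texture coordinate by projecting it through the camera matrix of the wound photo. A minimal sketch (illustrative camera and vertex data, not the platform's actual implementation):

```python
import numpy as np

def texture_uvs(P, vertices, width, height):
    """Projective texture mapping: project mesh vertices (n,3) through a
    3x4 camera matrix P and normalize pixel coordinates to [0,1] UVs."""
    homo = np.hstack([vertices, np.ones((len(vertices), 1))])
    x = (P @ homo.T).T
    px = x[:, :2] / x[:, 2:3]              # perspective divide -> pixels
    return px / np.array([width, height])  # pixels -> UVs in [0,1]

# Illustrative camera: 100x100-pixel image, principal point at its center.
K = np.array([[100.0, 0.0, 50.0],
              [0.0, 100.0, 50.0],
              [0.0, 0.0, 1.0]])
P = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
uv = texture_uvs(P, np.array([[0.0, 0.0, 1.0]]), 100, 100)
```

A renderer then samples the wound photo at these UVs, typically after culling back-facing or occluded vertices so the texture lands only on surfaces visible to the camera.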