82 research outputs found

    Statistical atlas based registration and planning for ablating bone tumors in minimally invasive interventions

    Bone tumor ablation is a viable minimally invasive alternative to surgical resection. In this paper, two key challenges in computer-assisted bone tumor ablation are addressed: 1) establishing the spatial transformation of the patient's tumor with respect to a global map of the patient using a minimum number of intra-operative images, and 2) optimal treatment planning for large tumors. A statistical atlas is employed to construct the global reference map. The atlas is deformably registered to a pair of intra-operative fluoroscopy images to construct a patient-specific model, reducing radiation exposure to sensitive patients such as pregnant women and infants. The optimal treatment planning system incorporates clinical constraints on ablations and trajectories in a multi-objective optimization, obtaining optimal trajectory planning and ablation coverage using integer programming. The proposed system is presented and validated by experiments. © 2012 IEEE.
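The ablation-coverage part of the planning problem described above is, at its core, a set-cover problem: choose a minimal set of ablation zones that together cover every tumor voxel. The abstract solves this with integer programming; the following is only an illustrative sketch of that coverage idea using the classic greedy approximation, with toy geometry (spherical ablation zones, unit-grid voxels) that is entirely assumed, not taken from the paper.

```python
# Hypothetical sketch of ablation-coverage planning as set cover.
# Geometry, names, and the greedy strategy are illustrative assumptions;
# the paper itself solves the exact problem with integer programming.
from itertools import product

def covered(center, radius, voxel):
    """True if a voxel lies inside a spherical ablation zone."""
    return sum((c - v) ** 2 for c, v in zip(center, voxel)) <= radius ** 2

def greedy_ablation_plan(tumor_voxels, candidate_centers, radius):
    """Greedy approximation to the minimum-cover problem: repeatedly pick
    the candidate ablation that covers the most still-uncovered voxels."""
    uncovered = set(tumor_voxels)
    plan = []
    while uncovered:
        best = max(candidate_centers,
                   key=lambda c: sum(covered(c, radius, v) for v in uncovered))
        newly = {v for v in uncovered if covered(best, radius, v)}
        if not newly:  # no remaining candidate reaches the leftover voxels
            break
        plan.append(best)
        uncovered -= newly
    return plan, uncovered

# toy example: a flat 4x4 slab of tumor voxels, ablation radius 1.6
voxels = [(x, y, 0) for x, y in product(range(4), range(4))]
plan, left = greedy_ablation_plan(voxels, voxels, 1.6)
```

An exact integer-programming formulation would instead minimize the number of selected ablations subject to one covering constraint per voxel; the greedy loop above only approximates that optimum but shows the same coverage structure.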

    2D-3D registration of CT vertebra volume to fluoroscopy projection: A calibration model assessment (doi:10.1155/2010/806094)

    This study extends previous research on intervertebral motion registration by means of 2D dynamic fluoroscopy to obtain a more comprehensive 3D description of vertebral kinematics. The problem of estimating the 3D rigid pose of a CT volume of a vertebra from its 2D X-ray fluoroscopy projection is addressed. 2D-3D registration is obtained by maximising a measure of similarity between Digitally Reconstructed Radiographs (obtained from the CT volume) and the real fluoroscopic projection. X-ray energy correction was performed. To assess the method, a calibration model was realised: a dry sheep vertebra was rigidly fixed to a frame of reference including metallic markers. Accurate measurement of 3D orientation was obtained via single-camera calibration of the markers and held as the true 3D vertebra position; then, the vertebra's 3D pose was estimated and the results compared. Error analysis revealed accuracy of the order of 0.1 degree for the rotation angles, of about 1 mm for displacements parallel to the fluoroscopic plane, and of the order of 10 mm for the orthogonal displacement.
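Intensity-based 2D-3D registration of the kind described above maximizes a similarity measure between a DRR rendered from the CT volume and the real fluoroscopic image. The abstract does not name the measure used, so the sketch below uses normalized cross-correlation (NCC), one common choice, purely as an assumed illustration.

```python
# Hedged sketch of a similarity measure for intensity-based 2D-3D
# registration: normalized cross-correlation (NCC) between a digitally
# reconstructed radiograph (DRR) and the real fluoroscopic projection.
# The study's actual measure may differ; NCC is an assumed example.
import numpy as np

def ncc(drr, fluoro):
    """Normalized cross-correlation in [-1, 1]; 1 means a perfect
    (affine-intensity) match between the two images."""
    a = drr - drr.mean()
    b = fluoro - fluoro.mean()
    return float((a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b)))

# Registration would then maximize this score over the six rigid pose
# parameters, conceptually:
#   pose* = argmax_pose ncc(render_drr(ct_volume, pose), fluoro_image)
# where render_drr is a hypothetical DRR renderer, not defined here.
```

NCC is invariant to linear intensity changes, which is why it pairs well with the X-ray energy correction the study mentions.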

    Automatic image analysis of C-arm Computed Tomography images for ankle joint surgeries

    Open reduction and internal fixation is a standard procedure in ankle surgery for treating a fractured fibula. Since fibula fractures are often accompanied by an injury of the syndesmosis complex, it is essential to restore the correct pose of the fibula relative to the adjoining tibia for the ligaments to heal. Otherwise, the patient may experience instability of the ankle, leading to arthritis, ankle pain and ultimately revision surgery. Incorrect positioning, referred to as malreduction of the fibula, is assumed to be one of the major causes of unsuccessful ankle surgery. 3D C-arm imaging is the current standard procedure for revealing malreduction of fractures in the operating room. However, intra-operative visual inspection of the reduction result is complicated by high inter-individual variation of ankle anatomy and relies largely on the subjective experience of the surgeon. A contralateral side comparison with the patient's uninjured ankle is recommended but has not been integrated into clinical routine due to the high level of radiation exposure it incurs. This thesis presents the first approach towards a computer-assisted intra-operative contralateral side comparison of the ankle joint. The focus of this thesis was the design, development and validation of a software-based prototype for a fully automatic intra-operative assistance system for orthopedic surgeons. The implementation does not require an additional 3D C-arm scan of the uninjured ankle, thus reducing time consumption and cumulative radiation dose. A 3D statistical shape model (SSM) is used to reconstruct a 3D surface model of the uninjured ankle from three 2D fluoroscopic projections. To this end, a 3D SSM segmentation is performed on the 3D image of the injured ankle to gain prior knowledge of the ankle. A 3D convolutional neural network (CNN) based initialization method was developed and its outcome incorporated into the SSM adaption step.
    Segmentation quality was shown to be improved in terms of accuracy and robustness compared to the pure intensity-based SSM. This overcomes the limitations of previously proposed methods, namely inaccuracy due to metal artifacts and the lack of device-to-patient orientation of the C-arm. A 2D CNN is employed to extract semantic knowledge from all fluoroscopic projection images. This step of the pipeline both creates features for the subsequent reconstruction and pre-initializes the 3D SSM without user interaction. A 2D-3D multi-bone reconstruction method was developed which uses distance maps of the 2D features for fast and accurate correspondence optimization and SSM adaption. This is the central and most crucial component of the workflow: it is the first time a bone reconstruction method has been applied to the complex ankle joint, and the first reconstruction method to use CNN-based segmentations as features. The reconstructed 3D SSM of the uninjured ankle can be back-projected and visualized in a workflow-oriented manner to provide a clear view of the region of interest, which is essential for the evaluation of the reduction result. The surgeon can thus directly compare an overlay of the contralateral ankle with the injured ankle. The developed methods were evaluated individually using data sets acquired during a cadaver study and representative clinical data acquired during fibular reduction. A hierarchical evaluation was designed to assess the inaccuracies of the system at different levels and to identify major sources of error. The overall evaluation, performed on eleven challenging clinical datasets acquired for manual contralateral side comparison, showed that the system is capable of accurately reconstructing 3D surface models of the uninjured ankle using only three projection images.
    A mean Hausdorff distance of 1.72 mm was measured when comparing the reconstruction result to the ground truth segmentation, nearly achieving the required clinical accuracy of 1-2 mm. The overall error of the pipeline was mainly attributed to inaccuracies in the 2D CNN segmentation. The consistency of these results requires further validation on a larger dataset. The workflow proposed in this thesis establishes the first approach to enable automatic computer-assisted contralateral side comparison in ankle surgery. The feasibility of the proposed approach was demonstrated on a limited number of clinical cases and has already yielded good results. The next important step is to alleviate the identified bottlenecks by providing more training data in order to further improve accuracy. In conclusion, the new approach presented offers the chance to guide the surgeon during the reduction process, improve the surgical outcome while avoiding additional radiation exposure, and reduce the number of revision surgeries in the long term.
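The Hausdorff distance used above to compare the reconstructed surface with the ground-truth segmentation can be sketched on point clouds as follows. This is not the thesis code; the classical (max) Hausdorff distance and one common definition of its mean variant are shown as assumed illustrations.

```python
# Illustrative sketch (not the thesis implementation) of surface comparison
# metrics on point sets: the classical symmetric Hausdorff distance and a
# common "mean Hausdorff" variant that averages nearest-neighbor distances.
import numpy as np
from scipy.spatial import cKDTree

def hausdorff(points_a, points_b):
    """Symmetric (max) Hausdorff distance between point sets (N,3), (M,3)."""
    d_ab = cKDTree(points_b).query(points_a)[0]  # each a -> nearest b
    d_ba = cKDTree(points_a).query(points_b)[0]  # each b -> nearest a
    return max(d_ab.max(), d_ba.max())

def mean_hausdorff(points_a, points_b):
    """Averages the two directed mean surface distances; other definitions
    of the 'mean' variant exist, so this choice is an assumption."""
    d_ab = cKDTree(points_b).query(points_a)[0]
    d_ba = cKDTree(points_a).query(points_b)[0]
    return 0.5 * (d_ab.mean() + d_ba.mean())
```

In practice the point sets would be the vertices (or densely sampled points) of the reconstructed and ground-truth meshes.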

    3D approximation of scapula bone shape from 2D X-ray images using landmark-constrained statistical shape model fitting

    Two-dimensional X-ray imaging remains the dominant imaging modality in low-resource countries despite the existence of three-dimensional (3D) imaging modalities, because few hospitals in these countries can afford 3D imaging systems, whose acquisition and operating costs are higher. However, 3D images are desirable in a range of clinical applications, for example surgical planning. The aim of this research was to develop a tool for 3D approximation of scapula bone shape from 2D X-ray images using landmark-constrained statistical shape model fitting. First, X-ray stereophotogrammetry was used to reconstruct the 3D coordinates of points located on 2D X-ray images of the scapula, acquired from two perspectives. A suitable calibration frame was used to map the image coordinates to their corresponding 3D real-world coordinates. The 3D point localization yielded average errors of (0.14, 0.07, 0.04) mm in the X, Y and Z coordinates respectively, and an absolute reconstruction error of 0.19 mm. The second phase assessed the reproducibility of the scapula landmarks reported by Ohl et al. (2010) and Borotikar et al. (2015). Only three (the inferior angle, the acromion and the coracoid process) of the eight reproducible landmarks considered were selected, as these were identifiable from the two different perspectives required for X-ray stereophotogrammetry in this project. For the last phase, an approximation of a scapula was produced with the aid of a statistical shape model (SSM) built from a training dataset of 84 CT scapulae. This involved constraining the SSM to the 3D reconstructed coordinates of the selected reproducible landmarks from 2D X-ray images. Comparison of the approximate model with a CT-derived ground truth 3D segmented volume resulted in surface-to-surface average distances of 4.28 mm and 3.20 mm, using three and sixteen landmarks respectively.
    Hence, increasing the number of landmarks produces a posterior model that makes better predictions of patient-specific reconstructions. An average Euclidean distance of 1.35 mm was obtained between the three selected landmarks on the approximation and the corresponding landmarks on the CT image. Conversely, a Euclidean distance of 5.99 mm was obtained between the three selected landmarks on the original SSM and the corresponding landmarks on the CT image. These Euclidean distances confirm that the posterior model moves closer to the CT image, reducing the search space for a more exact patient-specific 3D reconstruction by other fitting algorithms.
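Constraining an SSM to a handful of measured landmarks, as described above, amounts to solving for the shape coefficients that best explain the landmark positions. The sketch below is an assumed, simplified version of that step (PCA shape model, regularized least squares, pose already aligned); `mean_shape`, `modes`, and `landmark_idx` are hypothetical inputs standing in for a prebuilt model.

```python
# Hypothetical sketch of landmark-constrained SSM fitting: solve for shape
# coefficients b so that the model's landmark vertices match measured 3D
# landmarks in a regularized least-squares sense. Assumes rigid alignment
# is already done; all variable names are illustrative, not from the thesis.
import numpy as np

def fit_ssm_to_landmarks(mean_shape, modes, landmark_idx, landmarks, reg=1e-2):
    """mean_shape: (3N,) stacked xyz of the mean mesh; modes: (3N, K) PCA
    modes; landmark_idx: vertex indices of the landmarks; landmarks: (L, 3)
    measured positions. Returns the posterior shape as an (N, 3) array."""
    # rows of the stacked vectors that correspond to the landmark vertices
    rows = np.ravel([[3 * i, 3 * i + 1, 3 * i + 2] for i in landmark_idx])
    A = modes[rows]                             # (3L, K) restricted modes
    y = landmarks.ravel() - mean_shape[rows]    # landmark residuals
    # Tikhonov regularization keeps coefficients statistically plausible
    b = np.linalg.solve(A.T @ A + reg * np.eye(A.shape[1]), A.T @ y)
    return (mean_shape + modes @ b).reshape(-1, 3)
```

With only three landmarks the system is heavily underdetermined, which is why the regularizer (playing the role of the shape prior) matters and why, as reported above, adding landmarks tightens the posterior.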

    Reconstruction of Patient-Specific Bone Models from X-Ray Radiography

    The availability of a patient-specific bone model has become an increasingly invaluable addition to orthopedic case evaluation and planning [1]. Utilized within a wide range of specialized visualization and analysis tools, such models provide an unprecedented wealth of bone shape information previously unattainable using traditional radiographic imaging [2]. In this work, a novel bone reconstruction method from two or more X-ray images is described. This method is superior to previous attempts in terms of accuracy and repeatability. The new technique accurately models the radiological scene in a way that eliminates the need for expensive multi-planar radiographic imaging systems. It is also flexible enough to allow for both short and long film imaging using standard radiological protocols, which makes the technology easy to deploy in standard clinical setups.

    A Survey on Deep Learning in Medical Image Registration: New Technologies, Uncertainty, Evaluation Metrics, and Beyond

    Over the past decade, deep learning technologies have greatly advanced the field of medical image registration. The initial developments, such as ResNet-based and U-Net-based networks, laid the groundwork for deep learning-driven image registration. Subsequent progress has been made in various aspects of deep learning-based registration, including similarity measures, deformation regularizations, and uncertainty estimation. These advancements have not only enriched the field of deformable image registration but have also facilitated its application in a wide range of tasks, including atlas construction, multi-atlas segmentation, motion estimation, and 2D-3D registration. In this paper, we present a comprehensive overview of the most recent advancements in deep learning-based image registration. We begin with a concise introduction to the core concepts of deep learning-based image registration. Then, we delve into innovative network architectures, loss functions specific to registration, and methods for estimating registration uncertainty. Additionally, this paper explores appropriate evaluation metrics for assessing the performance of deep learning models in registration tasks. Finally, we highlight the practical applications of these novel techniques in medical imaging and discuss the future prospects of deep learning-based image registration.

    Detection and 3D Localization of Surgical Instruments for Image-Guided Surgery

    Placement of surgical instrumentation in pelvic trauma surgery is challenged by complex anatomy and narrow bone corridors, relying on intraoperative X-ray fluoroscopy for visualization and guidance. The rapid workflow and cost constraints of orthopaedic trauma surgery have largely prohibited widespread adoption of 3D surgical navigation. This thesis reports the development and evaluation of a method to achieve 3D guidance via automatic detection and localization of surgical instruments (specifically, Kirschner wires [K-wires]) in fluoroscopic images acquired within routine workflow. The detection method uses a neural network (Mask R-CNN) for segmentation and keypoint detection of K-wires in fluoroscopy, and correspondence of keypoints among multiple images is established by 3D backprojection and a rank-ordering of ray intersections. The accuracy of 3D K-wire localization was evaluated in a laboratory cadaver study as well as patient images drawn from an IRB-approved clinical study. The detection network successfully generalized from simulated training and validation images to cadaver and clinical images, achieving 87% recall and 98% precision. The geometric accuracy of K-wire tip location and direction in 2D fluoroscopy was 1.9 ± 1.6 mm and 1.8° ± 1.3°, respectively. Simulation studies demonstrated a corresponding mean error of 1.1 mm in 3D tip location and 2.3° in 3D direction. Cadaver and clinical studies demonstrated the feasibility of the approach in real data, although accuracy was reduced to 1.7 ± 0.7 mm in 3D tip location and 6° ± 2° in 3D direction. Future studies aim to improve performance by increasing the volume and variety of images used in neural network training, particularly with respect to low-dose fluoroscopy (high noise levels) and complex fluoroscopic scenes with various types of surgical instrumentation.
    Because the approach involves fast runtime and uses equipment (a mobile C-arm) and fluoroscopic images that are common in standard workflow, it may be suitable for broad utilization in orthopaedic trauma surgery.
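The 3D backprojection step described above ultimately reduces to intersecting rays cast from each C-arm view through a detected keypoint. Since two noisy rays rarely intersect exactly, a standard choice (assumed here, not taken from the thesis) is the midpoint of their common perpendicular:

```python
# Hedged sketch of triangulating a K-wire tip from two fluoroscopic views:
# given each view's ray through the detected keypoint, return the midpoint
# of the common perpendicular (the least-squares "intersection").
# Ray origins/directions would come from C-arm calibration, assumed given.
import numpy as np

def triangulate_rays(o1, d1, o2, d2):
    """Closest-approach midpoint of lines o1 + t*d1 and o2 + s*d2."""
    d1 = d1 / np.linalg.norm(d1)
    d2 = d2 / np.linalg.norm(d2)
    w = o1 - o2
    a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
    d, e = d1 @ w, d2 @ w
    denom = a * c - b * b              # zero only for parallel rays
    t1 = (b * e - c * d) / denom
    t2 = (a * e - b * d) / denom
    return 0.5 * ((o1 + t1 * d1) + (o2 + t2 * d2))
```

The residual distance between the two closest points gives a natural score for the rank-ordering of candidate keypoint correspondences mentioned above: true matches yield near-zero residuals, mismatches yield large ones.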