
    Interventional 2D/3D Registration with Contextual Pose Update

    Traditional intensity-based 2D/3D registration requires near-perfect initialization for image similarity metrics to yield meaningful gradient updates of the X-ray pose. Such metrics depend on image appearance rather than content and therefore fail to reveal large pose offsets that substantially alter the appearance of the same structure. We complement traditional similarity metrics with a convolutional neural network (CNN)-based similarity function that captures large-range pose relations by extracting both local and contextual information, proposing meaningful X-ray pose updates without the need for accurate initialization. Our CNN accepts a target X-ray image and a digitally reconstructed radiograph (DRR) at the current pose estimate as input and iteratively outputs pose updates on the Riemannian manifold. It integrates seamlessly with conventional image-based registration frameworks: long-range relations are captured primarily by our CNN-based method, while short-range offsets can be recovered accurately with an image similarity-based method. On both synthetic and real X-ray images of the pelvis, we demonstrate that the proposed method successfully recovers large rotational and translational offsets, irrespective of initialization.
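
    The iterative update described above admits a compact sketch. The following is a minimal illustration, not the authors' implementation: it assumes a hypothetical DRR renderer (render_drr) and a trained network (cnn) that regresses a 6-vector in se(3), and it composes each predicted update onto the pose via the exponential map.

        import numpy as np
        from scipy.spatial.transform import Rotation

        def se3_exp(xi):
            # Map a 6-vector (axis-angle rotation, translation) to a 4x4 rigid
            # pose. Rotation via the rotation-vector exponential; translation
            # applied directly, a common simplification of the SE(3) exp map.
            T = np.eye(4)
            T[:3, :3] = Rotation.from_rotvec(xi[:3]).as_matrix()
            T[:3, 3] = xi[3:]
            return T

        def contextual_register(target_xray, volume, pose, cnn, render_drr,
                                n_iters=50):
            # cnn and render_drr are hypothetical stand-ins for the trained
            # network and DRR renderer described in the abstract.
            for _ in range(n_iters):
                drr = render_drr(volume, pose)   # DRR at the current estimate
                xi = cnn(target_xray, drr)       # predicted se(3) pose update
                pose = se3_exp(xi) @ pose        # compose update on the manifold
                if np.linalg.norm(xi) < 1e-4:    # updates negligible: hand off
                    break                        # to intensity-based refinement
            return pose

    In the combined framework the abstract describes, the loop would terminate once predicted updates become small, at which point a conventional similarity-based optimizer refines the remaining short-range offset.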

    EPOS 34th Annual Meeting


    Fluoroscopic Navigation for Robot-Assisted Orthopedic Surgery

    Robot-assisted orthopedic surgery has gained increasing attention due to its improved accuracy and stability in minimally invasive interventions compared to a surgeon's manual operation. An effective navigation system, which estimates the intra-operative tool-to-tissue pose relationship to guide the robotic surgical device, is critical. However, most existing navigation systems close the calibration loop with fiducial markers, such as bone pins, which require a clear line of sight and are not ideal for patients. This dissertation presents fiducial-free, fluoroscopic image-based navigation pipelines for three robot-assisted orthopedic applications: femoroplasty, core decompression of the hip, and transforaminal lumbar epidural injections. We propose custom-designed intensity-based 2D/3D registration algorithms for pose estimation of bone anatomies, including the femur and spine, as well as of a rigid surgical tool and a flexible continuum manipulator. We performed system calibration and integration into a surgical robotic platform, and we validated the navigation system's performance in comprehensive simulation and ex vivo cadaveric experiments. Our results suggest the feasibility of applying the proposed navigation methods to robot-assisted orthopedic applications. We also investigated machine learning approaches that can benefit medical image analysis, automate navigation components, or address registration challenges. We present SyntheX, a synthetic X-ray data generation pipeline that enables large-scale machine learning model training; SyntheX was used to train feature detection models for the pelvis anatomy and the continuum manipulator, which in turn initialize the registration pipelines. Last but not least, we propose a projective spatial transformer module that learns a convex shape similarity function and extends the registration capture range. We believe that our image-based navigation solutions can benefit and inspire related orthopedic robot-assisted system designs and eventually be used in operating rooms to improve patient outcomes.
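
    As a point of reference, the intensity-based core of such a pipeline can be sketched as a derivative-free search over six pose parameters that maximizes a similarity metric between the fluoroscopic image and a DRR. The sketch below uses normalized cross-correlation and a hypothetical render_drr; the dissertation's custom metrics and solvers are not reproduced here.

        import numpy as np
        from scipy.optimize import minimize

        def ncc(a, b):
            # Normalized cross-correlation between two images.
            a = (a - a.mean()) / (a.std() + 1e-8)
            b = (b - b.mean()) / (b.std() + 1e-8)
            return float((a * b).mean())

        def register_intensity(xray, volume, pose0, render_drr):
            # render_drr is a hypothetical renderer taking six pose parameters
            # (three rotations, three translations) and returning a DRR image.
            def cost(params):
                return -ncc(xray, render_drr(volume, params))  # maximize NCC
            res = minimize(cost, pose0, method="Powell")  # derivative-free
            return res.x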

    Advanced Motion Models for Rigid and Deformable Registration in Image-Guided Interventions

    Image-guided surgery (IGS) has been a major area of interest in recent decades that continues to transform surgical interventions and enable safer, less invasive procedures. In the preoperative context, diagnostic imaging, including computed tomography (CT) and magnetic resonance (MR) imaging, offers a basis for surgical planning (e.g., definition of the target, adjacent anatomy, and the surgical path or trajectory to the target). At the intraoperative stage, such preoperative images and the associated planning information are registered to intraoperative coordinates via a navigation system to enable visualization of (tracked) instrumentation relative to preoperative images. A major limitation of this approach is that motion during surgery, whether rigid motion of bones manipulated during orthopaedic surgery or soft-tissue deformation of the brain in neurosurgery, is not captured, diminishing the accuracy of navigation systems. This dissertation seeks to use intraoperative images (e.g., x-ray fluoroscopy and cone-beam CT) to provide more up-to-date anatomical context that properly reflects the state of the patient during the intervention and thereby improve the performance of IGS. Advanced motion models for inter-modality image registration are developed to improve the accuracy of both preoperative planning and intraoperative guidance for applications in orthopaedic pelvic trauma surgery and minimally invasive intracranial neurosurgery. Image registration algorithms are developed with increasing complexity of the motion that can be accommodated (single-body rigid, multi-body rigid, and deformable) and increasing complexity of the registration models (statistical models, physics-based models, and deep learning-based models). For orthopaedic pelvic trauma surgery, the dissertation encompasses: (i) a series of statistical models of the shape and pose variations of one or more pelvic bones, together with an atlas of trajectory annotations; (ii) frameworks for automatic segmentation via registration of the statistical models to preoperative CT and for planning of fixation trajectories and dislocation/fracture reduction; and (iii) 3D-2D guidance using intraoperative fluoroscopy. For intracranial neurosurgery, the dissertation includes three inter-modality deformable registration methods using physics-based Demons and deep learning models for CT-guided and CBCT-guided procedures.
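
    The statistical models in (i) follow the standard point-distribution-model recipe: principal component analysis of aligned, vertex-corresponded training shapes yields a mean shape and modes of variation. A minimal sketch under that assumption (illustrative only, not the dissertation's code):

        import numpy as np

        def build_ssm(shapes):
            # shapes: (M, 3N) array of M aligned, vertex-corresponded shapes.
            mean = shapes.mean(axis=0)
            _, s, vt = np.linalg.svd(shapes - mean, full_matrices=False)
            variances = (s ** 2) / (len(shapes) - 1)  # per-mode variance
            return mean, vt.T, variances              # mean, modes (3N, K), variances

        def ssm_instance(mean, modes, weights):
            # A new shape is the mean plus a weighted sum of the modes;
            # weights are commonly bounded by +/- 3 standard deviations.
            return mean + modes @ weights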

    Augmented Reality and Artificial Intelligence in Image-Guided and Robot-Assisted Interventions

    In minimally invasive orthopedic procedures, the surgeon places wires, screws, and surgical implants through the muscles and bony structures under image guidance. These interventions require alignment of the pre- and intra-operative patient data, the intra-operative scanner, surgical instruments, and the patient. Suboptimal interaction with patient data and the difficulty of mastering 3D anatomy from ill-posed 2D interventional images are central concerns in image-guided therapies. State-of-the-art approaches often support the surgeon with external navigation systems or ill-conditioned image-based registration methods, both of which have drawbacks. Augmented reality (AR) has been introduced into operating rooms over the last decade; however, in image-guided interventions it has often been considered merely a visualization device layered onto traditional workflows. Consequently, the technology has yet to gain the maturity it needs to redefine procedures, user interfaces, and interactions. This dissertation investigates the applications of AR, artificial intelligence, and robotics in interventional medicine. Our solutions were applied to a broad spectrum of problems and tasks, namely improving imaging and acquisition, image computing and analytics for registration and image understanding, and enhancing interventional visualization. The benefits of these approaches were also demonstrated in robot-assisted interventions. We show how exemplary workflows are redefined via AR by taking full advantage of head-mounted displays that are co-registered with the imaging systems and the environment at all times. The proposed AR landscape is enabled by co-localizing the users and the imaging devices via the operating room environment and exploiting all involved frustums to move spatial information between different bodies. The system's awareness of the geometric and physical characteristics of X-ray imaging allows the exploration of different human-machine interfaces. We also leveraged the principles governing image formation, combining them with deep learning and RGBD sensing to fuse images and reconstruct interventional data. We hope that our holistic approaches to improving the interface of surgery and enhancing the usability of interventional imaging not only augment the surgeon's capabilities but also improve the surgical team's experience in carrying out effective interventions with reduced complications.
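
    Moving spatial information between the involved frustums reduces, in practice, to chaining rigid transforms through the shared environment frame. A minimal sketch with illustrative frame names (not the dissertation's interfaces):

        import numpy as np

        def transfer_point(p_device, T_env_device, T_env_hmd):
            # Express a 3D point from the imaging-device frame in the HMD frame
            # by chaining through the common environment (room) frame.
            # T_env_device, T_env_hmd: 4x4 poses from the co-localization step.
            p_env = T_env_device @ np.append(p_device, 1.0)  # device -> environment
            return (np.linalg.inv(T_env_hmd) @ p_env)[:3]    # environment -> HMD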

    Failure Analysis of Biometals

    Metallic biomaterials (biometals) are widely used for the manufacture of medical implants, ranging from load-bearing orthopaedic prostheses to dental and cardiovascular implants, because of their favourable combination of properties, including high strength, fracture toughness, biocompatibility, and wear and corrosion resistance. Owing to the significant consequences of implant material failure or degradation, in terms of both personal and financial burden, failure analysis of biometals has always been of paramount importance for understanding failure mechanisms and implementing suitable solutions with the aim of improving the longevity of implants in the body. Failure Analysis of Biometals presents some of the latest developments and findings in this area, covering a wide range of common metallic biomaterials (Ti alloys, CoCrMo alloys, Mg alloys, and NiTi alloys) and the failure mechanisms (corrosion, fatigue, fracture, and fretting wear) that commonly occur in medical implants and surgical instruments.