
    Augmented Reality and Artificial Intelligence in Image-Guided and Robot-Assisted Interventions

    In minimally invasive orthopedic procedures, the surgeon places wires, screws, and surgical implants through the muscles and bony structures under image guidance. These interventions require alignment of the pre- and intra-operative patient data, the intra-operative scanner, the surgical instruments, and the patient. Suboptimal interaction with patient data and the challenge of mastering 3D anatomy from ill-posed 2D interventional images are central concerns in image-guided therapies. State-of-the-art approaches often support the surgeon with external navigation systems or ill-conditioned image-based registration methods, both of which have drawbacks. Augmented reality (AR) has been introduced into operating rooms over the last decade; however, in image-guided interventions it has often been considered merely a visualization device that improves traditional workflows. Consequently, the technology has not gained the maturity it requires to redefine procedures, user interfaces, and interactions. This dissertation investigates the applications of AR, artificial intelligence, and robotics in interventional medicine. Our solutions were applied to a broad spectrum of problems, namely improving imaging and acquisition, image computing and analytics for registration and image understanding, and enhancing interventional visualization. The benefits of these approaches were also demonstrated in robot-assisted interventions. We showed how exemplary workflows can be redefined via AR by taking full advantage of head-mounted displays that remain co-registered with the imaging systems and the environment at all times. The proposed AR landscape is enabled by co-localizing the users and the imaging devices via the operating room environment and by exploiting all involved frustums to move spatial information between different bodies. The system's awareness of the geometric and physical characteristics of X-ray imaging allows the exploration of different human-machine interfaces. We also leveraged the principles governing image formation and combined them with deep learning and RGBD sensing to fuse images and reconstruct interventional data. We hope that our holistic approaches towards improving the interface of surgery and enhancing the usability of interventional imaging not only augment the surgeon's capabilities but also improve the surgical team's experience in carrying out an effective intervention with reduced complications.
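
    The X-ray imaging geometry referred to above can, to a first approximation, be modeled as a pinhole-style projective camera. As a minimal illustration of how spatial information is moved through such a frustum, the Python sketch below (with purely hypothetical intrinsics, pose, and names; not the dissertation's actual implementation) projects a 3D landmark given in a C-arm world frame onto the 2D X-ray image plane:

        import numpy as np

        # Hypothetical X-ray "camera" intrinsics: focal length in pixels
        # (related to the source-to-detector distance) and principal point.
        K = np.array([[1200.0,    0.0, 512.0],
                      [   0.0, 1200.0, 512.0],
                      [   0.0,    0.0,   1.0]])

        # Assumed extrinsics: rotation R and translation t (mm) mapping the
        # operating-room/world frame into the X-ray source frame.
        R = np.eye(3)
        t = np.array([0.0, 0.0, 600.0])

        def project_to_xray(p_world):
            """Project a 3D point (world frame, mm) onto the X-ray detector (pixels)."""
            p_src = R @ p_world + t      # world frame -> source frame
            u, v, w = K @ p_src          # perspective (pinhole-style) projection
            return np.array([u / w, v / w])

        # Example: a landmark 50 mm lateral and 20 mm cranial of the isocenter.
        print(project_to_xray(np.array([50.0, 20.0, 0.0])))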

    Pivot calibration concept for sensor attached mobile c-arms

    Medical augmented reality has been actively studied for decades and many methods have been proposed to revolutionize clinical procedures. One example is the camera augmented mobile C-arm (CAMC), which provides a real-time video augmentation onto medical images by rigidly mounting and calibrating a camera to the imaging device. Since then, several CAMC variations have been suggested by calibrating 2D/3D cameras, trackers, and more recently a Microsoft HoloLens to the C-arm. Different calibration methods have been applied to establish the correspondence between the rigidly attached sensor and the imaging device. A crucial step for these methods is the acquisition of X-ray images or 3D reconstruction volumes, therefore requiring the emission of ionizing radiation. In this work, we analyze the mechanical motion of the device and propose an alternative method to calibrate sensors to the C-arm without emitting any radiation. Given a sensor rigidly attached to the device, we introduce an extended pivot calibration concept to compute the fixed translation from the sensor to the C-arm rotation center. The fixed relationship between the sensor and the rotation center can be formulated as a pivot calibration problem with the pivot point moving on a locus. Our method exploits the rigid C-arm motion describing a torus surface to solve this calibration problem. We explain the geometry of the C-arm motion and its relation to the attached sensor, propose a calibration algorithm, and show its robustness against noise as well as trajectory and observed pose density by computer simulations. We discuss this geometric-based formulation and its potential extensions to different C-arm applications.
    Comment: Accepted for Image-Guided Procedures, Robotic Interventions, and Modeling 2020, Houston, TX, US
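
    For context, the classic pivot calibration that this work extends can be written as a linear least-squares problem: each tracked sensor pose (R_i, p_i) constrains one fixed offset t_s (in the sensor frame) and one fixed pivot point p_w (in the tracking frame) through R_i t_s + p_i = p_w. The sketch below implements only this standard fixed-pivot baseline, not the paper's torus-constrained extension; all variable names are hypothetical:

        import numpy as np

        def pivot_calibration(rotations, translations):
            """Classic fixed-pivot calibration.

            Solves R_i @ t_s + p_i = p_w in a least-squares sense for the fixed
            sensor offset t_s (sensor frame) and pivot point p_w (tracking frame).
            """
            A, b = [], []
            for R, p in zip(rotations, translations):
                A.append(np.hstack([R, -np.eye(3)]))   # one 3x6 block per pose
                b.append(-np.asarray(p, dtype=float))
            x, *_ = np.linalg.lstsq(np.vstack(A), np.concatenate(b), rcond=None)
            return x[:3], x[3:]                        # t_s, p_w

        # Synthetic sanity check with a known offset and pivot point.
        rng = np.random.default_rng(0)
        true_t_s = np.array([10.0, -5.0, 30.0])
        true_p_w = np.array([100.0, 200.0, 50.0])
        Rs, ps = [], []
        for _ in range(50):
            Q, _ = np.linalg.qr(rng.normal(size=(3, 3)))
            if np.linalg.det(Q) < 0:                   # keep it a proper rotation
                Q[:, 0] = -Q[:, 0]
            Rs.append(Q)
            ps.append(true_p_w - Q @ true_t_s)
        t_s, p_w = pivot_calibration(Rs, ps)

    Stacking one [R_i | -I] block per pose yields an over-determined linear system whose least-squares solution gives both unknowns at once; the paper's contribution is to relax the fixed pivot point to a point moving on the torus traced by the C-arm orbit.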

    Calibration of Optical See-Through Head Mounted Display with Mobile C-arm for Visualization of Cone Beam CT Data

    This work proposes the visualization of Cone-Beam Computed Tomography (CBCT) volumes in situ via the Microsoft HoloLens, an optical see-through head-mounted display (HMD). The data is visualized in the display as virtual objects, or holograms. Such holograms allow the CBCT volume to be overlaid on the patient at the site where the data was acquired, which is particularly useful in orthopedic surgeries, where the CBCT volume is acquired intra-operatively at the surgical site. A known visual marker is tracked by both the HoloLens and the RGBD (red, green, blue, depth) camera rigidly mounted and calibrated to the C-arm. By combining the two transformations, the calibration defines the spatial relationship between the HMD and the C-arm. The calibration process and visualization have been verified and enable advanced visualization of CBCT data for orthopedic surgery. The hardware capabilities of the HoloLens currently limit the quality of the volume rendering and the precision of the calibration. However, future improvements to each component have been identified, and with the ever-improving technology of HMDs, this limitation is only a temporary barrier to clinical usage.
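
    The spatial relationship mentioned above follows from chaining the two observations of the shared marker. A minimal sketch of that composition, assuming 4x4 homogeneous rigid transforms for the marker pose as seen by the HoloLens and by the C-arm-mounted RGBD camera (names are illustrative, not the authors' code), is:

        import numpy as np

        def invert_rigid(T):
            """Invert a 4x4 rigid (rotation + translation) transform."""
            R, t = T[:3, :3], T[:3, 3]
            Ti = np.eye(4)
            Ti[:3, :3] = R.T
            Ti[:3, 3] = -R.T @ t
            return Ti

        def hmd_to_carm(T_hmd_marker, T_carm_marker):
            """Compose the HMD-to-C-arm transform from the two marker observations:
            T_carm<-hmd = T_carm<-marker @ T_marker<-hmd."""
            return T_carm_marker @ invert_rigid(T_hmd_marker)

    With the HMD-to-C-arm transform known, content expressed in the C-arm frame (such as the acquired CBCT volume) can be mapped into the HMD frame via the inverse transform and rendered in situ.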

    The new 3D RGB-D Camera Augmented Mobile C-arm


    Simulation Approaches to X-ray C-Arm-based Interventions

    Mobile C-Arm systems have enabled interventional spine procedures, such as facet joint injections, to be performed minimally invasively under X-ray or fluoroscopy guidance. The downside to these procedures is the radiation exposure to which the patient and medical staff are subjected, which can vary greatly depending on the procedure as well as the skill and experience of the team. Standard training methods for these procedures involve the use of a physical C-Arm with real X-rays, training either on cadavers or via an apprenticeship-based program. Many guidance systems have been proposed in the literature that aim to reduce intraoperative radiation exposure by supplementing the X-ray images with digitally reconstructed radiographs (DRRs). These systems have shown promising results in the lab but have proven difficult to integrate into the clinical workflow due to costly equipment, safety protocols, and difficulties in maintaining patient registration. Another approach to reducing radiation exposure is to provide better hands-on training for C-Arm positioning through a pre-operative simulator. Such simulators have been proposed in the literature but still require access to a physical C-Arm or costly tracking equipment. With the goal of providing hands-on, accessible training for C-Arm positioning tasks, we have developed a miniature 3D-printed C-Arm simulator using accelerometer-based tracking. The system comprises a software application that interfaces with the accelerometers and provides a real-time DRR display based on the position of the C-Arm source. We conducted a user study, consisting of control and experimental groups, to evaluate the efficacy of the system as a training tool. The experimental group achieved significantly lower procedure times and higher positioning accuracy than the control group. The system was evaluated positively for its use in medical education via a 5-point Likert-scale questionnaire. C-Arm positioning is a highly visual, learning-intensive task because of the spatial mapping required from the 2D fluoroscopic image to the 3D C-Arm and patient. Because it requires limited physical interaction, this task is well suited to training in Virtual Reality (VR), eliminating the need for a physical C-Arm. To this end, we extended the system presented in chapter 2 to an entirely virtual approach. We implemented the system as a 3D Slicer module and conducted a pilot study for preliminary evaluation. The reception was overall positive, with users expressing enthusiasm towards training in VR while also highlighting limitations and potential areas of improvement of the system.
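
    As a rough illustration of the DRR rendering that drives such a simulator, the sketch below builds a crude parallel-beam DRR by rotating a CT-like attenuation volume to an assumed C-arm orientation (e.g., angles derived from the accelerometers) and integrating attenuation along the beam axis. The phantom, angles, and function names are hypothetical, and the projection model is deliberately simplified compared with a real cone-beam geometry:

        import numpy as np
        from scipy.ndimage import rotate

        def simple_drr(volume, orbital_deg=0.0, angular_deg=0.0):
            """Crude parallel-beam DRR: rotate the attenuation volume to the
            desired C-arm pose, then integrate attenuation along the beam axis."""
            v = rotate(volume, orbital_deg, axes=(0, 1), reshape=False, order=1)
            v = rotate(v, angular_deg, axes=(0, 2), reshape=False, order=1)
            line_integrals = v.sum(axis=0)          # Beer-Lambert exponent
            return 1.0 - np.exp(-line_integrals)    # map to detector intensity

        # Hypothetical phantom: a dense, vertebra-like block in soft tissue.
        phantom = np.full((64, 64, 64), 0.002)
        phantom[24:40, 24:40, 24:40] = 0.02
        drr_image = simple_drr(phantom, orbital_deg=15.0, angular_deg=5.0)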