8 research outputs found

    Pivot calibration concept for sensor attached mobile C-arms

    Medical augmented reality has been actively studied for decades, and many methods have been proposed to revolutionize clinical procedures. One example is the camera augmented mobile C-arm (CAMC), which provides a real-time video augmentation onto medical images by rigidly mounting and calibrating a camera to the imaging device. Since then, several CAMC variations have been suggested by calibrating 2D/3D cameras, trackers, and more recently a Microsoft HoloLens to the C-arm. Different calibration methods have been applied to establish the correspondence between the rigidly attached sensor and the imaging device. A crucial step for these methods is the acquisition of X-ray images or 3D reconstruction volumes, therefore requiring the emission of ionizing radiation. In this work, we analyze the mechanical motion of the device and propose an alternative method to calibrate sensors to the C-arm without emitting any radiation. Given that a sensor is rigidly attached to the device, we introduce an extended pivot calibration concept to compute the fixed translation from the sensor to the C-arm rotation center. The fixed relationship between the sensor and rotation center can be formulated as a pivot calibration problem with the pivot point moving on a locus. Our method exploits the rigid C-arm motion describing a torus surface to solve this calibration problem. We explain the geometry of the C-arm motion and its relation to the attached sensor, propose a calibration algorithm, and show its robustness against noise as well as trajectory and observed pose density by computer simulations. We discuss this geometric formulation and its potential extensions to different C-arm applications. Comment: Accepted for Image-Guided Procedures, Robotic Interventions, and Modeling 2020, Houston, TX, US
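
    For context, below is a minimal sketch of the classical fixed-pivot calibration that this concept extends: given tracked sensor poses (R_i, t_i) acquired while pivoting about a single fixed point, the unknown sensor offset and the pivot location follow from a linear least-squares problem. The paper generalizes this to a pivot point constrained to a torus-shaped locus traced by the C-arm motion; the function and variable names here are illustrative and not taken from the paper.

    import numpy as np

    def pivot_calibration(rotations, translations):
        """Classical fixed-pivot calibration by linear least squares.

        rotations:    list of 3x3 rotation matrices R_i (sensor -> tracker)
        translations: list of 3-vectors t_i (sensor origin in tracker frame)

        Solves R_i @ p_off + t_i = p_pivot for all i, i.e. finds the fixed
        offset p_off (sensor frame) and the pivot point p_pivot (tracker
        frame) that best explain the recorded poses.
        """
        n = len(rotations)
        A = np.zeros((3 * n, 6))
        b = np.zeros(3 * n)
        for i, (R, t) in enumerate(zip(rotations, translations)):
            A[3 * i:3 * i + 3, :3] = R            # coefficient of p_off
            A[3 * i:3 * i + 3, 3:] = -np.eye(3)   # coefficient of p_pivot
            b[3 * i:3 * i + 3] = -t
        x, *_ = np.linalg.lstsq(A, b, rcond=None)
        return x[:3], x[3:]                       # p_off, p_pivot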

    Calibration of Optical See-Through Head Mounted Display with Mobile C-arm for Visualization of Cone Beam CT Data

    This work proposes the visualization of Cone-Beam Computed Tomography (CBCT) volumes in situ via the Microsoft HoloLens, an optical see-through head-mounted display (HMD). The data is visualized in the display as virtual objects, or holograms. Such holograms allow the CBCT volume to be overlaid on the patient at the site where the data was acquired. This is useful in orthopedic surgeries due to the nature of the CBCT scans of interest. A known visual marker is tracked using both the HoloLens and the RGBD (red-green-blue-depth) optical camera rigidly mounted and calibrated to the C-arm. By combining these transformations, the calibration defines the spatial relationship between the HMD and the C-arm. The calibration process and visualization have been confirmed and enable advanced visualization of CBCT data for orthopedic surgery. The hardware capabilities of the HoloLens currently limit the quality of the volume rendered and the precision of the calibration. However, future improvements to each component have been identified, and with the ever-improving technology of HMDs, this limitation is only a temporary barrier to clinical usage.
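
    As an illustration of the transform chaining described above, the sketch below composes the marker pose seen by the HoloLens, the marker pose seen by the RGBD camera, and the RGBD-to-C-arm calibration into an HMD-to-C-arm transform. The frame-naming convention (T_a_b maps points from frame b into frame a) and the placeholder values are assumptions, not taken from the paper.

    import numpy as np

    def to_homogeneous(R, t):
        """Assemble a 4x4 rigid transform from a 3x3 rotation and a 3-vector."""
        T = np.eye(4)
        T[:3, :3] = R
        T[:3, 3] = t
        return T

    # Hypothetical placeholder poses; T_a_b maps points from frame b into frame a.
    T_hmd_marker  = to_homogeneous(np.eye(3), np.array([0.10, 0.00, 0.50]))  # marker as tracked by the HoloLens
    T_rgbd_marker = to_homogeneous(np.eye(3), np.array([0.00, 0.20, 0.80]))  # marker as tracked by the RGBD camera
    T_carm_rgbd   = to_homogeneous(np.eye(3), np.array([0.00, 0.00, 0.30]))  # RGBD-camera-to-C-arm calibration

    # Chain HMD -> marker -> RGBD camera -> C-arm to obtain the HMD-to-C-arm relationship.
    T_carm_hmd = T_carm_rgbd @ T_rgbd_marker @ np.linalg.inv(T_hmd_marker)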

    The new 3D RGB-D Camera Augmented Mobile C-arm


    Validation of a wearable augmented reality-based device for maxillary repositioning

    Aim: We present a newly designed, localiser-free, head-mounted system featuring augmented reality (AR) as an aid to maxillofacial bone surgery, and assess the potential utility of the device by conducting a feasibility study and validation. We also implement a novel and ergonomic strategy designed to present AR information to the operating surgeon (hPnP). Methods: The head-mounted wearable system was developed as a stand-alone, video-based, see-through device in which the visual features were adapted to facilitate maxillofacial bone surgery. The system is designed to display the virtual surgical plan overlaid on the real patient. We implemented a method allowing performance of waferless, AR-assisted maxillary repositioning. In vitro testing was conducted on a physical replica of a human skull. Surgical accuracy was measured by comparing the achieved outcomes with those planned in a three-dimensional environment. Data were derived using three levels of surgical planning of increasing complexity, for nine different operators with varying levels of surgical skill. Results: The mean linear error was 1.70±0.51 mm. The axial errors were 0.89±0.54 mm on the sagittal axis, 0.60±0.20 mm on the frontal axis, and 1.06±0.40 mm on the craniocaudal axis. The mean angular errors were 3.13°±1.89° (pitch), 1.99°±0.95° (roll), and 3.25°±2.26° (yaw). No significant difference in error was noticed among operators, despite variations in surgical experience. Feedback from the surgeons was acceptable; all tests were completed within 15 min and the tool was considered to be both comfortable and usable in practice. Conclusion: Our device appears to be accurate when used to assist in waferless maxillary repositioning. Our results suggest that the method can potentially be extended for use with many surgical procedures on the facial skeleton. Further, it would be appropriate to proceed to in vivo testing to assess surgical accuracy under real clinical conditions.
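
    The linear and axial errors reported above could, for example, be computed from corresponding planned and achieved landmark positions as sketched below; the abstract does not give the exact computation, so the axis ordering and the statistics shown are assumptions.

    import numpy as np

    def repositioning_errors(planned, achieved):
        """Linear and per-axis errors between planned and achieved landmark positions.

        planned, achieved: (N, 3) arrays of corresponding 3D coordinates, with
        columns assumed ordered as (sagittal, frontal, craniocaudal).
        """
        diff = achieved - planned
        linear = np.linalg.norm(diff, axis=1)   # Euclidean error per landmark
        axial = np.abs(diff)                    # absolute error along each axis
        return {
            "linear": (linear.mean(), linear.std(ddof=1)),
            "sagittal": (axial[:, 0].mean(), axial[:, 0].std(ddof=1)),
            "frontal": (axial[:, 1].mean(), axial[:, 1].std(ddof=1)),
            "craniocaudal": (axial[:, 2].mean(), axial[:, 2].std(ddof=1)),
        }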

    Augmented Reality and Artificial Intelligence in Image-Guided and Robot-Assisted Interventions

    In minimally invasive orthopedic procedures, the surgeon places wires, screws, and surgical implants through the muscles and bony structures under image guidance. These interventions require alignment of the pre- and intra-operative patient data, the intra-operative scanner, surgical instruments, and the patient. Suboptimal interaction with patient data and challenges in mastering 3D anatomy based on ill-posed 2D interventional images are central concerns in image-guided therapies. State-of-the-art approaches often support the surgeon with external navigation systems or ill-conditioned image-based registration methods, both of which have drawbacks. Augmented reality (AR) has been introduced into operating rooms over the last decade; however, in image-guided interventions it has often been considered only a visualization device that improves traditional workflows. Consequently, the technology is only beginning to gain the maturity it requires to redefine procedures, user interfaces, and interactions. This dissertation investigates the applications of AR, artificial intelligence, and robotics in interventional medicine. Our solutions were applied to a broad spectrum of problems, namely improving imaging and acquisition, image computing and analytics for registration and image understanding, and enhancing interventional visualization. These approaches also proved beneficial in robot-assisted interventions. We showed how exemplary workflows can be redefined via AR by taking full advantage of head-mounted displays that are fully co-registered with the imaging systems and the environment at all times. The proposed AR landscape is enabled by co-localizing the users and the imaging devices via the operating room environment and exploiting all involved frustums to move spatial information between different bodies. The system's awareness of the geometric and physical characteristics of X-ray imaging allows the exploration of different human-machine interfaces. We also leveraged the principles governing image formation and combined them with deep learning and RGBD sensing to fuse images and reconstruct interventional data. We hope that our holistic approaches to improving the interface of surgery and enhancing the usability of interventional imaging not only augment the surgeon's capabilities but also improve the surgical team's experience in carrying out effective interventions with reduced complications.
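
    As an example of the geometric awareness of X-ray imaging mentioned above, the sketch below projects world-space points into an X-ray image under the common pinhole approximation of the C-arm source/detector geometry; the intrinsic matrix K, the pose T_xray_world, and the function name are hypothetical placeholders, not the dissertation's implementation.

    import numpy as np

    def project_to_xray(points_world, K, T_xray_world):
        """Project (N, 3) world-frame points onto the X-ray image plane, treating
        the C-arm source/detector as a pinhole camera with intrinsics K (3x3)
        and pose T_xray_world (4x4, world -> X-ray source frame)."""
        pts_h = np.hstack([points_world, np.ones((len(points_world), 1))])  # homogeneous coordinates
        pts_cam = (T_xray_world @ pts_h.T)[:3]   # express points in the X-ray source frame
        uv_h = K @ pts_cam                       # apply the intrinsic projection
        return (uv_h[:2] / uv_h[2]).T            # perspective divide -> pixel coordinates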