
    Mixed Reality for Orthopedic Elbow Surgery Training and Operating Room Applications: A Preliminary Analysis

    The use of Mixed Reality in medicine is widely documented as a candidate to revolutionize surgical interventions. In this paper we present a system to simulate K-wire placement, a common orthopedic procedure used to stabilize fractures, dislocations, and other traumatic injuries. The described system leverages Mixed Reality (MR) and advanced visualization techniques, applied to a surgical simulation phantom, to enhance surgical training and critical orthopedic surgical procedures. The analysis centers on evaluating the precision and proficiency of K-wire placement in an elbow surgical phantom, designed with 3D modeling software starting from a virtual 3D anatomical reference. By visually superimposing 3D reconstructions of internal structures and the target K-wire position on the physical model, we expect not only to improve the learning curve but also to establish a foundation for potential real-time surgical guidance in challenging clinical scenarios. Performance is measured as the difference between the actual K-wire placement and the target position; these quantitative measurements are then used to compare the risk of iatrogenic injury to nerves and vascular structures in MR-guided versus non-MR-guided simulated interventions.
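
    The placement-error metric described in this abstract can be made concrete with a short sketch. The Python snippet below is illustrative rather than the authors' implementation; it assumes each wire is reduced to an entry point and a tip point expressed in a single phantom coordinate frame, and it reports the entry-point offset in millimeters and the angle between the planned and actual trajectories.

        import numpy as np

        def kwire_error(target_entry, target_tip, actual_entry, actual_tip):
            """Return (entry offset in mm, trajectory angle in degrees)."""
            target_entry, target_tip = np.asarray(target_entry, float), np.asarray(target_tip, float)
            actual_entry, actual_tip = np.asarray(actual_entry, float), np.asarray(actual_tip, float)
            # Translational error: 3D distance between planned and actual entry points.
            offset_mm = np.linalg.norm(actual_entry - target_entry)
            # Angular error: angle between the two normalized wire directions.
            d_target = (target_tip - target_entry) / np.linalg.norm(target_tip - target_entry)
            d_actual = (actual_tip - actual_entry) / np.linalg.norm(actual_tip - actual_entry)
            angle_deg = np.degrees(np.arccos(np.clip(np.dot(d_target, d_actual), -1.0, 1.0)))
            return offset_mm, angle_deg

        # Hypothetical wire: entry 2 mm off target, trajectory slightly tilted.
        offset, angle = kwire_error([0, 0, 0], [0, 0, 40], [2, 0, 0], [3, 1, 40])
        print(f"entry offset: {offset:.1f} mm, trajectory angle: {angle:.1f} deg")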

    Augmented reality navigation for spinal pedicle screw instrumentation using intraoperative 3D imaging

    BACKGROUND CONTEXT Due to recent developments in augmented reality with head-mounted devices, holograms of a surgical plan can be displayed directly in the surgeon's field of view. To the best of our knowledge, three-dimensional (3D) intraoperative fluoroscopy has not been explored for use with holographic navigation by head-mounted devices in spine surgery. PURPOSE To evaluate the surgical accuracy of holographic pedicle screw navigation by head-mounted device using 3D intraoperative fluoroscopy. STUDY DESIGN In this experimental cadaver study, the accuracy of surgical navigation using a head-mounted device was compared with navigation using a state-of-the-art pose-tracking system. METHODS Three lumbar cadaver spines were embedded in nontransparent agar gel, leaving only commonly visible anatomy in sight. Intraoperative registration of the preoperative plan was achieved by 3D fluoroscopy and fiducial markers attached to the lumbar vertebrae. Trackable custom-made drill sleeve guides enabled real-time navigation. In total, 20 K-wires were navigated into lumbar pedicles using AR navigation and 10 K-wires using the state-of-the-art pose-tracking system. 3D models obtained from postexperimental CT scans were used to measure surgical accuracy. RESULTS No significant difference in accuracy was measured between AR-navigated drillings and the gold-standard pose-tracking system, with mean translational errors between entry points (3D vector distance; p=.85) of 3.4±1.6 mm compared with 3.2±2.0 mm, and mean angular errors between trajectories (3D angle; p=.30) of 4.3°±2.3° compared with 3.5°±1.4°. CONCLUSIONS Holographic navigation using a head-mounted device achieves accuracy comparable to the gold standard of high-end pose-tracking systems. CLINICAL SIGNIFICANCE These promising results could lead to a new form of surgical navigation with minimal infrastructural requirements, but they must now be confirmed in clinical studies. DISCLOSURES MF is the founder and a shareholder of Incremed AG, a Balgrist University Hospital start-up focusing on the development of innovative techniques for surgical execution. The other authors declare no conflict of interest concerning the contents of this study. No external funding was received for this study.
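
    The group comparison reported in the RESULTS section is a standard unpaired test on per-wire errors. As a hedged sketch, the snippet below runs Welch's t-test on invented stand-in values, not the study's data (which reported 3.4±1.6 mm vs. 3.2±2.0 mm, p=.85):

        import numpy as np
        from scipy import stats

        # Made-up per-wire translational errors (mm); placeholders, not study data.
        ar_errors = np.array([3.1, 4.8, 2.2, 5.0, 3.6, 1.9, 4.1, 2.7, 3.9, 2.8])
        pose_errors = np.array([2.9, 1.8, 5.1, 3.0, 4.4, 2.6, 3.5, 1.7, 4.0, 3.2])

        # Welch's t-test (no equal-variance assumption) between the two groups.
        t_stat, p_value = stats.ttest_ind(ar_errors, pose_errors, equal_var=False)
        print(f"AR: {ar_errors.mean():.1f}±{ar_errors.std(ddof=1):.1f} mm, "
              f"pose-tracking: {pose_errors.mean():.1f}±{pose_errors.std(ddof=1):.1f} mm, "
              f"p={p_value:.2f}")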

    Augmented reality for computer assisted orthopaedic surgery

    In recent years, computer assistance and robotics have established their presence in operating theatres and found success in orthopaedic procedures. The benefits of computer-assisted orthopaedic surgery (CAOS) have been thoroughly explored in research, which has found improvements in clinical outcomes through increased control and precision of surgical actions. However, human-computer interaction in CAOS remains an evolving field, driven by emerging display technologies including augmented reality (AR) – a fused view of the real environment with virtual, computer-generated holograms. Interactions between clinicians and patient-specific data generated during CAOS are limited to basic 2D interactions on touchscreen monitors, potentially creating clutter and cognitive challenges in surgery. The work described in this thesis sought to explore the benefits of AR in CAOS through: an integration between commercially available AR and CAOS systems; a novel AR-centric surgical workflow to support various tasks of computer-assisted knee arthroplasty; and three pre-clinical studies exploring the impact of the new AR workflow on both existing and newly proposed quantitative and qualitative performance metrics. Early research focused on cloning the (2D) user interface of an existing CAOS system onto a virtual AR screen and investigating any resulting impacts on usability and performance. An infrared-based registration system is also presented, describing a protocol for calibrating commercial AR headsets with optical trackers by calculating a spatial transformation between the surgical and holographic coordinate frames. The main contribution of this thesis is a novel AR workflow designed to support computer-assisted patellofemoral arthroplasty. The reported workflow provided 3D in-situ holographic guidance for CAOS tasks including patient registration, pre-operative planning, and assisted cutting. Pre-clinical experimental validation of these contributions on a commercial system (NAVIO®, Smith & Nephew) demonstrates encouraging early-stage results, showing successful deployment of AR in CAOS systems and promising indications that AR can enhance the clinician's interactions in the future. The thesis concludes with a summary of achievements, corresponding limitations, and future research opportunities.
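
    The calibration step mentioned above, computing a spatial transformation between the surgical (optical tracker) and holographic coordinate frames, is commonly solved as rigid point-set registration over paired fiducial points. The snippet below is a minimal sketch of the generic Kabsch/SVD method under that assumption, not the thesis's specific protocol:

        import numpy as np

        def rigid_transform(P, Q):
            """Estimate R, t such that Q ≈ R @ P + t for 3xN point sets P and Q."""
            p_mean, q_mean = P.mean(axis=1, keepdims=True), Q.mean(axis=1, keepdims=True)
            H = (P - p_mean) @ (Q - q_mean).T                 # cross-covariance matrix
            U, _, Vt = np.linalg.svd(H)
            D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflection
            R = Vt.T @ D @ U.T
            t = q_mean - R @ p_mean
            return R, t

        # Synthetic example: four fiducials seen in the tracker frame (P) and,
        # after a 90° rotation about z plus a shift, in the hologram frame (Q).
        P = np.array([[0., 100., 0., 0.], [0., 0., 100., 0.], [0., 0., 0., 100.]])
        Rz = np.array([[0., -1., 0.], [1., 0., 0.], [0., 0., 1.]])
        Q = Rz @ P + np.array([[5.0], [10.0], [0.0]])
        R, t = rigid_transform(P, Q)
        print(np.round(R, 3), np.round(t.ravel(), 3))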

    Craniofacial Growth Series Volume 56

    https://deepblue.lib.umich.edu/bitstream/2027.42/153991/1/56th volume CF growth series FINAL 02262020.pdf
    Proceedings of the 46th Annual Moyers Symposium and 44th Moyers Presymposium

    Smart Mobile Augmented Reality For Orthodontics Teaching And Learning Environment

    Orthodontic education, which currently employs a didactic and apprenticeship approach, has long been a stronghold of teacher-centric instruction. This approach faces numerous pedagogical challenges that affect knowledge delivery and, indirectly, students' skill acquisition. The teaching methods affect students at various levels, leading to ineffective comprehension of the underlying principles and techniques of orthodontic science. Orthodontic education, lacking a technology-supported learning environment, has resulted in a deficient environment for orthodontic students.

    Augmented Reality and Artificial Intelligence in Image-Guided and Robot-Assisted Interventions

    In minimally invasive orthopedic procedures, the surgeon places wires, screws, and surgical implants through the muscles and bony structures under image guidance. These interventions require alignment of the pre- and intra-operative patient data, the intra-operative scanner, the surgical instruments, and the patient. Suboptimal interaction with patient data and the challenge of mastering 3D anatomy from ill-posed 2D interventional images are key concerns in image-guided therapies. State-of-the-art approaches often support the surgeon using external navigation systems or ill-conditioned image-based registration methods, both of which have certain drawbacks. Augmented reality (AR) has been introduced into operating rooms over the last decade; however, in image-guided interventions it has often been considered only as a visualization device improving traditional workflows. Consequently, the technology has yet to gain the maturity it requires to redefine procedures, user interfaces, and interactions. This dissertation investigates the applications of AR, artificial intelligence, and robotics in interventional medicine. Our solutions were applied to a broad spectrum of problems and tasks, namely improving imaging and acquisition, image computing and analytics for registration and image understanding, and enhancing interventional visualization. The benefits of these approaches were also demonstrated in robot-assisted interventions. We revealed how exemplary workflows are redefined via AR by taking full advantage of head-mounted displays that are entirely co-registered with the imaging systems and the environment at all times. The proposed AR landscape is enabled by co-localizing the users and the imaging devices via the operating room environment and exploiting all involved frustums to move spatial information between different bodies. The system's awareness of the geometric and physical characteristics of X-ray imaging allows the exploration of different human-machine interfaces. We also leveraged the principles governing image formation and combined them with deep learning and RGBD sensing to fuse images and reconstruct interventional data. We hope that our holistic approaches towards improving the interface of surgery and enhancing the usability of interventional imaging not only augment the surgeon's capabilities but also improve the surgical team's experience in carrying out an effective intervention with reduced complications.
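
    The co-localization idea described here, where the head-mounted display and the imaging device are each registered to a shared room frame so that spatial information can move between them, amounts to composing homogeneous transforms. The sketch below is a conceptual illustration with assumed frame names and poses, not the dissertation's software:

        import numpy as np

        def make_transform(R, t):
            """Build a 4x4 homogeneous transform from rotation R (3x3) and translation t (3,)."""
            T = np.eye(4)
            T[:3, :3], T[:3, 3] = R, t
            return T

        # Assumed example poses: scanner and HMD, each tracked in the room frame.
        T_room_from_scanner = make_transform(np.eye(3), np.array([1.0, 0.0, 0.5]))
        T_room_from_hmd = make_transform(np.eye(3), np.array([0.0, 2.0, 1.5]))

        # A point annotated in scanner coordinates, e.g. a planned entry point.
        p_scanner = np.array([0.1, 0.2, 0.3, 1.0])        # homogeneous coordinates

        # Chain scanner -> room -> HMD to display the point in the headset.
        T_hmd_from_scanner = np.linalg.inv(T_room_from_hmd) @ T_room_from_scanner
        p_hmd = T_hmd_from_scanner @ p_scanner
        print(p_hmd[:3])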

    Augmented navigation

    Spinal fixation procedures carry the inherent risk of damaging vulnerable anatomical structures such as the spinal cord, nerve roots, and blood vessels. To prevent complications, several technological aids have been introduced. Surgical navigation is the most widely used; it guides the surgeon by providing the position of the surgical instruments and implants in relation to the patient anatomy based on radiographic images. Navigation can be extended with a robotic arm that replaces the surgeon's hand to increase accuracy. Another line of surgical aids is tissue-sensing equipment, which recognizes different tissue types and provides a warning system built into surgical instruments. All these technologies are under continuous development, and the optimal solution is yet to be found. The aim of this thesis was to study the use of Augmented Reality (AR), Virtual Reality (VR), Artificial Intelligence (AI), and tissue-sensing technology in spinal navigation to improve precision and prevent surgical errors. The aim of Paper I was to develop and validate an algorithm for automating the intraoperative planning of pedicle screws. An AI algorithm for automatic segmentation of the spine and screw path suggestion was developed and evaluated. In a clinical study of advanced deformity cases, the algorithm provided correct suggestions for 86% of all pedicles, or 95% when cases with extremely altered anatomy were excluded. Paper II evaluated the accuracy of pedicle screw placement using a novel augmented reality surgical navigation (ARSN) system incorporating the above-developed algorithm. Twenty consecutively enrolled patients, eligible for deformity correction surgery in the thoracolumbar region, were operated on using the ARSN system. In this cohort, we found a pedicle screw placement accuracy of 94%, as measured according to the Gertzbein grading scale. The primary goal of Paper III was to validate an extension of the ARSN system for placing pedicle screws using instrument tracking and VR. In a porcine cadaver model, it was demonstrated that VR instrument tracking could successfully be integrated with the ARSN system, resulting in pedicle devices placed within 1.7 ± 1.0 mm of the planned path. Paper IV examined the feasibility of a robot-guided system for semi-automated, minimally invasive pedicle screw placement in a cadaveric model. Using the robotic arm, pedicle devices were placed within 0.94 ± 0.59 mm of the planned path. The use of a semi-automated surgical robot was feasible, providing higher technical accuracy than non-robotic solutions. Paper V investigated the use of a tissue-sensing technology, diffuse reflectance spectroscopy (DRS), for detecting the cortical bone boundary in vertebrae during pedicle screw insertion. The technology could accurately differentiate between cancellous and cortical bone and warn the surgeon before a cortical breach. Using machine learning models, the technology demonstrated a sensitivity of 98% [range: 94-100%] and a specificity of 98% [range: 91-100%]. In conclusion, several technological aids can be used to improve accuracy during spinal fixation procedures. In this thesis, the advantages of adding AR, VR, AI, and tissue-sensing technology to conventional navigation solutions were studied.
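
    The sensitivity and specificity reported for the DRS breach-warning models follow directly from a binary confusion matrix. As a back-of-envelope sketch with invented placeholder labels (cortical = 1, cancellous = 0), not the study's data:

        import numpy as np

        # Placeholder ground truth and model predictions for ten spectra.
        y_true = np.array([1, 1, 1, 1, 1, 0, 0, 0, 0, 0])
        y_pred = np.array([1, 1, 1, 1, 0, 0, 0, 0, 0, 1])

        tp = np.sum((y_true == 1) & (y_pred == 1))   # cortical contacts flagged
        fn = np.sum((y_true == 1) & (y_pred == 0))   # cortical contacts missed
        tn = np.sum((y_true == 0) & (y_pred == 0))   # cancellous correctly passed
        fp = np.sum((y_true == 0) & (y_pred == 1))   # false alarms

        sensitivity = tp / (tp + fn)
        specificity = tn / (tn + fp)
        print(f"sensitivity={sensitivity:.0%}, specificity={specificity:.0%}")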