
    Evaluation of a portable image overlay projector for the visualisation of surgical navigation data: phantom studies

    Introduction: Presenting visual feedback for image-guided surgery on a monitor requires the surgeon to perform time-consuming comparisons and to divert sight and attention away from the patient. Deficiencies in previously developed augmented reality systems for image-guided surgery have, however, prevented the general acceptance of any one technique as a viable alternative to monitor displays. This work presents an evaluation of the feasibility and versatility of a novel augmented reality approach for the visualisation of surgical planning and navigation data. The approach, which utilises a portable image overlay device, was evaluated during integration into existing surgical navigation systems and during application within simulated navigated surgery scenarios. Methods: A range of anatomical models, surgical planning data and guidance information taken from liver surgery, cranio-maxillofacial surgery, orthopaedic surgery and biopsy were displayed on patient-specific phantoms, directly onto the patient's skin and onto cadaver tissue. The feasibility of employing the proposed augmented reality visualisation approach in each of the four tested clinical applications was qualitatively assessed for usability, visibility, workspace, line of sight and obtrusiveness. Results: The visualisation approach was found to assist spatial understanding and to reduce the need for sight diversion throughout the simulated surgical procedures. The approach enabled structures to be identified and targeted quickly and intuitively. All validated augmented reality scenes were easily visible and were implemented with minimal overhead. The device offered sufficient workspace for each of the presented applications, and the approach was minimally intrusive to the surgical scene.
Conclusion: The presented visualisation approach proved to be versatile and applicable to a range of image-guided surgery applications, overcoming many of the deficiencies of previously described AR approaches. The approach presents an initial step towards a widely accepted alternative to monitor displays for the visualisation of surgical navigation data.

    Augmented Reality Visualization for Image-Guided Surgery:A Validation Study Using a Three-Dimensional Printed Phantom

    Background Oral and maxillofacial surgery currently relies on virtual surgery planning based on image data (CT, MRI). Three-dimensional (3D) visualizations are typically used to plan and predict the outcome of complex surgical procedures. To translate the virtual surgical plan to the operating room, it is either converted into physical 3D-printed guides or directly translated using real-time navigation systems. Purpose This study aims to improve the translation of the virtual surgery plan to a surgical procedure, such as oncologic or trauma surgery, in terms of accuracy and speed. Here we report an augmented reality visualization technique for image-guided surgery. It describes how surgeons can visualize and interact with the virtual surgery plan and navigation data while in the operating room. User friendliness and usability were objectively assessed in a formal user study that compared our augmented reality assisted technique to the gold-standard setup of a perioperative navigation system (Brainlab). Moreover, the accuracy of typical navigation tasks, such as reaching landmarks and following trajectories, was compared. Results Overall completion time of navigation tasks was 1.71 times faster using augmented reality (P = .034). Accuracy improved significantly using augmented reality (P < .001); for reaching physical landmarks, a less pronounced effect was found (P = .087). Although the participants were relatively unfamiliar with VR/AR (rated 2.25/5) and gesture-based interaction (rated 2/5), they reported that navigation tasks became easier to perform using augmented reality (difficulty Brainlab rated 3.25/5, HoloLens 2.4/5). Conclusion The proposed workflow can be used in a wide range of image-guided surgery procedures as an addition to existing verified image guidance systems. Results of this user study imply that our technique enables typical navigation tasks to be performed faster and more accurately compared to the current gold standard.
In addition, qualitative feedback on our augmented reality assisted technique was more positive than for the standard setup. (C) 2021 The Author. Published by Elsevier Inc. on behalf of The American Association of Oral and Maxillofacial Surgeons.

    Proof of Concept: Wearable Augmented Reality Video See-Through Display for Neuro-Endoscopy

    In minimally invasive surgery and in endoscopic procedures, the surgeon operates without a direct visualization of the patient’s anatomy. In image-guided surgery, solutions based on wearable augmented reality (AR) are among the most promising. The authors describe the characteristics that an ideal head-mounted display (HMD) must have to guarantee safety and accuracy in AR-guided neurosurgical interventions, and design the ideal virtual content for guiding crucial tasks in neuro-endoscopic surgery. The selected sequence of AR content for obtaining effective guidance during surgery is tested in a Microsoft HoloLens-based app.

    Projected Augmented Reality to Drive Osteotomy Surgery: Implementation and Comparison With Video See-Through Technology

    In recent years, the spread of visual augmented reality as an effective tool in image-guided surgery has stimulated the research community to investigate the use of commercial augmented reality headsets for a broad range of potential applications. This has aroused enthusiasm among clinicians for the potential of augmented reality, but has also revealed some technological and human-factor limitations that still hinder its routine adoption in the operating room. In this work, we propose an alternative to head-mounted displays, based on projected augmented reality. Projected augmented reality completely preserves the surgeon’s natural view of the operating field, because it requires no perspective conversion and/or optical mediation. We selected a cranio-maxillofacial surgery application as a benchmark to test the proposed system and compare its accuracy with that obtained with a video see-through system. The augmented reality overlay accuracy was evaluated by measuring the distance between a virtual osteotomy line and its real counterpart. The experimental tests showed that the accuracy of the two augmented reality modes is similar, with a median error discrepancy of about 0.3 mm for the projected augmented reality mode. Results suggest that projected augmented reality can be a valuable alternative to standard see-through head-mounted displays to support in-situ visualization of medical imaging data as surgical guidance.
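The overlay-accuracy metric described in this abstract, the distance between a virtual osteotomy line and its real counterpart, can be sketched as a point-to-line computation. The function names and the use of the median are illustrative assumptions, not the paper's actual implementation:

```python
import math

def point_to_line_distance(p, a, b):
    """Perpendicular distance from 3-D point p to the infinite line through a and b."""
    d = tuple(bi - ai for ai, bi in zip(a, b))       # line direction
    ap = tuple(pi - ai for ai, pi in zip(a, p))      # vector a -> p
    t = sum(x * y for x, y in zip(ap, d)) / sum(x * x for x in d)
    rej = tuple(x - t * y for x, y in zip(ap, d))    # rejection of ap from d
    return math.sqrt(sum(x * x for x in rej))

def median_overlay_error(real_points, line_a, line_b):
    """Median point-to-line distance between sampled points on the real
    osteotomy line and the projected virtual line (hypothetical metric)."""
    errs = sorted(point_to_line_distance(p, line_a, line_b) for p in real_points)
    n = len(errs)
    return errs[n // 2] if n % 2 else 0.5 * (errs[n // 2 - 1] + errs[n // 2])
```

For example, three sampled points at distances of 1, 2 and 3 mm from the virtual line would yield a median overlay error of 2 mm.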

    Utilizing image guided surgery for user interaction in medical augmented reality

    The graphical overlay of additional medical information over the patient during a surgical procedure has long been considered one of the most promising applications of augmented reality. While many experimental systems for augmented reality in medicine have reached an advanced state and can deliver high-quality augmented video streams, they usually depend heavily on specialized dedicated hardware. Such dedicated system components, which were originally designed for engineering applications or VR research, are often ill-suited for use in clinical practice. We describe a novel medical augmented reality application, which is based almost exclusively on existing, commercially available, and certified medical equipment. In our system, a so-called image guided surgery device is used for tracking a webcam, which delivers the digital video stream of the physical scene that is augmented with the virtual information. In this paper, we show how the capability of the image guided surgery system for tracking surgical instruments can be harnessed for user interaction. Our method enables the user to define points and freely drawn shapes in 3-d and provides selectable menu items, which can be located in immediate proximity to the patient. This eliminates the need for conventional touchscreen- or mouse-based user interaction without requiring additional hardware such as dedicated tracking systems or specialized 3-d input devices. Thus, the surgeon can directly interact with the system without the help of additional personnel. We demonstrate our new input method with an application for creating operation plan sketches directly on the patient in an augmented view.
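Defining a 3-d point with a tracked instrument, as this abstract describes, amounts to chaining coordinate transforms from the calibrated tool tip to the patient frame. A minimal sketch, assuming row-major 4x4 homogeneous matrices and hypothetical frame names (the actual system's conventions are not given in the abstract):

```python
def mat_vec(T, p):
    """Apply a 4x4 homogeneous transform T (row-major) to a 3-D point p."""
    x, y, z = p
    v = (x, y, z, 1.0)
    return tuple(sum(T[i][j] * v[j] for j in range(4)) for i in range(3))

def tip_in_patient(tracker_to_patient, tool_to_tracker, tip_offset):
    """Chain the tracker's tool pose with a calibrated tip offset to get
    the instrument tip in patient coordinates (hypothetical frame names)."""
    p_tracker = mat_vec(tool_to_tracker, tip_offset)   # tip in tracker frame
    return mat_vec(tracker_to_patient, p_tracker)      # tip in patient frame
```

Menu selection or sketching would then reduce to testing the returned point against virtual widgets anchored near the patient.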

    In vivo estimation of target registration errors during augmented reality laparoscopic surgery

    PURPOSE: Successful use of augmented reality for laparoscopic surgery requires that the surgeon has a thorough understanding of the likely accuracy of any overlay. Whilst the accuracy of such systems can be estimated in the laboratory, it is difficult to extend such methods to the in vivo clinical setting. Herein we describe a novel method that enables the surgeon to estimate in vivo errors during use. We show that the method enables quantitative evaluation of in vivo data gathered with the SmartLiver image guidance system. METHODS: The SmartLiver system utilises an intuitive display to enable the surgeon to compare the positions of landmarks visible in both a projected model and in the live video stream. From this the surgeon can estimate the system accuracy when using the system to locate subsurface targets not visible in the live video. Visible landmarks may be either point or line features. We test the validity of the algorithm using an anatomically representative liver phantom, applying simulated perturbations to achieve clinically realistic overlay errors. We then apply the algorithm to in vivo data. RESULTS: The phantom results show that using projected errors of surface features provides a reliable predictor of subsurface target registration error for a representative human liver shape. Applying the algorithm to in vivo data gathered with the SmartLiver image-guided surgery system shows that the system is capable of accuracies around 12 mm; however, achieving this reliably remains a significant challenge. CONCLUSION: We present an in vivo quantitative evaluation of the SmartLiver image-guided surgery system, together with a validation of the evaluation algorithm. This is the first quantitative in vivo analysis of an augmented reality system for laparoscopic surgery
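The idea of predicting subsurface target registration error from the projected errors of visible surface landmarks can be sketched as follows. Using the RMS of landmark errors as the predictor is a simplifying assumption for illustration; the paper's actual estimator is not reproduced here:

```python
import math

def projected_landmark_errors(model_pts, video_pts):
    """Per-landmark distance between the projected model position and the
    position identified in the live video (both in the same frame)."""
    return [math.dist(m, v) for m, v in zip(model_pts, video_pts)]

def estimate_tre(model_pts, video_pts):
    """RMS of visible-landmark errors, used here as a crude proxy for the
    subsurface target registration error (hypothetical estimator)."""
    errs = projected_landmark_errors(model_pts, video_pts)
    return math.sqrt(sum(e * e for e in errs) / len(errs))
```

For two landmarks with errors of 0 and 5 mm, this proxy would report an estimated TRE of about 3.5 mm.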

    Image-guided surgery and medical robotics in the cranial area

    Surgery in the cranial area involves complex anatomic situations with high-risk structures and high demands for functional and aesthetic results. Conventional surgery requires that the surgeon transfer complex anatomic and surgical planning information using spatial sense and experience. The surgical procedure depends entirely on the manual skills of the operator. The development of image-guided surgery provides revolutionary new opportunities by integrating presurgical 3D imaging and intraoperative manipulation. Augmented reality, mechatronic surgical tools, and medical robotics may further advance surgical instrumentation and, ultimately, surgical care. The aim of this article is to review and discuss state-of-the-art surgical navigation and medical robotics, image-to-patient registration, aspects of accuracy, and clinical applications for surgery in the cranial area.

    Augmented Reality-based Feedback for Technician-in-the-loop C-arm Repositioning

    Interventional C-arm imaging is crucial to percutaneous orthopedic procedures as it enables the surgeon to monitor the progress of surgery at the anatomy level. Minimally invasive interventions require repeated acquisition of X-ray images from different anatomical views to verify tool placement. Achieving and reproducing these views often comes at the cost of increased surgical time and radiation dose to both patient and staff. This work proposes a marker-free "technician-in-the-loop" Augmented Reality (AR) solution for C-arm repositioning. The X-ray technician operating the C-arm during the intervention is equipped with a head-mounted display capable of recording desired C-arm poses in 3D via an integrated infrared sensor. For C-arm repositioning to a particular target view, the recorded C-arm pose is restored as a virtual object and visualized in an AR environment, serving as a perceptual reference for the technician. We conduct experiments in a setting simulating orthopedic trauma surgery. Our proof-of-principle findings indicate that the proposed system can decrease the average of 2.76 X-ray images required per desired view to zero, suggesting substantial reductions of radiation dose during C-arm repositioning. The proposed AR solution is a first step towards facilitating communication between the surgeon and the surgical staff, improving the quality of surgical image acquisition, and enabling context-aware guidance for surgery rooms of the future. The concept of technician-in-the-loop design will become relevant to various interventions, considering the expected advancements in sensing and wearable computing in the near future.
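Restoring a recorded C-arm pose amounts to comparing the current pose with the stored target and reporting the remaining offset to the technician. A minimal sketch, assuming poses are stored as row-major 4x4 homogeneous matrices (a convention not specified in the abstract):

```python
import math

def pose_delta(current, target):
    """Translation distance and rotation angle (degrees) between two C-arm
    poses given as 4x4 homogeneous matrices. A simplified illustration;
    the system's actual pose representation is not described in the abstract."""
    # Euclidean distance between the two translation columns
    dt = math.sqrt(sum((target[i][3] - current[i][3]) ** 2 for i in range(3)))
    # relative rotation angle from the trace of R_current^T * R_target
    trace = sum(current[k][i] * target[k][i] for i in range(3) for k in range(3))
    cos_angle = max(-1.0, min(1.0, (trace - 1.0) / 2.0))
    return dt, math.degrees(math.acos(cos_angle))
```

The rotation angle follows from the trace identity cos θ = (tr(R) − 1)/2 applied to the relative rotation; the AR display would render the target pose as a virtual object until both deltas approach zero.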

    Optical techniques for 3D surface reconstruction in computer-assisted laparoscopic surgery

    One of the main challenges for computer-assisted surgery (CAS) is to determine the intra-operative morphology and motion of soft tissues. This information is a prerequisite to the registration of multi-modal patient-specific data for enhancing the surgeon’s navigation capabilities by observing beyond exposed tissue surfaces and for providing intelligent control of robotic-assisted instruments. In minimally invasive surgery (MIS), optical techniques are an increasingly attractive approach for in vivo 3D reconstruction of the soft-tissue surface geometry. This paper reviews the state-of-the-art methods for optical intra-operative 3D reconstruction in laparoscopic surgery and discusses the technical challenges and future perspectives towards clinical translation. With the recent paradigm shift of surgical practice towards MIS and new developments in 3D optical imaging, this is a timely discussion about technologies that could facilitate complex CAS procedures in dynamic and deformable anatomical regions.