
    Design and clinical evaluation of an image-guided surgical microscope with an integrated tracking system

    A new image-guided microscope system using augmented reality image overlays has been developed. With this system, CT cut-views and segmented objects such as tumors, previously extracted from preoperative tomographic images, can be displayed directly as augmented reality overlays on the microscope image. The novelty of this design stems from the inclusion of a precise mini-tracker mounted rigidly on the microscope, which is used to track the movements of surgical tools and the patient. In addition to a gain in accuracy, this setup offers improved ergonomics, since it is much easier for the surgeon to keep an unobstructed line of sight to tracked objects. We describe the components of the system: microscope calibration, image registration, tracker assembly and registration, tool tracking, and augmented reality display. The accuracy of the system was measured by validation on plastic skulls and cadaver heads, yielding an overlay error of 0.7 mm. In addition, a numerical simulation of the system was carried out to complement the accuracy study, showing that integrating the tracker onto the microscope could improve the accuracy to the order of 0.5 mm. Finally, we describe our clinical experience using the system in the operating room, where three operations have been performed to date.
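    The overlay error reported above arises from projecting tracked 3D points through the calibrated microscope optics and comparing the rendered overlay with the true landmark position. As a rough illustration only (not the authors' implementation, and with hypothetical calibration values), a distortion-free pinhole-camera sketch:

```python
import math

def project(point_cam, f, cx, cy):
    """Project a 3D point (in camera/microscope coordinates, mm) to
    pixel coordinates using a distortion-free pinhole model."""
    x, y, z = point_cam
    return (f * x / z + cx, f * y / z + cy)

def overlay_error(p_rendered, p_observed):
    """2D Euclidean distance between the rendered overlay and the
    observed landmark position (same units as the inputs)."""
    return math.dist(p_rendered, p_observed)

# Hypothetical focal length and principal point, for illustration only.
u, v = project((10.0, 5.0, 200.0), f=3500.0, cx=640.0, cy=512.0)
```

    In a full system the point would first be mapped from tracker to camera coordinates by the calibrated rigid transform; the sketch above covers only the final projection step.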

    Microscope Embedded Neurosurgical Training and Intraoperative System

    In recent years, neurosurgery has been strongly influenced by new technologies. Computer Aided Surgery (CAS) offers several benefits for patients' safety, but fine techniques aimed at minimally invasive and atraumatic treatment are required, since intraoperative false movements can be devastating, even resulting in patient death. The precision of the surgical gesture depends both on the accuracy of the available technological instruments and on the surgeon's experience. In this frame, medical training is particularly important. From a technological point of view, the use of Virtual Reality (VR) for surgeon training and Augmented Reality (AR) for intraoperative treatment offers the best results. In addition, traditional techniques for training in surgery include the use of animals, phantoms and cadavers. The main limitations of these approaches are that live tissue has different properties from dead tissue and that animal anatomy differs significantly from human anatomy. From the medical point of view, Low-Grade Gliomas (LGGs) are intrinsic brain tumours that typically occur in younger adults. The objective of treatment is to remove as much of the tumour as possible while minimizing damage to the healthy brain. Pathological tissue may closely resemble normal brain parenchyma when viewed through the neurosurgical microscope. The tactile appreciation of the different consistency of the tumour compared to normal brain requires considerable experience on the part of the neurosurgeon, and it is a vital point. The first part of this PhD thesis presents a system for realistic simulation (visual and haptic) of spatula palpation of an LGG. This is the first prototype of a training system using VR, haptics and a real microscope for neurosurgery. The architecture can also be adapted for intraoperative purposes.
In this instance, the surgeon needs the basic setup for Image Guided Therapy (IGT) interventions: microscope, monitors and navigated surgical instruments. The same virtual environment can be AR-rendered onto the microscope optics. The objective is to enhance the surgeon's intraoperative orientation by providing a three-dimensional view and other information necessary for safe navigation inside the patient. These considerations motivated the second part of this work, devoted to improving a prototype of an AR stereoscopic microscope for neurosurgical interventions developed at our institute in previous work. Completely new software was developed in order to reuse the microscope hardware, enhancing both rendering performance and usability. Since AR and VR share the same platform, the system can be referred to as a Mixed Reality System for neurosurgery. All the components are open source or at least based on a GPL license.

    Mixed reality simulation of rasping procedure in artificial cervical disc replacement (ACDR) surgery

    Background: Until quite recently, spinal disorder problems in the U.S. were treated by fusing cervical vertebrae rather than replacing the cervical disc with an artificial one. Cervical disc replacement is a recently approved procedure in the U.S. It is one of the most challenging surgical procedures in the medical field, owing to deficiencies in available diagnostic tools and the small number of surgical practices. For physicians and surgical instrument developers, it is critical to understand how to successfully deploy the new artificial disc replacement systems. Without proper understanding and practice of the deployment procedure, it is possible to injure the vertebral body. Mixed reality (MR) and virtual reality (VR) surgical simulators are becoming an indispensable part of physicians' training, since they offer a risk-free training environment. In this study, an MR simulation framework and the intricacies involved in developing an MR simulator for the rasping procedure in artificial cervical disc replacement (ACDR) surgery are investigated. The major components that make up the MR surgical simulator with a motion tracking system are addressed. Findings: A mixed reality surgical simulator targeting the rasping procedure in ACDR surgery was developed with a VICON motion tracking system. There were several challenges in its development. First, the assembly of different hardware components for surgical simulation involves knowledge and application of interdisciplinary fields such as signal processing, computer vision and graphics, along with the design and placement of sensors. The second challenge was the creation of a physically correct model of the rasping procedure in order to obtain critical forces; this was handled with finite element modeling.
The third challenge was minimizing the error in mapping the movements of an actor in the real model to the virtual model, a process called registration. This issue was overcome by a two-way (virtual object to real domain and real domain to virtual object) semi-automatic registration method. Conclusions: The applicability of the VICON MR setting for the ACDR surgical simulator is demonstrated, and the mainstream problems encountered in MR surgical simulator development are addressed. First, an effective environment for MR surgical development is constructed. Second, the strain and stress intensities and critical forces are simulated under various rasp instrument loadings, with impacts applied to the intervertebral surfaces of the anterior vertebrae throughout the rasping procedure. Third, two approaches are introduced to solve the registration problem in the MR setting. Results show that our system creates an effective environment for surgical simulation development and solves the tedious and time-consuming registration problems caused by misalignments. Further, the MR ACDR surgery simulator was tested by 5 different physicians, who found the MR simulator effective for teaching the anatomical details of cervical discs and for grasping the basics of ACDR surgery and the rasping procedure.
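    The registration problem described above — aligning movements in the real domain with the virtual model — is commonly posed as a least-squares rigid alignment of paired points. A minimal 2D sketch of the closed-form Procrustes solution, offered for illustration and not tied to the authors' semi-automatic method:

```python
import math

def rigid_register_2d(src, dst):
    """Least-squares 2D rigid registration (rotation + translation)
    aligning paired points src -> dst; returns (theta, (tx, ty))."""
    n = len(src)
    csx = sum(p[0] for p in src) / n; csy = sum(p[1] for p in src) / n
    cdx = sum(p[0] for p in dst) / n; cdy = sum(p[1] for p in dst) / n
    num = den = 0.0
    for (sx, sy), (dx, dy) in zip(src, dst):
        ax, ay = sx - csx, sy - csy   # centered source point
        bx, by = dx - cdx, dy - cdy   # centered destination point
        num += ax * by - ay * bx      # cross terms -> sin(theta)
        den += ax * bx + ay * by      # dot terms  -> cos(theta)
    theta = math.atan2(num, den)
    c, s = math.cos(theta), math.sin(theta)
    # Translation maps the rotated source centroid onto the destination centroid.
    tx = cdx - (c * csx - s * csy)
    ty = cdy - (s * csx + c * csy)
    return theta, (tx, ty)
```

    The 3D analogue uses the same centering step followed by an SVD-based rotation estimate (the Kabsch algorithm); marker-based surgical systems typically report the residual of this fit as the fiducial registration error.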

    Augmented Reality to Improve Surgical Workflow in Minimally Invasive Transforaminal Lumbar Interbody Fusion – A Feasibility Study With Case Series

    Objective: Minimally invasive transforaminal lumbar interbody fusion (MIS-TLIF) is a highly reproducible procedure for the fusion of spinal segments. We recently introduced the concept of “total navigation” to improve workflow and eliminate fluoroscopy. Image-guided surgery incorporating augmented reality (AR) may further facilitate workflow. In this study, we developed and evaluated a protocol to integrate AR into the workflow of MIS-TLIF. Methods: A case series of 10 patients served as the basis for evaluating a protocol to facilitate tubular MIS-TLIF through the application of AR. Surgical TLIF landmarks were marked on a preoperative computed tomography (CT) scan using dedicated software. This marked CT scan was fused intraoperatively with the low-dose navigation CT scan using elastic image fusion, and the markers were transferred to the intraoperative scan. Our experience with this workflow and the surgical outcomes were collected. Results: Our AR protocol was safely implemented in all cases. The TLIF landmarks could be planned preoperatively and transferred to the intraoperative imaging. Of the 10 cases, 1 additionally underwent synovial cyst resection, and in 2 cases an additional bony decompression was performed due to central stenosis. The average procedure time was 160.6 ± 31.9 minutes; the AR implementation added 1.72 ± 0.37 minutes to the overall procedure time. No complications occurred. Conclusion: Our findings support the idea that total navigation with AR may further facilitate the workflow, especially in cases with more complex anatomy and for teaching and training purposes. More work is needed to simplify the software and make the AR integration more user-friendly.

    Augmented reality for computer assisted orthopaedic surgery

    In recent years, computer assistance and robotics have established their presence in operating theatres and found success in orthopaedic procedures. The benefits of computer-assisted orthopaedic surgery (CAOS) have been thoroughly explored in research, which has found improvements in clinical outcomes through increased control and precision over surgical actions. However, human-computer interaction in CAOS remains an evolving field, driven by emerging display technologies including augmented reality (AR) – a fused view of the real environment with virtual, computer-generated holograms. Interactions between clinicians and the patient-specific data generated during CAOS are limited to basic 2D interactions on touchscreen monitors, potentially creating clutter and cognitive challenges in surgery. The work described in this thesis sought to explore the benefits of AR in CAOS through: an integration between commercially available AR and CAOS systems, creating a novel AR-centric surgical workflow to support various tasks of computer-assisted knee arthroplasty; and three pre-clinical studies exploring the impact of the new AR workflow on both existing and newly proposed quantitative and qualitative performance metrics. Early research focused on cloning the (2D) user interface of an existing CAOS system onto a virtual AR screen and investigating any resulting impacts on usability and performance. An infrared-based registration system is also presented, describing a protocol for calibrating commercial AR headsets with optical trackers and calculating a spatial transformation between the surgical and holographic coordinate frames. The main contribution of this thesis is a novel AR workflow designed to support computer-assisted patellofemoral arthroplasty. The reported workflow provided 3D in-situ holographic guidance for CAOS tasks including patient registration, pre-operative planning, and assisted cutting.
    Pre-clinical experimental validation on a commercial system (NAVIO®, Smith & Nephew) of these contributions demonstrates encouraging early-stage results, showing successful deployment of AR to CAOS systems and promising indications that AR can enhance the clinician’s interactions in the future. The thesis concludes with a summary of achievements, corresponding limitations and future research opportunities.
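    Calibrating an AR headset against an optical tracker, as described above, reduces to composing rigid transforms between coordinate frames: if both devices observe a common marker, the headset-to-tracker transform follows by chaining one observation with the inverse of the other. A generic sketch with hypothetical frame names (not the thesis' actual protocol or any vendor API):

```python
def compose(A, B):
    """Multiply two 4x4 homogeneous transforms (row-major nested lists):
    the result maps a point through B first, then A."""
    return [[sum(A[i][k] * B[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def invert_rigid(T):
    """Invert a rigid transform [R t; 0 1] as [R^T  -R^T t; 0 1]."""
    Rt = [[T[j][i] for j in range(3)] for i in range(3)]  # R transposed
    t = [-sum(Rt[i][j] * T[j][3] for j in range(3)) for i in range(3)]
    return [Rt[0] + [t[0]], Rt[1] + [t[1]], Rt[2] + [t[2]],
            [0.0, 0.0, 0.0, 1.0]]

# Hypothetical poses of a shared marker, seen by the optical tracker
# (surgical frame) and by the headset (holographic frame):
T_surg_marker = [[1, 0, 0, 10.0], [0, 1, 0, 20.0], [0, 0, 1, 30.0], [0, 0, 0, 1]]
T_holo_marker = [[1, 0, 0, 1.0], [0, 1, 0, 2.0], [0, 0, 1, 3.0], [0, 0, 0, 1]]

# Chain: holographic <- marker <- surgical.
T_holo_surg = compose(T_holo_marker, invert_rigid(T_surg_marker))
```

    With this transform in hand, any point tracked in the surgical frame can be re-expressed in the holographic frame for in-situ rendering.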

    Review on Image Guided Surgery Systems

    Nowadays, modern imaging techniques can provide excellent-quality 3D images that clearly show the anatomy, vascularity, pathology and active functions of tissues. The ability to register these preoperative images to each other, offering comprehensive information, and later to register the image space to the patient space intraoperatively, is at the core of image-guided surgery (IGS) systems. Other main elements of such systems include tracking the surgical tools intraoperatively and reflecting their positions within the 3D image model. In some cases an intraoperative image may be acquired and registered to the preoperative images to ensure that the 3D model used to guide the operation reflects the actual situation at surgery time. This survey reviews the history of IGS, discusses the modern system components required for reliable application, and gives information about the different medical specialties that have benefited from the use of IGS.

    Validation of a wearable augmented reality-based device for maxillary repositioning

    Aim: We present a newly designed, localiser-free, head-mounted system featuring augmented reality (AR) as an aid to maxillofacial bone surgery, and assess the potential utility of the device through a feasibility study and validation. We also implement a novel and ergonomic strategy for presenting AR information to the operating surgeon (hPnP). Methods: The head-mounted wearable system was developed as a stand-alone, video-based, see-through device whose visual features were adapted to facilitate maxillofacial bone surgery. The system is designed to display the virtual planning overlaid on the details of the real patient. We implemented a method allowing waferless, AR-assisted maxillary repositioning. In vitro testing was conducted on a physical replica of a human skull, and surgical accuracy was measured by comparing the achieved outcomes with those planned in a three-dimensional environment. Data were collected for three levels of surgical planning of increasing complexity, and for nine operators with varying levels of surgical skill. Results: The mean linear error was 1.70 ± 0.51 mm. The axial errors were 0.89 ± 0.54 mm on the sagittal axis, 0.60 ± 0.20 mm on the frontal axis, and 1.06 ± 0.40 mm on the craniocaudal axis. The mean angular errors were 3.13° ± 1.89° in pitch, 1.99° ± 0.95° in roll, and 3.25° ± 2.26° in yaw. No significant difference in error was observed among operators, despite variations in surgical experience. Feedback from surgeons was positive: all tests were completed within 15 minutes, and the tool was considered both comfortable and usable in practice. Conclusion: Our device appears to be accurate when used to assist waferless maxillary repositioning. Our results suggest that the method can potentially be extended to many surgical procedures on the facial skeleton. Further, it would be appropriate to proceed to in vivo testing to assess surgical accuracy under real clinical conditions.
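    Angular errors such as the pitch, roll and yaw values above are typically obtained by decomposing the rotation between the planned and achieved poses into Euler angles. A minimal round-trip sketch using the ZYX (yaw-pitch-roll) convention — an assumption for illustration, since the abstract does not state which convention was used:

```python
import math

def rot_zyx(yaw, pitch, roll):
    """Build a rotation matrix R = Rz(yaw) * Ry(pitch) * Rx(roll)."""
    cy, sy = math.cos(yaw), math.sin(yaw)
    cp, sp = math.cos(pitch), math.sin(pitch)
    cr, sr = math.cos(roll), math.sin(roll)
    return [
        [cy * cp, cy * sp * sr - sy * cr, cy * sp * cr + sy * sr],
        [sy * cp, sy * sp * sr + cy * cr, sy * sp * cr - cy * sr],
        [-sp,     cp * sr,                cp * cr],
    ]

def euler_zyx(R):
    """Recover (yaw, pitch, roll) from R; assumes |pitch| < 90 deg."""
    yaw = math.atan2(R[1][0], R[0][0])
    pitch = math.asin(-R[2][0])
    roll = math.atan2(R[2][1], R[2][2])
    return yaw, pitch, roll

# Round-trip with angles on the order of the reported mean errors (degrees).
angles = tuple(math.radians(a) for a in (3.25, 3.13, 1.99))
recovered = euler_zyx(rot_zyx(*angles))
```

    In practice the relative rotation R_err = R_planned^T R_achieved would be decomposed this way; for the small angles reported here the convention choice has little numerical effect.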