3,574 research outputs found

    On uncertainty propagation in image-guided renal navigation: Exploring uncertainty reduction techniques through simulation and in vitro phantom evaluation

    Image-guided interventions (IGIs) entail the use of imaging to augment or replace direct vision during therapeutic interventions, with the overall goal of providing effective treatment in a less invasive manner, as an alternative to traditional open surgery, while reducing patient trauma and shortening post-procedure recovery time. IGIs rely on pre-operative images, surgical tracking and localization systems, and intra-operative images to provide correct views of the surgical scene. Pre-operative images are used to generate patient-specific anatomical models that are then registered to the patient using the surgical tracking system, and often complemented with real-time, intra-operative images. IGI systems are subject to uncertainty from several sources, including surgical instrument tracking/localization uncertainty, model-to-patient registration uncertainty, user-induced navigation uncertainty, and the uncertainty associated with the calibration of the various surgical instruments and intra-operative imaging devices (e.g., a laparoscopic camera) instrumented with surgical tracking sensors. All of these uncertainties affect the overall targeting accuracy, which represents the error associated with navigating a surgical instrument to a specific target to be treated under image guidance provided by the IGI system. Understanding the overall uncertainty of an IGI system is therefore paramount to the outcome of the intervention, as procedure success entails achieving accuracy tolerances specific to individual procedures. This work focused on studying the navigation uncertainty, along with techniques to reduce it, for an IGI platform dedicated to image-guided renal interventions. We constructed life-size, patient-specific replica kidney models from pre-operative images using 3D printing and tissue-emulating materials, and conducted experiments to characterize the uncertainty of both optical and electromagnetic surgical tracking systems, the uncertainty associated with the virtual model-to-physical phantom registration, and the uncertainty associated with live augmented reality (AR) views of the surgical scene achieved by enhancing the pre-procedural model and tracked surgical instrument views with live video views acquired using a camera tracked in real time. To better understand the effects of the tracked instrument calibration, registration fiducial configuration, and tracked camera calibration on the overall navigation uncertainty, we conducted Monte Carlo simulations that enabled us to identify optimal configurations, which were subsequently validated experimentally using patient-specific phantoms in the laboratory. To mitigate the inherent accuracy limitations of the pre-procedural model-to-patient registration and their effect on the overall navigation, we also demonstrated the use of tracked video imaging to update the registration, enabling us to restore targeting accuracy to within its acceptable range. Lastly, we conducted several validation experiments on patient-specific kidney-emulating phantoms, using post-procedure CT imaging as reference ground truth, to assess the accuracy of AR-guided navigation in the context of in vitro renal interventions. This work helped answer key questions about uncertainty propagation in image-guided renal interventions and led to the development of key techniques and tools that help reduce the overall navigation/targeting uncertainty.
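
    The Monte Carlo simulations mentioned above explore how fiducial configuration and tracking noise propagate to targeting error. As an illustrative sketch only (not the platform's code; the fiducial layout, noise level, and target position are hypothetical), the following Python snippet estimates the target registration error (TRE) distribution induced by noisy fiducial localization followed by least-squares rigid registration:

        import numpy as np

        rng = np.random.default_rng(7)

        # Hypothetical fiducial configuration (mm) and a target lying beyond it
        fiducials = np.array([[ 40.0,   0.0,   0.0],
                              [-40.0,  10.0,   5.0],
                              [  0.0,  45.0, -10.0],
                              [  5.0, -35.0,  20.0]])
        target = np.array([0.0, 0.0, 60.0])
        sigma = 0.5  # assumed per-axis fiducial localization noise (mm)

        def rigid_fit(src, dst):
            """Least-squares rigid transform dst ~ R @ src + t (Arun's SVD method)."""
            src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
            H = (src - src_c).T @ (dst - dst_c)
            U, _, Vt = np.linalg.svd(H)
            D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
            R = Vt.T @ D @ U.T
            return R, dst_c - R @ src_c

        # Monte Carlo loop: perturb fiducials, re-register, measure target displacement
        tre = []
        for _ in range(10_000):
            noisy = fiducials + rng.normal(0.0, sigma, fiducials.shape)
            R, t = rigid_fit(fiducials, noisy)
            tre.append(np.linalg.norm(R @ target + t - target))

        print(f"mean TRE: {np.mean(tre):.2f} mm, 95th percentile: {np.percentile(tre, 95):.2f} mm")

    Spreading the fiducials more widely and placing them so they surround the target generally reduces the simulated TRE, which is the kind of configuration optimization described above.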

    Augmented reality (AR) for surgical robotic and autonomous systems: State of the art, challenges, and solutions

    Despite the substantial progress achieved in the development and integration of augmented reality (AR) in surgical robotic and autonomous systems (RAS), the focus of most devices remains on improving end-effector dexterity and precision, as well as on improving access in minimally invasive surgery. This paper aims to provide a systematic review of different types of state-of-the-art surgical robotic platforms while identifying areas for technological improvement. We associate specific control features, such as haptic feedback, sensory stimuli, and human-robot collaboration, with AR technology to perform complex surgical interventions with increased user perception of the augmented world. Researchers in the field have long faced issues with low accuracy in tool placement around complex trajectories, with pose estimation, and with difficulty in depth perception during two-dimensional medical imaging. A number of robots described in this review, such as Novarad and SpineAssist, are analyzed in terms of their hardware features, computer vision systems (such as deep learning algorithms), and the clinical relevance of the literature. We outline the shortcomings of current optimization algorithms for surgical robots (such as YOLO and LSTM) while proposing mitigating solutions for internal tool-to-organ collision detection and image reconstruction. The accuracy of results in robot end-effector collisions and reduced occlusion remains promising within the scope of our research, validating the propositions made for the surgical clearance of ever-expanding AR technology in the future.

    Augmented Reality Simulation Modules for EVD Placement Training and Planning Aids

    When a novice neurosurgeon performs a psychomotor surgical task (e.g., tool navigation into brain structures), a potential risk of damaging healthy tissues and eloquent brain structures is unavoidable. When novices make multiple attempts, a set of undesirable trajectories is created, resulting in the potential for surgical complications. It is therefore important that novices not only aim for a high level of surgical mastery but also receive deliberate training in common neurosurgical procedures and their underlying tasks. Surgical simulators have emerged as an effective method for teaching novices in safe, error-free training environments. The design of neurosurgical simulators requires a comprehensive approach to development and validation. With that in mind, we present a detailed case study in which two Augmented Reality (AR) training simulation modules were designed and implemented through the adoption of Model-driven Engineering. User performance evaluation is a key aspect of surgical simulation validity. Many AR surgical simulators become obsolete: either they do not support enough surgical scenarios, or they were validated according to subjective assessments that did not meet every need. Accordingly, we demonstrate the feasibility of the AR simulation modules through two user studies, objectively measuring novices' performance based on quantitative metrics. Neurosurgical simulators are also prone to perceptual distance underestimation, yet few investigations have addressed improving user depth perception in head-mounted display-based AR systems with perceptual motion cues. Consequently, we report the results of our investigation into whether head motion and perceptual motion cues influenced users' performance.

    Visual Perception and Cognition in Image-Guided Intervention

    Surgical image visualization and interaction systems can dramatically affect the efficacy and efficiency of surgical training, planning, and interventions. This is even more profound in the case of minimally invasive surgery, where restricted access to the operative field, in conjunction with a limited field of view, necessitates a visualization medium that provides patient-specific information at any given moment. Unfortunately, little research has been devoted to studying the human factors associated with medical image displays, and the need for robust, intuitive visualization and interaction interfaces has remained largely unfulfilled to this day. Failure to engineer efficient medical solutions and design intuitive visualization interfaces is argued to be one of the major barriers to the meaningful transfer of innovative technology to the operating room. This thesis was therefore motivated by the need to study various cognitive and perceptual aspects of human factors in surgical image visualization systems, to increase the efficiency and effectiveness of medical interfaces, and ultimately to improve patient outcomes. To this end, we chose four different minimally invasive interventions in the realm of surgical training, planning, training for planning, and navigation. The first chapter involves the use of stereoendoscopes to reduce morbidity in endoscopic third ventriculostomy. The results of this study suggest that, compared with conventional endoscopes, stereoendoscopes facilitate the detection of the basilar artery on the surface of the third ventricle, increasing the safety of targeting in third ventriculostomy procedures. In the second chapter, a contour enhancement technique is described to improve preoperative planning of arteriovenous malformation interventions. The proposed method, particularly when combined with stereopsis, is shown to increase the speed and accuracy of understanding the spatial relationships between vascular structures. In the third chapter, an augmented reality system is proposed to facilitate training for planning brain tumour resection. The results of our user study indicate that the proposed system improves subjects' performance, particularly novices', in formulating the optimal point of entry and surgical path, independent of the sensorimotor tasks performed. In the last chapter, the role of fully immersive simulation environments in surgeons' non-technical skills when performing the vertebroplasty procedure is investigated. Our results suggest that while training may increase surgeons' technical skills, the introduction of crisis scenarios significantly disturbs performance, emphasizing the need for realistic simulation environments as part of the training curriculum.

    Augmented Reality Based Surgical Navigation of Complex Pelvic Osteotomies

    Augmented Reality Based Surgical Navigation of Complex Pelvic Osteotomies - A Feasibility Study on Cadavers. Joëlle Ackermann, Florentin Liebmann, Armando Hoch, Jess G. Snedeker, Mazda Farshad, Stefan Rahm, Patrick O. Zingg and Philipp Fürnstahl. Appl. Sci. 2021, 11(3), 1228; https://doi.org/10.3390/app11031228 (received 20 December 2020, accepted 25 January 2021, published 29 January 2021).
    Augmented reality (AR)-based surgical navigation may offer new possibilities for the safe and accurate surgical execution of complex osteotomies. In this study we investigated the feasibility of navigating the periacetabular osteotomy of Ganz (PAO), known as one of the most complex orthopedic interventions, on two cadaveric pelves under realistic operating room conditions. Preoperative planning was conducted on computed tomography (CT)-reconstructed 3D models using in-house developed software, which allowed creating cutting-plane objects for planning of the osteotomies and reorientation of the acetabular fragment. An AR application was developed comprising point-based registration, motion compensation, and guidance for the osteotomies as well as for fragment reorientation. Navigation accuracy was evaluated on CT-reconstructed 3D models, resulting in an error of 10.8 mm for osteotomy starting points and 5.4° for osteotomy directions. The reorientation errors were 6.7°, 7.0° and 0.9° for the x-, y- and z-axis, respectively. The average postoperative error of the LCE angle was 4.5°. Our study demonstrated that AR-based execution of complex osteotomies is feasible. Fragment realignment navigation needs further improvement, although it is more accurate than the state of the art in PAO surgery.
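
    The per-axis reorientation errors quoted above compare the planned fragment orientation against the orientation measured on the postoperative CT. As a minimal, hypothetical sketch of one way such per-axis angles can be computed (the rotation values below are made up, not the study's data), using SciPy in Python:

        from scipy.spatial.transform import Rotation as R

        # Hypothetical planned vs. postoperatively measured fragment orientations
        planned  = R.from_euler("xyz", [30.0, 15.0, 5.0], degrees=True)
        achieved = R.from_euler("xyz", [36.5, 22.1, 5.9], degrees=True)

        # Residual rotation taking the planned orientation to the achieved one,
        # decomposed into angles about the x-, y- and z-axes
        residual = achieved * planned.inv()
        ex, ey, ez = residual.as_euler("xyz", degrees=True)
        print(f"reorientation error: x={abs(ex):.1f} deg, y={abs(ey):.1f} deg, z={abs(ez):.1f} deg")

    Note that per-axis angles depend on the Euler convention chosen; the paper does not state its convention, so "xyz" here is an assumption.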

    Immersive Visualization for Enhanced Computational Fluid Dynamics Analysis

    Modern biomedical computer simulations produce spatiotemporal results that are often viewed at a single point in time on standard 2D displays. An immersive visualization environment (IVE) with 3D stereoscopic capability can mitigate some shortcomings of 2D displays via improved depth cues and active movement, helping viewers further appreciate the spatial localization of imaging data together with temporal computational fluid dynamics (CFD) results. We present a semi-automatic workflow for the import, processing, rendering, and stereoscopic visualization of high-resolution, patient-specific imaging data and CFD results in an IVE. The versatility of the workflow is highlighted using clinical sequelae known to be influenced by adverse hemodynamics, illustrating its potential clinical utility.
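
    The workflow itself is only described at a high level above. As a rough illustration of its stereoscopic rendering stage (a sketch, not the authors' implementation; the input file name is a placeholder, and a VTK-based pipeline with active "crystal eyes" stereo is assumed), in Python:

        import vtk

        # Read a patient-specific CFD result (hypothetical file name/format)
        reader = vtk.vtkXMLUnstructuredGridReader()
        reader.SetFileName("patient_cfd_result.vtu")

        mapper = vtk.vtkDataSetMapper()
        mapper.SetInputConnection(reader.GetOutputPort())

        actor = vtk.vtkActor()
        actor.SetMapper(mapper)

        renderer = vtk.vtkRenderer()
        renderer.AddActor(actor)

        # Request a stereo-capable context before the window is realized
        window = vtk.vtkRenderWindow()
        window.AddRenderer(renderer)
        window.StereoCapableWindowOn()
        window.SetStereoTypeToCrystalEyes()  # active stereo, as used in many IVEs
        window.StereoRenderOn()

        interactor = vtk.vtkRenderWindowInteractor()
        interactor.SetRenderWindow(window)
        window.Render()
        interactor.Start()

    On hardware without quad-buffered stereo, VTK's anaglyph or interlaced stereo types can stand in for crystal-eyes mode.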

    South African university students' attitudes towards chemistry learning in a virtually simulated learning environment

    Abstract: This mixed-methods study investigated changes in 3rd-year Bachelor of Education students' attitudes towards chemistry after learning interventions with virtual chemistry simulations. After participating students identified certain concepts from their 3rd-year chemistry module as abstract and not easily comprehensible, these concepts were taught during a 5-week learning intervention using PhET simulations as an alternative to traditional laboratory experimentation. In the first, quantitative phase of the study, a 30-item pre-attitude test was administered to assess students' attitudes towards chemistry, followed by the PhET chemistry simulation learning interventions. Thereafter, students wrote a post-attitude test. Findings from this phase revealed a significantly higher mean post-attitude test score, with students showing a positive attitude towards chemistry learning post-intervention. The quantitative phase was followed by a qualitative phase, which examined students' experiences of using the simulations through semi-structured interviews. Findings from the qualitative phase revealed that students experienced autonomy and enjoyment while engaging with the simulations. They also perceived that their experiences in the virtually simulated environment improved their visualisation of chemistry concepts, thereby improving conceptual understanding. However, the students acknowledged that simulations cannot replicate the realism and authenticity associated with practical work in an actual laboratory. The implications of these findings are that virtual simulations provide a complementary learning tool capable of improving students' attitudes towards chemistry, one perceived by students to support their visualisation of abstract chemistry concepts. The findings of this study are particularly significant for chemistry learning at schools and universities in economically challenged countries such as South Africa, where there is a shortage of well-equipped laboratories.
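
    The significance claim in the quantitative phase rests on comparing each student's pre- and post-attitude scores. A minimal sketch of such a paired comparison (the scores below are made-up placeholders, not the study's data; the study's exact statistical test is not stated, so a paired t-test is assumed):

        import numpy as np
        from scipy import stats

        # Hypothetical per-student attitude scores on the 30-item instrument
        pre  = np.array([18, 21, 17, 22, 19, 20, 16, 23])
        post = np.array([22, 25, 20, 24, 23, 24, 19, 26])

        # Paired t-test: does the mean post-intervention score differ from pre?
        t, p = stats.ttest_rel(post, pre)
        print(f"mean pre {pre.mean():.1f}, mean post {post.mean():.1f}, t={t:.2f}, p={p:.4f}")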

    The HoloLens in Medicine: A Systematic Review and Taxonomy

    The HoloLens (Microsoft Corp., Redmond, WA), a head-worn, optically see-through augmented reality display, is the main player in the recent boost in medical augmented reality research. In medical settings, the HoloLens enables the physician to obtain immediate insight into patient information directly overlaid on their view of the clinical scenario, the medical student to gain a better understanding of complex anatomies or procedures, and even the patient to execute therapeutic tasks with improved, immersive guidance. In this systematic review, we provide a comprehensive overview of the usage of the first-generation HoloLens within the medical domain, from its release in March 2016 until the year 2021, when attention began shifting towards its successor, the HoloLens 2. We identified 171 relevant publications through a systematic search of the PubMed and Scopus databases. We analyze these publications with regard to their intended use case, technical methodology for registration and tracking, data sources, visualization, as well as validation and evaluation. We find that, although the feasibility of using the HoloLens in various medical scenarios has been shown, increased efforts in the areas of precision, reliability, usability, workflow, and perception are necessary to establish AR in clinical practice. (Comment: 35 pages, 11 figures)