Evaluating Human Performance for Image-Guided Surgical Tasks
The following work focuses on the objective evaluation of human performance for two different interventional tasks: targeted prostate biopsy using a tracked biopsy device, and external ventricular drain placement using a mobile augmented reality device for visualization and guidance. In both tasks, a human performance methodology was applied that respects the trade-off between speed and accuracy for users conducting a series of targeting tasks with each device. This work outlines the development and application of performance evaluation methods for these devices, as well as details of the implementation of the mobile AR application. It was determined that the Fitts' Law methodology can be applied to evaluate tasks performed in each surgical scenario, and was sensitive enough to differentiate performance across a range spanning experienced and novice users. This methodology is valuable for the future development of training modules for these and other medical devices, and can provide insight into the underlying characteristics of the devices and how they can be optimized with respect to human performance.
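The Fitts' Law methodology referred to above reduces each targeting trial to an index of difficulty and a throughput. As an illustration only (the trial values below are hypothetical, not data from this study), the widely used Shannon formulation can be computed as:

```python
import math

def index_of_difficulty(distance_mm: float, width_mm: float) -> float:
    """Shannon formulation of Fitts' index of difficulty, in bits."""
    return math.log2(distance_mm / width_mm + 1)

def throughput(distance_mm: float, width_mm: float, movement_time_s: float) -> float:
    """Throughput in bits/s: index of difficulty divided by movement time."""
    return index_of_difficulty(distance_mm, width_mm) / movement_time_s

# Hypothetical targeting trial: 40 mm of travel to a 5 mm target in 1.2 s
tp = throughput(40.0, 5.0, 1.2)
```

Averaging throughput over a block of trials with varying distance/width combinations is what lets a single number compare experienced and novice users across devices.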
Virtual Reality for Obsessive-Compulsive Disorder: Past and the Future
The use of computers, especially virtual reality (VR), to understand, assess, and treat various mental health problems has developed over the last decade, including applications for phobias, post-traumatic stress disorder, attention deficits, and schizophrenia. However, VR tools addressing obsessive-compulsive disorder (OCD) remain scarce, owing to the heterogeneous symptoms of OCD and a poor understanding of the relationship between VR and OCD. This article reviews the empirical literature and outlines directions for future VR tools, spanning both clinical work and experimental research in this area: examining symptoms with VR tailored to individual OCD patients' symptom profiles, extending OCD research in VR settings to study the behavioral and physiological correlates of symptoms, and expanding the use of VR for OCD to cognitive-behavioral intervention.
Microscope Embedded Neurosurgical Training and Intraoperative System
In recent years, neurosurgery has been strongly influenced by new technologies. Computer Aided Surgery (CAS) offers several benefits for patients' safety, but fine techniques aimed at minimally invasive, low-trauma treatments are required, since intra-operative false movements can be devastating, resulting in patient deaths. The precision of the surgical gesture depends both on the accuracy of the available technological instruments and on the surgeon's experience. In this frame, medical training is particularly important. From a technological point of view, the use of Virtual Reality (VR) for surgeon training and Augmented Reality (AR) for intra-operative treatments offers the best results.
In addition, traditional techniques for surgical training include the use of animals, phantoms and cadavers. The main limitations of these approaches are that live tissue has different properties from dead tissue and that animal anatomy differs significantly from human anatomy. From the medical point of view, Low-Grade Gliomas (LGGs) are intrinsic brain tumours that typically occur in younger adults. The objective of treatment is to remove as much of the tumour as possible while minimizing damage to the healthy brain. Pathological tissue may closely resemble normal brain parenchyma when viewed through the neurosurgical microscope. The tactile appreciation of the different consistency of the tumour compared to normal brain requires considerable experience on the part of the neurosurgeon and is a vital skill.
The first part of this PhD thesis presents a system for realistic simulation (visual and haptic) of the spatula palpation of the LGG. This is the first prototype of a training system using VR, haptics and a real microscope for neurosurgery.
This architecture can also be adapted for intra-operative purposes. In this instance, a surgeon needs the basic setup for Image Guided Therapy (IGT) interventions: microscope, monitors and navigated surgical instruments. The same virtual environment can be AR-rendered onto the microscope optics. The objective is to enhance the surgeon's intra-operative orientation by providing a three-dimensional view and other information necessary for safe navigation inside the patient.
These considerations motivated the second part of this work, which has been devoted to improving a prototype of an AR stereoscopic microscope for neurosurgical interventions, developed in our institute in a previous work. Completely new software was developed to reuse the microscope hardware while enhancing both rendering performance and usability.
Since both AR and VR share the same platform, the system can be referred to as Mixed Reality System for neurosurgery.
All the components are open source or at least released under a GPL license.
The HoloLens in Medicine: A Systematic Review and Taxonomy
The HoloLens (Microsoft Corp., Redmond, WA), a head-worn, optically
see-through augmented reality display, is the main player in the recent boost
in medical augmented reality research. In medical settings, the HoloLens
enables the physician to obtain immediate insight into patient information,
directly overlaid with their view of the clinical scenario, the medical student
to gain a better understanding of complex anatomies or procedures, and even the
patient to execute therapeutic tasks with improved, immersive guidance. In this
systematic review, we provide a comprehensive overview of the usage of the
first-generation HoloLens within the medical domain, from its release in March
2016 until 2021, when attention began shifting towards its
successor, the HoloLens 2. We identified 171 relevant publications through a
systematic search of the PubMed and Scopus databases. We analyze these
publications in regard to their intended use case, technical methodology for
registration and tracking, data sources, visualization as well as validation
and evaluation. We find that, although the feasibility of using the HoloLens in
various medical scenarios has been shown, increased efforts in the areas of
precision, reliability, usability, workflow and perception are necessary to
establish AR in clinical practice.
Virtual and Augmented Reality Techniques for Minimally Invasive Cardiac Interventions: Concept, Design, Evaluation and Pre-clinical Implementation
While less invasive techniques have been employed for some procedures, most intracardiac interventions are still performed under cardiopulmonary bypass, on the drained, arrested heart. The progress toward off-pump intracardiac interventions has been hampered by the lack of adequate visualization inside the beating heart.
This thesis describes the development, assessment, and pre-clinical implementation of a mixed reality environment that integrates pre-operative imaging and modeling with surgical tracking technologies and real-time ultrasound imaging. The intra-operative echo images are augmented with pre-operative representations of the cardiac anatomy and virtual models of the delivery instruments tracked in real time using magnetic tracking technologies. As a result, the otherwise context-less images can now be interpreted within the anatomical context provided by the anatomical models. The virtual models assist the user with tool-to-target navigation, while real-time ultrasound ensures accurate positioning of the tool on target, providing the surgeon with sufficient information to "see" and manipulate instruments in the absence of direct vision.
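Augmenting echo images with a tracked instrument, as described above, amounts to chaining homogeneous coordinate transforms: a fixed tool-tip offset is mapped through the sensor's tracked pose, then through a tracker-to-image registration. A minimal sketch with hypothetical transform values (in practice the registration is computed by calibration, not hard-coded):

```python
def mat_vec(T, p):
    """Apply a 4x4 homogeneous transform T to a 3D point p."""
    x, y, z = p
    v = (x, y, z, 1.0)
    return tuple(sum(T[i][j] * v[j] for j in range(4)) for i in range(3))

# Hypothetical poses (pure translations for clarity; real poses include rotation):
# the magnetic sensor's pose in tracker space, and a fixed tracker-to-image registration.
T_sensor_in_tracker = [
    [1, 0, 0, 10.0],
    [0, 1, 0,  5.0],
    [0, 0, 1,  0.0],
    [0, 0, 0,  1.0],
]
T_tracker_to_image = [
    [1, 0, 0, -2.0],
    [0, 1, 0,  0.0],
    [0, 0, 1,  3.0],
    [0, 0, 0,  1.0],
]
tip_offset = (0.0, 0.0, 50.0)  # tool tip 50 mm along the sensor's z-axis

tip_in_tracker = mat_vec(T_sensor_in_tracker, tip_offset)
tip_in_image = mat_vec(T_tracker_to_image, tip_in_tracker)  # (8.0, 5.0, 53.0)
```

Re-evaluating this chain every tracking frame is what keeps the virtual tool model aligned with the live ultrasound view.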
Several pre-clinical acute evaluation studies have been conducted in vivo on swine models to assess the feasibility of the proposed environment in a clinical context. Following direct access inside the beating heart using the UCI, the proposed mixed reality environment was used to provide the visualization and navigation necessary to position a prosthetic mitral valve on the native annulus, or to place a repair patch on a created septal defect, in vivo in porcine models.
Following further development and seamless integration into the clinical workflow, we hope that the proposed mixed reality guidance environment may become a significant milestone toward enabling minimally invasive therapy on the beating heart.
Feasibility Evaluation of Commercially Available Video Conferencing Devices to Technically Direct Untrained Nonmedical Personnel to Perform a Rapid Trauma Ultrasound Examination.
Introduction: Point-of-care ultrasound (POCUS) is a rapidly expanding discipline that has proven to be a valuable modality in the hospital setting. Recent evidence has demonstrated the utility of commercially available video conferencing technologies, namely, FaceTime (Apple Inc, Cupertino, CA, USA) and Google Glass (Google Inc, Mountain View, CA, USA), to allow an expert POCUS examiner to remotely guide a novice medical professional. However, few studies have evaluated the ability to use these teleultrasound technologies to guide a nonmedical novice to perform an acute care POCUS examination for cardiac, pulmonary, and abdominal assessments. Additionally, few studies have shown the ability of a POCUS-trained cardiac anesthesiologist to perform the role of an expert instructor. This study sought to evaluate the ability of a POCUS-trained anesthesiologist to remotely guide a nonmedically trained participant to perform an acute care POCUS examination. Methods: A total of 21 nonmedically trained undergraduate students who had no prior ultrasound experience were recruited to perform a three-part ultrasound examination on a standardized patient with the guidance of a remote expert, a POCUS-trained cardiac anesthesiologist. The examination included the following acute care POCUS topics: (1) cardiac function via parasternal long/short axis views, (2) pneumothorax assessment via pleural sliding exam via anterior lung views, and (3) abdominal free fluid exam via right upper quadrant abdominal view. Each participant was given a handout with static images of probe placement and actual ultrasound images for the three views. After a brief 8 min tutorial on the teleultrasound technologies, a connection was established with the expert, and participants were guided through the acute care POCUS exam.
Each view was deemed complete when the expert sonographer was satisfied with the obtained image or determined that the image could not be obtained after 5 min. Image quality was scored on a previously validated 0 to 4 grading scale. The entire session was recorded, and image quality was scored during the exam by the remote expert instructor as well as by a separate POCUS-trained, blinded expert anesthesiologist. Results: A total of 21 subjects completed the study. The average total time for the exam was 8.5 min (standard deviation = 4.6). A comparison between the live expert examiner and the blinded postexam reviewer showed 100% agreement between image interpretations. A review of the exams rated three or higher demonstrated that 87% of abdominal, 90% of cardiac, and 95% of pulmonary exams achieved this level of image quality. A satisfaction survey of the novice users demonstrated greater ease of following commands for the cardiac and pulmonary exams compared to the abdominal exam. Conclusions: The results from this pilot study demonstrate that nonmedically trained individuals can be guided to complete a relevant ultrasound examination within a short period. The use of telemedicine technologies to promote POCUS warrants further evaluation.
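The 100% inter-rater agreement reported above is a simple percent-agreement statistic over paired quality scores. A minimal sketch, using hypothetical scores on the study's 0 to 4 scale rather than the study's actual data:

```python
def percent_agreement(rater_a, rater_b):
    """Fraction of exams on which two raters give identical quality scores."""
    if len(rater_a) != len(rater_b):
        raise ValueError("score lists must be the same length")
    matches = sum(a == b for a, b in zip(rater_a, rater_b))
    return matches / len(rater_a)

# Hypothetical scores from the live expert and the blinded reviewer
live_scores = [3, 4, 2, 3, 4]
blinded_scores = [3, 4, 2, 3, 4]
agreement = percent_agreement(live_scores, blinded_scores)  # 1.0, i.e. 100%
```

Note that raw percent agreement does not correct for chance; studies often report a chance-corrected statistic such as Cohen's kappa alongside it.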
Optical techniques for 3D surface reconstruction in computer-assisted laparoscopic surgery
One of the main challenges for computer-assisted surgery (CAS) is to determine the intra-operative morphology and motion of soft tissues. This information is a prerequisite to the registration of multi-modal patient-specific data for enhancing the surgeon's navigation capabilities by observing beyond exposed tissue surfaces and for providing intelligent control of robotic-assisted instruments. In minimally invasive surgery (MIS), optical techniques are an increasingly attractive approach for in vivo 3D reconstruction of the soft-tissue surface geometry. This paper reviews the state-of-the-art methods for optical intra-operative 3D reconstruction in laparoscopic surgery and discusses the technical challenges and future perspectives towards clinical translation. With the recent paradigm shift of surgical practice towards MIS and new developments in 3D optical imaging, this is a timely discussion about technologies that could facilitate complex CAS procedures in dynamic and deformable anatomical regions.
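Among the optical techniques such reviews cover, passive stereo is the most common: for a rectified stereo laparoscope, depth follows from pixel disparity by similar triangles, Z = f·B/d. A minimal sketch with hypothetical camera parameters (not taken from the paper):

```python
def depth_from_disparity(focal_px: float, baseline_mm: float, disparity_px: float) -> float:
    """Triangulate depth for a rectified stereo pair: Z = f * B / d."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a point in front of the rig")
    return focal_px * baseline_mm / disparity_px

# Hypothetical rig: 800 px focal length, 4 mm stereo baseline.
# A tissue point matched with 40 px disparity lies 80 mm from the cameras.
z = depth_from_disparity(800.0, 4.0, 40.0)  # 80.0 (mm)
```

The inverse relationship between depth and disparity is why depth resolution degrades quadratically with distance, a key constraint given the short baselines that fit inside a laparoscope.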