
    How to Build a Patient-Specific Hybrid Simulator for Orthopaedic Open Surgery: Benefits and Limits of Mixed-Reality Using the Microsoft HoloLens

    Orthopaedic simulators are popular in innovative surgical training programs, where trainees gain procedural experience in a safe and controlled environment. Recent studies suggest that an ideal simulator should combine haptic, visual, and audio technology to create an immersive training environment. This article explores the potential of mixed reality using the HoloLens to develop a hybrid training system for orthopaedic open surgery. Hip arthroplasty, one of the most common orthopaedic procedures, was chosen as a benchmark to evaluate the proposed system. Patient-specific anatomical 3D models were extracted from a patient's computed tomography scan to implement the virtual content and to fabricate the physical components of the simulator. Rapid prototyping was used to create synthetic bones. The Vuforia SDK was used to register the virtual and physical contents, and the Unity3D game engine was employed to develop software allowing interaction with the virtual content through head movements, gestures, and voice commands. Quantitative tests estimated the accuracy of the system by evaluating the perceived position of augmented reality targets; mean and maximum errors matched the requirements of the target application. Qualitative tests evaluated the workload and usability of the HoloLens for our orthopaedic simulator, considering visual and audio perception as well as interaction and ergonomics issues. The perceived overall workload was low, and the self-assessed performance was considered satisfactory. Visual and audio perception and gesture and voice interactions received positive feedback. Postural discomfort and visual fatigue received a nonnegative evaluation for a simulation session of 40 minutes. These results encourage using mixed reality to implement a hybrid simulator for orthopaedic open surgery. An optimal design of the simulation tasks and equipment setup is required to minimize user discomfort. 
Future work will include face, content, and construct validity studies to complete the assessment of the hip arthroplasty simulator
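The quantitative accuracy tests mentioned above amount to comparing the true positions of fiducial targets with the positions at which users perceive the corresponding holograms. A minimal sketch of such an evaluation, with invented coordinates (the function and data are illustrative, not the authors' code):

```python
# Hypothetical sketch: per-target AR registration error as the Euclidean
# distance between true and perceived 3D target positions (millimetres).
import numpy as np

def registration_errors(true_pts, perceived_pts):
    """Return per-target Euclidean errors in mm."""
    true_pts = np.asarray(true_pts, dtype=float)
    perceived_pts = np.asarray(perceived_pts, dtype=float)
    return np.linalg.norm(perceived_pts - true_pts, axis=1)

# Made-up measurements (mm) for three targets:
true = [[0, 0, 0], [50, 0, 0], [0, 50, 0]]
perceived = [[1, 0, 0], [50, 2, 0], [0, 50, 2]]
errs = registration_errors(true, perceived)
print(f"mean error = {errs.mean():.2f} mm, max error = {errs.max():.2f} mm")
```

Mean and maximum of these per-target errors are the two summary statistics the abstract reports against the application requirements.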

    Aerospace Medicine and Biology: A continuing bibliography with indexes, supplement 192

    This bibliography lists 247 reports, articles, and other documents introduced into the NASA scientific and technical information system in March 1979

    Piloting Multimodal Learning Analytics using Mobile Mixed Reality in Health Education

    © 2019 IEEE. Mobile mixed reality has been shown to increase achievement and lower cognitive load within spatial disciplines. However, traditional methods of assessment restrict examiners' ability to holistically assess spatial understanding. Multimodal learning analytics investigates how combinations of data types, such as spatial data and traditional assessment, can be combined to better understand both the learner and the learning environment. This paper explores the pedagogical possibilities of a smartphone-enabled mixed reality multimodal learning analytics case study for health education, focused on learning the anatomy of the heart. The context for this study is the first loop of a design-based research study exploring the acquisition and retention of knowledge by piloting the proposed system with practicing health experts. Outcomes from the pilot study showed engagement and enthusiasm for the method among the experts, but also revealed problems to overcome in the pedagogical method before deployment with learners

    SURGICAL NAVIGATION AND AUGMENTED REALITY FOR MARGINS CONTROL IN HEAD AND NECK CANCER

    Head and neck malignancies are a heterogeneous group of tumors. Surgery represents the mainstay of treatment for the large majority of head and neck cancers, with ablation aimed at completely removing the tumor; radiotherapy and systemic therapy also have a substantial role in multidisciplinary management. The quality of surgical ablation is intimately related to margin status evaluated at a microscopic level. Indeed, margin involvement has a remarkably negative effect on patient prognosis and mandates the escalation of postoperative treatment by adding concomitant chemotherapy to radiotherapy, accordingly increasing the toxicity of the overall treatment. The rate of margin involvement in the head and neck is among the highest in the entire field of surgical oncology. In this context, the present PhD project was aimed at testing the utility of two technologies, namely surgical navigation with 3-dimensional rendering and pico projector-based augmented reality, in decreasing the rate of involved margins during oncologic surgical ablations in the craniofacial area. Experiments were performed at the University of Brescia, the University of Padua, and the University Health Network (Toronto, Ontario, Canada). 
The research activities completed in the context of this PhD course demonstrated that surgical navigation with 3-dimensional rendering confers a higher quality on oncologic ablations in the head and neck, irrespective of open or endoscopic surgical technique. The benefits of this implementation come with no relevant drawbacks from a logistical and practical standpoint, nor were major adverse events observed. Thus, implementation of this technology into standard care is the logical proposed next step; however, the genuine presence of a prognostic advantage needs longer and larger studies to be formally addressed. On the other hand, pico projector-based augmented reality showed insufficient advantages to encourage translation into the clinical setting. Although a clear practical advantage was observed from the projection of osteotomy lines onto the surgical field, no substantial benefits were measured when comparing this technology with surgical navigation with 3-dimensional rendering. While recognizing a potential value of this technology from an educational standpoint, the performance displayed in the preclinical setting in terms of surgical margin optimization does not favor clinical translation for this specific aim

    Prefrontal cortex activation upon a demanding virtual hand-controlled task: A new frontier for neuroergonomics

    Functional near-infrared spectroscopy (fNIRS) is a non-invasive, vascular-based functional neuroimaging technology that can assess, simultaneously from multiple cortical areas, concentration changes in oxygenated and deoxygenated hemoglobin at the level of the cortical microcirculation. With its high degree of ecological validity and very limited physical constraints on subjects, fNIRS could represent a valid tool for monitoring cortical responses in the field of neuroergonomics. In virtual reality (VR), real situations can be replicated with greater control than is obtainable in the real world, making VR an ideal setting for studies of neuroergonomics applications. The aim of the present study was to investigate, using a 20-channel fNIRS system, the dorsolateral/ventrolateral prefrontal cortex (DLPFC/VLPFC) in subjects performing a demanding VR hand-controlled task (HCT). Given its complexity, executing the HCT should require the allocation of attentional resources and the integration of different executive functions. The HCT simulates interaction with a real, remotely driven system operating in a critical environment. Hand movements were captured by a high spatial and temporal resolution 3-dimensional (3D) hand-sensing device, the LEAP motion controller, a gesture-based control interface that could be used in VR for tele-operated applications. Fifteen university students were asked to guide, with their right hand/forearm, a virtual ball (VB) over a virtual route (VROU) reproducing a 42 m narrow road including some critical points, trying to travel as far as possible without making the VB fall. The distance traveled by the guided VB was 70.2 ± 37.2 m. The less skilled subjects failed several times in guiding the VB over the VROU; nevertheless, a bilateral VLPFC activation in response to HCT execution was observed in all subjects. No correlation was found between the distance traveled by the guided VB and the corresponding cortical activation. These results confirm the suitability of fNIRS technology for objectively evaluating cortical hemodynamic changes occurring in VR environments. Future studies could contribute to a better understanding of the cognitive mechanisms underlying human performance in both expert and non-expert operators during the simulation of different demanding/fatiguing activities.
Carrieri, Marika; Petracca, Andrea; Lancia, Stefania; Basso Moro, Sara; Brigadoi, Sabrina; Spezialetti, Matteo; Ferrari, Marco; Placidi, Giuseppe; Quaresima, Valentina
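fNIRS systems typically derive the oxy-/deoxy-hemoglobin concentration changes mentioned above from optical density changes via the modified Beer-Lambert law. A minimal sketch of that conversion for one channel and two wavelengths; the extinction coefficients, source-detector distance, and differential pathlength factor below are placeholders, not values from the study's 20-channel system:

```python
# Illustrative modified Beer-Lambert law: solve ΔOD = (E @ Δc) * d * DPF
# for Δc = [ΔHbO, ΔHbR], given ΔOD measured at two wavelengths.
import numpy as np

def mbll(delta_od, ext_coeffs, distance_cm, dpf):
    """Return [ΔHbO, ΔHbR] from optical density changes at two wavelengths."""
    E = np.asarray(ext_coeffs, dtype=float)   # rows: wavelengths, cols: (HbO, HbR)
    return np.linalg.solve(E * distance_cm * dpf, np.asarray(delta_od, dtype=float))

# Placeholder values for a 760/850 nm wavelength pair (arbitrary units):
E = [[1.5, 3.8],   # extinction coefficients at 760 nm (HbO, HbR)
     [2.5, 1.8]]   # extinction coefficients at 850 nm
d_hbo, d_hbr = mbll([0.01, 0.02], E, distance_cm=3.0, dpf=6.0)
```

Two wavelengths give two equations in the two unknown chromophore changes, which is why fNIRS probes measure at a wavelength pair straddling the hemoglobin isosbestic point.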

    The value of Augmented Reality in surgery — A usability study on laparoscopic liver surgery

    Augmented Reality (AR) is considered to be a promising technology for the guidance of laparoscopic liver surgery. By overlaying pre-operative 3D information of the liver and internal blood vessels on the laparoscopic view, surgeons can better understand the location of critical structures. In an effort to enable AR, several authors have focused on developing methods to obtain an accurate alignment between the laparoscopic video image and the pre-operative 3D data of the liver, without assessing the benefit that the resulting overlay can provide during surgery. In this paper, we present a study that aims to assess quantitatively and qualitatively the value of an AR overlay in laparoscopic surgery during a simulated surgical task on a phantom setup. We design a study where participants are asked to physically localise pre-operative tumours in a liver phantom under three image guidance conditions — a baseline condition without any image guidance, a condition where the 3D surfaces of the liver are aligned to the video and displayed on a black background, and a condition where video see-through AR is displayed on the laparoscopic video. Using data collected from a cohort of 24 participants, including 12 surgeons, we observe that compared to the baseline, AR decreases the median localisation error of surgeons on non-peripheral targets from 25.8 mm to 9.2 mm. Using subjective feedback, we also identify that AR introduces usability improvements in the surgical task and increases the perceived confidence of the users. Between the two tested displays, the majority of participants preferred the AR overlay over the navigated view of the 3D surfaces on a separate screen. We conclude that AR has the potential to improve performance and decision making in laparoscopic surgery, and that improvements in overlay alignment accuracy and depth perception should be pursued in the future
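The study's headline comparison is the median tumour localisation error per guidance condition. A minimal sketch of that aggregation; the trial data below are invented for illustration and are not the study's measurements:

```python
# Hypothetical per-condition median of tumour localisation errors (mm).
import statistics

def median_error_by_condition(trials):
    """trials: iterable of (condition, error_mm) pairs -> {condition: median}."""
    by_cond = {}
    for cond, err in trials:
        by_cond.setdefault(cond, []).append(err)
    return {cond: statistics.median(errs) for cond, errs in by_cond.items()}

# Invented trials for the three guidance conditions:
trials = [("baseline", 24.0), ("baseline", 27.5), ("baseline", 25.8),
          ("3d_screen", 15.0), ("3d_screen", 12.4),
          ("ar_overlay", 9.2), ("ar_overlay", 8.1), ("ar_overlay", 10.3)]
print(median_error_by_condition(trials))
```

The median (rather than the mean) is robust to the occasional grossly mislocalised target, which is why it is a common choice for this kind of phantom study.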

    Aerospace Medicine and Biology: A continuing bibliography, supplement 216

    One hundred twenty reports, articles, and other documents introduced into the NASA scientific and technical information system in January 1981 are listed. Topics include: sanitary problems; pharmacology; toxicology; safety and survival; life support systems; exobiology; and personnel factors

    Visual Perception and Cognition in Image-Guided Intervention

    Surgical image visualization and interaction systems can dramatically affect the efficacy and efficiency of surgical training, planning, and interventions. This is even more profound in minimally-invasive surgery, where restricted access to the operative field, in conjunction with a limited field of view, necessitates a visualization medium that provides patient-specific information at any given moment. Unfortunately, little research has been devoted to studying the human factors associated with medical image displays, and the need for robust, intuitive visualization and interaction interfaces has remained largely unfulfilled to this day. Failure to engineer efficient medical solutions and design intuitive visualization interfaces is argued to be one of the major barriers to the meaningful transfer of innovative technology to the operating room. This thesis was, therefore, motivated by the need to study various cognitive and perceptual aspects of human factors in surgical image visualization systems, to increase the efficiency and effectiveness of medical interfaces, and ultimately to improve patient outcomes. To this end, we chose four different minimally-invasive interventions in the realm of surgical training, planning, training for planning, and navigation. The first chapter involves the use of stereoendoscopes to reduce morbidity in endoscopic third ventriculostomy; the results of this study suggest that, compared with conventional endoscopes, stereoendoscopes can facilitate the detection of the basilar artery on the surface of the third ventricle, increasing the safety of targeting in third ventriculostomy procedures. In the second chapter, a contour enhancement technique is described to improve preoperative planning of arteriovenous malformation interventions; the proposed method, particularly when combined with stereopsis, is shown to increase the speed and accuracy of understanding the spatial relationship between vascular structures. In the third chapter, an augmented-reality system is proposed to facilitate training in planning brain tumour resection; the results of our user study indicate that the proposed system improves subjects' performance, particularly novices', in formulating the optimal point of entry and surgical path, independent of the sensorimotor tasks performed. In the last chapter, the role of fully-immersive simulation environments on surgeons' non-technical skills in performing a vertebroplasty procedure is investigated. Our results suggest that while training may increase surgeons' technical skills, the introduction of crisis scenarios significantly disturbs performance, emphasizing the need for realistic simulation environments as part of the training curriculum