
    The Effect of Teaching Tools on Spatial Understanding in Health Sciences Education: A Systematic Review

    Background: The concept of spatial orientation is integral to health education. Students studying to be healthcare professionals use their visual intelligence to develop 3D mental models from 2D images, such as X-rays, MRI, and CT scans, which exerts a heavy cognitive load on them. Innovative teaching tools and technologies are being developed to improve students’ learning experiences. However, the impact of these teaching modalities on spatial understanding is not often evaluated. This systematic review investigates the current literature to identify which teaching tools and techniques are intended to improve students’ 3D sense and how these tools affect learners’ spatial understanding. Methods: The Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines were followed. Four databases were searched with multiple search terms. The articles were screened against inclusion and exclusion criteria and assessed for quality. Results: Nineteen articles were eligible for our systematic review. Teaching tools focused on improving spatial concepts can be grouped into five categories. The review findings reveal that, with access to the teaching tool, the experimental groups performed as well as or significantly better than the control groups in tests and tasks. Conclusion: Our review investigated the current literature to identify and categorize teaching tools shown to improve spatial understanding in healthcare professionals. The teaching tools identified in our review showed improvements in both measured and perceived spatial intelligence. However, a wide variation exists among the teaching tools and assessment techniques. We also identified knowledge gaps and future research opportunities.

    VR4Health: personalized teaching and learning anatomy using VR

    Virtual Reality (VR) is being integrated into many different areas of our lives, from industrial engineering to video games, and including teaching and education. There are several examples where VR has been used to engage students and facilitate their 3D spatial understanding, but can VR also help teachers? What benefits can teachers obtain from using VR applications? In this paper we present an application (VR4Health) designed to allow students to directly inspect 3D models of several human organs using Virtual Reality systems. The application is designed to be used autonomously on an HMD device as a self-learning tool, and it also reports information to teachers so that they are aware of what the students do and can redirect their work to the concrete needs of each student. We evaluate both the students’ and the teachers’ perception through an experiment, asking participants to fill in a questionnaire at the end. This study was partially funded by the Spanish Ministry of Science and Innovation (grant number TIN2017-88515-C2-1-R).

    Computational requirements of the virtual patient

    Medical visualization in a hospital can be used to aid training, diagnosis, and pre- and intra-operative planning. In such an application, a virtual representation of a patient is needed that is interactive, can be viewed in three dimensions (3D), and simulates physiological processes that change over time. This paper highlights some of the computational challenges of implementing a real-time simulation of a virtual patient, when accuracy can be traded off against speed. Illustrations are provided using projects from our research, from Grid-based visualization through to use of the Graphics Processing Unit (GPU).
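    The accuracy-versus-speed trade-off described in this abstract can be made concrete with a frame-budget loop that lowers the solver's substep count when a frame overruns its time budget and restores it when there is headroom. This is a minimal, hypothetical sketch: the toy relaxation step merely stands in for a real soft-tissue solver, and the function names and budget values are assumptions, not taken from the paper.

```python
import time

def simulate_step(state, substeps):
    """One physics update; more substeps = more accuracy and more cost.
    A toy exponential relaxation stands in for a real tissue solver."""
    dt = 1.0 / 60.0 / substeps
    for _ in range(substeps):
        state = state + dt * (1.0 - state)
    return state

def adaptive_loop(frames=10, budget_s=1.0 / 60.0):
    """Trade accuracy for speed: halve the substep count when a frame
    overruns its budget, double it back when there is ample headroom."""
    state, substeps = 0.0, 32
    for _ in range(frames):
        t0 = time.perf_counter()
        state = simulate_step(state, substeps)
        elapsed = time.perf_counter() - t0
        if elapsed > budget_s and substeps > 1:
            substeps //= 2      # cheaper, less accurate
        elif elapsed < budget_s / 2 and substeps < 64:
            substeps *= 2       # headroom: restore accuracy
    return state, substeps
```

    A production system would adapt mesh resolution or solver iterations rather than a scalar substep count, but the control structure is the same.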

    An Endoscope Interface for Immersive Virtual Reality

    This is the accepted version of the following article: John, N.W., Day, T.W., & Wardle, T. (2020). An Endoscope Interface for Immersive Virtual Reality. Eurographics Workshop on Visualization for Biology and Medicine, Eurographics Association, which has been published in final form at http://onlinelibrary.wiley.com. This article may be used for non-commercial purposes in accordance with the Wiley Self-Archiving Policy. This is a work-in-progress paper that describes a novel endoscope interface designed for use in an immersive virtual reality surgical simulator. We use an affordable off-the-shelf head-mounted display to recreate the operating theatre environment. A handheld controller has been adapted so that it feels to the trainee like holding an endoscope controller with the same functionality. The simulator allows the endoscope shaft to be inserted into a virtual patient and pushed forward to a target position. The paper describes how we built this surgical simulator, with the intention of carrying out a full clinical study in the near future.

    Augmented Reality and Robotics: A Survey and Taxonomy for AR-enhanced Human-Robot Interaction and Robotic Interfaces

    This paper contributes a taxonomy of augmented reality and robotics based on a survey of 460 research papers. Augmented and mixed reality (AR/MR) have emerged as a new way to enhance human-robot interaction (HRI) and robotic interfaces (e.g., actuated and shape-changing interfaces). Recently, an increasing number of studies in HCI, HRI, and robotics have demonstrated how AR enables better interactions between people and robots. However, research often remains focused on individual explorations and key design strategies, and research questions are rarely analyzed systematically. In this paper, we synthesize and categorize this research field along the following dimensions: 1) approaches to augmenting reality; 2) characteristics of robots; 3) purposes and benefits; 4) classification of presented information; 5) design components and strategies for visual augmentation; 6) interaction techniques and modalities; 7) application domains; and 8) evaluation strategies. We formulate key challenges and opportunities to guide and inform future research in AR and robotics.

    How, for Whom, and in Which Contexts or Conditions Augmented and Virtual Reality Training Works in Upskilling Health Care Workers: Realist Synthesis

    BACKGROUND: Using traditional simulators (eg, cadavers, animals, or actors) to upskill health workers is becoming less common because of ethical issues, commitment to patient safety, and cost and resource restrictions. Virtual reality (VR) and augmented reality (AR) may help to overcome these barriers. However, their effectiveness is often contested and poorly understood and warrants further investigation. OBJECTIVE: The aim of this review is to develop, test, and refine an evidence-informed program theory on how, for whom, and to what extent training using AR or VR works for upskilling health care workers and to understand what facilitates or constrains their implementation and maintenance. METHODS: We conducted a realist synthesis using the following 3-step process: theory elicitation, theory testing, and theory refinement. We first searched 7 databases and 11 practitioner journals for literature on AR or VR used to train health care staff. In total, 80 papers were identified, and information regarding context-mechanism-outcome (CMO) was extracted. We conducted a narrative synthesis to form an initial program theory comprising CMO configurations. To refine and test this theory, we identified empirical studies through a second search of the same databases used in the first search. We used the Mixed Methods Appraisal Tool to assess the quality of the studies and to determine our confidence in each CMO configuration. RESULTS: Of the 41 CMO configurations identified, we had moderate to high confidence in 9 (22%) based on 46 empirical studies reporting on VR, AR, or mixed simulation training programs. These stated that realistic (high-fidelity) simulations trigger perceptions of realism, easier visualization of patient anatomy, and an interactive experience, which result in increased learner satisfaction and more effective learning. Immersive VR or AR engages learners in deep immersion and improves learning and skill performance.
When transferable skills and knowledge are taught using VR or AR, skills are enhanced and practiced in a safe environment, leading to knowledge and skill transfer to clinical practice. Finally, for novices, VR or AR enables repeated practice, resulting in technical proficiency, skill acquisition, and improved performance. The most common barriers to implementation were up-front costs, negative attitudes and experiences (ie, cybersickness), developmental and logistical considerations, and the complexity of creating a curriculum. Facilitating factors included decreasing costs through commercialization, increasing the cost-effectiveness of training, a cultural shift toward acceptance, access to training, and leadership and collaboration. CONCLUSIONS: Technical and nontechnical skills training programs using AR or VR for health care staff may trigger perceptions of realism and deep immersion and enable easier visualization, interactivity, enhanced skills, and repeated practice in a safe environment. This may improve skills and increase learning, knowledge, and learner satisfaction. The future testing of these mechanisms using hypothesis-driven approaches is required. Research is also required to explore implementation considerations

    Fast Elastic Registration of Soft Tissues under Large Deformations

    A fast and accurate fusion of intra-operative images with pre-operative data is a key component of computer-aided interventions, which aim at improving the outcome of the intervention while reducing the patient's discomfort. In this paper, we focus on the problem of intra-operative navigation during abdominal surgery, which requires an accurate registration of tissues undergoing large deformations. Such a scenario occurs in the case of partial hepatectomy: to facilitate access to the pathology, e.g. a tumor located in the posterior part of the right lobe, the surgery is performed on a patient in the lateral position. Due to the change in the patient's position, the resection plan based on the pre-operative CT scan acquired in the supine position must be updated to account for the deformations. We suppose that an imaging modality, such as cone-beam CT, provides information about the intra-operative shape of an organ; however, due to the reduced radiation dose and contrast, the actual locations of the internal structures necessary to update the planning are not available. To this end, we propose a method allowing for fast registration of the pre-operative data, represented by a detailed 3D model of the liver and its internal structures, with the actual configuration given by the organ surface extracted from the intra-operative image. The algorithm behind the method combines the iterative closest point technique with a biomechanical model based on a co-rotational formulation of linear elasticity, which accounts for large deformations of the tissue. The performance, robustness, and accuracy of the method are quantitatively assessed on a control semi-synthetic dataset with known ground truth and a real dataset composed of nine pairs of abdominal CT scans acquired in the supine and flank positions. It is shown that the proposed surface-matching method is capable of reducing the target registration error of the internal structures of the organ from more than 40 mm to less than 10 mm. Moreover, the control data are used to demonstrate the compatibility of the method with an intra-operative clinical scenario, while the real datasets are utilized to study the impact of parametrization on the accuracy of the method. The method is also compared to a state-of-the-art intensity-based registration technique in terms of accuracy and performance.
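    The registration pipeline above combines iterative closest point (ICP) with a co-rotational elastic model. The deformable part is well beyond a short example, but the rigid point-to-point ICP core can be sketched in a few lines. This is a minimal illustration using the standard Kabsch/SVD alignment step, with brute-force nearest neighbours for clarity; the function names are hypothetical and nothing here reproduces the paper's elastic formulation.

```python
import numpy as np

def best_rigid_transform(src, dst):
    """Least-squares rotation R and translation t mapping src onto dst
    (Kabsch algorithm via SVD)."""
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:      # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = c_dst - R @ c_src
    return R, t

def icp(source, target, iters=30, tol=1e-6):
    """Rigidly register source points to target points by alternating
    nearest-neighbour matching and Kabsch alignment."""
    src = source.copy()
    prev_err = np.inf
    for _ in range(iters):
        # brute-force nearest-neighbour correspondences
        d2 = ((src[:, None, :] - target[None, :, :]) ** 2).sum(-1)
        matches = target[d2.argmin(axis=1)]
        R, t = best_rigid_transform(src, matches)
        src = src @ R.T + t
        err = np.sqrt(d2.min(axis=1)).mean()
        if abs(prev_err - err) < tol:
            break
        prev_err = err
    return src
```

    In the paper's setting the rigid update is replaced by a step of the co-rotational elastic solver, so the matched surface drives a volumetric deformation of the liver model rather than a single rotation and translation.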