
    Effectiveness and User Experience of Augmented and Mixed Reality for Procedural Task Training

    Use of augmented reality (AR) and mixed reality (MR) technologies for training is increasing, due in part to opportunities for increased immersion, safer training, and reduced costs. However, AR/MR training effectiveness and user experience, particularly for head-mounted displays (HMDs), are not well understood. The purpose of this study is to investigate user perceptions and retention of AR/MR training delivered through an HMD for a procedural task. This two-part study utilized a within-subjects experimental design with 30 participants to determine how instruction method (paper vs. AR vs. MR) and time of procedure recall (immediate vs. post-test vs. retention) influenced completion time, perceived task difficulty, perceived confidence in successfully completing the task, workload, user experience, and trainee reactions. Results indicate differences between instruction methods for user experience and preference, with significantly higher user experience ratings for MR and lower preference rankings for AR. Findings also show decreased performance, increased perceived task difficulty, and decreased confidence as time since training increased, with no significant differences in these measures between instruction methods. Completion times and workload were also found to be comparable between instruction methods. This work provides insight into objective and subjective differences between paper-, AR-, and MR-based training experiences, which can be used to determine which type of training is best suited for a particular use case. Recommendations for appropriately matching training modalities and scenarios, as well as for how to successfully design AR/MR training experiences, are discussed.
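    As a purely illustrative sketch (not the study's actual analysis), a 3 x 3 within-subjects design of this kind could be evaluated with a repeated-measures ANOVA; the file name and column names below are assumptions.

import pandas as pd
from statsmodels.stats.anova import AnovaRM

# Long-format data: one row per participant x instruction-method x recall-time cell.
# Hypothetical columns: participant, method (paper/AR/MR),
# recall (immediate/post_test/retention), completion_time (seconds).
df = pd.read_csv("training_results.csv")  # hypothetical file

# Two-way repeated-measures ANOVA on completion time.
anova = AnovaRM(
    df,
    depvar="completion_time",
    subject="participant",
    within=["method", "recall"],
).fit()
print(anova)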

    Performance Factors in Neurosurgical Simulation and Augmented Reality Image Guidance

    Virtual reality surgical simulators have seen widespread adoption in an effort to provide safe, cost-effective and realistic practice of surgical skills. However, the majority of these simulators focus on training low-level technical skills, providing only prototypical surgical cases. For many complex procedures, this approach is deficient in representing the anatomical variations that present clinically, failing to challenge users’ higher-level cognitive skills important for navigation and targeting. Surgical simulators offer the means not only to simulate any case conceivable, but to test novel approaches and examine factors that influence performance. Unfortunately, there is a void in the literature surrounding these questions. This thesis was motivated by the need to expand the role of surgical simulators to provide users with clinically relevant scenarios and to evaluate human performance in relation to image guidance technologies, patient-specific anatomy, and cognitive abilities. To this end, various tools and methodologies were developed to examine cognitive abilities and knowledge, simulate procedures, and guide complex interventions, all within a neurosurgical context. The first chapter provides an introduction to the material. The second chapter describes the development and evaluation of a virtual anatomical training and examination tool. The results suggest that learning occurs and that spatial reasoning ability is an important performance predictor, but subordinate to anatomical knowledge. The third chapter outlines the development of automation tools to enable efficient simulation studies and data management. In the fourth chapter, subjects perform abstract targeting tasks on ellipsoid targets with and without augmented reality guidance. While the guidance tool improved accuracy, performance with the tool was strongly tied to target depth estimation, an important consideration for implementation and training with similar guidance tools. In the fifth chapter, neurosurgically experienced subjects were recruited to perform simulated ventriculostomies. Results showed that anatomical variations influence performance and could impact outcomes. Augmented reality guidance showed no marked improvement in performance, but exhibited a mild learning curve, indicating that additional training may be warranted. The final chapter summarizes the work presented. Our results and novel evaluative methodologies lay the groundwork for further investigation into simulators as versatile research tools to explore performance factors in simulated surgical procedures.
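    As an illustration only (not the thesis's implementation), targeting error against an ellipsoid target could be quantified along these lines; the tip position, target centre, semi-axes, and viewing direction below are hypothetical inputs.

import numpy as np

def targeting_errors(tip, centre, semi_axes, view_dir):
    """Return Euclidean error, the error component along the viewing
    direction (a rough proxy for depth misjudgement), and a normalised
    error that is <= 1 when the tip lies inside the ellipsoid."""
    tip = np.asarray(tip, dtype=float)
    centre = np.asarray(centre, dtype=float)
    view_dir = np.asarray(view_dir, dtype=float)
    err_vec = tip - centre
    euclidean = np.linalg.norm(err_vec)
    depth = abs(err_vec @ view_dir) / np.linalg.norm(view_dir)
    normalised = np.linalg.norm(err_vec / np.asarray(semi_axes, dtype=float))
    return euclidean, depth, normalised

# Hypothetical trial: tip 1 mm off laterally and 1.5 mm short along the view axis.
print(targeting_errors(tip=[1.0, 2.0, 10.5], centre=[0.0, 2.0, 12.0],
                       semi_axes=[3.0, 2.0, 2.0], view_dir=[0.0, 0.0, 1.0]))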

    A multilevel model for movement rehabilitation in Traumatic Brain Injury (TBI) using virtual environments

    This paper presents a conceptual model for movement rehabilitation of traumatic brain injury (TBI) using virtual environments. This hybrid model integrates principles from ecological systems theory with recent advances in cognitive neuroscience, and supports a multilevel approach to both assessment and treatment. Performance outcomes at any stage of recovery are determined by the interplay of task, individual, and environmental/contextual factors. We argue that any system of rehabilitation should provide enough flexibility for task and context factors to be varied systematically, based on the current neuromotor and biomechanical capabilities of the performer or patient. Thus, in order to understand how treatment modalities are to be designed and implemented, there is a need to understand the function of brain systems that support learning at a given stage of recovery, and the inherent plasticity of the system. We know that virtual reality (VR) systems allow training environments to be presented in a highly automated, reliable, and scalable way. Presentation of these virtual environments (VEs) should permit movement analysis at three fundamental levels of behaviour: (i) neurocognitive bases of performance (we focus in particular on the development and use of internal models for action which support adaptive, on-line control); (ii) movement forms and patterns that describe the patients' movement signature at a given stage of recovery (i.e., kinetic and kinematic markers of movement proficiency); and (iii) functional outcomes of the movement. Each level of analysis can also map quite seamlessly to different modes of treatment. At the neurocognitive level, for example, semi-immersive VEs can help retrain internal modeling processes by reinforcing the patients' sense of multimodal space (via augmented feedback), their position within it, and the ability to predict and control actions flexibly (via movement simulation and imagery training). More specifically, we derive four key therapeutic environment concepts (or Elements) presented using VR technologies: Embodiment (simulation and imagery), Spatial Sense (augmenting position sense), Procedural (automaticity and dual-task control), and Participatory (self-initiated action). The use of tangible media/objects, force transduction, and vision-based tracking systems for the augmentation of gestures and physical presence will be discussed in this context.

    Motor Learning Deficits in Parkinson's Disease (PD) and Their Effect on Training Response in Gait and Balance: A Narrative Review

    Parkinson's disease (PD) is a neurological disorder traditionally associated with degeneration of the dopaminergic neurons within the substantia nigra, which results in bradykinesia, rigidity, tremor, and postural instability and gait disability (PIGD). The disorder has also been implicated in degradation of motor learning. While individuals with PD are able to learn, certain aspects of learning, especially automatic responses to feedback, are faulty, resulting in a reliance on feedforward systems of movement learning and control. Because of this, patients with PD may require more training to achieve and retain motor learning and may require additional sensory information or motor guidance in order to facilitate this learning. Furthermore, they may be unable to maintain these gains in environments and situations in which conscious effort is divided (such as dual-tasking). These shortcomings in motor learning could play a large part in the degenerative gait and balance symptoms often seen in the disease, as patients are unable to adapt to gradual sensory and motor degradation. Research has shown that physical and exercise therapy can help patients with PD adopt new feedforward strategies to partially counteract these symptoms. In particular, balance, treadmill, resistance, and repeated perturbation training therapies have been shown to improve motor patterns in PD. However, much research is still needed to determine which of these therapies best alleviates which symptoms of PIGD, the needed dose and intensity of these therapies, and long-term retention effects. The benefits of such technologies as augmented feedback, motorized perturbations, virtual reality, and weight-bearing assistance are also of interest. This narrative review will evaluate the effect of PD on motor learning and the effect of motor learning deficits on response to physical therapy and training programs, focusing specifically on features related to PIGD. Potential methods to strengthen therapeutic effects will be discussed.

    An Overview of Self-Adaptive Technologies Within Virtual Reality Training

    This overview presents the current state of the art of self-adaptive technologies within virtual reality (VR) training. Virtual reality training and assessment is increasingly used in five key areas: medical, industrial & commercial training, serious games, rehabilitation, and remote training such as Massive Open Online Courses (MOOCs). Adaptation can be applied to five core technologies of VR, including haptic devices, stereo graphics, adaptive content, assessment, and autonomous agents. Automation of VR training can contribute to automation of actual procedures, including remote and robot-assisted surgery, which reduces injury and improves the accuracy of the procedure. Automated haptic interaction can enable tele-presence and tactile interaction with virtual artefacts from either remote or simulated environments. Automation, machine learning, and data-driven features play an important role in providing trainee-specific, individually adaptive training content. Data from trainee assessment can form an input to autonomous systems for customised training and automated difficulty levels that match individual requirements. Self-adaptive technology has previously been developed within individual core technologies of VR training. One conclusion of this research is that, although no such framework yet exists, an enhanced portable framework is needed; it would be beneficial to combine automation of the core technologies into a reusable automation framework for VR training.
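    A minimal sketch of the data-driven adaptation loop described above, in which trainee assessment scores feed back into the difficulty of subsequent training content; the thresholds, score range, and names are assumptions rather than anything taken from the overview.

from dataclasses import dataclass, field

@dataclass
class TraineeState:
    difficulty: int = 1                                 # 1 (easiest) .. 5 (hardest)
    recent_scores: list = field(default_factory=list)  # assessment scores in [0, 1]

def adapt_difficulty(state: TraineeState, new_score: float) -> TraineeState:
    """Raise difficulty after sustained success, lower it after repeated failure."""
    scores = state.recent_scores[-2:] + [new_score]     # sliding window of 3 scores
    mean = sum(scores) / len(scores)
    difficulty = state.difficulty
    if mean > 0.85 and difficulty < 5:
        difficulty += 1                                 # ready for harder content
    elif mean < 0.5 and difficulty > 1:
        difficulty -= 1                                 # step back to consolidate basics
    return TraineeState(difficulty=difficulty, recent_scores=scores)

state = TraineeState()
for score in [0.9, 0.95, 0.2, 0.3, 0.25]:
    state = adapt_difficulty(state, score)
    print(state.difficulty, state.recent_scores)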

    Haptic-Enhanced Learning in Preclinical Operative Dentistry

    Background: Virtual reality haptic simulators represent a new paradigm in dental education that may potentially impact the rate and efficiency of basic skill acquisition, as well as pedagogically influence various aspects of students’ preclinical experience. However, the evidence to support their efficiency and inform their implementation is still limited. Objectives: This thesis set out to empirically examine how a haptic VR simulator (Simodont®) can enhance the preclinical dental education experience, particularly in the context of operative dentistry. We specify four distinct research themes to explore, namely: simulator validity (face, content and predictive), human factors in 3D stereoscopic display, motor skill acquisition, and curriculum integration. Methods: Chapter 3 explores the face and content validity of the Simodont® haptic dental simulator among a group of postgraduate dental students. Chapter 4 examines the predictive utility of Simodont® in predicting subsequent preclinical and clinical performance. The results indicate the potential utility of the simulator in predicting future clinical dental performance among undergraduate students. Chapter 5 investigates the role of stereopsis in dentistry from two different perspectives via two studies. Chapter 6 explores the effect of qualitatively different types of pedagogical feedback on the training, transfer and retention of basic manual dexterity dental skills. The results indicate that the acquisition and retention of basic dental motor skills in novice trainees is best optimised through a combination of instructor feedback and visual-display VR-driven feedback. A pedagogical model for integration of the haptic dental simulator into the dental curriculum is proposed in Chapter 7. Conclusion: The findings from this thesis provide new insights into the utility of the haptic virtual reality simulator in undergraduate preclinical dental education. Haptic simulators have promising potential as a pedagogical tool in undergraduate dentistry that complements existing simulation methods. Integration of haptic VR simulators into the dental curriculum has to be informed by sound pedagogical principles and mapped onto specific learning objectives.

    Sonic interactions in virtual environments

    This book tackles the design of 3D spatial interactions from an audio-centered and audio-first perspective, providing the fundamental notions related to the creation and evaluation of immersive sonic experiences. The key elements that enhance the sensation of place in a virtual environment (VE) are:
    - Immersive audio: the computational aspects of the acoustical-space properties of Virtual Reality (VR) technologies
    - Sonic interaction: the human-computer interplay through auditory feedback in VEs
    - VR systems: natural support for multimodal integration, impacting different application domains
    Sonic Interactions in Virtual Environments features state-of-the-art research on real-time auralization, sonic interaction design in VR, quality of experience in multimodal scenarios, and applications. Contributors and editors include interdisciplinary experts from the fields of computer science, engineering, acoustics, psychology, design, humanities, and beyond. Their mission is to shape an emerging new field of study at the intersection of sonic interaction design and immersive media, embracing an archipelago of existing research spread across different audio communities, and to raise awareness among VR researchers and practitioners of the importance of sonic elements when designing immersive environments.

    Visualizing Causality in Mixed Reality for Manual Task Learning: An Exploratory Study

    Mixed Reality (MR) is gaining prominence in manual task skill learning due to its in-situ, embodied, and immersive experience. To teach manual tasks, current methodologies break the task into hierarchies (tasks into subtasks) and visualize the current and future subtasks in terms of causality. Existing psychology literature also shows that humans learn tasks by breaking them into hierarchies. To understand the design space of information visualized to the learner for better task understanding, we conducted a user study with 48 users. The study used a complex assembly task, which involves learning both actions and tool usage. We explore the effect of visualizing causality in the hierarchy for manual task learning in MR through four options: no causality, event-level causality, interaction-level causality, and gesture-level causality. The results show that users understand and perform best when all levels of causality are shown. Based on the results, we provide design recommendations and in-depth discussions for future manual task learning systems.
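    For illustration only, the four conditions of that design space could be encoded as follows; the level descriptions are one reading of the abstract's labels, not the authors' definitions.

from enum import Enum

class CausalityLevel(Enum):
    NONE = "show the current subtask only, with no causal links"
    EVENT = "additionally link subtasks to one another as cause-effect events"
    INTERACTION = "additionally show which object interactions produce each event"
    GESTURE = "additionally show the hand/tool gestures behind each interaction"

for level in CausalityLevel:
    print(f"{level.name:12s} {level.value}")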