13,920 research outputs found

    HP Windows Mixed Reality vs Meta 2: Investigating Differences in Workload and Usability for a Ball-sorting Task

    Perceived workload and usability are crucial components of human-computer interaction. Currently, there is a gap in research comparing Augmented Reality (AR) and Virtual Reality (VR) systems in terms of workload and usability. This study attempts to bridge that gap by comparing the HP Windows Mixed Reality system and the Meta 2 system on a ball-sorting task. Subjective questionnaires on workload and usability served as comparative measures across three game scenarios of increasing difficulty. Forty-one participants were recruited from the University of Central Florida and its surrounding communities. Results showed significantly lower cumulative total workload and greater usability (for the ease-of-use subscale) for the HP Windows Mixed Reality system compared to the Meta 2 system. There were no statistically significant differences for the other usability subscales between the two systems, nor in total workload across the three scenarios for either system. The findings could be attributed to differences in control schemes (i.e., native handheld controllers versus hand gestures), user experience with AR and VR systems, and the difficulty of the task scenarios.
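A minimal sketch of the kind of within-subject comparison the study describes: a paired-samples t-test on total-workload scores, where each participant is measured on both systems. The scores below are hypothetical illustrations, not the study's data.

```python
import math
from statistics import mean, stdev

def paired_t_test(scores_a, scores_b):
    """Paired-samples t statistic: each participant contributes one
    score per condition, so we test the per-participant differences."""
    diffs = [a - b for a, b in zip(scores_a, scores_b)]
    n = len(diffs)
    t = mean(diffs) / (stdev(diffs) / math.sqrt(n))
    return t, n - 1  # t statistic, degrees of freedom

# Hypothetical total-workload scores (0-100 scale) for the same
# eight participants on each system -- illustrative numbers only.
meta2_scores = [62, 55, 70, 58, 66, 61, 59, 64]
hp_wmr_scores = [48, 50, 57, 45, 52, 49, 51, 47]

t, df = paired_t_test(meta2_scores, hp_wmr_scores)
print(f"t({df}) = {t:.2f}")  # large positive t => lower workload on HP WMR
```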

    Rethinking Eye-blink: Assessing Task Difficulty through Physiological Representation of Spontaneous Blinking

    Continuous assessment of task difficulty and mental workload is essential in improving the usability and accessibility of interactive systems. Eye-tracking data has often been investigated to achieve this ability, with reports on the limited role of standard blink metrics. Here, we propose a new approach to the analysis of eye-blink responses for automated estimation of task difficulty. The core module is a time-frequency representation of eye-blink, which aims to capture the richness of information reflected in blinking. In our first study, we show that this method significantly improves sensitivity to task difficulty. We then demonstrate how to form a framework in which the represented patterns are analyzed with multi-dimensional Long Short-Term Memory (LSTM) recurrent neural networks for their non-linear mapping onto difficulty-related parameters. This framework outperformed other methods that used hand-engineered features. The approach works with any built-in camera, without requiring specialized devices. We conclude by discussing how Rethinking Eye-blink can benefit real-world applications.
    Comment: [Accepted version] In Proceedings of the CHI Conference on Human Factors in Computing Systems (CHI '21), May 8-13, 2021, Yokohama, Japan. ACM, New York, NY, USA. 19 pages. https://doi.org/10.1145/3411764.344557
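The "time-frequency representation" at the core of the approach can be illustrated with a short-time Fourier transform over a binary blink trace. This is a generic sketch of the idea, not the paper's exact pipeline; the blink trace, window size, and hop size are made up.

```python
import cmath

def stft_magnitudes(signal, win=8, hop=4):
    """Short-time DFT magnitudes: one spectrum per sliding window.
    Rows = time frames, columns = frequency bins (0..win//2)."""
    frames = []
    for start in range(0, len(signal) - win + 1, hop):
        seg = signal[start:start + win]
        spectrum = []
        for k in range(win // 2 + 1):
            s = sum(seg[n] * cmath.exp(-2j * cmath.pi * k * n / win)
                    for n in range(win))
            spectrum.append(abs(s))
        frames.append(spectrum)
    return frames

# Hypothetical binary blink trace sampled over a task:
# 1 = eye closed (blink), 0 = eye open.
blinks = [0] * 5 + [1] * 2 + [0] * 9 + [1] * 2 + [0] * 14
tf = stft_magnitudes(blinks)
print(len(tf), "frames x", len(tf[0]), "bins")
```

A 2-D pattern like `tf` (time frames by frequency bins) is the kind of input a recurrent network can then map onto difficulty-related parameters.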


    You Can't Hide Behind Your Headset: User Profiling in Augmented and Virtual Reality

    Virtual and Augmented Reality (VR, AR) are increasingly gaining traction thanks to their technical advancement and the need for remote connections, recently accentuated by the pandemic. Remote surgery, telerobotics, and virtual offices are only some examples of their successes. As users interact with VR/AR, they generate extensive behavioral data, usually leveraged for measuring human behavior. However, little is known about how this data can be used for other purposes. In this work, we demonstrate the feasibility of user profiling in two different use cases of virtual technologies: an everyday AR application (N=34) and VR robot teleoperation (N=35). Specifically, we leverage machine learning to identify users and infer their individual attributes (i.e., age, gender). By monitoring users' head, controller, and eye movements, we investigate the ease of profiling on several tasks (e.g., walking, looking, typing) under different mental loads. Our contribution gives significant insights into user profiling in virtual environments.
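To illustrate the profiling idea in miniature: summarize each session as a small feature vector of movement statistics, then identify users with a nearest-centroid classifier. The feature names and numbers below are hypothetical; the paper's actual models and feature sets are not reproduced here.

```python
import math

def centroid(vectors):
    """Component-wise mean of a list of equal-length feature vectors."""
    return [sum(col) / len(vectors) for col in zip(*vectors)]

def nearest_centroid(sample, centroids):
    """Return the label whose centroid is closest (Euclidean distance)."""
    return min(centroids, key=lambda label: math.dist(sample, centroids[label]))

# Hypothetical per-session features: (mean head speed, mean controller
# speed, fixation rate) -- illustrative numbers only.
sessions = {
    "user_a": [[0.30, 0.80, 2.1], [0.32, 0.78, 2.0]],
    "user_b": [[0.55, 0.40, 3.5], [0.53, 0.42, 3.4]],
}
centroids = {user: centroid(vecs) for user, vecs in sessions.items()}

unseen = [0.31, 0.79, 2.05]  # a new session, actually from user_a
print(nearest_centroid(unseen, centroids))
```

Even this trivial classifier conveys the privacy concern: stable per-user motion statistics make a "behavioral fingerprint".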

    Investigating Real-time Touchless Hand Interaction and Machine Learning Agents in Immersive Learning Environments

    The recent surge in the adoption of new technologies and innovations in connectivity, interaction technology, and artificial realities can fundamentally change the digital world. eXtended Reality (XR), with its potential to bridge the virtual and real environments, creates new possibilities to develop more engaging and productive learning experiences. Evidence is emerging that this sophisticated technology offers new ways to improve the learning process for better student interaction and engagement. Recently, immersive technology has garnered much attention as an interactive technology that facilitates direct interaction with virtual objects in the real world. Furthermore, these virtual objects can be surrogates for real-world teaching resources, allowing for virtual labs. Thus, XR could enable learning experiences that would not be possible in impoverished educational systems worldwide. Interestingly, concepts such as virtual hand interaction and techniques such as machine learning are still not widely investigated in immersive learning. Hand-interaction technologies in virtual environments can support the kinesthetic learning pedagogical approach, and the need for their touchless nature has increased exceptionally in the post-COVID world. By implementing and evaluating real-time hand-interaction technology for kinesthetic learning and machine learning agents for self-guided learning, this research has addressed these underutilized technologies to demonstrate the efficiency of immersive learning. This thesis has explored different hand-tracking APIs and devices to integrate real-time hand-interaction techniques. These hand-interaction techniques and integrated machine learning agents using reinforcement learning are evaluated with different display devices to test compatibility. The proposed approach aims to provide self-guided, more productive, and interactive learning experiences.
Further, this research has investigated ethics, privacy, and security issues in XR and covered the future of immersive learning in the Metaverse.
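The "machine learning agents using reinforcement learning" are not detailed in this abstract; as a generic illustration of the technique, here is a tabular Q-learning agent on a toy lesson-progression chain (the states, actions, and rewards are invented for the sketch):

```python
import random

random.seed(0)  # reproducible toy run

def train_q(n_states=5, n_actions=2, episodes=500, alpha=0.5, gamma=0.9):
    """Tabular Q-learning on a chain of lesson steps: action 1 advances
    to the next step, action 0 stays put; reward 1 at the final step."""
    q = [[0.0] * n_actions for _ in range(n_states)]
    for _ in range(episodes):
        s = 0
        while s < n_states - 1:
            if random.random() < 0.2:  # epsilon-greedy exploration
                a = random.randrange(n_actions)
            else:
                a = max(range(n_actions), key=lambda i: q[s][i])
            s2 = s + 1 if a == 1 else s
            r = 1.0 if s2 == n_states - 1 else 0.0
            q[s][a] += alpha * (r + gamma * max(q[s2]) - q[s][a])
            s = s2
    return q

q = train_q()
# After training, "advance" (action 1) should be the greedy choice in
# every non-terminal state.
print([max(range(2), key=lambda i: q[s][i]) for s in range(4)])
```

A self-guided-learning agent in XR would use the same update rule, with states and rewards derived from learner progress rather than this toy chain.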


    Multiple remote tower for Single European Sky: The evolution from initial operational concept to regulatory approved implementation

    The European Union project of Single European Sky initiated a reorganization of European airspace and proposed additional measures for air traffic management to achieve the key objectives of improving efficiency and capacity while at the same time enhancing safety. The concept of multiple remote tower operation is that air traffic controllers (ATCOs) can control several airfields from a distant virtual control centre. The control of multiple airfields can be centralised to a virtual centre, permitting the more efficient use of ATCO resources. This research was sponsored by the Single European Sky ATM Research Program and the ATM Operations Division of the Irish Aviation Authority. A safety case was developed for the migration of multiple remote tower services to live operations. This research conducted 50 large-scale demonstration trials of remote tower operations, from single tower operations to multiple tower operations, for safety assessment by air navigation safety regulators in 2016. A dedicated team of air traffic controllers and technology experts successfully completed the safety assessment of multiple remote tower operations in real time. The implementation of this innovative technology requires a careful balance between cost-efficiency and the safety of air traffic control in terms of capacity and human performance. The live trial exercises demonstrated that the air traffic services provided by the remote tower for a single airport and for two medium airports by a single ATCO, with ‘in sequence’ and ‘simultaneous’ aircraft operation, were at least as safe as those provided by the local towers at Cork and Shannon aerodromes. No safety occurrence was reported, nor did any operational safety issue arise during the conduct of the fifty live trial exercises.

    A comprehensive method to design and assess mixed reality simulations

    The scientific literature highlights how Mixed Reality (MR) simulations allow obtaining several benefits in healthcare education. Simulation-based training, boosted by MR, offers an exciting and immersive learning experience that helps health professionals acquire knowledge and skills without exposing patients to unnecessary risks. However, high engagement, informational overload, and unfamiliarity with virtual elements could expose students to cognitive overload and acute stress. The implementation of effective simulation design strategies able to preserve the psychological safety of learners, and the investigation of the impacts and effects of simulations, are two open challenges to be faced. In this context, the present study proposes a method to design a medical simulation and evaluate its effectiveness, with the final aim of achieving the learning outcomes without compromising the students' psychological safety. The method has been applied in the design and development of an MR application to simulate the rachicentesis (lumbar puncture) procedure for diagnostic purposes in adults. The MR application has been tested by involving twenty students of the 6th year of Medicine and Surgery at Università Politecnica delle Marche. Multiple measurement techniques, such as self-reports, physiological indices, and observer ratings of learners' performance and cognitive and emotional states, have been implemented to improve the rigour of the study. Also, a user-experience analysis has been accomplished to discriminate between two different devices: Vox Gear Plus® and Microsoft HoloLens®. To compare the results with a reference, students also performed the simulation without using the MR application. The use of MR resulted in increased stress, measured by physiological parameters, without a high increase in perceived workload. It thus satisfies the objective of enhancing the realism of the simulation without generating cognitive overload, which favours productive learning.
The user-experience (UX) analysis found greater benefits in involvement, immersion, and realism; however, it emphasized the technological limitations of the devices, such as obstruction and loss of depth (Vox Gear Plus) and the narrow field of view (Microsoft HoloLens).