
    A Content-Analysis Approach for Exploring Usability Problems in a Collaborative Virtual Environment

    As Virtual Reality (VR) products become more widely available in the consumer market, improving the usability of these devices and environments is crucial. In this paper, we introduce a framework for the usability evaluation of collaborative 3D virtual environments, based on a large-scale usability study of a mixed-modality collaborative VR system. We first review previous literature on important usability issues in collaborative 3D virtual environments, supplemented with our own research, in which we conducted 122 interviews after participants solved a collaborative virtual reality task. Then, building on the literature review and our results, we extend previous usability frameworks. We identified twelve different usability problems and, based on their causes, grouped them into three main categories: VR environment-, device interaction-, and task-specific problems. The framework can be used to guide the usability evaluation of collaborative VR environments.
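
    A minimal sketch of how such a content-analysis coding scheme might be applied when tallying coded interview excerpts. Only the three top-level categories come from the abstract; the individual problem codes and the function names below are illustrative placeholders, since the paper's twelve specific problems are not listed here.

```python
from collections import Counter

# Illustrative coding scheme: top-level categories are from the abstract;
# the problem codes underneath are hypothetical placeholders.
CODING_SCHEME = {
    "vr_environment": ["avatar_representation", "spatial_orientation"],
    "device_interaction": ["controller_mapping", "tracking_loss"],
    "task_specific": ["unclear_goal", "coordination_breakdown"],
}

def categorize(problem_code: str) -> str:
    """Map a coded usability problem to its top-level category."""
    for category, codes in CODING_SCHEME.items():
        if problem_code in codes:
            return category
    return "uncategorized"

def tally(coded_excerpts: list[str]) -> Counter:
    """Count how often each category appears across coded interview excerpts."""
    return Counter(categorize(code) for code in coded_excerpts)

if __name__ == "__main__":
    excerpts = ["tracking_loss", "unclear_goal", "tracking_loss", "spatial_orientation"]
    print(tally(excerpts))  # e.g. Counter({'device_interaction': 2, ...})
```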

    Usability Evaluation in Virtual Environments: Classification and Comparison of Methods

    Virtual environments (VEs) are a relatively new type of human-computer interface in which users perceive and act in a three-dimensional world. The designers of such systems cannot rely solely on design guidelines for traditional two-dimensional interfaces, so usability evaluation is crucial for VEs. We present an overview of VE usability evaluation. First, we discuss some of the issues that differentiate VE usability evaluation from evaluation of traditional user interfaces such as GUIs. We also present a review of VE evaluation methods currently in use, and discuss a simple classification space for VE usability evaluation methods. This classification space provides a structured means for comparing evaluation methods according to three key characteristics: involvement of representative users, context of evaluation, and types of results produced. To illustrate these concepts, we compare two existing evaluation approaches: testbed evaluation [Bowman, Johnson, & Hodges, 1999] and sequential evaluation [Gabbard, Hix, & Swan, 1999]. We conclude by presenting novel ways to effectively link these two approaches to VE usability evaluation.
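
    The three-axis classification space can be pictured as a small data structure. In this hedged sketch, only the three axis names come from the abstract; the value sets for each axis and the placements of the two compared approaches are illustrative assumptions, not the paper's actual classification.

```python
from dataclasses import dataclass
from enum import Enum

# Axis value sets below are assumptions; only the axis names
# (user involvement, evaluation context, result type) are from the abstract.
class UserInvolvement(Enum):
    NONE = "no representative users"
    REPRESENTATIVE = "representative users involved"

class Context(Enum):
    GENERIC = "generic (application-independent)"
    APPLICATION_SPECIFIC = "application-specific"

class ResultType(Enum):
    QUANTITATIVE = "quantitative"
    QUALITATIVE = "qualitative"

@dataclass
class EvaluationMethod:
    name: str
    user_involvement: UserInvolvement
    context: Context
    results: ResultType

# Hypothetical placements of the two approaches compared in the paper.
testbed = EvaluationMethod("testbed evaluation", UserInvolvement.REPRESENTATIVE,
                           Context.GENERIC, ResultType.QUANTITATIVE)
sequential = EvaluationMethod("sequential evaluation", UserInvolvement.REPRESENTATIVE,
                              Context.APPLICATION_SPECIFIC, ResultType.QUALITATIVE)
```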

    The simulator sickness questionnaire, and the erroneous zero baseline assumption

    Cybersickness assessment is predominantly conducted via the Simulator Sickness Questionnaire (SSQ). The literature has highlighted that assumptions made concerning baseline assessment may be incorrect, especially the assumption that healthy participants enter with no or minimal associated symptoms. An online survey study was conducted to explore this assumption further amongst a general population sample (N = 93). The results of this study suggest that the current baseline assumption may be inherently incorrect.
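
    The baseline issue is easiest to see in how SSQ scores are computed and baseline-corrected. The sketch below uses the standard Kennedy et al. (1993) subscale unit weights (9.54, 7.58, 13.92, and 3.74 for the total score); the item-to-subscale assignments are omitted, so the raw subscale sums and the function names are assumptions for illustration only.

```python
# Minimal sketch of SSQ subscale scoring with pre-exposure baseline subtraction.
# `nausea`, `oculomotor`, and `disorientation` are assumed to be the raw
# (unweighted) sums of the relevant 0-3 symptom ratings.

def ssq_scores(nausea: int, oculomotor: int, disorientation: int) -> dict:
    return {
        "N": nausea * 9.54,
        "O": oculomotor * 7.58,
        "D": disorientation * 13.92,
        "TS": (nausea + oculomotor + disorientation) * 3.74,
    }

def baseline_corrected(pre: dict, post: dict) -> dict:
    """Post-exposure scores minus pre-exposure scores.

    The commonly assumed baseline is zero for healthy participants;
    the study above suggests it often is not.
    """
    return {key: post[key] - pre[key] for key in post}

pre = ssq_scores(nausea=1, oculomotor=2, disorientation=0)   # non-zero baseline
post = ssq_scores(nausea=4, oculomotor=5, disorientation=3)
print(baseline_corrected(pre, post))
```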

    Use of Incremental Adaptation and Habituation Regimens for Mitigating Optokinetic Side-effects

    The use of incremental and repeated exposure regimens has been put forth as an effective means to mitigate visually induced motion sickness, based on the Dual Process Theory (DPT) of neural plasticity (Groves & Thompson, 1970). In essence, DPT suggests that by incrementing stimulus intensity, the depression opponent process should be allowed to exert greater control over the net outcome than the sensitization opponent process, thereby minimizing side-effects. This conceptual model was tested by empirically validating the effectiveness of adaptation, incremental adaptation, habituation, and incremental habituation regimens in mitigating side-effects arising from exposure to an optokinetic drum. Forty college students from the University of Central Florida participated in the experiment and were randomly assigned to a regimen. Efforts were made to balance the distribution of participants across treatments for gender and motion sickness susceptibility. Results indicated that, overall, the application of an incremental regimen is effective in reducing side-effects (e.g., malaise, dropout rates, and postural instabilities) compared to a non-incremented regimen, whether for a one-time or a repeated exposure. Furthermore, the application of the Motion History Questionnaire (MHQ) (Graybiel & Kennedy, 1965) for identifying individuals with high and low motion sickness susceptibility proved effective. Finally, gender differences in motion sickness were not found in this experiment, as a result of balancing susceptibility across the gender subject variable. Findings from this study can be used to aid the effective design of virtual environment (VE) usage regimens in an effort to manage cybersickness. Through pre-exposure identification of susceptible individuals via the MHQ, exposure protocols can be devised that may extend limits on exposure durations, mitigate side-effects, reduce dropout rates, and possibly increase training effectiveness. This document contains a fledgling set of guidelines for VE usage that append those under development by Stanney, Kennedy, & Kingdon (in press) and other previously established guidelines for simulator use (Kennedy et al., 1987). It is believed that, through proper allocation of effective VE usage regimens, cybersickness can be managed if susceptible individuals are identified prior to exposure.
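
    The contrast between a non-incremented and an incremental regimen can be sketched as two exposure-intensity schedules. The session counts, intensity values, and function names below are arbitrary illustrative placeholders, not the study's actual protocol.

```python
# Illustrative comparison: a non-incremented regimen holds stimulus intensity
# constant on every exposure, while an incremental regimen ramps intensity
# across exposures, which DPT suggests should favour the habituation
# (depression) process over sensitization.

def constant_regimen(intensity: float, sessions: int) -> list[float]:
    return [intensity] * sessions

def incremental_regimen(start: float, peak: float, sessions: int) -> list[float]:
    step = (peak - start) / (sessions - 1)
    return [start + i * step for i in range(sessions)]

print(constant_regimen(1.0, 4))           # [1.0, 1.0, 1.0, 1.0]
print(incremental_regimen(0.25, 1.0, 4))  # [0.25, 0.5, 0.75, 1.0]
```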

    Machine learning methods for the study of cybersickness: a systematic review

    This systematic review offers a world-first critical analysis of machine learning methods and systems for the study of cybersickness induced by virtual reality (VR), along with future directions. VR is becoming increasingly popular and is an important part of current advances in human training, therapies, entertainment, and access to the metaverse. Usage of this technology is limited by cybersickness, a common debilitating condition experienced upon VR immersion. Cybersickness is accompanied by a mix of symptoms including nausea, dizziness, fatigue, and oculomotor disturbances. Machine learning can be used to identify cybersickness and is a step towards overcoming these physiological limitations. Practical implementation of this is possible with optimised data collection from wearable devices and appropriate algorithms that incorporate advanced machine learning approaches. The present systematic review focuses on 26 selected studies. These concern machine learning of biometric and neuro-physiological signals obtained from wearable devices for the automatic identification of cybersickness. The methods, data processing, and machine learning architectures are examined, along with suggestions for future work on the detection and prediction of cybersickness. A wide range of immersion environments, participant activities, features, and machine learning architectures were identified. Although models for cybersickness detection have been developed, the literature still lacks a model for the prediction of first-instance events. Future research is directed towards goal-oriented data selection and labelling, as well as the use of brain-inspired spiking neural network models, to achieve better accuracy and a deeper understanding of the complex spatio-temporal brain processes related to cybersickness.
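
    A minimal sketch of the kind of detection pipeline the reviewed studies describe: per-window features from wearable physiological signals fed to a standard classifier that predicts a sick / not-sick label. The feature names and synthetic data are hypothetical stand-ins, and scikit-learn is used here only as a generic example; the surveyed studies vary widely in sensors, features, and architectures.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report

rng = np.random.default_rng(0)
# Synthetic stand-in data: 200 windows x 4 hypothetical features
# (e.g. mean heart rate, HRV, electrodermal activity, head sway).
X = rng.normal(size=(200, 4))
y = rng.integers(0, 2, size=200)  # 1 = cybersick window, 0 = not

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
model = make_pipeline(StandardScaler(),
                      RandomForestClassifier(n_estimators=100, random_state=0))
model.fit(X_train, y_train)
print(classification_report(y_test, model.predict(X_test)))
```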