248 research outputs found

    VR.net: A Real-world Dataset for Virtual Reality Motion Sickness Research

    Researchers have used machine learning approaches to identify motion sickness in VR experiences. These approaches demand an accurately labeled, real-world, and diverse dataset for high accuracy and generalizability. As a starting point to address this need, we introduce `VR.net', a dataset offering approximately 12 hours of gameplay video from ten real-world games spanning ten diverse genres. For each video frame, a rich set of motion sickness-related labels, such as camera/object movement, depth field, and motion flow, is accurately assigned. Building such a dataset is challenging because manual labeling would require an infeasible amount of time. Instead, we utilize a tool to automatically and precisely extract ground-truth data from 3D engines' rendering pipelines without accessing VR games' source code. We illustrate the utility of VR.net through several applications, such as risk factor detection and sickness level prediction. We continuously expand VR.net and envision its next version offering 10X more data than the current form. We believe that the scale, accuracy, and diversity of VR.net can offer unparalleled opportunities for VR motion sickness research and beyond.
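    To make the dataset description concrete, the sketch below shows one plausible way to iterate over per-frame labels in a dataset organized like VR.net and flag frames by a motion-related risk factor. The directory layout, file formats, and label field names are assumptions for illustration; the actual release may differ.

```python
# Hypothetical loader for a per-frame labeled VR gameplay dataset such as VR.net.
# Assumed layout: <root>/<game>/frames/<id>.png and <root>/<game>/labels/<id>.json.
import json
from pathlib import Path

def iter_labeled_frames(root: str):
    """Yield (frame_path, labels) pairs for every labeled video frame."""
    for label_file in sorted(Path(root).glob("*/labels/*.json")):
        with open(label_file) as f:
            labels = json.load(f)  # e.g. camera/object movement, depth field, motion flow
        frame_path = label_file.parent.parent / "frames" / f"{label_file.stem}.png"
        yield frame_path, labels

# Example: count frames whose (assumed) camera angular speed exceeds a threshold,
# a crude proxy for one motion-sickness risk factor.
risky_frames = sum(
    1 for _, labels in iter_labeled_frames("VR.net")
    if labels.get("camera_angular_speed_deg_s", 0.0) > 60.0
)
print(risky_frames)
```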

    Towards Naturalistic Interfaces of Virtual Reality Systems

    Interaction plays a key role in achieving realistic experiences in virtual reality (VR). Its realization depends on interpreting the intents of human motions to provide inputs to VR systems. Thus, understanding human motion from a computational perspective is essential to the design of naturalistic interfaces for VR. This dissertation studied three types of human motion in the context of VR: locomotion (walking), head motion, and hand motion. For locomotion, the dissertation presented a machine learning approach for developing a mechanical repositioning technique based on a 1-D treadmill for interacting with a unique new large-scale projective display, called the Wide-Field Immersive Stereoscopic Environment (WISE). The usability of the proposed approach was assessed through a novel user study that asked participants to pursue a rolling ball at variable speed in a virtual scene. In addition, the dissertation studied the role of stereopsis in avoiding virtual obstacles while walking by asking participants to step over obstacles and gaps under both stereoscopic and non-stereoscopic viewing conditions in VR experiments. In terms of head motion, the dissertation presented a head gesture interface for interaction in VR that recognizes real-time head gestures on head-mounted displays (HMDs) using Cascaded Hidden Markov Models. Two experiments were conducted to evaluate the proposed approach: the first assessed its offline classification performance, while the second estimated the latency of the algorithm in recognizing head gestures. The dissertation also conducted a user study that investigated the effects of visual and control latency on teleoperation of a quadcopter using head motion tracked by a head-mounted display. As part of the study, a method for objectively estimating the end-to-end latency in HMDs was presented. For hand motion, the dissertation presented an approach that recognizes dynamic hand gestures to implement a hand gesture interface for VR based on a static head gesture recognition algorithm. The proposed algorithm was evaluated offline in terms of its classification performance. A user study was conducted to compare the performance and usability of the head gesture interface, the hand gesture interface, and a conventional gamepad interface for answering Yes/No questions in VR. Overall, the dissertation makes two main contributions towards improving the naturalism of interaction in VR systems. First, the interaction techniques presented in the dissertation can be directly integrated into existing VR systems, offering end users of VR technology more choices for interaction. Second, the results of the user studies of the presented VR interfaces also serve as guidelines for VR researchers and engineers designing future VR systems.
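    As a rough illustration of HMM-based head-gesture recognition on HMD orientation data, the sketch below trains one Gaussian HMM per gesture class and classifies a new yaw/pitch/roll sequence by maximum log-likelihood. It is a simplified stand-in, not the dissertation's Cascaded Hidden Markov Model; the use of the hmmlearn library and all parameter choices here are assumptions.

```python
# Simplified per-class HMM gesture classifier over head-orientation sequences.
# Each training sequence is an array of shape (T, 3): yaw, pitch, roll per frame.
import numpy as np
from hmmlearn.hmm import GaussianHMM

def train_gesture_models(sequences_by_gesture, n_states=4):
    """sequences_by_gesture: {gesture_name: [ndarray of shape (T_i, 3), ...]}"""
    models = {}
    for name, seqs in sequences_by_gesture.items():
        X = np.concatenate(seqs)            # stack all sequences for this gesture
        lengths = [len(s) for s in seqs]    # hmmlearn needs per-sequence lengths
        model = GaussianHMM(n_components=n_states, covariance_type="diag", n_iter=50)
        model.fit(X, lengths)
        models[name] = model
    return models

def classify(models, sequence):
    """Return the gesture whose HMM assigns the highest log-likelihood to the sequence."""
    return max(models, key=lambda name: models[name].score(sequence))
```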

    Leveraging eXtended Reality & Human-Computer Interaction for User Experience in 360° Video

    EXtended Reality systems have resurged as a medium for work and entertainment. While 360° video has been characterized as less immersive than computer-generated VR, its realism, ease of use, and affordability mean it is in widespread commercial use. Based on the prevalence and potential of the 360° video format, this research is focused on improving and augmenting the user experience of watching 360° video. By leveraging knowledge from eXtended Reality (XR) systems and Human-Computer Interaction (HCI), this research addresses two issues affecting user experience in 360° video: Attention Guidance and Visually Induced Motion Sickness (VIMS). This research work relies on the construction of multiple artifacts to answer the defined research questions: (1) IVRUX, a tool for analysis of immersive VR narrative experiences; (2) Cue Control, a tool for creation of spatial audio soundtracks for 360° video, as well as enabling the collection and analysis of captured metrics emerging from the user experience; and (3) the VIMS mitigation pipeline, a linear sequence of modules (including optical flow and visual SLAM, among others) that control parameters for visual modifications such as a restricted Field of View (FoV). These artifacts are accompanied by evaluation studies targeting the defined research questions. Through Cue Control, this research shows that non-diegetic music can be spatialized to act as orientation for users. A partial spatialization of music was deemed ineffective when used for orientation. Additionally, our results also demonstrate that diegetic sounds are used for notification rather than orientation. Through the VIMS mitigation pipeline, this research shows that a dynamic restricted FoV is statistically significant in mitigating VIMS, while maintaining desired levels of Presence. Both Cue Control and the VIMS mitigation pipeline emerged from a Research through Design (RtD) approach, where the IVRUX artifact is the product of design knowledge and gave direction to research. The research presented in this thesis is of interest to practitioners and researchers working on 360° video and helps delineate future directions in making 360° video a rich design space for interaction and narrative.
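    As an illustration of how one module of such a pipeline could drive a dynamic restricted FoV, the sketch below estimates dense optical flow between consecutive frames with OpenCV and maps the average flow magnitude to a field-of-view value. The mapping constants and thresholds are assumptions for illustration, not values from the thesis.

```python
# One illustrative stage of a VIMS-mitigation pipeline: optical-flow magnitude
# between consecutive grayscale frames drives how far the field of view is restricted.
import cv2
import numpy as np

def fov_for_frame(prev_gray, gray, full_fov_deg=110.0, min_fov_deg=60.0, flow_cap=8.0):
    """Return a restricted FoV (degrees) for the current frame."""
    flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    mean_magnitude = np.linalg.norm(flow, axis=2).mean()    # average pixel motion
    t = min(mean_magnitude / flow_cap, 1.0)                 # 0 = no motion, 1 = capped motion
    return full_fov_deg - t * (full_fov_deg - min_fov_deg)  # more motion -> narrower FoV
```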

    Improving spatial orientation in virtual reality with leaning-based interfaces

    Advancement in technology has made Virtual Reality (VR) increasingly portable, affordable, and accessible to a broad audience. However, large-scale VR locomotion still faces major challenges in the form of spatial disorientation and motion sickness. While spatial updating is automatic and even obligatory in real-world walking, using VR controllers to travel can cause disorientation. This dissertation presents two experiments that explore ways of improving spatial updating and spatial orientation in VR locomotion while minimizing cybersickness. In the first study, we compared a hand-held controller with HeadJoystick, a leaning-based interface, in a 3D navigational search task. The results showed that the leaning-based interface helped participants spatially update more effectively than the controller. In the second study, we designed a "HyperJump" locomotion paradigm which allows users to travel faster while limiting optical flow. Having no optical flow (as in traditional teleport paradigms) has been shown to help reduce cybersickness, but can also cause disorientation. By interlacing continuous locomotion with teleportation, we showed that users can travel faster without compromising spatial updating.
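    The following sketch illustrates one way a HyperJump-style hybrid could work: viewpoint motion below a speed cap is applied continuously (preserving some optical flow), while the extra distance implied by higher travel speeds is delivered as periodic discrete jumps. The cap, jump interval, and state handling are assumptions for illustration, not the published implementation.

```python
# Hybrid continuous + teleport locomotion update (HyperJump-style), 1-D for brevity.
def hyperjump_update(position, speed, dt, state, flow_speed_cap=3.0, jump_interval=0.5):
    """position: metres along the travel direction; state: {'debt': float, 'jump_timer': float}."""
    continuous_speed = min(speed, flow_speed_cap)     # optical flow stays capped
    position += continuous_speed * dt
    state["debt"] += (speed - continuous_speed) * dt  # distance owed beyond the cap
    state["jump_timer"] += dt
    if state["jump_timer"] >= jump_interval and state["debt"] > 0.0:
        position += state["debt"]                     # discrete, teleport-like jump
        state["debt"] = 0.0
        state["jump_timer"] = 0.0
    return position

# Example: travelling at 8 m/s with a 3 m/s flow cap still covers 8 m in one second,
# because the surplus speed arrives as a jump every 0.5 s.
state = {"debt": 0.0, "jump_timer": 0.0}
pos = 0.0
for _ in range(100):               # 1 s simulated at 100 Hz
    pos = hyperjump_update(pos, 8.0, 0.01, state)
print(round(pos, 2))
```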

    The Effect of Prior Virtual Reality Experience on Locomotion and Navigation in Virtual Environments

    Virtual Reality (VR) is becoming more accessible and widely utilized in crucial disciplines like training, communication, healthcare, and education. One of the important parts of VR applications is walking through virtual environments, so researchers have broadly studied various kinds of walking in VR, as it can reduce sickness, improve the sense of presence, and enhance the general user experience. Due to the recent availability of consumer Head Mounted Displays (HMDs), people are using HMDs in all sorts of different locations. This underscores the need for locomotion methods that allow users to move through large Immersive Virtual Environments (IVEs) when occupying a small physical space or even seated. Although many aspects of locomotion in VR have received extensive research, very little work has considered how locomotive behaviors might change over time as users become more experienced in IVEs. As HMDs were rarely encountered outside of a lab before 2016, most locomotion research before this was likely conducted with VR novices who had no prior experience with the technology. However, as this is no longer the case, it is important to consider whether locomotive behaviors may evolve over time with user experience. This proposal specifically studies locomotive behaviors and effects that may adjust over time. For the first study, we conducted experiments measuring novice and experienced subjects' gait parameters in VR and real environments. Prior research has established that users' gait in virtual and real environments differs; however, little research has evaluated how users' gait differs as users gain more experience with VR. Results showed that subjects' performance in VR and the real world was more similar in the last trials than in the first trials; their walking dissimilarity in the starting trials diminished as they completed more trials. We found the trial number to be a significant variable affecting walking speed, step length, and trunk angle for both groups of users. While a main effect of expertise was not observed, an interaction effect between expertise and trial number was shown: the trunk angle increased over time for novices but decreased for experts. The second study reports the results of an experiment investigating how users' behavior with two locomotion methods, teleportation and joystick-based locomotion, changed over four weeks. Twenty novice VR users (no more than 1 hour of prior experience with any form of walking in VR) were recruited. They were loaned an Oculus Quest for four weeks to use on their own time, including an activity we provided them with. Results showed that the time required to complete the navigation task decreased faster for joystick-based locomotion. Spatial memory improved with time, particularly when using teleportation (which starts at a disadvantage to joystick-based locomotion). Also, overall cybersickness decreased slightly over time, although two dimensions of cybersickness (nausea and disorientation) increased notably over time using joystick-based navigation. The next study presents the findings of a longitudinal research study investigating the effects of locomotion methods within virtual reality on participants' spatial awareness during VR experiences and subsequent real-world gait parameters. The study encompasses two distinct environments: the real world and VR. In the real-world setting, we analyze key gait parameters, including walking speed, distance traveled, and step count, both pre- and post-VR exposure, to assess the influence of VR locomotion on post-VR gait behavior. Additionally, we assess participants' spatial awareness and the occurrence of simulator sickness, considering two locomotion methods: joystick and teleportation. Our results reveal significant changes in gait parameters associated with increased VR locomotion experience. Furthermore, we observe a remarkable reduction in cybersickness symptoms over successive VR sessions, particularly evident among participants utilizing joystick locomotion. This study contributes to the understanding of gait behavior influenced by VR locomotion technology and the duration of VR immersion. Together, these studies inform how locomotion and navigation behavior may change in VR as users become more accustomed to walking in virtual reality settings. Also, comparative studies on locomotion methods help VR developers implement the better-suited locomotion method, providing knowledge to design and develop VR systems that perform better for different applications and groups of users.

    Human Visual Navigation: Effects of Visual Context, Navigation Mode, and Gender

    This thesis extends research on human visual path integration using optic flow cues. In three experiments, a large-scale path-completion task was contextualised within highly-textured authentic virtual environments. Real-world navigational experience was further simulated, through the inclusion of a large roundabout on the route. Three semi-surrounding screens provided a wide field of view. Participants were able to perform the task, but directional estimates showed characteristic errors, which can be explained with a model of distance misperception on the outbound roads of the route. Display and route layout parameters had very strong effects on performance. Gender and navigation mode were also influential. Participants consistently underestimated the final turn angle when simulated self-motion was viewed passively, on large projection screens in a driving simulator. Error increased with increasing size of the internal angle, on route layouts based on equilateral or isosceles triangles. A compressed range of responses was found. Higher overall accuracy was observed when a display with smaller desktop computer monitors was used, especially when simulated self-motion was actively controlled with a steering wheel and foot pedals, rather than viewed passively. Patterns and levels of error depended on route layout, which included triangles with non-equivalent lengths of the two outbound roads. A powerful effect on performance was exerted by the length of the "approach segment" on the route: that is, the distance travelled on the first outbound road, combined with the distance travelled between the two outbound roads on the roundabout curve. The final turn angle was generally overestimated on routes with a long approach segment (those with a long first road and a 60° or 90° internal angle), and underestimated on routes with a short approach segment (those with a short first road or the 120° internal angle). Accuracy was higher for active participants on routes with longer approach segments and on 90° angle trials, and for passive participants on routes with shorter approach segments and on 120° angle trials. Active participants treated all internal angles as 90° angles. Participants performed with lower overall accuracy when optic flow information was disrupted, through the intermittent presentation of self-motion on the small-screen display, in a sequence of static snapshots of the route. Performance was particularly impaired on routes with a long approach segment, but quite accurate on those with a short approach segment. Consistent overestimation of the final angle was observed, and error decreased with increasing size of the internal angle. Participants treated all internal angles as 120° angles. The level of available visual information did not greatly affect estimates, in general. The degree of curvature on the roundabout mainly influenced estimates by female participants in the Passive condition. Compared with males, females performed less accurately in the driving simulator, and with reduced optic flow cues; but more accurately with the small-screen display on layouts with a short approach segment, and when they had active control of the self-motion. The virtual environments evoked a sense of presence, but this had no effect on task performance, in general. The environments could be used for training navigational skills where high precision is not required.
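    As a worked illustration of the geometry behind the final turn estimates, the sketch below computes the correct homing turn for a triangular route from the two outbound road lengths and the internal angle between them (treating the roundabout as a sharp corner for simplicity). It is plain triangle geometry, not code or data from the thesis.

```python
# Correct final turn angle for a triangle-completion route, ignoring roundabout curvature.
import math

def correct_final_turn(first_road, second_road, internal_deg):
    """Return the signed turn (degrees) needed at the end of the second road to face the start."""
    heading = math.radians(180.0 - internal_deg)        # turn made between the two outbound roads
    end_x = first_road + second_road * math.cos(heading)
    end_y = second_road * math.sin(heading)
    home = math.atan2(-end_y, -end_x)                   # bearing from the end point back to the start
    turn = math.degrees(home - heading)
    return (turn + 180.0) % 360.0 - 180.0               # wrap into [-180, 180)

# Example: an equilateral layout (equal roads, 60° internal angle) needs a 120° final turn.
print(round(correct_final_turn(10.0, 10.0, 60.0), 1))   # -> 120.0
```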

    Factors That Cause Cybersickness

    The section discusses the factors that cause cybersickness.