
    Leveraging eXtended Reality & Human-Computer Interaction for User Experience in 360° Video

    eXtended Reality (XR) systems have resurged as a medium for work and entertainment. While 360° video has been characterized as less immersive than computer-generated VR, its realism, ease of use, and affordability mean it is in widespread commercial use. Based on the prevalence and potential of the 360° video format, this research is focused on improving and augmenting the user experience of watching 360° video. By leveraging knowledge from XR systems and Human-Computer Interaction (HCI), this research addresses two issues affecting user experience in 360° video: Attention Guidance and Visually Induced Motion Sickness (VIMS). This research work relies on the construction of multiple artifacts to answer the defined research questions: (1) IVRUX, a tool for analysis of immersive VR narrative experiences; (2) Cue Control, a tool for creation of spatial audio soundtracks for 360° video that also enables the collection and analysis of metrics captured from the user experience; and (3) the VIMS mitigation pipeline, a linear sequence of modules (including optical flow and visual SLAM, among others) that control parameters for visual modifications such as a restricted Field of View (FoV). These artifacts are accompanied by evaluation studies targeting the defined research questions. Through Cue Control, this research shows that non-diegetic music can be spatialized to act as orientation for users; a partial spatialization of music was deemed ineffective when used for orientation. Additionally, our results demonstrate that diegetic sounds are used for notification rather than orientation. Through the VIMS mitigation pipeline, this research shows that a dynamically restricted FoV is statistically significant in mitigating VIMS while maintaining desired levels of Presence. Both Cue Control and the VIMS mitigation pipeline emerged from a Research through Design (RtD) approach, in which the IVRUX artifact is the product of design knowledge and gave direction to the research. The research presented in this thesis is of interest to practitioners and researchers working on 360° video and helps delineate future directions in making 360° video a rich design space for interaction and narrative.
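    As an illustration of the kind of module chain described above, the following sketch estimates motion from dense optical flow and maps it to a dynamically restricted FoV. It is a minimal sketch assuming OpenCV, grayscale frames as NumPy arrays, and made-up thresholds and FoV limits; it is not the thesis's actual pipeline or parameters.

```python
# Minimal sketch of one stage of a VIMS-mitigation pipeline of this kind,
# assuming OpenCV and grayscale frames as NumPy arrays. The flow-to-FoV
# mapping and all constants below are illustrative assumptions.
import cv2
import numpy as np

FOV_MAX_DEG = 100.0  # unrestricted field of view (assumed headset FoV)
FOV_MIN_DEG = 60.0   # strongest restriction applied under heavy motion
SMOOTHING = 0.9      # exponential smoothing to avoid abrupt vignette changes

def flow_magnitude(prev_gray, curr_gray):
    """Mean dense optical-flow magnitude between two consecutive frames."""
    flow = cv2.calcOpticalFlowFarneback(
        prev_gray, curr_gray, None,
        pyr_scale=0.5, levels=3, winsize=15,
        iterations=3, poly_n=5, poly_sigma=1.2, flags=0)
    return float(np.linalg.norm(flow, axis=2).mean())

def restricted_fov(magnitude, prev_fov, mag_low=1.0, mag_high=8.0):
    """Map the motion estimate to a target FoV and smooth it over time."""
    t = np.clip((magnitude - mag_low) / (mag_high - mag_low), 0.0, 1.0)
    target = FOV_MAX_DEG - t * (FOV_MAX_DEG - FOV_MIN_DEG)
    return SMOOTHING * prev_fov + (1.0 - SMOOTHING) * target
```

    A playback loop would feed consecutive video frames through flow_magnitude and apply the returned FoV as a vignette over the rendered view; the visual SLAM module mentioned above would contribute a camera-motion estimate in an analogous way.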

    Evaluation of Visually Induced Motion Sickness Caused by Viewing of 3D Stereoscopy Using Electroencephalography Technique

    3D movies attract viewers because objects appear to fly out of the screen. However, many viewers report problems after watching 3D movies: visual fatigue, eye strain, headaches, dizziness, and blurred vision, collectively known as Visually Induced Motion Sickness (VIMS). In this thesis, we compare a passive 3D technology with a conventional 2D technology, using electroencephalography (EEG), to determine whether 3D viewing causes these problems.
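    A minimal sketch of the kind of per-participant comparison such a study implies is shown below, assuming one EEG segment per participant per viewing condition; the sampling rate, alpha band, and paired t-test are illustrative assumptions, not the study's exact protocol.

```python
# Minimal sketch: compare EEG alpha-band power between 2D and 3D viewing,
# paired by participant. All constants are illustrative assumptions.
import numpy as np
from scipy.signal import welch
from scipy.stats import ttest_rel

FS = 256  # assumed EEG sampling rate in Hz

def band_power(signal, low_hz, high_hz):
    """Power of `signal` in [low_hz, high_hz], estimated via Welch's PSD."""
    freqs, psd = welch(signal, fs=FS, nperseg=FS * 2)
    mask = (freqs >= low_hz) & (freqs <= high_hz)
    return float(psd[mask].sum() * (freqs[1] - freqs[0]))  # rectangle-rule integral

def compare_alpha_power(eeg_2d, eeg_3d):
    """Paired comparison of alpha-band (8-13 Hz) power across participants."""
    alpha_2d = [band_power(x, 8.0, 13.0) for x in eeg_2d]
    alpha_3d = [band_power(x, 8.0, 13.0) for x in eeg_3d]
    return ttest_rel(alpha_3d, alpha_2d)  # t-statistic and p-value
```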

    Taking real steps in virtual nature: a randomized blinded trial

    Studies show that green exercise (i.e., physical activity in the presence of nature) can provide the synergistic psychophysiological benefits of both physical exercise and nature exposure. The present study aimed to investigate the extent to which virtual green exercise may extend these benefits to people who are unable to engage in active visits to natural environments, as well as to promote enhanced exercise behavior. After watching a video validated to elicit sadness, participants either performed a treadmill walk while exposed to one of two virtual conditions, which were created using different techniques (360° video or 3D model), or walked on a treadmill while facing a blank wall (control). Quantitative and qualitative data were collected in relation to three overarching themes: “Experience,” “Physical engagement,” and “Psychophysiological recovery.” Compared to control, greater enjoyment was found in the 3D model condition, while lower walking speed was found in the 360° video condition. No significant differences among conditions were found with respect to heart rate, perceived exertion, or changes in blood pressure and affect. The analysis of qualitative data provided further understanding of the participants’ perceptions and experiences. These findings indicate that 3D model-based virtual green exercise can provide some additional benefits compared to indoor exercise, while 360° video-based virtual green exercise may result in lower physical engagement.

    Presence and Cybersickness in Virtual Reality Are Negatively Related: A Review

    In order to take advantage of the potential offered by the medium of virtual reality (VR), it will be essential to develop an understanding of how to maximize the desirable experience of “presence” in a virtual space (“being there”), and how to minimize the undesirable feeling of “cybersickness” (a constellation of discomfort symptoms experienced in VR). Although there have been frequent reports of a possible link between the observer’s sense of presence and the experience of bodily discomfort in VR, the literature that discusses the nature of this relationship is limited. Recent research has underlined the possibility that these variables have shared causes, and that both factors may be manipulated with a single approach. This review paper summarizes the concepts of presence and cybersickness and highlights the strengths and gaps in our understanding of their relationship. We review studies that have measured the association between presence and cybersickness and conclude that the balance of evidence favors a negative relationship between the two factors, driven principally by sensory integration processes. We also discuss how system immersiveness might play a role in modulating both presence and cybersickness. However, we identify a serious absence of high-powered studies that aim to reveal the nature of this relationship. Based on this evidence, we propose recommendations for future studies investigating presence, cybersickness, and other related factors.
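    To make the measured association concrete, the following minimal sketch correlates per-participant presence scores with cybersickness scores; the score formats (a presence questionnaire mean, an SSQ total) and the choice of Pearson plus Spearman coefficients are assumptions for illustration, not drawn from any specific study in the review.

```python
# Minimal sketch, assuming per-participant presence scores and cybersickness
# scores (e.g., SSQ totals) collected in the same session. Both score formats
# are assumptions for illustration.
import numpy as np
from scipy.stats import pearsonr, spearmanr

def presence_cybersickness_association(presence, cybersickness):
    """Linear and rank correlations between presence and cybersickness scores."""
    r, p_r = pearsonr(presence, cybersickness)
    rho, p_rho = spearmanr(presence, cybersickness)
    # A negative coefficient is consistent with the negative relationship
    # that the review concludes the balance of evidence favors.
    return {"pearson_r": r, "p_pearson": p_r,
            "spearman_rho": rho, "p_spearman": p_rho}
```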

    Remote Visual Observation of Real Places Through Virtual Reality Headsets

    Virtual Reality has always represented a fascinating yet powerful opportunity that has attracted studies and technology developments, especially since the latest powerful high-resolution, wide field-of-view VR headsets were released on the market. While the great potential of such VR systems is common and accepted knowledge, issues remain about how to design systems and setups capable of fully exploiting the latest hardware advances. The aim of the proposed research is to study and understand how to increase the perceived level of realism and sense of presence when remotely observing real places through VR headset displays, and hence to produce a set of guidelines that give system designers directions on how to optimize the display-camera setup to enhance performance, focusing on remote visual observation of real places. The outcome of this investigation represents unique knowledge that is believed to be very beneficial for better VR headset designs towards improved remote observation systems. To achieve the proposed goal, this thesis presents a thorough investigation of existing literature and previous research, carried out systematically to identify the most important factors governing realism, depth perception, comfort, and sense of presence in VR headset observation. Once identified, these factors are further discussed and assessed through a series of experiments and usability studies based on a predefined set of research questions. More specifically, the roles of familiarity with the observed place, of the environment characteristics shown to the viewer, and of the display used for the remote observation of the virtual environment are further investigated. To gain more insights, two usability studies are proposed with the aim of defining guidelines and best practices. The main outcomes from the two studies demonstrate that test users experience more realistic observation when natural features, higher resolution displays, natural illumination, and high image contrast are used in Mobile VR. In terms of comfort, simple scene layouts and relaxing environments are considered ideal to reduce visual fatigue and eye strain. Furthermore, sense of presence increases when observed environments induce strong emotions, and depth perception improves in VR when several monocular cues such as lights and shadows are combined with binocular depth cues. Based on these results, this investigation then presents a focused evaluation of the outcomes and introduces an innovative eye-adapted High Dynamic Range (HDR) approach, which the author believes to be a significant improvement in the context of remote observation when combined with eye-tracked VR headsets. For this purpose, a third user study is proposed to compare static HDR and eye-adapted HDR observation in VR, to assess whether the latter can improve realism, depth perception, sense of presence, and in certain cases even comfort. Results from this last study confirmed the author's expectations, showing that eye-adapted HDR and eye tracking should be used to achieve the best visual performance for remote observation in modern VR systems.
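    A minimal sketch of a gaze-adaptive exposure step in the spirit of the eye-adapted HDR observation described above is shown below, assuming a linear HDR radiance image and a gaze point from an eye tracker; the window size, key value, and Reinhard-style curve are illustrative choices, not the thesis's actual implementation.

```python
# Minimal sketch: exposure is driven by the luminance around the gaze point,
# then a simple global tone curve maps the HDR image to display range.
# All constants and the operator choice are illustrative assumptions.
import numpy as np

def luminance(hdr):
    """Relative luminance of a linear RGB image (Rec. 709 weights)."""
    return 0.2126 * hdr[..., 0] + 0.7152 * hdr[..., 1] + 0.0722 * hdr[..., 2]

def gaze_adaptation_luminance(hdr, gaze_xy, radius=64):
    """Mean luminance in a window centred on the gaze point."""
    x, y = gaze_xy
    h, w = hdr.shape[:2]
    y0, y1 = max(0, y - radius), min(h, y + radius)
    x0, x1 = max(0, x - radius), min(w, x + radius)
    return float(luminance(hdr[y0:y1, x0:x1]).mean() + 1e-6)

def eye_adapted_tonemap(hdr, gaze_xy, key=0.18):
    """Expose for the gazed region, then apply a Reinhard-style curve."""
    l_adapt = gaze_adaptation_luminance(hdr, gaze_xy)
    scaled = hdr * (key / l_adapt)                 # exposure driven by gaze
    ldr = scaled / (1.0 + scaled)                  # simple global tone curve
    return np.clip(ldr ** (1.0 / 2.2), 0.0, 1.0)   # gamma for display
```

    In practice the adaptation luminance would be smoothed over time rather than applied instantaneously, so that the image brightens and darkens gradually as the eye would.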

    Impact of Imaging and Distance Perception in VR Immersive Visual Experience

    Virtual reality (VR) headsets have evolved to offer unprecedented viewing quality. Meanwhile, they have become lightweight, wireless, and low-cost, which has opened the door to new applications and a much wider audience. VR headsets can now provide users with greater understanding of events and accuracy of observation, making decision-making faster and more effective. However, the spread of immersive technologies has shown a slow take-up, with the adoption of virtual reality limited to a few applications, typically related to entertainment. This reluctance appears to be due to the often-necessary change of operating paradigm and some scepticism towards the "VR advantage". The need therefore arises to evaluate the contribution that a VR system can make to user performance, for example to monitoring and decision-making. This will help system designers understand when immersive technologies can be proposed to replace or complement standard display systems such as a desktop monitor. In parallel to the evolution of VR headsets there has been that of 360° cameras, which are now capable of instantly acquiring photographs and videos in stereoscopic 3D (S3D) modality, at very high resolutions. 360° images are innately suited to VR headsets, where the captured view can be observed and explored through the natural rotation of the head. Acquired views can even be experienced and navigated from the inside as they are captured. The combination of omnidirectional images and VR headsets has opened up a new way of creating immersive visual representations. We call it: photo-based VR. This represents a new methodology that combines traditional model-based rendering with high-quality omnidirectional texture mapping. Photo-based VR is particularly suitable for applications related to remote visits and realistic scene reconstruction, useful for monitoring and surveillance systems, control panels, and operator training. The presented PhD study investigates the potential of photo-based VR representations. It starts by evaluating the role of immersion and user performance in today's graphical visual experience, and then uses this as a reference to develop and evaluate new photo-based VR solutions. With the current literature on photo-based VR experience and associated user performance being very limited, this study builds new knowledge from the proposed assessments. We conduct five user studies on a few representative applications, examining how visual representations can be affected by system factors (camera- and display-related) and how they can influence human factors (such as realism, presence, and emotions). Particular attention is paid to realistic depth perception, to support which we develop targeted solutions for photo-based VR. They are intended to provide users with a correct perception of space dimensions and object size. We call it: true-dimensional visualization. The presented work contributes to unexplored fields including photo-based VR and true-dimensional visualization, offering immersive system designers a thorough comprehension of the benefits, potential, and types of applications in which these new methods can make a difference. This thesis manuscript and its findings have been partly presented in scientific publications: five conference papers in Springer and IEEE symposia, [1], [2], [3], [4], [5], and one journal article in an IEEE periodical [6].
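    A minimal sketch of the omnidirectional texture lookup at the heart of photo-based VR is given below: a unit viewing direction from the headset is converted into pixel coordinates of an equirectangular 360° image. The coordinate conventions (y-up, longitude measured from +z) are assumptions for illustration, not necessarily those used in the thesis.

```python
# Minimal sketch: map a view direction to (u, v) on an equirectangular 360°
# image and sample it. Coordinate conventions are illustrative assumptions.
import numpy as np

def direction_to_equirect_uv(direction):
    """Map a unit view direction to (u, v) in [0, 1] on an equirectangular image."""
    x, y, z = direction / np.linalg.norm(direction)
    lon = np.arctan2(x, z)        # longitude in [-pi, pi]
    lat = np.arcsin(y)            # latitude  in [-pi/2, pi/2]
    u = 0.5 + lon / (2.0 * np.pi)
    v = 0.5 - lat / np.pi         # top row of the image is "up"
    return float(u), float(v)

def sample_equirect(image, direction):
    """Nearest-neighbour sample of an H x W x 3 equirectangular image."""
    h, w = image.shape[:2]
    u, v = direction_to_equirect_uv(direction)
    col = min(int(u * w), w - 1)
    row = min(int(v * h), h - 1)
    return image[row, col]
```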

    Spatial Displays and Spatial Instruments

    The conference proceedings topics are divided into two main areas: (1) issues of spatial and picture perception raised by graphical electronic displays of spatial information; and (2) design questions raised by the practical experience of designers actually defining new spatial instruments for use in new aircraft and spacecraft. Each topic is considered from both a theoretical and an applied direction. Emphasis is placed on discussion of phenomena and determination of design principles

    Spatial cognition in virtual environments

    Since the last decades of the past century, Virtual Reality (VR) has been developed not only as a set of helpful applications in the medical field (training for surgeons, but also rehabilitation tools) but also as a research methodology. There is still no scientific agreement on whether the use of this technology in research on cognitive processes allows us to generalize results found in a Virtual Environment (VE) to human behavior or cognition in the real world. This is because a series of differences found in basic perceptual processes (for example, depth perception) suggests a substantial gap in the visual environmental representation capabilities of virtual scenarios. On the other hand, quite a lot of studies in the literature attest to the reliability of VEs in more than one field (training and rehabilitation, but also some research paradigms). The main aim of this thesis is to investigate whether, and in which cases, these two different views can be integrated and shed new light on the use of VR in research. Through the many experiments conducted in the "Virtual Development and Training Center" of the Fraunhofer Institute in Magdeburg, we addressed both low-level spatial processes (within a distance-evaluation paradigm) and high-level spatial cognition (using a navigation and visuospatial planning task called "3D Maps"), while also addressing practical problems such as the use of stereoscopy in VEs and the problem of "Simulator Sickness" during navigation in immersive VEs. The results obtained with our research fill some gaps in the literature on spatial cognition in VR and allow us to suggest that the use of VEs in research is quite reliable, mainly when the investigated processes are of higher complexity. In that case, the human brain "adapts" rather well even to a "new" reality like the one offered by VR, provided there is a familiarization period and the possibility to interact with the environment; behavior will then be "as if" the environment were real. What is strongly lacking, at the moment, is the possibility of providing a completely multisensory experience, which is very important in order to get the best from this kind of "visualization" of an artificial world. From a low-level point of view, we can confirm what has already been found in the literature: there are some basic differences in how our visual system perceives important spatial cues such as depth and the relationships between objects, and therefore we cannot speak of "similar environments" when comparing VR and reality. The idea that VR is a "different" reality, offering potentially unlimited possibilities of use, even overcoming some physical limits of the real world, and that this "new" reality can be acquired by our cognitive system simply by interacting with it, is therefore discussed in the conclusions of this work.
