
    Detecting Spatial Orientation Demands during Virtual Navigation using EEG Brain Sensing

    This study shows how brain sensing can offer insight into the evaluation of human spatial orientation in virtual reality (VR) and establish a role for electroencephalography (EEG) in virtual navigation. Research suggests that the evaluation of spatial orientation in VR benefits from going beyond performance measures or questionnaires to measurements of the user’s cognitive state. While EEG has emerged as a practical brain sensing technology in cognitive research, spatial orientation tasks often rely on multiple factors (e.g., the reference frame used, the ability to update simulated rotation, and/or left-right confusion) which may be inaccessible to this measurement. EEG has been shown to correlate with human spatial orientation in previous research. In this paper, we use a convolutional neural network (CNN) to train a detection model that identifies, in real time, moments in which VR users experience increased spatial orientation demands. Our results demonstrate that this machine learning approach can detect the cognitive state of increased spatial orientation demands in virtual reality research with 96% accuracy on average.
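
    As an illustration of the kind of detection model described above, the sketch below shows one plausible shape for a CNN that classifies fixed-length EEG windows as elevated vs. baseline spatial orientation demand. The channel count, window length, and layer sizes are assumptions for illustration, not the authors' architecture, and no claim is made about the study's actual pipeline.

```python
# Minimal sketch: a 1D CNN over multichannel EEG windows (assumed shapes, not the paper's model).
import torch
import torch.nn as nn

N_CHANNELS = 32       # hypothetical EEG channel count
WINDOW_SAMPLES = 256  # hypothetical window length (e.g., 1 s at 256 Hz)

class EEGDemandCNN(nn.Module):
    def __init__(self, n_channels: int = N_CHANNELS, n_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            # temporal convolutions over each window
            nn.Conv1d(n_channels, 16, kernel_size=7, padding=3),
            nn.BatchNorm1d(16),
            nn.ReLU(),
            nn.MaxPool1d(4),
            nn.Conv1d(16, 32, kernel_size=5, padding=2),
            nn.BatchNorm1d(32),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),  # collapse the time axis
        )
        self.classifier = nn.Linear(32, n_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, samples) -> class logits
        return self.classifier(self.features(x).squeeze(-1))

if __name__ == "__main__":
    model = EEGDemandCNN()
    dummy_batch = torch.randn(8, N_CHANNELS, WINDOW_SAMPLES)  # placeholder EEG windows
    print(model(dummy_batch).shape)  # torch.Size([8, 2])
```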

    Effect of eHMI on pedestrian road crossing behavior in shared space with Automated Vehicles - A Virtual Reality study

    A shared space area is a low-speed urban area in which pedestrians, cyclists, and vehicles share the road, often relying on informal interaction rules and greatly expanding freedom of movement for pedestrians and cyclists. While shared space has the potential to improve pedestrian priority in urban areas, it presents unique challenges for pedestrian-AV interaction due to the absence of a clear right of way. The current study applied Virtual Reality (VR) experiments to investigate pedestrian-AV interaction in a shared space, with a particular focus on the impact of external human-machine interfaces (eHMIs) on pedestrian crossing behavior. Fifty-three participants took part in the VR experiment, and three eHMI conditions were investigated: no eHMI, eHMI with a pedestrian sign on the windshield, and eHMI with a projected zebra crossing on the road. Data collected via VR and questionnaires were used for objective and subjective measures to understand pedestrian-AV interaction. The study revealed that the presence of eHMI had an impact on participants' gazing behavior but not on their crossing decisions. Additionally, participants had a positive user experience with the current VR setting and expressed a high level of trust and perceived safety during their interaction with the AV. These findings highlight the potential of utilizing VR to explore and understand pedestrian-AV interactions.

    Navigation Comparison between a Real and a Virtual Museum: Time-dependent Differences using a Head Mounted Display

    The validity of environmental simulations depends on their capacity to replicate responses produced in physical environments. However, very few studies validate navigation differences in immersive virtual environments, even though these can radically condition space perception and therefore alter the various evoked responses. The objective of this paper is to validate environmental simulations using 3D environments and head-mounted display devices at the behavioural level, through navigation. A comparison is undertaken between the free exploration of an art exhibition in a physical museum and a simulation of the same experience. As a first perception validation, the virtual museum shows a high degree of presence. Movement patterns in both 'museums' show close similarities, and present significant differences at the beginning of the exploration in terms of the percentage of area explored and the time taken to undertake the tours. Therefore, the results show there are significant time-dependent differences in navigation patterns during the first 2 minutes of the tours. Subsequently, there are no significant differences in navigation between the physical and virtual museums. These findings support the use of immersive virtual environments as empirical tools in human behavioural research at the navigation level.
    This work was supported by the Ministerio de Economía y Competitividad de España (Project TIN2013-45736-R); the Dirección General de Tráfico, Ministerio del Interior de España (Project SPIP2017-02220); and the Institut Valencià d'Art Modern.
    Marín-Morales, J.; Higuera-Trujillo, J. L.; Juan-Ripoll, C. D.; Llinares Millán, M. D. C.; Guixeres Provinciale, J.; Iñarra Abad, S.; Alcañiz Raya, M. L. (2019). Navigation Comparison between a Real and a Virtual Museum: Time-dependent Differences using a Head Mounted Display. Interacting with Computers, 31(2), 208-220. https://doi.org/10.1093/iwc/iwz018
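
    One of the navigation measures mentioned above, the percentage of area explored within a given time window, can be approximated from a head-position trajectory by rasterising visited positions onto an occupancy grid. The sketch below is an illustration of that idea only; the grid resolution, room extent, and the 2-minute split are assumptions, not the paper's exact procedure or data.

```python
# Hedged sketch: percentage of area explored from a (t, x, y) trajectory, assumed format.
import numpy as np

def explored_area_pct(traj, extent, cell=0.5, t_max=None):
    """traj: array of shape (n, 3) with columns (t_seconds, x, y).
    extent: (x_min, x_max, y_min, y_max) of the walkable area in metres.
    cell: grid cell size in metres. t_max: keep only samples with t <= t_max."""
    t, x, y = traj[:, 0], traj[:, 1], traj[:, 2]
    if t_max is not None:
        keep = t <= t_max
        x, y = x[keep], y[keep]
    x_min, x_max, y_min, y_max = extent
    nx = int(np.ceil((x_max - x_min) / cell))
    ny = int(np.ceil((y_max - y_min) / cell))
    ix = np.clip(((x - x_min) / cell).astype(int), 0, nx - 1)
    iy = np.clip(((y - y_min) / cell).astype(int), 0, ny - 1)
    visited = np.zeros((nx, ny), dtype=bool)
    visited[ix, iy] = True          # mark every grid cell the user passed through
    return 100.0 * visited.sum() / visited.size

# Example with synthetic data: first 2 minutes vs. the whole (10-minute) tour.
rng = np.random.default_rng(0)
traj = np.column_stack([np.linspace(0, 600, 6000),   # timestamps
                        rng.uniform(0, 20, 6000),    # x in a 20 m x 15 m room
                        rng.uniform(0, 15, 6000)])   # y
print(explored_area_pct(traj, (0, 20, 0, 15), t_max=120))  # first 2 minutes
print(explored_area_pct(traj, (0, 20, 0, 15)))             # full tour
```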

    NaviFields: relevance fields for adaptive VR navigation

    Virtual Reality allows users to explore virtual environments naturally, by moving their head and body. However, the size of the environments they can explore is limited by real-world constraints, such as the tracking technology or the physical space available. Existing techniques that remove these limitations often break the metaphor of natural navigation in VR (e.g., steering techniques), involve control commands (e.g., teleporting) or hinder precise navigation (e.g., scaling users' displacements). This paper proposes NaviFields, which quantify the requirements for precise navigation at each point of the environment, allowing natural navigation within relevant areas while scaling users' displacements when travelling across non-relevant spaces. This expands the size of the navigable space, retains the natural navigation metaphor and still allows areas with precise control of the virtual head. We present a formal description of our NaviFields technique, which we compared against two alternative solutions (i.e., homogeneous scaling and natural navigation). Our results demonstrate the ability to cover larger spaces, introduce minimal disruption when travelling across bigger distances and significantly improve precise control of the viewpoint inside relevant areas.
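
    The core idea, amplifying physical displacements where the relevance field is low and keeping them close to 1:1 inside relevant areas, can be sketched as below. The Gaussian relevance field and the linear gain formula are illustrative assumptions, not the paper's exact formulation.

```python
# Hedged sketch of relevance-field displacement scaling (assumed field and gain model).
import numpy as np

def relevance(pos, centers, sigma=1.5):
    """Relevance in [0, 1]: sum of Gaussian bumps around points of interest."""
    d2 = ((pos[None, :] - centers) ** 2).sum(axis=1)
    return float(np.clip(np.exp(-d2 / (2 * sigma ** 2)).sum(), 0.0, 1.0))

def apply_displacement(virtual_pos, physical_delta, centers, max_gain=5.0):
    """Map a physical step to a virtual one: gain -> 1 in relevant areas,
    gain -> max_gain in the empty space between them."""
    r = relevance(virtual_pos, centers)
    gain = 1.0 + (1.0 - r) * (max_gain - 1.0)
    return virtual_pos + gain * physical_delta

# Example: two points of interest; the same 10 cm physical step is amplified
# when the user is far from both, and applied nearly 1:1 next to one of them.
poi = np.array([[0.0, 0.0], [10.0, 0.0]])
step = np.array([0.1, 0.0])
print(apply_displacement(np.array([0.2, 0.0]), step, poi))  # near a POI: ~1:1
print(apply_displacement(np.array([5.0, 0.0]), step, poi))  # between POIs: amplified
```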

    Environmental design studies on perception and simulation: an urban design approach

    Perceptual simulation represents an attempt to anticipate physical reality, whereby people can experience and interpret future environments from a subjective perspective. Working on experiential simulation for urban and landscape design requires an understanding of the relationship between man and the environment from a perceptual and cognitive standpoint. In fact, only by investigating the sensing and cognitive processes behind perception can we establish an informed approach to the simulation of places and their ambiances. In particular, we propose a parallelism between the man/environment and man/simulation relationships, aiming to provide a framework for replicating in simulation the multisensory aspects that occur in the perception of the physical world. The objective of this article is therefore to present how we approach the dimension of perceptual simulation within our research and professional work as urban designers. Methodologically, we explored the topic through two main tasks: first, the selection and reconstruction of the research context and of the key issues of perceptual simulation; and second, the construction of a set of simulation tools for urban design, intended as a matrix of possible practical applications. The theoretical framework presented in this work thus consists of a selection and overview of references relevant to urban design, a comprehension of the research context, and the delivery of the set of tools implemented within our research unit. This matrix of tools represents the novelty of this work and is intended as a practical reference for orienting the choice among different simulation tools within urban design practice; in particular, it highlights the efficacy of each type of simulation in mimicking the man/environment relationship.

    Don’t worry, be active: how to facilitate the detection of errors in immersive virtual environments

    The current research aims to study the link between the type of vision experienced in a collaborative immersive virtual environment (active vs. multiple passive), the type of error one looks for during a cooperative multi-user exploration of a design project (affordance vs. perceptual violations), and the type of setting in which the multi-user task is performed (field in Experiment 1 vs. laboratory in Experiment 2). The relevance of this link is backed by the lack of conclusive evidence on an active vs. passive vision advantage in cooperative search tasks within software based on immersive virtual reality (IVR). Using a yoking paradigm based on the mixed usage of simultaneous active and multiple passive viewings, we found that the likelihood of error detection in a complex 3D environment was characterized by an active vs. multi-passive viewing advantage depending on: (1) the degree of knowledge dependence of the type of error the passive/active observers were looking for (low for perceptual violations vs. high for affordance violations), as the advantage tended to manifest itself irrespective of the setting for affordance, but not for perceptual, violations; and (2) the degree of social desirability possibly induced by the setting in which the task was performed, as the advantage occurred irrespective of the type of error in the laboratory (Experiment 2) but not in the field (Experiment 1) setting. The results are relevant to the future development of cooperative IVR software used to support design review. A multi-user design review experience in which designers, engineers and end-users all cooperate actively within the IVR, each wearing their own head-mounted display, seems more suitable for the detection of relevant errors than standard systems characterized by a mixed usage of active and passive viewing.

    Cybersickness: a multisensory integration perspective

    In the past decade, there has been a rapid advance in Virtual Reality (VR) technology. Key to the user’s VR experience are multimodal interactions involving all senses. The human brain must integrate real-time vision, hearing, vestibular and proprioceptive inputs to produce the compelling and captivating feeling of immersion in a VR environment. A serious problem with VR is that users may develop symptoms similar to motion sickness, a malady called cybersickness. At present, the underlying cause of cybersickness is not fully understood. Cybersickness may be due to a discrepancy between the sensory signals that provide information about the body’s orientation and motion: in many VR applications, optic flow elicits an illusory sensation of motion which tells users that they are moving in a certain direction with a certain acceleration. However, since users are not actually moving, their proprioceptive and vestibular organs provide no cues of self-motion. These conflicting signals may lead to sensory discrepancies and eventually cybersickness. Here we review the current literature to develop a conceptual scheme for understanding the neural mechanisms of cybersickness. We discuss an approach to cybersickness based on sensory cue integration, focusing on the dynamic re-weighting of visual and vestibular signals for self-motion.
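
    The re-weighting idea mentioned at the end of the abstract is commonly formalised as reliability-weighted (maximum-likelihood) cue integration, in which each cue's weight is proportional to its inverse variance. The sketch below illustrates that formula with made-up numbers; it is not taken from the review.

```python
# Hedged sketch: reliability-weighted fusion of visual and vestibular self-motion estimates.
def integrate_cues(visual, vestibular, var_visual, var_vestibular):
    """Maximum-likelihood fusion of two noisy estimates of the same quantity."""
    w_vis = (1.0 / var_visual) / (1.0 / var_visual + 1.0 / var_vestibular)
    w_vest = 1.0 - w_vis
    estimate = w_vis * visual + w_vest * vestibular
    combined_var = 1.0 / (1.0 / var_visual + 1.0 / var_vestibular)
    return estimate, combined_var, w_vis

# Illustrative case: optic flow signals 1.0 m/s of forward self-motion while the
# vestibular system reports ~0; a reliable visual cue pulls the estimate toward vision.
print(integrate_cues(visual=1.0, vestibular=0.0, var_visual=0.05, var_vestibular=0.5))
```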