8 research outputs found

    Can People Not Tell Left from Right in VR? Point-to-origin Studies Revealed Qualitative Errors in Visual Path Integration

    No full text
    Even in state-of-the-art virtual reality (VR) setups, participants often feel lost when navigating through virtual environments. In psychological experiments, such disorientation is often compensated for by extensive training. The current study investigated participants' sense of direction by means of a rapid point-to-origin task without any training or performance feedback. This allowed us to study participants' intuitive spatial orientation in VR while minimizing the influence of higher cognitive abilities and compensatory strategies. After visually displayed passive excursions along one- or two-segment trajectories, participants were asked to point back to the origin of locomotion "as accurately and quickly as possible". Despite using a high-quality video projection with an 84° × 63° field of view, participants' overall performance was rather poor. Moreover, six of the 16 participants exhibited striking qualitative errors, i.e., consistent left-right confusions that have not been observed in comparable real-world experiments. Taken together, this study suggests that even an immersive high-quality video projection system is not necessarily sufficient for enabling natural spatial orientation in VR. We propose that a rapid point-to-origin paradigm can be a useful tool for evaluating and improving the effectiveness of VR setups in terms of enabling natural and unencumbered spatial orientation and performance.

    Proprioceptive accuracy in Immersive Virtual Reality: A developmental perspective

    Get PDF
    Proprioceptive development relies on a variety of sensory inputs, among which vision is hugely dominant. Focusing on the developmental trajectory underpinning the integration of vision and proprioception, the present research explores how this integration is involved in interactions with Immersive Virtual Reality (IVR) by examining how proprioceptive accuracy is affected by Age, Perception, and Environment. Individuals from 4 to 43 years old completed a self-turning task which asked them to manually return to a previous location with different sensory modalities available in both IVR and reality. Results were interpreted from an exploratory perspective using Bayesian model comparison analysis, which allows the phenomena to be described using probabilistic statements rather than simplified reject/not-reject decisions. The most plausible model showed that 4–8-year-old children can generally be expected to make more proprioceptive errors than older children and adults. Across age groups, proprioceptive accuracy is higher when vision is available, and is disrupted in the visual environment provided by the IVR headset. We can conclude that proprioceptive accuracy mostly develops during the first eight years of life and that it relies largely on vision. Moreover, our findings indicate that this proprioceptive accuracy can be disrupted by the use of an IVR headset.

    Orientation and metacognition in virtual space

    Get PDF
    Cognitive scientists increasingly use virtual reality scenarios to address spatial perception, orientation, and navigation. If based on desktops rather than mobile immersive environments, this involves a discrepancy between the physically experienced static position and the visually perceived dynamic scene, leading to cognitive challenges that users of virtual worlds may or may not be aware of. The frequently reported loss of orientation and worse performance in point-to-origin tasks relate to the difficulty of establishing a consistent reference system on an allocentric or egocentric basis. We address the verbalisability of spatial concepts relevant in this regard, along with the conscious strategies reported by participants. Behavioural and verbal data were collected using a perceptually sparse virtual tunnel scenario that has frequently been used to differentiate between humans' preferred reference systems. Surprisingly, the linguistic data we collected relate only to a limited extent to the reference system verbalisations known from the earlier literature, and instead reveal complex cognitive mechanisms and strategies. Orientation in desktop VR appears to pose considerable challenges, which participants react to by conceptualising the task in individual ways that do not systematically relate to the generic concepts of egocentric and allocentric reference frames.

    Designing Immersive Virtual Environments for Cognitive Learning and Spatial Memory Tasks

    Get PDF
    University of Minnesota M.S. thesis. June 2019. Major: Computer Science. Advisor: Peter Willemsen. 1 computer file (PDF); ix, 132 pages. Virtual reality provides a realistic way to learn at a flexible progression and to develop skills that could be difficult to grasp in the real world. Our hypothesis is that there are certain VR affordances that educators and developers can leverage to build simulated learning experiences that can transform education and training activities. The immersive experience VR provides through real-time interaction, engagement, spatial awareness, visual representations, and media richness is useful for developing experiential learning environments. Watching a dinosaur egg hatch and develop through its complete life cycle in a virtual Jurassic world may provide more visual context than reading a textbook on the life cycle of the same dinosaur. The goal of this study was to better understand which interaction mechanism may be better for the design of immersive virtual learning environments. We investigated the role that natural locomotion and teleportation may have on cognitive and spatial information processing in a virtual environment. The learning space is a virtual cemetery consisting of thirteen tombstones with stories about the lives of the residents of Spoon River, a fictional town mentioned in Spoon River Anthology by Edgar Lee Masters. We conducted experiments by placing subjects in four different conditions: teleportation across long distances, walking across long distances, teleportation across short distances, and walking across short distances. Our hypothesis was that shorter natural walking paths would produce better outcomes on the cognitive and spatial memory assessments we conducted.
Teleportation, while beneficial for navigating virtual reality from a small, confined physical space, may not provide enough continuous spatial updating and therefore may be somewhat detrimental for certain learning environments. We analyzed the results and built a linear regression model to find any association between input and output variables. Our data analysis revealed that:
• For definite memory recall and proprioception of the spatial layout of a virtual space, it is better to walk than to teleport.
• To visually match objects to their spatial positions, a learning space that is logically investigated through shorter-distance movements is better than longer paths.
• Strong cognitive understanding is achieved if the learning space properly balances exploration of the environment and discovery of information.
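The analysis step described in this abstract — a linear regression relating experimental input variables to assessment outcomes — can be sketched roughly as follows. This is a hypothetical illustration, not the thesis's actual model or data; the variable names and values are made up.

```python
# Hypothetical sketch: least-squares linear regression of an outcome
# score on a single input variable, in the spirit of the analysis
# described above. All data values here are illustrative, not from
# the thesis.
import numpy as np

# Illustrative inputs: walking-path length per condition (x) and a
# spatial memory assessment score (y).
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([9.8, 8.1, 6.2, 4.1, 2.0])

# Fit y = slope * x + intercept by ordinary least squares.
A = np.column_stack([x, np.ones_like(x)])
(slope, intercept), *_ = np.linalg.lstsq(A, y, rcond=None)

print(f"slope={slope:.2f}, intercept={intercept:.2f}")
```

A negative slope in such a fit would correspond to the hypothesized pattern that longer paths are associated with lower assessment scores.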

    Using Sound to Represent Uncertainty in Spatial Data

    Get PDF
    There is a limit to the amount of spatial data that can be shown visually in an effective manner, particularly when the data sets are extensive or complex. Using sound to represent some of these data (sonification) is a way of avoiding visual overload. This thesis creates a conceptual model showing how sonification can be used to represent spatial data and evaluates a number of elements within the conceptual model. These are examined in three different case studies to assess the effectiveness of the sonifications. Current methods of using sonification to represent spatial data have been restricted by the technology available and have had very limited user testing. While existing research shows that sonification can be done, it does not show whether it is an effective and useful method of representing spatial data to the end user. A number of prototypes show how spatial data can be sonified, but only a small handful of these have performed any user testing beyond the authors' immediate colleagues (where n > 4). This thesis creates and evaluates sonification prototypes representing uncertainty in three different case studies of spatial data. Each case study was evaluated by a substantial user group (between 45 and 71 individuals) who completed a task-based evaluation with the sonification tool and reported qualitatively on the effectiveness and usefulness of the sonification method. For all three case studies, using sound to reinforce information shown visually led to more effective performance from the majority of participants than traditional visual methods alone. Participants who were familiar with the dataset were much more effective at using the sonification than those who were not, and an interactive sonification requiring significant involvement from the user was much more effective than a static sonification, which did not provide significant user engagement.
Using sounds with a clear and easily understood scale (such as piano notes) was important for achieving an effective sonification. These findings are used to improve the conceptual model developed earlier in this thesis and to highlight areas for future research.
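The core mapping this abstract describes — uncertainty values rendered as sounds on a clearly understood scale such as piano notes — could be sketched as below. This is a hypothetical illustration, not the thesis's implementation; the note range and mapping direction are assumptions.

```python
# Hypothetical sketch: map a normalized uncertainty value to a piano
# note, using MIDI note numbers as an easily understood scale.
# The range C3 (MIDI 48) to C6 (MIDI 84) and the choice of
# "higher uncertainty -> higher pitch" are illustrative assumptions.

def uncertainty_to_midi(u, low=48, high=84):
    """Map an uncertainty value in [0, 1] to a MIDI note number."""
    if not 0.0 <= u <= 1.0:
        raise ValueError("uncertainty must be normalized to [0, 1]")
    return low + round(u * (high - low))

def midi_to_freq(note):
    """Convert a MIDI note number to a frequency in Hz (A4 = 440 Hz)."""
    return 440.0 * 2 ** ((note - 69) / 12)

# Example: sonify the uncertainty attached to three spatial data points.
for u in (0.0, 0.5, 1.0):
    note = uncertainty_to_midi(u)
    print(f"uncertainty={u:.1f} -> MIDI {note} ({midi_to_freq(note):.1f} Hz)")
```

Quantizing to discrete piano notes, rather than a continuous pitch sweep, reflects the abstract's finding that a clear, easily understood scale matters for effective sonification.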