
    Eye movements in the wild: Oculomotor control, gaze behavior & frames of reference

    Understanding the brain's capacity to encode complex visual information from a scene and to transform it into a coherent perception of 3D space and into well-coordinated motor commands is among the outstanding questions in the study of integrative brain function. Eye movement methodologies have allowed us to begin addressing these questions in increasingly naturalistic tasks, where eye and body movements are ubiquitous and the applicability of most traditional neuroscience methods is therefore restricted. This review explores foundational issues in (1) how oculomotor and motor control observed in lab experiments extrapolates to more complex settings and (2) how real-world gaze behavior in turn decomposes into more elementary eye movement patterns. We review the received typology of oculomotor patterns in laboratory tasks and how they map (or do not map) onto naturalistic gaze behavior. We discuss the multiple coordinate systems needed to represent visual gaze strategies, how the choice of reference frame affects the description of eye movements, and the related but conceptually distinct issue of coordinate transformations between internal representations within the brain.
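
    The reference-frame issue the review raises can be made concrete: a gaze direction recorded in eye-in-head coordinates only acquires a world-referenced meaning once it is composed with the orientation of the head. A minimal numpy sketch of such a chain of coordinate transformations (the angles, names, and sign conventions are illustrative assumptions, not taken from the review):

        import numpy as np

        def rot_y(deg):
            """Rotation matrix for a yaw of `deg` degrees about the vertical axis."""
            r = np.radians(deg)
            return np.array([[np.cos(r), 0.0, np.sin(r)],
                             [0.0, 1.0, 0.0],
                             [-np.sin(r), 0.0, np.cos(r)]])

        # A foveal gaze direction, expressed in eye-fixed coordinates: straight ahead.
        gaze_in_eye = np.array([0.0, 0.0, 1.0])

        # Hypothetical postures: the eye yawed -10 deg within the head,
        # and the head yawed +30 deg relative to the world.
        eye_in_head = rot_y(-10)
        head_in_world = rot_y(30)

        # Composing the frames maps the same retinal direction into world
        # coordinates; the net result is equivalent to a 20 deg yaw.
        gaze_in_world = head_in_world @ eye_in_head @ gaze_in_eye
        print(gaze_in_world)  # ~[0.342, 0.0, 0.940]

    The same fixation thus receives a different description in each frame, which is why the choice of reference frame matters for characterizing naturalistic gaze.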

    Updating During Lateral Movement Using Visual and Non-Visual Motion Cues

    Spatial updating, the ability to track the egocentric positions of surrounding objects during self-motion, is fundamental to navigating the world. Past studies show that people make systematic errors when updating after linear self-motion. To determine the source of these errors, I measured errors in remembered target position with and without passive lateral movements, and varied the visual (Oculus Rift) and physical (motion platform) self-motion feedback. In general, people remembered targets as less eccentric than they were, with greater underestimation for more eccentric targets. They could use physical cues for updating, but they made larger errors with physical cues alone than with only visual cues. Visual motion cues alone were sufficient to produce updating, and physical cues were not needed when visual cues were available. People also remembered targets within the range of movement as closer to the position at which they were perceived before moving. However, individual differences in the perceived distance of the target did not affect updating.

    Contributions of Pictorial and Binocular Cues to the Perception of Distance in Virtual Reality

    We assessed the contribution of binocular disparity and the pictorial cues of linear perspective, texture, and scene clutter to the perception of distance in consumer virtual reality. As additional cues are made available, distance perception is predicted to improve, as measured by a reduction in systematic bias and an increase in precision. We assessed (1) whether space is non-linearly distorted; (2) the degree of size constancy across changes in distance; and (3) the weighting of pictorial versus binocular cues in VR. In the first task, participants positioned two spheres so as to divide the egocentric distance to a reference stimulus (presented between 3 and 11 m) into three equal parts. In the second and third tasks, participants set the size of a sphere, presented at the same distances and at eye height, to match that of a hand-held football. Each task was performed in four environments varying in the available cues. We measured accuracy by identifying systematic biases in responses, and precision as the standard deviation of those responses. While there was no evidence of non-linear compression of space, participants did tend to underestimate distance linearly; this bias was reduced with the addition of each cue. The addition of binocular cues, when rich pictorial cues were already available, reduced both the bias and the variability of estimates. These results show that linear perspective and binocular cues, in particular, improve the accuracy and precision of distance estimates in virtual reality across a range of distances typical of many indoor environments.
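
    The accuracy/precision distinction used here has a direct operational form: bias is the mean signed error of the judgments, and precision is their standard deviation. A minimal sketch with made-up numbers (hypothetical values, not data from the study):

        import numpy as np

        true_distance = 7.0  # metres: the presented egocentric distance
        judgments = np.array([5.8, 6.1, 5.5, 6.3, 5.9])  # hypothetical settings

        errors = judgments - true_distance
        bias = errors.mean()               # systematic under-/overestimation (accuracy)
        precision = judgments.std(ddof=1)  # trial-to-trial spread (precision)

        print(f"bias = {bias:+.2f} m, SD = {precision:.2f} m")
        # A negative bias corresponds to the linear underestimation reported above.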

    The Effects of Gravity on Self-Motion Perception

    Gravity is the most pervasive force that we encounter. We observe a variety of objects being accelerated toward the Earth by gravity, but we also experience gravitational force when we are simply stationary (as gravity is a constant acceleration) and when we are ourselves in motion, such as when locomoting on foot, driving a vehicle, jumping, or skiing. It follows that our ability to successfully navigate our environment must somehow take into account the effects of gravity on our body's motion-detecting senses, a dynamic relationship which changes with self-motion and self-orientation. The goal of this dissertation was to investigate how body orientation relative to gravity influences visual-vestibular interactions in visually induced perception of self-motion (i.e., vection). Specifically, I examined this relationship by placing observers in varied postures and presenting visual displays simulating forward/backward self-motion with vertical/horizontal viewpoint oscillation, mimicking components produced by head movements during real self-motion. I found that tilting observers reduced vection and that the two types of viewpoint oscillation enhanced vection to a similar degree, suggesting that current postural and oscillation-based vection findings are best explained by an ecological account. I also examined the influence of scene structure, and of the alignment of the body and visual motion relative to gravity, on vection. Observers in different postures viewed simulated translational self-motion displays consisting of either a single rigid structure or dots. The experimental data showed that vection depended on both posture and the perceived interpretation of the visual scene, indicating that self-motion perception is modulated by higher-order cognitive processes. I also found that observers reported illusory tilt of the stimulus when they were not upright. I investigated these reports of posture-dependent perceived stimulus tilt by presenting upright and tilted observers with static and motion stimuli that were tilted from the gravitational vertical. Posture-dependent tilt effects were found for both types of stimuli and were greater for motion experienced as self-motion than for external motion. Taken together, the results of this dissertation demonstrate that our perception of self-motion is influenced by gravity, and by prior experiences and internal mental representations of our visual world.

    The Role of Goals and Attention on Memory for Distance in Real and Virtual Spaces

    Navigating in an environment generally involves a goal. However, to date, little is known about the influence of goals on immediate memory for distance and time in ‘cognitive maps.’ The main aim of the thesis is to investigate the role goals play in memory for distance and time experienced during movement through a range of types of environment, and to begin to unpack the mechanisms at play. A secondary aim of the thesis is to examine the fidelity of virtual environments with respect to memory for distance and time. There has been a recent surge in the use of Virtual Reality (VR) in research and practice; however, it remains unclear to what extent spatial behaviour in virtual environments captures the experience of real space. The environments tested in the thesis allow direct comparison of immediate memory for distance traversed and time spent in real human mazes versus VR versions of the same mazes. The first series of experiments tested the effects of goals varying in urgency and desirability on immediate memory for distance and time in real and virtual straight paths and paths with multiple turns. The results show reliable effects of goals on memory for distance and time. Moreover, the studies discount the influence of arousal and mood as an explanation for these effects, and suggest that goals may mediate attention to the environment. The second series of experiments investigated the role of attention in memory for distance and time in VR and in mentally simulated environments using verbal, visual, and auditory cues. The results of these studies show some evidence that attention to one’s environment influences memory for that environment. Overall, the results reveal that both goals and the deployment of attention affect the representations people construct of their environments (cognitive maps) and subsequent recall. Implications are discussed more broadly with regard to research in spatial cognition.

    Assessing the impact of emotion in dual pathway models of sensory processing.

    In our daily environment, we constantly encounter an endless stream of information that we must sort and prioritize. Among the features that influence this prioritization are the emotional nature of stimuli and the emotional context of events. Emotional information is often given preferential access to neurocognitive resources, including within sensory processing systems. Interestingly, both the auditory and visual systems are divided into dual processing streams: a ventral object identity/perception stream and a dorsal object location/action stream. While the effects of emotion on the ventral streams are relatively well defined, its effect on dorsal stream processes remains unclear. The present thesis aimed to investigate the impact of emotion on sensory systems within a dual pathway framework of sensory processing. Study I investigated the role of emotion during auditory localization. While undergoing fMRI, participants indicated the location of an emotional or non-emotional sound within an auditory virtual environment. This revealed that the neurocognitive structures displaying activation modulated by emotion were not the same as those modulated by sound location: emotion was represented in regions associated with the putative auditory ‘what’ but not ‘where’ stream. Study II examined the impact of emotion on ostensibly similar localization behaviours mediated differentially by the dorsal versus ventral visual processing stream. Ventrally mediated behaviours were impacted by the emotional context of a trial, while dorsally mediated behaviours were not. For Study III, a motion-aftereffect paradigm was used to investigate the impact of emotion on visual area V5/MT+. This area, traditionally believed to be involved in dorsal stream processing, has a number of characteristics of a ventral stream structure. V5/MT+ activity was modulated both by the presence of perceptual motion and by the emotional content of an image. In addition, this region displayed patterns of functional connectivity with the amygdala that were significantly modulated by emotion. Together, these results suggest that emotional information modulates neural processing within ventral, but not dorsal, sensory processing streams. These findings are discussed with respect to current models of emotional and sensory processing, including amygdala connections to sensory cortices and emotional effects on cognition and behaviour.

    Eye Tracking in the Wild: the Good, the Bad and the Ugly

    Modelling human cognition and behaviour in rich naturalistic settings, under conditions of free movement of the head and body, is a major goal of visual science. Eye tracking has turned out to be an excellent physiological means to investigate how we visually interact with complex 3D environments, real and virtual. This review begins with a philosophical look at the advantages (the Good) and disadvantages (the Bad) of approaches with different levels of ecological naturalness (traditional tightly controlled laboratory tasks, low- and high-fidelity simulators, fully naturalistic real-world studies). We then discuss in more technical terms the differences in approach required “in the wild”, compared to “received” lab-based methods. We highlight how the unreflecting application of lab-based analysis methods, terminology, and tacit assumptions can lead to poor experimental design or even spurious results (the Ugly). The aim is not to present a “cookbook” of best practices, but to raise awareness of some of the special concerns that naturalistic research brings about; references to helpful literature are provided along the way. The intent is to give an overview of the landscape from the point of view of a researcher planning serious basic research on the human mind and behaviour.

    Object-based attentional expectancies in virtual reality

    Modern virtual reality (VR) technology promises to enable neuroscientists and psychologists to conduct ecologically valid experiments while maintaining precise experimental control. In recent studies, game engines such as Unreal Engine or Unity are used for stimulus creation and data collection. Yet game engines do not provide the underlying architecture to measure the timing of stimulus events and behavioral input with the accuracy or precision required by many experiments. Furthermore, it is currently not well understood whether VR and the underlying technology engage the same cognitive processes as a comparable real-world situation. Similarly, little is known about whether experimental findings obtained in a standard monitor-based experiment are comparable to those obtained in VR using a head-mounted display (HMD), or whether the different stimulus devices engage different cognitive processes. The aim of my thesis was to investigate whether modern HMDs affect the early processing of basic visual features differently than a standard computer monitor. In the first project (chapter 1), I developed a new behavioral paradigm to investigate how prediction errors of basic object features are processed. In a series of four experiments, the results consistently indicated that simultaneous prediction errors for unexpected colors and orientations are processed independently at an early level of processing, before object binding comes into play. My second project (chapter 2) examined the accuracy and precision of stimulus timing and reaction time measurements when using Unreal Engine 4 (UE4) in combination with a modern HMD system. My results demonstrate that stimulus durations can be defined and controlled with high precision and accuracy. However, reaction time measurements turned out to be highly imprecise and inaccurate when using UE4’s standard application programming interface (API); I therefore proposed a new software-based approach to circumvent these limitations. Timing benchmarks confirmed that the method can measure reaction times with millisecond precision and accuracy. In the third project (chapter 3), I directly compared task performance in the paradigm developed in chapter 1 between the original experimental setup and a virtual reality simulation of that experiment. To establish two identical experimental setups, I recreated the entire physical environment in which the experiments took place within VR and blended the virtual replica over the physical lab. As a result, the virtual environment (VE) corresponded not only visually to the physical laboratory but also provided accurate sensory properties in other modalities, such as haptic and acoustic feedback. The results showed comparable task performance in the non-VR and VR experiments, suggesting that modern HMDs do not affect early processing of basic visual features differently than a typical computer monitor.
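
    The reaction time problem described here is generic to frame-locked engines: if responses are only inspected once per rendered frame, measured reaction times are in effect quantized to the display's frame duration (about 11 ms at 90 Hz). The following plain-Python sketch illustrates the principle behind decoupling response timestamps from the render loop; it is an illustration of the general idea, not the thesis's UE4 implementation, and the simulated device is a hypothetical stand-in for real input hardware:

        import time

        FRAME_DURATION = 1.0 / 90.0   # one frame of a 90 Hz HMD, ~11.1 ms
        TRUE_RT = 0.250               # simulated "true" reaction time in seconds

        def make_simulated_device(onset, delay=TRUE_RT):
            """Hypothetical device: reports a press `delay` seconds after onset."""
            return lambda: time.perf_counter() - onset >= delay

        def rt_frame_locked(onset, pressed):
            # Engine-style measurement: input is only inspected at frame
            # boundaries, so the result is rounded up to whole frames.
            while True:
                time.sleep(FRAME_DURATION)   # wait for the next frame tick
                if pressed():
                    return time.perf_counter() - onset

        def rt_tight_loop(onset, pressed):
            # Dedicated polling decoupled from rendering: the response is
            # timestamped as soon as the device reports it.
            while True:
                if pressed():
                    return time.perf_counter() - onset

        onset = time.perf_counter()
        print(f"frame-locked: {rt_frame_locked(onset, make_simulated_device(onset)) * 1000:.1f} ms")

        onset = time.perf_counter()
        print(f"tight loop:   {rt_tight_loop(onset, make_simulated_device(onset)) * 1000:.1f} ms")

    Running the sketch typically yields a frame-locked estimate several milliseconds above the tight-loop one, mirroring the kind of imprecision attributed above to the standard engine API.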

    Understanding spatial, semantic and temporal influences on audiovisual distance compression in virtual environments

    Perception of distance in virtual reality (VR) is compressed; that is, objects and the distances between them and the observer are consistently perceived as closer than intended by the designers of the VR environment. Although well documented, this phenomenon is still not fully understood or characterized with respect to the factors influencing such compression. Studies on distance compression typically examine auditory or visual stimuli individually, critically neglecting how such stimuli may interact. They also tend to focus on simple, static environments containing objects that do not move. VR can be -- and at its best should be -- a multisensory experience, involving not only vision but also audition and potentially other senses. We report a study encompassing two experiments exploring the effects on distance compression of spatial, semantic, and temporal congruency in environments where visual and audio cues do not correspond one-to-one as they would in a physical environment. Results suggest no impact of semantic association, but significant effects of temporal and spatial congruence. We discuss the impact of our findings on virtual environment design and implementation.