
    Controlled Interaction: Strategies For Using Virtual Reality To Study Perception

    Immersive virtual reality systems employing head-mounted displays offer great promise for the investigation of perception and action, but most virtual reality systems have well-documented limitations. In the present article, we suggest strategies for studying perception/action interactions that rely on both scale-invariant metrics (such as power function exponents) and careful consideration of the requirements of the interactions under investigation. New data concerning the effect of pincushion distortion on the perception of surface orientation are presented, as well as data documenting the perception of dynamic distortions associated with head movements with uncorrected optics. A review of several successful uses of virtual reality to study the interaction of perception and action emphasizes scale-free analysis strategies that can achieve theoretical goals while minimizing assumptions about the accuracy of virtual simulations.
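
    A quick way to see why power function exponents are scale-invariant: if judged magnitude follows response = a * stimulus^b, then a uniform rescaling of the simulation shifts the coefficient a but leaves the exponent b unchanged, so b survives errors of display calibration. A minimal sketch in Python, using hypothetical matching data:

        import numpy as np

        def power_fit(stimulus, response):
            # Fit response = a * stimulus**b by linear regression in
            # log-log coordinates; the slope is the exponent b.
            b, log_a = np.polyfit(np.log(stimulus), np.log(response), 1)
            return b, np.exp(log_a)

        # Hypothetical distance-matching data (metres)
        actual = np.array([2.0, 4.0, 8.0, 16.0])
        judged = np.array([1.5, 2.9, 5.6, 10.8])
        b, a = power_fit(actual, judged)
        # Rescaling the simulation (actual * k) changes a but not b,
        # which is what makes the exponent robust to miscalibration.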

    Human Visual Navigation: Effects of Visual Context, Navigation Mode, and Gender

    This thesis extends research on human visual path integration using optic flow cues. In three experiments, a large-scale path-completion task was contextualised within highly textured, authentic virtual environments. Real-world navigational experience was further simulated through the inclusion of a large roundabout on the route. Three semi-surrounding screens provided a wide field of view. Participants were able to perform the task, but directional estimates showed characteristic errors, which can be explained with a model of distance misperception on the outbound roads of the route. Display and route layout parameters had very strong effects on performance. Gender and navigation mode were also influential.

    Participants consistently underestimated the final turn angle when simulated self-motion was viewed passively, on large projection screens in a driving simulator. Error increased with increasing size of the internal angle, on route layouts based on equilateral or isosceles triangles. A compressed range of responses was found.

    Higher overall accuracy was observed when a display with smaller desktop computer monitors was used, especially when simulated self-motion was actively controlled with a steering wheel and foot pedals rather than viewed passively. Patterns and levels of error depended on route layout, which included triangles with non-equivalent lengths of the two outbound roads. A powerful effect on performance was exerted by the length of the "approach segment" on the route: that is, the distance travelled on the first outbound road, combined with the distance travelled between the two outbound roads on the roundabout curve. The final turn angle was generally overestimated on routes with a long approach segment (those with a long first road and a 60° or 90° internal angle), and underestimated on routes with a short approach segment (those with a short first road or the 120° internal angle). Accuracy was higher for active participants on routes with longer approach segments and on 90° angle trials, and for passive participants on routes with shorter approach segments and on 120° angle trials. Active participants treated all internal angles as 90° angles.

    Participants performed with lower overall accuracy when optic flow information was disrupted through the intermittent presentation of self-motion on the small-screen display, in a sequence of static snapshots of the route. Performance was particularly impaired on routes with a long approach segment, but quite accurate on those with a short approach segment. Consistent overestimation of the final angle was observed, and error decreased with increasing size of the internal angle. Participants treated all internal angles as 120° angles.

    The level of available visual information did not greatly affect estimates in general. The degree of curvature on the roundabout mainly influenced estimates by female participants in the Passive condition. Compared with males, females performed less accurately in the driving simulator and with reduced optic flow cues, but more accurately with the small-screen display on layouts with a short approach segment and when they had active control of the self-motion. The virtual environments evoked a sense of presence, but this had no effect on task performance in general. The environments could be used for training navigational skills where high precision is not required.
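
    For reference, the geometrically correct final turn on such a triangular route can be computed by dead reckoning. A minimal Python sketch, against which the over- and underestimation patterns above can be read (the leg lengths here are hypothetical; the internal angles are those used in the experiments):

        import numpy as np

        def correct_final_turn(l1, l2, internal_deg):
            # Travel l1 along +x, turn through the exterior angle
            # (180 - internal angle), travel l2, then compute the
            # turn needed to face the start point again.
            heading = np.radians(180.0 - internal_deg)
            x = l1 + l2 * np.cos(heading)
            y = l2 * np.sin(heading)
            home = np.arctan2(-y, -x)            # direction back to start
            turn = (home - heading + np.pi) % (2 * np.pi) - np.pi
            return np.degrees(turn)

        for angle in (60, 90, 120):              # internal angles as above
            print(angle, round(correct_final_turn(1.0, 1.0, angle), 1))

    For equal leg lengths this gives correct turns of 120°, 135°, and 150° for the 60°, 90°, and 120° internal angles, respectively.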

    The Underestimation Of Egocentric Distance: Evidence From Frontal Matching Tasks

    There is controversy over the existence, nature, and cause of error in egocentric distance judgments. One proposal is that the systematic biases often found in explicit judgments of egocentric distance along the ground may be related to recently observed biases in the perceived declination of gaze (Durgin & Li, Attention, Perception, & Psychophysics, in press). To measure perceived egocentric distance nonverbally, observers in a field were asked to position themselves so that their distance from one of two experimenters was equal to the frontal distance between the experimenters. Observers placed themselves too far away, consistent with egocentric distance underestimation. A similar experiment was conducted with vertical frontal extents. Both experiments were replicated in panoramic virtual reality. Perceived egocentric distance was quantitatively consistent with angular bias in perceived gaze declination (1.5 gain). Finally, an exocentric distance-matching task was contrasted with a variant of the egocentric matching task. The egocentric matching data approximate a constant compression of perceived egocentric distance, with a power function exponent of nearly 1; exocentric matches had an exponent of about 0.67. The divergent pattern between egocentric and exocentric matches suggests that they depend on different visual cues.
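
    The quantitative link asserted here can be sketched with simple ground-plane geometry, assuming an eye height h and a target on the ground at gaze declination gamma below the horizon (the 1.5 gain is the value reported above):

        % Ground-plane geometry: a target at declination gamma below the
        % horizon, viewed from eye height h (a sketch, not the full model).
        \begin{align*}
          d  &= \frac{h}{\tan\gamma}        && \text{(physical distance)}\\
          d' &= \frac{h}{\tan(1.5\,\gamma)} && \text{(perceived distance under a 1.5 declination gain)}
        \end{align*}

    Because tan(1.5 gamma) exceeds tan(gamma) over the relevant range, d' < d: distance along the ground is underestimated, consistent with the egocentric matching results.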

    Perception and reconstruction of two-dimensional, simulated ego-motion trajectories from optic flow.

    A veridical percept of ego-motion is normally derived from a combination of visual, vestibular, and proprioceptive signals. In a previous study, blindfolded subjects could accurately perceive passively travelled straight or curved trajectories, provided that the orientation of the head remained constant along the trajectory. When they were turned (whole-body, head-fixed) relative to the trajectory, errors occurred. We ask here whether vision allows for better path perception in similar tasks, to correct or complement vestibular perception. Seated, stationary subjects wore a head-mounted display showing optic flow stimuli that simulated linear or curvilinear 2D trajectories over a horizontal ground plane. The observer's orientation was either fixed in space, fixed relative to the path, or changed relative to both. After presentation, subjects reproduced the perceived movement with a model vehicle, whose position and orientation were recorded. They tended to correctly perceive ego-rotation (yaw), but they perceived orientation as fixed relative to the trajectory or (unlike in the vestibular study) to space. This caused trajectory misperception when body rotation was wrongly attributed to a rotation of the path. Visual perception was very similar to vestibular perception.
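
    The misattribution account can be made concrete with a dead-reckoning sketch (Python; the speed and yaw-rate values are hypothetical): folding a body rotation into the path's heading turns a straight trajectory into a curved one.

        import numpy as np

        dt = 0.01
        t = np.arange(0, 10, dt)               # 10 s of simulated travel
        v = np.full_like(t, 1.0)               # 1 m/s forward speed
        omega = np.where(t > 5, 0.1, 0.0)      # body yaw of 0.1 rad/s after 5 s

        def integrate(v, omega_path):
            # Dead-reckon a 2-D path from speed and *path* yaw rate.
            heading = np.cumsum(omega_path) * dt
            return (np.cumsum(v * np.cos(heading)) * dt,
                    np.cumsum(v * np.sin(heading)) * dt)

        # Straight travel: the body turns, but the path does not.
        x_true, y_true = integrate(v, np.zeros_like(omega))
        # Misattribution: body rotation folded into the path curves it.
        x_perc, y_perc = integrate(v, omega)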

    Large Perceptual Distortions Of Locomotor Action Space Occur In Ground-Based Coordinates: Angular Expansion And The Large-Scale Horizontal-Vertical Illusion

    What is the natural reference frame for seeing large-scale spatial scenes in locomotor action space? Prior studies indicate an asymmetric angular expansion in perceived direction in large-scale environments: Angular elevation relative to the horizon is perceptually exaggerated by a factor of 1.5, whereas azimuthal direction is exaggerated by a factor of about 1.25. Here participants made angular and spatial judgments when upright or on their sides to dissociate egocentric from allocentric reference frames. In Experiment 1, it was found that body orientation did not affect the magnitude of the up-down exaggeration of direction, suggesting that the relevant orientation reference frame for this directional bias is allocentric rather than egocentric. In Experiment 2, the comparison of large-scale horizontal and vertical extents was somewhat affected by viewer orientation, but only to the extent necessitated by the classic (5%) horizontal-vertical illusion (HVI) that is known to be retinotopic. Large-scale vertical extents continued to appear much larger than horizontal ground extents when observers lay sideways. When the visual world was reoriented in Experiment 3, the bias remained tied to the ground-based allocentric reference frame. The allocentric HVI is quantitatively consistent with differential angular exaggerations previously measured for elevation and azimuth in locomotor space. (PsycINFO Database Record (c) 2016 APA, all rights reserved.)
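
    On a small-angle reading of the two exaggeration factors, the predicted large-scale bias follows from the ratio of the angular gains; this is a back-of-the-envelope sketch, not the paper's full analysis:

        % Vertical and horizontal extents subtending the same physical
        % angle alpha differ perceptually by the ratio of the gains:
        \[
          \frac{E'_{\text{vertical}}}{E'_{\text{horizontal}}}
            \approx \frac{1.5\,\alpha}{1.25\,\alpha} = 1.2
        \]

    That is, a roughly 20% vertical overestimation in locomotor space, over and above the classic ~5% retinotopic HVI measured in Experiment 2.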

    Human visual-vestibular interaction during curvilinear motion in VR

    When people move through the world they use retinal image motion to form an estimate of self-motion, although how this is achieved is not yet fully understood. Furthermore, the process is not foolproof, and many conditions create incorrect percepts of self-motion. In particular, experiencing rotation makes it difficult for humans to correctly perceive heading during movement from visual input alone. Here we re-examined human perception of rotation during true and simulated curvilinear motion. Broadly, this was done by simulating movement along various paths using currently available head-mounted stereoscopic virtual reality displays. These displays overcome many of the shortcomings of historical equipment but also represent a significant departure from established setups. As such, the first two experiments, presented together in Chapter 2, replicated experiments from Banks, Ehrlich, Backus, and Crowell (1996) and Crowell, Banks, Shenoy, and Andersen (1998). This was to aid in determining the suitability of consumer-grade virtual reality headsets for self-motion perception research and to provide a point of comparison between this thesis and historical works. Following confirmation that the VR devices were suitable for research, we developed a virtual line-based bendable response tool. This tool made use of tracked hand controllers to facilitate fast and intuitive reporting of self-motion perception without confounding heading and rotation perception errors. Using this tool, we confirmed reports of curvilinear motion perception during linear travel with rotation. We then measured perception of heading and curvature during travel on linear paths of increasingly eccentric heading, finding that eccentric headings elicited perceptions of curvature. We also measured perceptions of travel along curvilinear paths so that a model of human visual-vestibular interaction (Perrone, 2018), as it applies to heading perception, could be tested against participant performance. Finally, predictions from this model prompted us to measure curvilinear path perception while using a rotating chair that allowed us to provide a roughly congruent vestibular signal. Overall, we found consistent large individual differences in perception across all motion types, although each individual's perceptions were internally consistent. Furthermore, we found that participants perceived curvature while travelling not only on curvilinear paths, but also on linear paths with rotation and linear paths with eccentric headings. The tested model, as implemented, matched participant results well, proving able to explain the majority of the individual differences. Finally, we found that providing a roughly congruent vestibular signal resulted in participants perceiving less curved paths compared to when the vestibular signal indicated no rotation.
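
    A simple kinematic identity underlies these path percepts: travel at speed v with yaw rate omega is consistent with a circular path of radius r = v / omega, so a visual rotation attributed to the path implies curvature even when travel is linear. A minimal sketch (Python; the values are hypothetical):

        def implied_radius(speed, yaw_rate):
            # Circular motion satisfies v = omega * r, so a yaw rate
            # attributed to the path implies a radius of v / omega.
            return float("inf") if yaw_rate == 0 else speed / yaw_rate

        # 2 m/s of simulated travel with 0.2 rad/s of display rotation
        # is consistent with a circle of radius 10 m.
        print(implied_radius(2.0, 0.2))   # 10.0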

    The worse eye revisited: Evaluating the impact of asymmetric peripheral vision loss on everyday function

    In instances of asymmetric peripheral vision loss (e.g., glaucoma), binocular performance on simple psychophysical tasks (e.g., static threshold perimetry) is well predicted by the better-seeing eye alone. This suggests that peripheral vision is largely ‘better-eye limited’. In the present study, we examine whether this also holds true for real-world tasks, or whether even a degraded fellow eye contributes important information for tasks of daily living. Twelve normally sighted adults performed an everyday visually guided action (finding a mobile phone) in a virtual-reality domestic environment, while levels of peripheral vision loss were independently manipulated in each eye (gaze-contingent blur). The results showed that even when vision in the better eye was held constant, participants were significantly slower to locate the target, and made significantly more head and eye movements, as peripheral vision loss in the worse eye increased. A purely unilateral peripheral impairment increased response times by up to 25%, although the effect of bilateral vision loss was much greater (>200%). These findings indicate that even a degraded visual field still contributes important information for performing everyday visually guided actions. This may have clinical implications for how patients with visual field loss are managed or prioritized, and for our understanding of how binocular information in the periphery is integrated.
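
    A minimal sketch of the gaze-contingent manipulation, assuming a hard-edged clear window rather than the graded loss profile a study like this would typically use, and assuming per-eye frames are available (Python with Pillow; the function and parameter names are illustrative, not the study's implementation):

        from PIL import Image, ImageDraw, ImageFilter

        def gaze_contingent_blur(frame, gaze_xy, clear_radius, blur_sigma):
            # Blur everything outside a circular window centred on the
            # gaze point; calling this with different clear_radius and
            # blur_sigma per eye simulates asymmetric peripheral loss.
            blurred = frame.filter(ImageFilter.GaussianBlur(blur_sigma))
            mask = Image.new("L", frame.size, 255)     # 255 = take blurred
            x, y = gaze_xy
            ImageDraw.Draw(mask).ellipse(
                [x - clear_radius, y - clear_radius,
                 x + clear_radius, y + clear_radius], fill=0)
            return Image.composite(blurred, frame, mask)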

    Visual Distortions in 360-degree Videos.

    Omnidirectional (or 360°) images and videos are emergent signals used in many areas, such as robotics and virtual/augmented reality. In particular, for virtual reality applications, they allow an immersive experience in which the user can interactively navigate through a scene with three degrees of freedom, wearing a head-mounted display. Current approaches for capturing, processing, delivering, and displaying 360° content, however, present many open technical challenges and introduce several types of distortions in the visual signal. Some of the distortions are specific to the nature of 360° images and often differ from those encountered in classical visual communication frameworks. This paper provides a first comprehensive review of the most common visual distortions that alter 360° signals as they pass through the different processing elements of the visual communication pipeline. While their impact on viewers' visual perception and the immersive experience at large is still unknown (and thus remains an open research topic), this review serves the purpose of proposing a taxonomy of the visual distortions that can be encountered in 360° signals. Their underlying causes in the end-to-end 360° content distribution pipeline are identified. This taxonomy is essential as a basis for comparing different processing techniques, such as visual enhancement, encoding, and streaming strategies, and for enabling the effective design of new algorithms and applications. It is also a useful resource for the design of psycho-visual studies aiming to characterize human perception of 360° content in interactive and immersive applications.
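
    One concrete example of a distortion specific to 360° signals is the pole stretching introduced by the standard equirectangular representation. A small illustrative mapping sketch in Python (the function name and conventions are assumptions, not the paper's notation):

        import math

        def sphere_to_equirect(lat, lon, width, height):
            # Map a viewing direction (latitude, longitude in radians)
            # to pixel coordinates in an equirectangular panorama.
            # Horizontal sampling density grows as 1/cos(lat), so
            # content near the poles is stretched; this is one of the
            # geometry-specific distortions discussed above.
            u = (lon / (2 * math.pi) + 0.5) * width
            v = (0.5 - lat / math.pi) * height
            return u, v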