
    The Scaling Of Outdoor Space In Tilted Observers

    Perceptual matches between vertical extents (poles) and egocentric extents (between the observer and the pole) show that observers set themselves much too far from the pole, consistent with an under-perception of egocentric distance (Li, Phillips & Durgin, 2010). These matches can be modeled by assuming that angular gaze declination is exaggerated with a pitch-angle gain of 1.5. Matches to horizontal frontal extents suggest a lesser yaw-angle gain of about 1.2 (Li et al., in press). We tested whether angular biases in space perception were affected by observer orientation relative to vertical. Ninety-six observers were tested in four between-subjects conditions: (1) walking, (2) sitting upright on a cart, (3) lying sideways on a cart (tilted 90° from vertical), and (4) lying at an oblique angle (54° from vertical) on a cart. Each observer made three judgments: one egocentric matching judgment to a 10 m vertical pole (half started near, half far, and adjusted themselves to the apparent match location), one 45° gaze-elevation judgment to a 35 m tower (half started near, half far, and adjusted themselves to an apparent 45° gaze to the top of the tower), and one verbal height estimate of the 35 m tower. Upright and tilted observers showed similarly biased matches between egocentric distance and object height, and consistently biased apparent 45° gaze settings, consistent with the model proposed by Li et al. (2010). This suggests that exaggerations of gaze elevation and declination are referenced to the world rather than to the body. However, tilted observers gave reliably lower verbal estimates of tower height (geometric mean: 28 m) than did upright observers (45 m). Although eye height was similar across conditions, it may have been underestimated in the tilted conditions, which should reduce height estimates proportionally but not affect matching.
    Meeting abstract presented at VSS 2013
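The pitch-gain model described in the abstract can be sketched numerically. The eye height (1.6 m) and the simple right-triangle geometry used here are illustrative assumptions, not parameters reported in the abstract; only the gain of 1.5 comes from the source:

```python
import math

def perceived_egocentric_distance(actual_distance, eye_height=1.6, pitch_gain=1.5):
    """Sketch of the exaggerated-gaze-declination model (pitch-angle gain
    of 1.5; Li, Phillips & Durgin, 2010). The 1.6 m eye height is an
    illustrative assumption."""
    # Actual gaze declination from the eye to the base of the target:
    gaze_declination = math.atan(eye_height / actual_distance)
    # Perceived declination is exaggerated by the pitch-angle gain:
    perceived_declination = pitch_gain * gaze_declination
    # Recover the distance implied by eye height and the exaggerated angle:
    return eye_height / math.tan(perceived_declination)

# A target 10 m away is perceived as closer than it is, so observers
# matching a 10 m pole height stand too far away:
print(round(perceived_egocentric_distance(10.0), 2))
```

Because the exaggerated angle yields a shorter implied distance, an observer must physically stand farther than 10 m before the egocentric interval appears to match a 10 m pole, which is the matching bias the abstract reports.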

    Frontal Extents Are Compressed In Virtual Reality

    Action measures reflect the calibrated relationship between perception and action (Powers, 1973). There is evidence that egocentric distances are underestimated in normal environments even though people walk them accurately. One basis for this claim is that when people are asked to match a frontal extent with an egocentric one, they set the egocentric interval much too large. Li, Phillips and Durgin (2011) conducted such matching experiments in both (panoramic) virtual reality (VR) and real outdoor environments. Similar matching errors were found in both environments, as if egocentric distances appeared compressed relative to frontal ones. In the present study we compared action measures (visually directed walking) for egocentric and frontal intervals in VR and in an outdoor environment. Walking estimates of frontal distances were relatively accurate in VR, but walking estimates of egocentric distances were short. Geuss et al. (2011) have interpreted such a pattern of data as indicating that egocentric distances, but not frontal extents, are compressed in VR. However, the ratios of walking in the two conditions exactly correspond to the matched ratios found in the matching task, both in VR and in an outdoor environment. Moreover, we found that walking measures overestimate frontal extents in outdoor environments (see also Philbeck et al., 2004). It seems that frontal intervals and egocentric intervals are both compressed in VR. Frontal intervals may be matched relatively accurately in VR by walking measures because the compression of VR approximately offsets the overestimation normally observed in real environments. Walking actions are calibrated during normal use, but walking is normally used to cover egocentric distances, not frontal ones. Because frontal intervals appear larger than egocentric intervals, walking out frontal intervals should be expected to produce proportionally greater estimates than walking out egocentric intervals, even in VR.
Meeting abstract presented at VSS 2012
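The ratio argument above can be illustrated with arithmetic. The specific gains below (frontal extents appearing 1.2× larger than egocentric ones, and a uniform VR compression of 0.8) are hypothetical round numbers chosen for the sketch, not values from the abstract; the point is only that a common compression factor cancels out of the frontal/egocentric ratio:

```python
def walked_estimate(physical_extent, perceptual_gain):
    """Sketch: assume the walked-out estimate reproduces the perceived
    extent, i.e. physical size times a perceptual gain. Gains here are
    hypothetical illustrations."""
    return physical_extent * perceptual_gain

FRONTAL_GAIN = 1.2   # hypothetical: frontal extents look ~1.2x larger
VR_COMPRESSION = 0.8  # hypothetical: VR shrinks both kinds of extent alike

frontal_real = walked_estimate(5.0, FRONTAL_GAIN)
ego_real = walked_estimate(5.0, 1.0)
frontal_vr = walked_estimate(5.0, FRONTAL_GAIN * VR_COMPRESSION)
ego_vr = walked_estimate(5.0, 1.0 * VR_COMPRESSION)

# The frontal/egocentric ratio survives a uniform compression,
# matching the abstract's argument against a frontal-only account:
print(round(frontal_real / ego_real, 3), round(frontal_vr / ego_vr, 3))
```

On this account, "accurate" frontal walking in VR is a coincidence of two opposing biases (real-world frontal overshoot and VR compression), not evidence that frontal extents are spared.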