
    Representational Biases In The Perception Of Visuospatial Orientation: Gravitational And Other Reference Frames

    Two very different bias functions may be observed in the study of perceived 2D orientation of visual stimuli. One bias function is symmetrical between vertical and horizontal. This “cardinal” bias (e.g., Girshick et al., 2011) has sometimes been argued to be a front-end coding bias due to the overrepresentation of vertical and horizontal in natural scenes and in visual cortex. The second, “categorical” bias function (Dick & Hochstein, 1989) is asymmetrical; it exaggerates angular deviations from horizontal while underestimating deviations from vertical, with the consequence that orientations only 37-40° from horizontal are judged to bisect the angular distance between horizontal and vertical (Durgin & Li, 2011). Here we report that both of these biases appear to be yoked to the perceived gravitational reference frame: when observers are positioned at an attitude of 45° so that retinal and gravitational reference frames are dissociated, it is the perceived gravitational reference frame that matters most. In the case of the categorical bias, the gravitational horizontal seems to provide a particularly strong reference orientation, consistent with the importance of the gravitational ground plane. In the case of the cardinal bias, which is measured by comparison between two orientations, the gravitational reference-frame dominance is clearly inconsistent with an account based on cortical over-representation of cardinal orientations. More likely, for purposes of comparison, orientations are coded relative to a perceptually given reference frame that is divided into quadrants. Cardinal bias might therefore be understood as resulting from representational range compression within each perceptual quadrant. In a series of five experiments we show that noisy orientation textures composed of variably oriented Gabor patches can show both kinds of cognitive orientation bias, and that both biases are yoked to perceptual rather than retinal reference frames. Meeting abstract presented at VSS 2015
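    As a rough worked illustration of the categorical bias just described (a simplification not taken from the abstract: it treats the bias as a constant multiplicative gain k on deviation from horizontal), the orientation that appears to bisect horizontal and vertical is the one whose exaggerated deviation reaches 45°:

        % Illustrative linear-gain sketch of the categorical bias
        k\,\theta_{\mathrm{bisect}} = 45^{\circ}
        \quad\Longrightarrow\quad
        \theta_{\mathrm{bisect}} = \frac{45^{\circ}}{k}
        % With the reported bisection settings of 37--40 degrees from horizontal,
        % the implied gain is k = 45/40 to 45/37, i.e. roughly 1.13--1.22.

    On this reading, the reported 37-40° bisection settings are what a modest exaggeration of deviations from the (gravitational) horizontal would produce.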

    Angular Expansion Theory Turned On Its Side

    When standing, egocentric distance can be specified angularly by the direction of gaze to the point of ground contact (Wallach & O'Leary, 1982). Estimates of egocentric distance show underestimation by a factor of about 0.7, consistent with an observed exaggeration of gaze declination by a factor of 1.5 (Durgin & Li, 2011). Moreover, perceptual matching of ground distances to pole heights can be perfectly modeled by a 1.5× expansion of perceived angular declination relative to the horizontal (Li et al., 2011). In azimuth, extent matching corresponds to an angular expansion of about 1.2 (Li et al., 2013). Are these angular biases associated with the coding of gaze position in the head or with the reference frame of the horizontal ground plane? We tested this question in an open field, using people as targets, by comparing perceptual matching by upright observers and by observers suspended on their sides at eye level. Participants instructed one experimenter to move left or right so as to create a frontal distance from a second experimenter equal to the participant's egocentric distance to the second experimenter. Implicitly, the task is to create a 45° azimuthal angle. Would matches made by observers on their sides show an angular gain of 1.5, consistent with their bodily orientation, or would they show the more typical azimuthal gain of 1.2? A total of 35 participants (18 sideways) matched egocentric distances of 7 to 16 m and made verbal estimates of a 35 m egocentric extent and a 25 m frontal extent 35 m away. In fact, participants on their sides showed twice the angular bias of upright participants -- both in their extent matches and in their verbal estimates of distances. The sideways verbal estimates implied an angular expansion of about 1.4. These angular distortions do not seem to affect shape perception, but only the estimation of extents between objects. Meeting abstract presented at VSS 2014
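    The link between the 1.5 declination gain and the 0.7 distance ratio mentioned above can be made explicit with a back-of-envelope sketch, assuming the simple angular formulation of Wallach & O'Leary (eye height h, gaze declination γ to the target's point of ground contact); the exact form of the Durgin & Li model may differ in detail:

        % Physical and (sketched) perceived egocentric distance from gaze declination
        d = \frac{h}{\tan\gamma}, \qquad \hat{d} = \frac{h}{\tan(1.5\,\gamma)}
        % For the small declinations of far ground targets, \tan x \approx x, so
        \frac{\hat{d}}{d} \approx \frac{1}{1.5} \approx 0.67
        % which is close to the observed underestimation factor of about 0.7.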

    Frontal Extents Are Compressed In Virtual Reality

    Action measures reflect the calibrated relationship between perception and action (Powers, 1973). There is evidence that egocentric distances are underestimated in normal environments even though people walk them accurately. One basis for this claim is that when people are asked to match a frontal extent with an egocentric one, they set the egocentric interval much too large. Li, Phillips and Durgin (2011) conducted such matching experiments in both (panoramic) virtual reality (VR) and real outdoor environments. Similar matching errors were found in both environments, as if egocentric distances appeared compressed relative to frontal ones. In the present study we compared action measures (visually directed walking) for egocentric and frontal intervals in VR and in an outdoor environment. Walking estimates of frontal distances were relatively accurate in VR, but walking estimates of egocentric distances were short. Geuss et al. (2011) have interpreted such a pattern of data as indicating that egocentric distances, but not frontal extents, are compressed in VR. However, the ratios of the walking measures in the two conditions correspond exactly to the matched ratios found in the matching task, both in VR and in an outdoor environment. Moreover, we found that walking measures overestimate frontal extents in outdoor environments (see also Philbeck et al., 2004). It seems that frontal intervals and egocentric intervals are both compressed in VR. Frontal intervals may be matched relatively accurately in VR by walking measures because the compression of VR approximately offsets the overestimation that is normally observed in real environments. Walking actions are calibrated during normal use, but walking is normally used to cover egocentric distances, not frontal ones. Because frontal intervals appear larger than egocentric intervals, it should be expected that walking out frontal intervals will produce proportionally greater estimates than walking out egocentric intervals, even in VR. Meeting abstract presented at VSS 2012
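    The ratio argument in the preceding abstract can be written out schematically (notation introduced here, not in the abstract): suppose every extent of physical size X in VR is perceived with a common compression factor c, frontal extents additionally appear larger than equal egocentric extents by a factor k, and walked responses are proportional to perceived extent. Then

        % Walked responses to frontal (w_F) and egocentric (w_E) extents of the same physical size X
        \frac{w_F}{w_E} = \frac{c\,k\,X}{c\,X} = k
        % The frontal/egocentric ratio k survives the overall VR compression c,
        % so walking out frontal extents can look accurate whenever c·k ≈ 1,
        % without the frontal extents themselves being perceived veridically.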

    The Scaling Of Outdoor Space In Tilted Observers

    Perceptual matches between vertical extents (poles) and egocentric extents (between the observer and the pole) show that observers set themselves much too far from the pole, consistent with an under-perception of egocentric distance (Li, Phillips & Durgin, 2010). These matches can be modeled by assuming that angular gaze declination is exaggerated with a pitch-angle gain of 1.5. Matches to horizontal frontal extents suggest a lesser yaw-angle gain of about 1.2 (Li et al., in press). We tested whether angular biases in space perception were affected by observer orientation relative to vertical. Ninety-six observers were tested in four between-subjects conditions: (1) walking, (2) sitting upright on a cart, (3) lying sideways on a cart (tilted 90° from vertical), and (4) lying at an oblique angle (54° from vertical) on a cart. Each observer made three judgments: one egocentric matching judgment to a 10 m vertical pole (half started near and half far, adjusting their position to the apparent match location), one 45° gaze-elevation judgment to a 35 m tower (half started near and half far, adjusting their position until they appeared to be at 45° to the top of the tower), and one verbal height estimate of the 35 m tower. Upright and tilted observers showed similarly biased matches between egocentric distance and object height, and consistently biased apparent-45° gaze settings, consistent with the model proposed by Li et al. (2010). This suggests that exaggerations of gaze elevation and declination are referenced to the world rather than to the body. However, tilted observers gave reliably lower verbal estimates of tower height (geometric mean: 28 m) than did upright observers (45 m). Although eye height was similar across conditions, it may have been underestimated in the tilted conditions, which should reduce height estimates proportionally but not affect matching. Meeting abstract presented at VSS 2013
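    For concreteness, the biased apparent-45° settings follow from the 1.5 gaze-angle gain in the Li et al. model; here is a worked example using an assumed eye height of about 1.6 m, which is not given in the abstract. To make the top of the 35 m tower appear to lie at 45°, the physical elevation angle γ must satisfy 1.5γ = 45°, so γ = 30°, and the required viewing distance is

        % Worked example (assumed 1.6 m eye height)
        d = \frac{35\,\mathrm{m} - 1.6\,\mathrm{m}}{\tan 30^{\circ}} \approx 58\,\mathrm{m}
        % compared with about 33 m for a veridical 45° setting, i.e. observers
        % should stand well beyond the geometrically correct distance.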