
    Sensorimotor adaptation reveals systematic biases in 3D perception

    Funding: This research was supported by the National Science Foundation under Grant No. 2120610.
    The existence of biases in visual perception and their impact on visually guided actions has long been a fundamental yet unresolved question. Evidence revealing perceptual or visuomotor biases has typically been disregarded because such biases in spatial judgments can often be attributed to experimental measurement confounds. To resolve this controversy, we leveraged the visuomotor system's adaptation mechanism, which is triggered only by a discrepancy between visual estimates and sensory feedback, to directly reveal whether systematic biases or errors in perceptual and visuomotor spatial judgments exist. In a within-subject study (N = 24), participants grasped a virtual 3D object rendered with varying numbers of depth cues (single vs. multiple) while receiving haptic feedback. The resulting visuomotor adaptations and aftereffects demonstrated that the planned grip size, determined by the visually perceived depth of the object, was consistently overestimated. This overestimation intensified when multiple cues were present, despite no actual change in physical depth. These findings confirm the presence of inherent biases in visual estimates for both perception and action, and highlight the potential of visuomotor adaptation as a novel tool for understanding perceptual biases. Peer reviewed

    Misperception of rigidity from actively generated optic flow

    It is conventionally assumed that the goal of the visual system is to derive a perceptual representation that is a veridical reconstruction of the external world: a reconstruction that leads to optimal accuracy and precision of metric estimates, given sensory information. For example, 3-D structure is thought to be veridically recovered from optic flow signals in combination with egocentric motion information and assumptions of the stationarity and rigidity of the external world. This theory predicts veridical perceptual judgments under conditions that mimic natural viewing, while ascribing nonoptimality under laboratory conditions to unreliable or insufficient sensory information, for example, the lack of natural and measurable observer motion. In two experiments, we contrasted this optimal theory with a heuristic theory that predicts the derivation of perceived 3-D structure based on the velocity gradients of the retinal flow field without the use of egomotion signals or a rigidity prior. Observers viewed optic flow patterns generated by their own motions relative to two surfaces and later viewed the same patterns while stationary. When the surfaces were part of a rigid structure, static observers systematically perceived a nonrigid structure, consistent with the predictions of both an optimal and a heuristic model. Contrary to the optimal model, moving observers also perceived nonrigid structures in situations where retinal and extraretinal signals, combined with a rigidity assumption, should have yielded a veridical rigid estimate. The perceptual biases were, however, consistent with a heuristic model that is based only on an analysis of the optic flow.
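    The heuristic account in this abstract can be illustrated with a minimal sketch, under stated assumptions: perceived slant is read off the local velocity gradient of the retinal flow alone, with no egomotion compensation and no rigidity prior, so active and passive viewing of the same flow field yield the same (possibly nonveridical) percept. All function names, the linear flow field, and the gain `k` below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def velocity_gradient(y, v):
    """Least-squares slope of horizontal image velocity v across
    vertical image position y (deg/s per deg of visual angle)."""
    A = np.vstack([y, np.ones_like(y)]).T
    slope, _intercept = np.linalg.lstsq(A, v, rcond=None)[0]
    return slope

def heuristic_slant(y, v, k=1.0):
    """Hypothetical gradient-based heuristic: perceived slant (deg)
    grows with the flow gradient, scaled by an arbitrary gain k.
    No extra-retinal (head-motion) signal enters the computation."""
    return np.degrees(np.arctan(k * velocity_gradient(y, v)))

# A linear flow field produces the same gradient whether it was
# generated by object rotation (passive) or head translation (active),
# so this heuristic predicts identical percepts in both cases.
y = np.linspace(-5, 5, 11)   # vertical image position (deg)
v = 0.3 * y                  # horizontal image velocity (deg/s)
print(f"perceived slant: {heuristic_slant(y, v):.1f} deg")
```

    The point of the sketch is structural: because only the retinal gradient enters the estimate, the model cannot distinguish a rigid scene viewed by a moving observer from a nonrigid scene viewed by a static one, which is the pattern of bias the experiments report.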

    Recovery of 3-D structure from motion is neither euclidean nor affine.


    Distortions of depth-order relations and parallelism in structure from motion

    Four experiments related human perception of depth–order relations in structure-from-motion displays to current Euclidean and affine theories of depth recovery from motion. Discrimination between parallel and nonparallel lines and relative-depth judgments were observed for orthographic projections of rigidly oscillating random-dot surfaces. We found that (1) depth–order relations were perceived veridically for surfaces with the same slant magnitudes, but were systematically biased for surfaces with different slant magnitudes; (2) parallel (virtual) lines defined by probe dots on surfaces with different slant magnitudes were judged to be nonparallel; and (3) relative-depth judgments were internally inconsistent for probe dots on surfaces with different slant magnitudes. It is argued that both veridical performance and systematic misperceptions may be accounted for by a heuristic analysis of the first-order optic flow.
    Appropriate 2-D motions produce phenomenal impressions of movement in depth (see, e.g., Miles, 1931; Musatti, 1924; Wallach & O'Connell, 1953). Certain types of these phenomena have been named structure from motion (SFM). The questions of how these impressions arise and what type of geometric structure is derived from these motions have led to both experimental and theoretical work on depth recovery from motion. The psychophysical research has evaluated the capabilities of the human visual system in light of the constraints and the scope of the algorithms devised to derive 3-D geometric properties from 2-D motions (for a review, see Braunstein

    Bayesian Modeling of Perceived Surface Slant from Actively-Generated and Passively-Observed Optic Flow

    We measured perceived depth from the optic flow (a) when showing a stationary physical or virtual object to observers who moved their head at a normal or slower speed, and (b) when simulating the same optic flow on a computer and presenting it to stationary observers. Our results show that perceived surface slant is systematically distorted, for both the active and the passive viewing of physical or virtual surfaces. These distortions are modulated by head translation speed, with perceived slant increasing directly with the local velocity gradient of the optic flow. This empirical result allows us to determine the relative merits of two alternative approaches aimed at explaining perceived surface slant in active vision: an "inverse optics" model that takes head motion information into account, and a probabilistic model that ignores extra-retinal signals. We compare these two approaches within the framework of the Bayesian theory. The "inverse optics" Bayesian model produces veridical slant estimates if the optic flow and the head translation velocity are measured with no error; because of the influence of a "prior" for flatness, the slant estimates become systematically biased as the measurement errors increase. The Bayesian model, which ignores the observer's motion, always produces distorted estimates of surface slant. Interestingly, the predictions of this second model, not those of the first one, are consistent with our empirical findings. The present results suggest that (a) in active vision perceived surface slant may be the product of probabilistic processes which do not guarantee the correct solution, and (b) extra-retinal signals may be mainly used for a better measurement of retinal information.
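    The role of the flatness prior described in this abstract can be sketched with a simple Gaussian cue-combination example. Under the standard assumptions of a Gaussian likelihood around the flow-based slant measurement and a zero-slant ("flat") Gaussian prior, the posterior mean is a precision-weighted average that shrinks the estimate toward flat, and the shrinkage grows with measurement noise. Parameter names and the numerical values below are illustrative assumptions, not the paper's fitted model.

```python
# Minimal sketch of Bayesian slant estimation with a flatness prior:
# likelihood N(measured_slant, sigma_meas^2), prior N(0, sigma_prior^2).
# The posterior mean is the precision-weighted combination of the two.

def posterior_slant(measured_slant, sigma_meas, sigma_prior):
    """Posterior mean slant (deg) for Gaussian likelihood and a
    zero-mean (flat-surface) Gaussian prior."""
    w = sigma_prior**2 / (sigma_prior**2 + sigma_meas**2)
    return w * measured_slant

true_slant = 30.0  # deg, hypothetical simulated slant
for sigma_meas in (1.0, 5.0, 15.0):
    est = posterior_slant(true_slant, sigma_meas, sigma_prior=10.0)
    print(f"measurement noise {sigma_meas:>4} deg -> estimate {est:5.1f} deg")
```

    This reproduces the qualitative claim in the abstract: with error-free measurements the estimate is nearly veridical, and the bias toward flatness increases systematically as measurement error grows.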

    Perceived Surface Slant Is Systematically Biased in the Actively-Generated Optic Flow

    Humans make systematic errors in the 3D interpretation of the optic flow in both passive and active vision. These systematic distortions can be predicted by a biologically-inspired model which disregards self-motion information resulting from head movements (Caudek, Fantoni, & Domini, 2011). Here, we tested two predictions of this model: (1) a plane that is stationary in an earth-fixed reference frame will be perceived as changing its slant if the movement of the observer's head causes a variation of the optic flow; (2) a surface that rotates in an earth-fixed reference frame will be perceived to be stationary, if the surface rotation is appropriately yoked to the head movement so as to generate a variation of the surface slant but not of the optic flow. Both predictions were corroborated by two experiments in which observers judged the perceived slant of a random-dot planar surface during egomotion. We found qualitatively similar biases for monocular and binocular viewing of the simulated surfaces, although, in principle, the simultaneous presence of disparity and motion cues allows for a veridical recovery of surface slant.