13 research outputs found

    Ambiguity in high definition: Gaze determines physical interpretation of ambiguous rotation even in the absence of a visual context

    Physical interactions between objects, or between an object and the ground, are among the most biologically relevant events for living beings. Prior knowledge of Newtonian physics may help disambiguate an object’s movement, as may foveation, by increasing the spatial resolution of the visual input. Observers were shown a virtual 3D scene representing an ambiguously rotating ball translating on the ground. The ball was perceived as rotating congruently with friction, but only when gaze was located at the point of contact. Inverting or even removing the visual context had little influence on congruent judgements compared with the effect of gaze. Counterintuitively, then, gaze at the point of contact resolves the perceptual ambiguity, but independently of visual context. We suggest this constitutes a frugal strategy by which the brain infers dynamics locally when faced with an ambiguous foveated input. J.S. was funded by a College of Life Sciences studentship from the University of Leicester.
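    The friction-congruent interpretation described above corresponds to rolling without slipping, where translation speed fixes the angular speed. A minimal sketch (the radius and speed values are illustrative, not from the study):

```python
def congruent_angular_speed(v, r):
    """Angular speed (rad/s) of a ball of radius r (m) translating at
    speed v (m/s) under rolling without slipping -- the rotation that
    is congruent with friction at the point of ground contact."""
    return v / r

# Illustrative values: a 0.1 m radius ball translating at 0.5 m/s
omega = congruent_angular_speed(0.5, 0.1)  # 5.0 rad/s
```

    Any rendered rotation rate other than v/r (or rotation in the opposite sense) is incongruent with friction, which is the ambiguity the observers had to resolve.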

    Learning to Use Illumination Gradients as an Unambiguous Cue to Three Dimensional Shape

    The luminance and colour gradients across an image are the result of complex interactions between object shape, material and illumination. Using such variations to infer object shape or surface colour is therefore a difficult problem for the visual system. We know that changes to the shape of an object can affect its perceived colour, and that shading gradients confer a sense of shape. Here we investigate whether the visual system is able to effectively utilise these gradients as a cue to shape perception, even when additional cues are not available. We tested shape perception of a folded card object that contained illumination gradients in the form of shading and more subtle effects such as inter-reflections. Our results suggest that observers are able to use the gradients to make consistent shape judgements. In order to do this, observers must be given the opportunity to learn suitable assumptions about the lighting and scene. Using a variety of different training conditions, we demonstrate that learning can occur quickly and requires only coarse information. We also establish that learning does not deliver a trivial mapping between gradient and shape; rather, learning leads to the acquisition of assumptions about lighting and scene parameters that subsequently allow gradients to be used as a shape cue. The perceived shape is shown to be consistent for convex and concave versions of the object that exhibit very different shading, and also similar to that delivered by outline, a largely unrelated cue to shape. Overall, our results indicate that, although gradients are less reliable than some other cues, the relationship between gradients and shape can be quickly assessed, and gradients can therefore be used effectively as a visual shape cue.
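    The interaction of shape, material and illumination that produces a shading gradient can be illustrated with the standard Lambertian model (a textbook simplification, not the rendering model of the study; the albedo and normals below are made up for illustration):

```python
def lambertian(albedo, normal, light):
    """Luminance of a matte surface patch: albedo times the cosine of
    the angle between the unit surface normal and the unit light
    direction, clamped at zero for surfaces facing away from the light.
    Shape (normal), material (albedo) and illumination (light) are
    confounded in the single output value -- the ambiguity the visual
    system must resolve."""
    dot = sum(n * l for n, l in zip(normal, light))
    return albedo * max(0.0, dot)

# Two facets of a folded card under overhead light from (0, 0, 1):
bright = lambertian(0.8, (0.0, 0.0, 1.0), (0.0, 0.0, 1.0))  # 0.8
dimmer = lambertian(0.8, (0.0, 0.6, 0.8), (0.0, 0.0, 1.0))  # 0.64
```

    The same luminance difference could arise from a change in albedo rather than a change in surface orientation, which is why assumptions about the lighting must be learned before gradients can serve as a shape cue.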

    Effect of Pictorial Depth Cues, Binocular Disparity Cues and Motion Parallax Depth Cues on Lightness Perception in Three-Dimensional Virtual Scenes

    Surface lightness perception is affected by scene interpretation. There is some experimental evidence that perceived lightness under bi-ocular viewing conditions differs from perceived lightness in actual scenes, but there are also reports that viewing conditions have little or no effect on perceived color. We investigated how mixes of depth cues affect perception of lightness in three-dimensional rendered scenes containing strong gradients of illumination in depth. Observers viewed a virtual room (4 m width × 5 m height × 17.5 m depth) with checkerboard walls and floor. In four conditions, the room was presented with or without binocular disparity (BD) depth cues and with or without motion parallax (MP) depth cues. In all conditions, observers were asked to adjust the luminance of a comparison surface to match the lightness of test surfaces placed at seven different depths (8.5-17.5 m) in the scene. We estimated lightness versus depth profiles in all four depth cue conditions. Even when observers had only pictorial depth cues (no MP, no BD), they partially but significantly discounted the illumination gradient in judging lightness. Adding either MP or BD led to significantly greater discounting, and both cues together produced the greatest discounting. The effects of MP and BD were approximately additive. BD had greater influence at near distances than far. These results suggest that surface lightness perception is modulated by three-dimensional perception/interpretation using pictorial, binocular-disparity, and motion-parallax cues additively. We propose a two-stage (2D and 3D) processing model for lightness perception.
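    The additivity claim can be stated as a simple arithmetic check: the combined-cue effect, measured relative to the pictorial-only baseline, should equal the sum of the individual MP and BD effects. A sketch with hypothetical discounting indices (not the paper's data):

```python
def additivity_gap(base, mp, bd, both):
    """Deviation of the combined-cue condition from additivity.
    All arguments are discounting indices (0 = no discounting of the
    illumination gradient, 1 = full discounting); 'base' is the
    pictorial-only condition. A gap of zero means the MP and BD
    effects sum exactly."""
    return (both - base) - ((mp - base) + (bd - base))

# Hypothetical indices chosen to be exactly additive:
gap = additivity_gap(base=0.40, mp=0.55, bd=0.60, both=0.75)  # 0.0
```

    "Approximately additive" in the abstract then corresponds to this gap being small relative to the individual cue effects.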

    Getting depth from flat images

    No full text

    Reflections on colour constancy

    No full text

    Object perception as Bayesian inference

    No full text
    We perceive the shapes and material properties of objects quickly and reliably despite the complexity and objective ambiguities of natural images. Typical images are highly complex because they consist of many objects embedded in background clutter. Moreover, the image features of an object are extremely variable and ambiguous owing to the effects of projection, occlusion, background clutter, and illumination. The very success of everyday vision implies neural mechanisms, yet to be understood, that discount irrelevant information and organize ambiguous or noisy local image features into objects and surfaces. Recent work in Bayesian theories of visual perception has shown how complexity may be managed and ambiguity resolved through the task-dependent, probabilistic integration of prior object knowledge with image features.
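    The probabilistic integration of prior knowledge with image features is, at its core, Bayes' rule over competing scene hypotheses. A minimal sketch (the hypotheses and probability values are illustrative, not from the paper):

```python
def posterior(prior, likelihood):
    """Combine a prior over scene hypotheses with the likelihood of the
    observed image features under each hypothesis (Bayes' rule, with
    normalisation over the hypothesis set)."""
    unnorm = {h: prior[h] * likelihood[h] for h in prior}
    z = sum(unnorm.values())
    return {h: p / z for h, p in unnorm.items()}

# An ambiguous shading pattern equally consistent with a convex or a
# concave surface: the prior (e.g. a light-from-above assumption
# favouring convexity) resolves what the image alone cannot.
prior = {"convex": 0.7, "concave": 0.3}
likelihood = {"convex": 0.5, "concave": 0.5}
post = posterior(prior, likelihood)  # {'convex': 0.7, 'concave': 0.3}
```

    When the image evidence is uninformative, the posterior simply reproduces the prior; as the likelihood becomes more diagnostic, it increasingly dominates the percept, which is the trade-off at the heart of these Bayesian accounts.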