    Sustained directional biases in motion transparency

    In motion transparency, one surface is very often seen on top of the other despite the absence of any explicit depth cue in the display. We investigated the dynamics of depth assignment in motion-transparency stimuli composed of random dots moving in opposite directions. As with other bistable percepts, which surface is seen in front is arbitrary and changes over time. In addition, we found that aiding the segregation of the two surfaces, by giving all dots of one surface the same color, significantly slowed the initial rate of depth reversals. We also measured preferences for seeing one particular motion direction in front. Unexpectedly, all 34 of our observers showed a strong bias to see a particular motion direction in front, and this preferred direction was usually either downward or rightward. In contrast, there was no consistency in seeing the fastest or slowest surface in front. Finally, each observer's preferred motion direction seen in front was very stable across several days, suggesting that a trace of this arbitrary motion preference is kept in memory.
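
    The reversal dynamics described above can be caricatured as a Poisson switching process, as in the sketch below. This is a minimal illustration, not the authors' model: the reversal rates, the exponential dwell-time assumption, and the size of the color-segregation effect are all placeholders.

    ```python
    import random

    def simulate_reversals(duration_s, rate_hz, seed=None):
        """Simulate bistable depth reversals as a Poisson process:
        dwell times in each front-surface state are exponentially
        distributed with mean 1 / rate_hz."""
        rng = random.Random(seed)
        t, front, events = 0.0, "down-in-front", []
        while True:
            t += rng.expovariate(rate_hz)  # time of the next reversal
            if t >= duration_s:
                break
            front = "up-in-front" if front == "down-in-front" else "down-in-front"
            events.append((round(t, 1), front))
        return events

    # Illustrative rates only: color-segregating one surface is assumed
    # here to halve the reversal rate, mimicking the reported slowdown.
    mixed = simulate_reversals(60, rate_hz=0.4, seed=1)
    segregated = simulate_reversals(60, rate_hz=0.2, seed=1)
    print(len(mixed), "reversals (mixed) vs", len(segregated), "(color-segregated)")
    ```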

    Genetic Algorithm for Line Labeling of Diagrams Having Drawing Cues


    Effects on orientation perception of manipulating the spatio-temporal prior probability of stimuli

    Spatial and temporal regularities are common in natural visual scenes, and knowledge of their probability structure is likely to be informative for an efficient visual system. Here we explored how manipulating the spatio-temporal prior probability of stimuli affects human orientation perception. Stimulus sequences comprised four collinear bars (predictors) that appeared successively towards the foveal region, followed by a target bar with the same or a different orientation. Subjects' orientation perception of the foveal target was biased towards the orientation of the predictors when these were presented in a highly ordered and predictable sequence. Discrimination thresholds were significantly elevated in proportion to the increasing prior probability of the predictors. Breaking this sequence, by randomising presentation order or presentation duration, decreased the thresholds. These psychophysical observations are consistent with a Bayesian model in which a predictable spatio-temporal stimulus structure and an increased probability of collinear trials raise the prior expectation of collinear events. Our results suggest that statistical spatio-temporal stimulus regularities are effectively integrated by human visual cortex over a range of spatial and temporal positions, thereby systematically affecting perception.
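
    Under Gaussian assumptions, the Bayesian account invoked here reduces to precision-weighted averaging of the sensory measurement and the prior set by the predictors. The sketch below is illustrative only; the orientations and noise parameters are made-up values, not the paper's fits.

    ```python
    def perceived_orientation(measured_deg, sigma_sensory, prior_deg, sigma_prior):
        """Posterior mean for a Gaussian prior times a Gaussian likelihood.
        A tighter prior (higher prior probability of collinear events)
        pulls the estimate toward the predictor orientation."""
        w_prior = 1.0 / sigma_prior ** 2
        w_sense = 1.0 / sigma_sensory ** 2
        return (w_prior * prior_deg + w_sense * measured_deg) / (w_prior + w_sense)

    # Predictors at 0 deg, target physically at 5 deg (illustrative values).
    for sigma_prior in (20.0, 10.0, 5.0):  # tighter prior = more predictable sequence
        est = perceived_orientation(5.0, sigma_sensory=4.0,
                                    prior_deg=0.0, sigma_prior=sigma_prior)
        print(f"sigma_prior={sigma_prior:4.1f} deg -> perceived {est:.2f} deg")
    ```

    As the prior tightens, the perceived orientation shifts from the physical 5 deg toward the predictors' 0 deg, reproducing the reported bias.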

    Interaction of perceptual grouping and crossmodal temporal capture in tactile apparent-motion

    Previous studies have shown that, in tasks requiring participants to report the direction of apparent motion, task-irrelevant mono-beeps can "capture" visual motion perception when the beeps occur temporally close to the visual stimuli. However, the contributions of the relative timing of multimodal events and of the event structure, which modulates uni- and/or crossmodal perceptual grouping, remain unclear. To examine this question and extend the investigation to the tactile modality, the current experiments presented tactile two-tap apparent-motion streams, with an SOA of 400 ms between successive left-/right-hand middle-finger taps, accompanied by task-irrelevant, non-spatial auditory stimuli. The streams were shown for 90 seconds, and participants' task was to continuously report the perceived (left- or rightward) direction of tactile motion. In Experiment 1, each tactile stimulus was paired with an auditory beep, though odd-numbered taps were paired with an asynchronous beep, with audiotactile SOAs ranging from -75 ms to 75 ms. The perceived direction of tactile motion varied systematically with audiotactile SOA, indicative of a temporal-capture effect. In Experiment 2, two audiotactile SOAs were compared: one short (75 ms) and one long (325 ms). The long-SOA condition preserved the crossmodal event structure (so the temporal-capture dynamics should have been similar to those in Experiment 1), but both beeps now occurred temporally close to the taps on one side (the even-numbered taps). The two SOAs were found to produce opposite modulations of apparent motion, indicative of an influence of crossmodal grouping. In Experiment 3, only odd-numbered, but not even-numbered, taps were paired with auditory beeps. This abolished the temporal-capture effect; instead, a dominant percept of apparent motion from the audiotactile side to the tactile-only side was observed, independently of the SOA variation. These findings suggest that asymmetric crossmodal grouping leads to an attentional modulation of apparent motion, which inhibits crossmodal temporal-capture effects.
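
    To make the timing manipulations concrete, the sketch below generates an event timeline for an Experiment 1-style stream. The function name and output format are invented for illustration; only the 400 ms tap SOA, the alternating sides, and the asynchronous beeps on odd-numbered taps come from the abstract.

    ```python
    def tap_beep_timeline(n_taps=6, tap_soa_ms=400, audiotactile_soa_ms=75):
        """Build (time_ms, event) pairs for one tactile apparent-motion stream.
        Taps alternate left/right every 400 ms; every tap is paired with a
        beep, but odd-numbered taps (1st, 3rd, ...) get theirs shifted by
        the audiotactile SOA (the study used SOAs from -75 to 75 ms)."""
        events = []
        for i in range(n_taps):
            t_tap = i * tap_soa_ms
            side = "left" if i % 2 == 0 else "right"
            events.append((t_tap, f"tap-{side}"))
            shift = audiotactile_soa_ms if (i + 1) % 2 == 1 else 0
            events.append((t_tap + shift, "beep"))
        return sorted(events)

    for t, ev in tap_beep_timeline(n_taps=4):
        print(f"{t:5d} ms  {ev}")
    ```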

    Perception of Shadows in Children with Autism Spectrum Disorders

    Background: Cast shadows in visual scenes can have profound effects on visual perception. Informative as they are, they also constitute noise: they are salient features of the visual scene that can interfere with the processing of other features. Here we asked (i) whether individuals with autism can exploit the information conveyed by cast shadows, and (ii) whether they are especially sensitive to the noise aspects of shadows. Methodology/Principal Findings: Twenty high-functioning children with autism and twenty typically developing children were asked to recognize familiar objects while the presence, position, and shape of the cast shadow were systematically manipulated. Analysis of vocal reaction times revealed that whereas typically developing children used information from cast shadows to improve object recognition, in children with autism the presence of cast shadows, whether congruent or incongruent, interfered with object recognition. Critically, their vocal reaction times were faster when the object was presented without a cast shadow. Conclusions/Significance: We conclude that shadow-processing mechanisms are abnormal in autism. As a result, processing shadows becomes costly, and cast shadows interfere with rather than aid object recognition.

    Effect of Pictorial Depth Cues, Binocular Disparity Cues and Motion Parallax Depth Cues on Lightness Perception in Three-Dimensional Virtual Scenes

    Surface lightness perception is affected by scene interpretation. There is some experimental evidence that perceived lightness under bi-ocular viewing conditions differs from perceived lightness in actual scenes, but there are also reports that viewing conditions have little or no effect on perceived color. We investigated how mixes of depth cues affect the perception of lightness in three-dimensional rendered scenes containing strong gradients of illumination in depth. Observers viewed a virtual room (4 m width × 5 m height × 17.5 m depth) with checkerboard walls and floor. In four conditions, the room was presented with or without binocular-disparity (BD) depth cues and with or without motion-parallax (MP) depth cues. In all conditions, observers adjusted the luminance of a comparison surface to match the lightness of test surfaces placed at seven different depths (8.5-17.5 m) in the scene. We estimated lightness-versus-depth profiles in all four depth-cue conditions. Even when observers had only pictorial depth cues (no MP, no BD), they partially but significantly discounted the illumination gradient in judging lightness. Adding either MP or BD led to significantly greater discounting, and both cues together produced the greatest discounting. The effects of MP and BD were approximately additive. BD had greater influence at near distances than at far distances. These results suggest that surface lightness perception is modulated by three-dimensional perception/interpretation using pictorial, binocular-disparity, and motion-parallax cues additively. We propose a two-stage (2D and 3D) processing model for lightness perception.
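
    The reported additivity can be summarized by a simple linear model of the amount of illumination-gradient discounting, sketched below. The coefficients are placeholders chosen only to show the structure; they are not fitted values from the study.

    ```python
    def discounting(pictorial=True, motion_parallax=False, binocular_disparity=False,
                    base=0.4, beta_mp=0.2, beta_bd=0.2):
        """Additive summary of illumination-gradient discounting:
        a pictorial-cue baseline plus independent contributions of
        motion parallax (MP) and binocular disparity (BD)."""
        d = base if pictorial else 0.0
        d += beta_mp * motion_parallax + beta_bd * binocular_disparity
        return d

    for mp in (False, True):
        for bd in (False, True):
            d = discounting(motion_parallax=mp, binocular_disparity=bd)
            print(f"MP={mp!s:5}  BD={bd!s:5}  discounting={d:.2f}")
    ```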

    Learning to Use Illumination Gradients as an Unambiguous Cue to Three Dimensional Shape

    The luminance and colour gradients across an image are the result of complex interactions between object shape, material, and illumination. Using such variations to infer object shape or surface colour is therefore a difficult problem for the visual system. We know that changes to the shape of an object can affect its perceived colour, and that shading gradients confer a sense of shape. Here we investigate whether the visual system can use these gradients effectively as a cue to shape perception even when additional cues are not available. We tested shape perception of a folded-card object that contained illumination gradients in the form of shading and subtler effects such as inter-reflections. Our results suggest that observers are able to use the gradients to make consistent shape judgements. In order to do this, observers must be given the opportunity to learn suitable assumptions about the lighting and scene. Using a variety of training conditions, we demonstrate that learning can occur quickly and requires only coarse information. We also establish that learning does not deliver a trivial mapping between gradient and shape; rather, learning leads to the acquisition of assumptions about lighting and scene parameters that subsequently allow gradients to be used as a shape cue. The perceived shape is shown to be consistent for convex and concave versions of the object that exhibit very different shading, and is also similar to that delivered by outline, a largely unrelated cue to shape. Overall, our results indicate that, although gradients are less reliable than some other cues, the relationship between gradients and shape can be quickly assessed, and gradients can therefore be used effectively as a visual shape cue.
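
    Why gradients alone underdetermine shape, and why lighting assumptions must be learned, can be seen in a toy Lambertian calculation: a convex fold lit from one side produces exactly the same two-face shading as a concave fold lit from the other side. The geometry, angles, and lighting directions below are illustrative choices, not the stimuli used in the study.

    ```python
    import math

    def lambertian(normal_deg, light_deg):
        """Lambertian luminance of a face whose surface normal is at
        normal_deg from vertical, lit from light_deg: max(0, cos(n - l))."""
        return max(0.0, math.cos(math.radians(normal_deg - light_deg)))

    # Folded card in cross-section: two faces with normals at -30 and +30 deg.
    convex, concave = (-30, +30), (+30, -30)
    left_light, right_light = -45, +45

    print([round(lambertian(n, left_light), 3) for n in convex])    # [0.966, 0.259]
    print([round(lambertian(n, right_light), 3) for n in concave])  # identical shading
    ```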