    Blood Oxygen Level-Dependent Activation of the Primary Visual Cortex Predicts Size Adaptation Illusion

    In natural scenes, objects rarely occur in isolation but appear within a spatiotemporal context. Here, we show that the perceived size of a stimulus is significantly affected by the context of the scene: brief prior presentation of larger or smaller adapting stimuli at the same region of space changes the perceived size of a test stimulus, with larger adapting stimuli causing the test to appear smaller than veridical and vice versa. In a human fMRI study, we measured the blood oxygen level-dependent (BOLD) responses of the primary visual cortex (V1) to the contours of large-diameter stimuli and found that activation closely matched the perceptual rather than the retinal stimulus size: the activated area of V1 increased or decreased, depending on the size of the preceding stimulus. A model based on local inhibitory V1 mechanisms simulated the inward or outward shifts of the stimulus contours and hence the perceptual effects. Our findings suggest that area V1 is actively involved in reshaping our perception to match the short-term statistics of the visual scene.
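    The abstract only summarises the modelling step, so here is a rough one-dimensional sketch of the general idea, not the authors' implementation: adaptation-induced local inhibition centred on the adapter's contour pulls the peak of the test response inward. All functions and parameter values below are arbitrary assumptions made for illustration.

    ```python
    # Toy sketch (not the published model): a 1-D map of V1 units tuned to
    # contour eccentricity. Adapting to a larger stimulus suppresses units
    # near the adapter's contour, so the peak of the test response shifts
    # inward -- the test "looks" smaller. All parameter values are arbitrary.
    import numpy as np

    ecc = np.linspace(0.0, 10.0, 1001)               # eccentricity axis (deg)

    def contour_response(radius, sigma=0.6):
        """Gaussian population response to a contour at the given radius."""
        return np.exp(-0.5 * ((ecc - radius) / sigma) ** 2)

    test_radius  = 5.0                               # test stimulus contour
    adapt_radius = 6.0                               # larger adapting stimulus

    test    = contour_response(test_radius)
    inhib   = 0.5 * contour_response(adapt_radius, sigma=1.2)  # adaptation-induced inhibition
    adapted = np.clip(test - inhib, 0.0, None)       # response after local suppression

    print(f"peak without adaptation: {ecc[np.argmax(test)]:.2f} deg")
    print(f"peak after adapting to a larger stimulus: {ecc[np.argmax(adapted)]:.2f} deg (shifted inward)")
    ```

    Placing the inhibition at a smaller radius instead would erode the inner flank of the response and push the peak outward, mirroring the opposite direction of the illusion.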

    Change blindness: eradication of gestalt strategies

    Arrays of eight texture-defined rectangles were used as stimuli in a one-shot change blindness (CB) task in which there was a 50% chance that one rectangle would change orientation between two successive presentations separated by an interval. CB was eliminated by cueing the target rectangle in the first stimulus, reduced by cueing in the interval, and unaffected by cueing in the second presentation. This supports the idea that a representation was formed that persisted through the interval before being 'overwritten' by the second presentation (Landman et al., 2003, Vision Research, 43, 149–164). Another possibility is that participants used some kind of grouping or Gestalt strategy. To test this, we changed the spatial position of the rectangles in the second presentation by shifting them along imaginary spokes (by ±1 degree) emanating from the central fixation point. There was no significant difference in performance between this and the standard task [F(1,4)=2.565, p=0.185]. This may suggest two things: (i) Gestalt grouping is not used as a strategy in these tasks, and (ii) it gives further weight to the argument that objects may be stored in and retrieved from a pre-attentional store during this task.

    A Neural Model of Surface Perception: Lightness, Anchoring, and Filling-in

    This article develops a neural model of how the visual system processes natural images under variable illumination conditions to generate surface lightness percepts. Previous models have clarified how the brain can compute the relative contrast of images from variably illuminated scenes. How the brain determines an absolute lightness scale that "anchors" percepts of surface lightness to use the full dynamic range of neurons remains an unsolved problem. Lightness anchoring properties include articulation, insulation, configuration, and area effects. The model quantitatively simulates these and other lightness data, such as discounting the illuminant, the double brilliant illusion, lightness constancy and contrast, Mondrian contrast constancy, and the Craik-O'Brien-Cornsweet illusion. The model also clarifies the functional significance for lightness perception of anatomical and neurophysiological data, including gain control at retinal photoreceptors and spatial contrast adaptation at the negative feedback circuit between the inner segment of photoreceptors and interacting horizontal cells. The model retina can thereby adjust its sensitivity to input intensities ranging from dim moonlight to dazzling sunlight. At later model cortical processing stages, boundary representations gate the filling-in of surface lightness via long-range horizontal connections. Variants of this filling-in mechanism run 100-1000 times faster than diffusion mechanisms of previous biological filling-in models, and show how filling-in can occur at realistic speeds. A new anchoring mechanism called the Blurred-Highest-Luminance-As-White (BHLAW) rule helps simulate how surface lightness becomes sensitive to the spatial scale of objects in a scene. The model is also able to process natural images under variable lighting conditions. Air Force Office of Scientific Research (F49620-01-1-0397); Defense Advanced Research Projects Agency and the Office of Naval Research (N00014-95-1-0409); Office of Naval Research (N00014-01-1-0624).
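    The BHLAW rule is only named above; as a loose sketch of what such an anchoring step could look like (an assumption-laden illustration, not the published model), one can blur the luminance image, take the highest blurred value as the white anchor, and rescale lightness relative to it. The blur width and white level below are placeholder values.

    ```python
    # Illustrative "Blurred-Highest-Luminance-As-White"-style anchoring step,
    # loosely following the verbal description in the abstract; this is not
    # the published model. Blur width and white level are arbitrary choices.
    import numpy as np
    from scipy.ndimage import gaussian_filter

    def bhlaw_lightness(luminance, blur_sigma=5.0, white=100.0):
        """Scale lightness so that the highest *blurred* luminance maps to white."""
        blurred = gaussian_filter(luminance, sigma=blur_sigma)
        anchor = blurred.max()                 # highest blurred luminance in the scene
        return white * luminance / anchor      # lightness relative to the anchor

    # The same scene under dim and 10x brighter illumination yields the same
    # lightness map, because the anchor scales with the illuminant.
    rng = np.random.default_rng(0)
    scene = rng.uniform(5.0, 50.0, size=(64, 64))
    print(bhlaw_lightness(scene).max(), bhlaw_lightness(10 * scene).max())
    ```

    Blurring before taking the maximum makes the anchor depend on the spatial extent of bright regions rather than on isolated bright pixels, which is one way such a rule can introduce sensitivity to the spatial scale of objects in the scene.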

    Low level constraints on dynamic contour path integration

    Contour integration is a fundamental visual process. The constraints on integrating discrete contour elements and the associated neural mechanisms have typically been investigated using static contour paths. However, in our dynamic natural environment, objects and scenes vary over space and time. With the aim of investigating the parameters affecting spatiotemporal contour path integration, we measured human contrast detection performance for a briefly presented foveal target embedded in dynamic collinear stimulus sequences (comprising five short 'predictor' bars appearing consecutively towards the fovea, followed by the 'target' bar) in four experiments. The data showed that participants' target detection performance was relatively unchanged when individual contour elements were separated by a spatial gap of up to 2° or a temporal gap of up to 200 ms. Randomising the luminance contrast or colour of the predictors, on the other hand, had a similarly detrimental effect on grouping of the dynamic contour path and on subsequent target detection performance. Randomising the orientation of the predictors reduced target detection performance more than introducing misalignment relative to the contour path. The results suggest that the visual system integrates dynamic path elements to bias target detection even when the continuity of the path is disrupted in terms of spatial (2°), temporal (200 ms), colour (over 10 colours), and luminance (-25% to 25%) information. We discuss how the findings can be largely reconciled within the functioning of V1 horizontal connections.
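    As a concrete picture of the paradigm (a hypothetical sketch with placeholder values, not the study's exact parameters), a dynamic collinear sequence of this kind can be described as five predictor bars stepping toward the fovea at a fixed spatial and temporal gap, followed by a low-contrast foveal target:

    ```python
    # Hypothetical stimulus specification for a dynamic collinear sequence:
    # five 'predictor' bars stepping toward the fovea, then a foveal 'target'.
    # All numeric values are placeholders, not the study's parameters.
    from dataclasses import dataclass

    @dataclass
    class Bar:
        eccentricity_deg: float    # distance from fixation along the path
        onset_ms: float            # time at which the bar appears
        duration_ms: float
        contrast: float            # luminance contrast
        orientation_deg: float     # collinear with the path unless randomised

    def collinear_sequence(n_predictors=5, spatial_gap_deg=2.0,
                           temporal_gap_ms=200.0, bar_ms=50.0,
                           predictor_contrast=0.25, target_contrast=0.05):
        bars = [Bar(eccentricity_deg=spatial_gap_deg * (n_predictors - i),
                    onset_ms=i * (bar_ms + temporal_gap_ms),
                    duration_ms=bar_ms,
                    contrast=predictor_contrast,
                    orientation_deg=0.0)
                for i in range(n_predictors)]
        # target at fixation, appearing after the last predictor
        bars.append(Bar(0.0, n_predictors * (bar_ms + temporal_gap_ms),
                        bar_ms, target_contrast, 0.0))
        return bars

    for bar in collinear_sequence():
        print(bar)
    ```

    The manipulations described in the abstract correspond to varying spatial_gap_deg and temporal_gap_ms, or randomising contrast, colour, orientation, or alignment across the predictor bars.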

    Incidental visual processing of spatiotemporal cues in communicative interactions: An fMRI investigation

    The interpretation of social interactions between people is important in many daily situations. The coordination of the relative body movements between them may provide visual cues that observers use, without attention, to discriminate such social interactions from the actions of people acting independently of each other. Previous studies highlighted brain regions involved in the visual processing of interacting versus independently acting people, including the posterior superior temporal sulcus (STS) and areas of lateral occipitotemporal and parietal cortices. Unlike these previous studies, we focused on the incidental visual processing of social interactions; that is, the processing of the body movements outside the observers’ focus of attention. In the current study, we used functional imaging to measure brain activation while participants were presented with point-light dyads portraying communicative interactions or individual actions. However, their task was to discriminate the brightness of two crosses that were also on the screen. To investigate brain regions that may process the spatial and temporal relationships between the point-light displays, we either reversed the facing direction of one agent or spatially scrambled the local motion of the points. Incidental processing of communicative interactions elicited activation in the right anterior STS only when the two agents were facing each other. Controlling for differences in local motion by subtracting brain activation to scrambled versions of the point-light displays revealed significant activation in parietal cortex for communicative interactions, as well as in the left amygdala and brain stem/cerebellum. Our results complement previous studies and suggest that additional brain regions may be recruited to incidentally process the spatial and temporal contingencies that distinguish people acting together from people acting individually.

    Decoding of EEG signals reveals non-uniformities in the neural geometry of colour

    The idea of colour opponency maintains that colour vision arises through the comparison of two chromatic mechanisms, red versus green and yellow versus blue. The four unique hues, red, green, blue, and yellow, are assumed to appear at the null points of these two chromatic systems. Here we hypothesise that, if unique hues represent a tractable cortical state, they should elicit more robust activity than other, non-unique hues. We use a spatiotemporal decoding approach to show that electroencephalographic (EEG) responses carry robust information about the tested isoluminant unique hues within a 100-350 ms window from stimulus onset. Decoding is possible in both passive and active viewing tasks, but is compromised when concurrent high luminance contrast is added to the colour signals. For large hue differences, the efficiency of hue decoding can be predicted by mutual distance in a nominally uniform perceptual colour space. However, for small perceptual neighbourhoods around unique hues, the encoding space shows pivotal non-uniformities, which suggest that anisotropies in neurometric hue spaces may reflect perceptual unique hues.
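    The decoding approach is described only in outline; a minimal sketch of time-resolved decoding of hue from EEG epochs (random placeholder data and a generic linear classifier, not the authors' pipeline) might look like this:

    ```python
    # Minimal time-resolved decoding sketch: train and cross-validate a linear
    # classifier on the channel pattern at each timepoint. Data are random
    # placeholders with shape (trials, channels, timepoints); labels are hues.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(1)
    n_trials, n_channels, n_times = 200, 64, 120
    epochs = rng.standard_normal((n_trials, n_channels, n_times))
    hues = rng.integers(0, 4, size=n_trials)          # e.g. four unique hues

    clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
    accuracy = np.array([cross_val_score(clf, epochs[:, :, t], hues, cv=5).mean()
                         for t in range(n_times)])

    print("peak decoding accuracy:", accuracy.max())
    ```

    With real data, the time course of `accuracy` would show where in the epoch (e.g. the 100-350 ms window reported above) hue information becomes decodable, and pairwise decoding accuracies could then be compared against distances in a perceptual colour space.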

    Cortical Dynamics of Language

    The human capability for fluent speech profoundly shapes interpersonal communication and, by extension, self-expression. Language is lost in millions of people each year due to trauma, stroke, neurodegeneration, and neoplasms, with devastating impact on social interaction and quality of life. The following investigations were designed to elucidate the neurobiological foundation of speech production, building towards a universal cognitive model of language in the brain. Understanding the dynamical mechanisms supporting cortical network behavior will significantly advance our understanding of how both focal and disconnection injuries yield neurological deficits, informing the development of therapeutic approaches.

    Beyond the classic receptive field: the effect of contextual stimuli

    Following the pioneering studies of the receptive field (RF), the concept gained further significance for visual perception through the discovery of input effects from beyond the classical RF. These studies demonstrated that neuronal responses could be modulated by stimuli outside their RFs, consistent with the perception of induced brightness, color, orientation, and motion. Lesion scotomata are similarly modulated perceptually from the surround by RFs that have migrated from the interior to the outer edge of the scotoma and in this way provide filling-in of the void; large RFs are advantageous for this task. In higher visual areas, such as the middle temporal and inferotemporal cortex, RFs increase in size and lose most of their retinotopic organization while encoding increasingly complex features. Whereas lower-level RFs mediate perceptual filling-in, contour integration, and figure–ground segregation, RFs at higher levels serve the perception of grouping by common fate, biological motion, and other biologically relevant stimuli, such as faces. Studies in alert monkeys freely viewing natural scenes showed that classical and nonclassical RFs cooperate in forming representations of the visual world. Today, our understanding of the mechanisms underlying the RF is undergoing a quantum leap. What started out as a hierarchical feedforward concept for simple stimuli, such as spots, lines, and bars, now refers to mechanisms involving ascending, descending, and lateral signal flow. By extension of the bottom-up paradigm, RFs are nowadays understood as adaptive processors, enabling the predictive coding of complex scenes. Top-down effects guiding attention and tuned to task-relevant information complement the bottom-up analysis.