
    Change blindness: eradication of gestalt strategies

    Arrays of eight texture-defined rectangles were used as stimuli in a one-shot change blindness (CB) task in which there was a 50% chance that one rectangle would change orientation between two successive presentations separated by an interval. CB was eliminated by cueing the target rectangle in the first stimulus, reduced by cueing in the interval, and unaffected by cueing in the second presentation. This supports the idea that a representation was formed that persisted through the interval before being 'overwritten' by the second presentation (Landman et al., 2003, Vision Research 43, 149–164). Another possibility is that participants used some kind of grouping or Gestalt strategy. To test this, we changed the spatial positions of the rectangles in the second presentation by shifting them along imaginary spokes (by ±1 degree) emanating from the central fixation point. There was no significant difference in performance between this and the standard task [F(1,4)=2.565, p=0.185]. This may suggest two things: (i) Gestalt grouping is not used as a strategy in these tasks, and (ii) it gives further weight to the argument that objects may be stored in and retrieved from a pre-attentional store during this task.
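
    To make the spatial manipulation concrete, here is a minimal Python sketch (not the authors' experimental code; the function name, ring layout, and per-item random sign are illustrative assumptions) of shifting each rectangle by ±1 degree along the imaginary spoke joining it to fixation:

    import numpy as np

    def spoke_shift(positions, fixation=(0.0, 0.0), shift_deg=1.0, rng=None):
        # Shift each item radially along the spoke from fixation through the item,
        # by +shift_deg or -shift_deg (degrees of visual angle), chosen per item.
        rng = np.random.default_rng() if rng is None else rng
        positions = np.asarray(positions, dtype=float)
        vectors = positions - np.asarray(fixation, dtype=float)       # spoke directions
        unit = vectors / np.linalg.norm(vectors, axis=1, keepdims=True)
        signs = rng.choice([-1.0, 1.0], size=(len(positions), 1))     # random +/- per item
        return positions + signs * shift_deg * unit

    # Illustrative layout: eight rectangles on a ring 4 degrees from fixation.
    angles = np.linspace(0.0, 2.0 * np.pi, 8, endpoint=False)
    ring = 4.0 * np.column_stack([np.cos(angles), np.sin(angles)])
    second_presentation = spoke_shift(ring)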

    Effects of stereoscopic disparity on early ERP components during classification of three-dimensional objects

    This study investigates the effects of stereo disparity on the perception of three-dimensional (3D) object shape. We tested the hypothesis that stereo input modulates the brain activity related to perceptual analyses of 3D shape configuration during image classification. High-density (256-channel) electroencephalography (EEG) was used to record the temporal dynamics of visual shape processing under conditions of two-dimensional (2D) and 3D visual presentation. On each trial, observers made image classification judgements ('Same'/'Different') about two briefly presented, multi-part, novel objects. On different-object trials, the stimuli could share volumetric parts but not the global 3D shape configuration, share the global 3D shape configuration but not the parts, or differ on both aspects. Analyses using mass univariate contrasts showed that the earliest sensitivity to 2D versus 3D viewing appeared as a negative deflection over posterior locations on the N1 component between 160 and 220 ms post-stimulus onset. Subsequently, event-related potential (ERP) modulations during the N2 time window between 240 and 370 ms were linked to image classification. N2 activity reflected two distinct components - an early N2 (240-290 ms) and a late N2 (290-370 ms) - that showed different patterns of responses to 2D and 3D input and differential sensitivity to 3D object structure. The results revealed that stereo input modulates the neural correlates of 3D object shape. We suggest that this reflects differential perceptual processing of object shape under conditions of stereo or mono input. These findings challenge current theories that attribute no functional role to stereo input during 3D shape perception.
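
    As a sketch of how mean amplitudes in the reported time windows could be extracted from epoched EEG data (illustrative only; the array shapes, sampling rate, and variable names are assumptions, and the study's actual analysis used mass univariate contrasts rather than this simple window averaging):

    import numpy as np

    def window_mean(epochs, times, t_start, t_end):
        # Mean amplitude per trial and channel within [t_start, t_end] seconds.
        # epochs: (n_trials, n_channels, n_samples); times: (n_samples,) in seconds.
        mask = (times >= t_start) & (times <= t_end)
        return epochs[:, :, mask].mean(axis=-1)

    # Illustrative data: 100 trials, 256 channels, 1000 Hz, -0.2 to 0.6 s epochs.
    times = np.arange(-0.2, 0.6, 0.001)
    epochs_2d = np.random.randn(100, 256, times.size)   # stand-in for the 2D condition
    epochs_3d = np.random.randn(100, 256, times.size)   # stand-in for the stereo (3D) condition

    windows = {"N1": (0.160, 0.220), "early N2": (0.240, 0.290), "late N2": (0.290, 0.370)}
    for name, (t0, t1) in windows.items():
        diff = window_mean(epochs_3d, times, t0, t1) - window_mean(epochs_2d, times, t0, t1)
        print(name, "mean 3D-2D amplitude difference:", diff.mean())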

    Mental simulation for grounding object cognition

    Grounded (embodied) theories of cognition propose that memory, including knowledge and meaning, is grounded in sensorimotor and mental state processes. The main proposed mechanism for how memory is grounded is mental simulation. Simulation occurs when neural activity in modal association cortex triggers time-locked, recurrent and feedback activity across multiple lower-level modal processing areas from which the memory was initially constructed. Through this distributed multi-regional activity, seeing an object or reading its name (e.g., “dog”) re-enacts associated features that were stored during earlier learning experiences (e.g., its shape, color, motion, and actions with it), thereby constructing cognition, memory, and meaning. This paper reviews convergent evidence from the cognitive neuroscience of mental imagery, object cognition, and memory that supports a multiple-state interactive (MUSI) account of automatic and strategic mental simulation mechanisms that can ground the memory and meaning of objects in modal processing of visual features.

    Experimental Evidence for Top-Down Attentional Selection in the Selective Tuning Model of Visual Attention

    To overcome limited processing capacity, our visual system facilitates information that relates to the task at hand while inhibiting irrelevant information via selective attention. Among various attention models and theories, the Selective Tuning model of visual attention (ST) is a computational model of visual processing based on biological mechanisms. This model emphasizes the role of top-down feedback processing in visual perception and has predicted its unique consequences, such as attentional surround suppression, in which the attentional focus is accompanied by an inhibitory surround. Previous studies have experimentally validated ST's predictions, indicating that the components of ST reflect actual visual processing in the brain. Nevertheless, many aspects of ST still need to be elaborated, and several predictions and assumptions remain untested. The series of works in this dissertation investigates different aspects of the top-down feedback processing in visual perception that ST has proposed, in order to corroborate this model and to broaden our understanding of visual attention. The first study examined whether top-down feedback processing is necessary for attention-demanding, fine-grained visual localization (Chapter 2). The subsequent two studies focused on the properties of different types of attentional surround suppression, the end result of top-down feedback processing. The second study suggested an interplay between location-based and feature-based surround suppression and tested potential factors that could manipulate the spatial extent of the location-based suppressive surround (Chapter 3). The last study demonstrated feature-based surround suppression in motion processing and its neurophysiological mechanism (Chapter 4). Collectively, this work reinforces the functional significance of top-down, attention-mediated feedback for visual processing and supports the validity of ST.
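
    For intuition, attentional surround suppression of the kind ST predicts can be pictured as a gain profile that enhances processing at the attended location and suppresses it in a ring around the focus. The sketch below models this as a simple difference of Gaussians; it is a hypothetical illustration, not the Selective Tuning implementation, and all parameter values are assumptions:

    import numpy as np

    def attentional_gain(distance, sigma_center=1.0, sigma_surround=3.0,
                         center_amp=1.0, surround_amp=0.5):
        # Difference-of-Gaussians gain: enhancement at the attended location,
        # suppression in the surround, returning to baseline (1.0) far away.
        center = center_amp * np.exp(-(distance ** 2) / (2 * sigma_center ** 2))
        surround = surround_amp * np.exp(-(distance ** 2) / (2 * sigma_surround ** 2))
        return 1.0 + center - surround

    distances = np.linspace(0, 10, 11)   # degrees from the attended location
    print(np.round(attentional_gain(distances), 3))
    # Gain > 1 near the focus, dips below 1 in the surround, and returns to ~1 far away.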

    Impairment of perceptual closure in autism for vertex- but not edge-defined object images

    One of the characteristics of autism spectrum disorder (ASD) is atypical sensory processing and perceptual integration. Here, we used an object naming task to test the significance of deleting vertices versus extended contours (edges) in naming fragmented line drawings of natural objects in typically developing and ASD children. In perceptual closure, the basic components of a fragmented image must be integrated to form a coherent visual percept. When vertices were missing and only edges were visible, typically developing and ASD subjects performed similarly. However, typically developing children performed significantly better than ASD children when only vertex information was visible. These results indicate an impairment in binding vertices, but not edges, into a holistic representation of an object in children with ASD.
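
    A minimal sketch of the two deletion conditions, assuming each object contour is represented as a closed polygon; the function name, deletion fraction, and square example are hypothetical and not the stimuli used in the study:

    import numpy as np

    def fragment_contour(vertices, delete_at="vertex", frac=0.3):
        # Split a closed polygon into visible segments, deleting a fraction of each
        # edge either around its endpoints (vertex deletion) or around its midpoint
        # (edge deletion). Returns a list of (start_point, end_point) pairs.
        vertices = np.asarray(vertices, dtype=float)
        n = len(vertices)
        visible = []
        for i in range(n):
            a, b = vertices[i], vertices[(i + 1) % n]
            if delete_at == "vertex":
                # keep the middle of each edge; hide the parts adjoining both vertices
                visible.append((a + frac / 2 * (b - a), b - frac / 2 * (b - a)))
            else:
                # keep both ends of each edge; hide a gap around its midpoint
                visible.append((a, a + (0.5 - frac / 2) * (b - a)))
                visible.append((b - (0.5 - frac / 2) * (b - a), b))
        return visible

    square = [(0, 0), (1, 0), (1, 1), (0, 1)]
    print(len(fragment_contour(square, "vertex")), len(fragment_contour(square, "edge")))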

    Memory Influences Visual Cognition across Multiple Functional States of Interactive Cortical Dynamics

    Memory supports a wide range of abilities, from categorical perception to goal-directed behavior such as decision-making and episodic recognition. Memory is activated quickly and surprisingly accurately, even when information is ambiguous or impoverished (i.e., showing object constancy). This paper proposes the multiple-state interactive (MUSI) account of object cognition, which attempts to explain how sensory stimulation activates memory across multiple functional states of neural dynamics, including automatic and strategic mental simulation mechanisms that can ground cognition in modal information processing. A key novel postulate of this account is ‘multiple-function regional activity’: the same neuronal population can contribute to multiple brain states, depending upon the dominant set of inputs at that time. In state 1, the initial fast bottom-up pass through posterior neocortex happens between 95 ms and ~200 ms, with knowledge supporting categorical perception by 120 ms. In state 2, starting around 200 ms, a sustained state of iterative activation of object-sensitive cortex involves bottom-up, recurrent, and feedback interactions with frontoparietal cortex. This supports higher cognitive functions associated with decision-making even under ambiguous or impoverished conditions, phenomenological consciousness, and automatic mental simulation. In the latest state so far identified, state M, starting around 300 to 500 ms, large-scale cortical network interactions, including between multiple networks (e.g., control, salience, and especially default mode), further modulate posterior cortex. This supports elaborated cognition based on earlier processing, including episodic memory, strategic mental simulation, decision evaluation, creativity, and access consciousness. Convergent evidence is reviewed from the cognitive neuroscience of object cognition, decision-making, memory, and mental imagery that supports this account and defines the brain regions and time course of these brain dynamics.

    The hippocampus and cerebellum in adaptively timed learning, recognition, and movement

    The concepts of declarative memory and procedural memory have been used to distinguish two basic types of learning. A neural network model suggests how such memory processes work together as recognition learning, reinforcement learning, and sensory-motor learning take place during adaptive behaviors. To coordinate these processes, the hippocampal formation and cerebellum each contain circuits that learn to adaptively time their outputs. Within the model, hippocampal timing helps to maintain attention on motivationally salient goal objects during variable task-related delays, and cerebellar timing controls the release of conditioned responses. This property is part of the model's description of how cognitive-emotional interactions focus attention on motivationally valued cues, and how this process breaks down due to hippocampal ablation. The model suggests that the hippocampal mechanisms that help to rapidly draw attention to salient cues could prematurely release motor commands were the release of these commands not adaptively timed by the cerebellum. The model's hippocampal system modulates cortical recognition learning without actually encoding the representational information that the cortex encodes. These properties avoid the difficulties faced by several models that propose a direct hippocampal role in recognition learning. Learning within the model's hippocampal system controls adaptive timing and spatial orientation. Model properties hereby clarify how hippocampal ablations cause amnesic symptoms and difficulties with tasks that combine task delays, novelty detection, and attention towards goal objects amid distractions. When these model recognition, reinforcement, sensory-motor, and timing processes work together, they suggest how the brain can accomplish conditioning of multiple sensory events to delayed rewards, as during serial compound conditioning.
    Funding: Air Force Office of Scientific Research (F49620-92-J-0225, F49620-86-C-0037, 90-0128); Advanced Research Projects Agency (ONR N00014-92-J-4015); Office of Naval Research (N00014-91-J-4100, N00014-92-J-1309, N00014-92-J-1904); National Institute of Mental Health (MH-42900).
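
    For intuition about adaptively timed learning in general (a toy sketch of the principle only, not the hippocampal or cerebellar circuits of the model; all values are assumptions): a bank of traces peaking at different delays can learn weights at the moment of delayed reinforcement, so that their weighted sum later peaks near that learned delay.

    import numpy as np

    # A spectrum of traces, each peaking at a different delay after stimulus onset.
    t = np.linspace(0.0, 2.0, 2001)                    # seconds after CS onset
    peaks = np.linspace(0.1, 1.9, 40)                  # each unit's preferred delay
    traces = np.exp(-((t[None, :] - peaks[:, None]) ** 2) / (2 * 0.05 ** 2))

    reward_time = 0.8                                   # delay of the reinforcement
    # Hebbian-style learning: weights are the trace values sampled at reward time.
    weights = traces[:, np.argmin(np.abs(t - reward_time))]

    output = weights @ traces                           # adaptively timed response
    print("learned response peaks at ~%.2f s" % t[np.argmax(output)])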