135,277 research outputs found

    Cortical Dynamics of Contextually-Cued Attentive Visual Learning and Search: Spatial and Object Evidence Accumulation

    How do humans use predictive contextual information to facilitate visual search? How are consistently paired scenic objects and positions learned and used to more efficiently guide search in familiar scenes? For example, a certain combination of objects can define a context for a kitchen and trigger a more efficient search for a typical object, such as a sink, in that context. A neural model, ARTSCENE Search, is developed to illustrate the neural mechanisms of such memory-based contextual learning and guidance, and to explain challenging behavioral data on positive/negative, spatial/object, and local/distant global cueing effects during visual search. The model proposes how global scene layout at a first glance rapidly forms a hypothesis about the target location. This hypothesis is then incrementally refined by enhancing target-like objects in space as a scene is scanned with saccadic eye movements. The model clarifies the functional roles of neuroanatomical, neurophysiological, and neuroimaging data in visual search for a desired goal object. In particular, the model simulates the interactive dynamics of spatial and object contextual cueing in the cortical What and Where streams, starting from early visual areas through the medial temporal lobe to prefrontal cortex. After learning, model dorsolateral prefrontal cortical cells (area 46) prime possible target locations in posterior parietal cortex based on goal-modulated percepts of spatial scene gist represented in parahippocampal cortex, whereas model ventral prefrontal cortical cells (area 47/12) prime possible target object representations in inferior temporal cortex based on the history of viewed objects represented in perirhinal cortex. The model hereby predicts how the cortical What and Where streams cooperate during scene perception, learning, and memory to accumulate evidence over time to drive efficient visual search of familiar scenes. Funding: CELEST, an NSF Science of Learning Center (SBE-0354378); SyNAPSE program of the Defense Advanced Research Projects Agency (HR0011-09-3-0001, HR0011-09-C-0011).
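    To make the evidence-accumulation idea concrete, here is a minimal sketch in Python. It is not the published ARTSCENE Search equations; the spatial/object weights, the log-prior combination, and the softmax readout are all illustrative assumptions.

```python
import numpy as np

def contextual_search(spatial_prior, fixation_evidence, w_spatial=1.0, w_object=0.5):
    """Toy accumulator: a gist-based spatial prior forms the initial
    hypothesis, then object evidence from each fixation refines it.
    The linear combination and weights are assumptions, not the
    model's published dynamics."""
    log_evidence = w_spatial * np.log(spatial_prior + 1e-9)  # first-glance scene-gist hypothesis
    for obj in fixation_evidence:                            # saccade-by-saccade refinement
        log_evidence += w_object * obj                       # boost target-like locations
    p = np.exp(log_evidence - log_evidence.max())            # softmax readout over locations
    return p / p.sum()

# Example: four candidate locations in a familiar kitchen scene
prior = np.array([0.1, 0.6, 0.2, 0.1])          # gist: "sinks tend to be at location 1"
fixations = [np.array([0.0, 0.3, 0.1, 0.0]),    # evidence gathered at each fixation
             np.array([0.0, 0.5, 0.0, 0.0])]
print(contextual_search(prior, fixations))      # location 1 dominates
```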

    The reentry hypothesis: The putative interaction of the frontal eye field, ventrolateral prefrontal cortex, and areas V4, IT for attention and eye movement

    Attention is known to play a key role in perception, including action selection, object recognition, and memory. Despite findings revealing competitive interactions among cell populations, attention remains difficult to explain. The central purpose of this paper is to link a large number of findings in a single computational approach. Our simulation results suggest that attention can be well explained on a network level involving many areas of the brain. We argue that attention is an emergent phenomenon that arises from reentry and competitive interactions. We hypothesize that guided visual search requires the use of an object-specific template in prefrontal cortex to sensitize V4 and IT cells whose preferred stimuli match the target template. This induces a feature-specific bias and provides guidance for eye movements. Prior to an eye movement, a spatially organized reentry from oculomotor centers, specifically the movement cells of the frontal eye field, occurs and modulates the gain of V4 and IT cells. The processes involved are elucidated by quantitatively comparing the time course of simulated neural activity with experimental data. Using visual search tasks as an example, we provide clear and empirically testable predictions for the participation of IT, V4, and the frontal eye field in attention. Finally, we explain a possible physiological mechanism that can lead to non-flat search slopes as the result of a slow, parallel discrimination process.
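    A minimal sketch of the feature-specific bias described above, in the spirit of a multiplicative feature-similarity gain rule. The cosine-similarity measure, the gain parameter beta, and the function names are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def template_biased_response(feedforward, preferred, template, beta=0.5):
    """Toy reentrant gain modulation: V4/IT-like cells whose preferred
    feature matches the prefrontal target template are amplified.
    Multiplicative gain and beta=0.5 are illustrative assumptions."""
    # Cosine similarity between each cell's preferred feature and the template
    sim = preferred @ template / (
        np.linalg.norm(preferred, axis=1) * np.linalg.norm(template) + 1e-9)
    gain = 1.0 + beta * np.clip(sim, 0.0, None)   # only template-matching cells get boosted
    return feedforward * gain

# Two cells with equal bottom-up drive; only the first matches the template
preferred = np.array([[1.0, 0.0],    # prefers "red vertical"
                      [0.0, 1.0]])   # prefers "green horizontal"
template = np.array([1.0, 0.0])      # prefrontal template: search for "red vertical"
print(template_biased_response(np.array([10.0, 10.0]), preferred, template))
# -> [15. 10.]: the feature-specific bias guides search toward template matches
```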

    When are abrupt onsets found efficiently in complex visual search? Evidence from multi-element asynchronous dynamic search

    Previous work has found that search principles derived from simple visual search tasks do not necessarily apply to more complex search tasks. Using a Multi-element Asynchronous Dynamic (MAD) visual search task, in which large numbers of stimuli could be moving, stationary, and/or changing in luminance, Kunar and Watson (M. A. Kunar & D. G. Watson, 2011, Visual search in a Multi-element Asynchronous Dynamic (MAD) world, Journal of Experimental Psychology: Human Perception and Performance, Vol. 37, pp. 1017-1031) found that, unlike in previous work, participants missed a higher number of targets, search for moving items was worse than for static items, and there was no benefit for finding targets that showed a luminance onset. In the present research, we investigated why luminance onsets do not capture attention and whether they can ever capture attention in MAD search. Experiment 1 investigated whether blinking stimuli, which abruptly offset for 100 ms before re-onsetting (conditions known to produce attentional capture in simpler visual search tasks), captured attention in MAD search. Experiments 2-5 investigated whether giving participants advance knowledge of and pre-exposure to the blinking cues produced efficient search for blinking targets, and Experiments 6-9 investigated whether unique luminance onsets, unique motion, or unique stationary items captured attention. The results showed that luminance onsets captured attention in MAD search only when they were unique, consistent with a top-down unique-feature hypothesis.
