    Task-demands can immediately reverse the effects of sensory-driven saliency in complex visual stimuli

    In natural vision both stimulus features and task-demands affect an observer's attention. However, the relationship between sensory-driven (“bottom-up”) and task-dependent (“top-down”) factors remains controversial: Can task-demands counteract strong sensory signals fully, quickly, and irrespective of bottom-up features? To measure attention under naturalistic conditions, we recorded eye-movements in human observers while they viewed photographs of outdoor scenes. In the first experiment, smooth modulations of contrast biased the stimuli's sensory-driven saliency towards one side. In free viewing, observers' eye-positions were immediately biased toward the high-contrast, i.e., high-saliency, side. However, this sensory-driven bias disappeared entirely when observers searched for a bull's-eye target embedded with equal probability on either side of the stimulus. When the target always occurred on the low-contrast side, observers' eye-positions were immediately biased towards this low-saliency side, i.e., the sensory-driven bias reversed. Hence, task-demands not only override sensory-driven saliency but actively countermand it. In a second experiment, a 5-Hz flicker replaced the contrast gradient. Although the flicker-induced bias was less persistent in free viewing, the overriding and reversal took longer to deploy. Hence, insufficient sensory-driven saliency cannot account for the bias reversal. In a third experiment, subjects searched for a spot of locally increased contrast (“oddity”) instead of the bull's-eye (“template”). In contrast to the other conditions, a slight sensory-driven free-viewing bias prevailed in this condition. In a fourth experiment, we demonstrate that at known locations template targets are detected faster than oddity targets, suggesting that the former induce a stronger top-down drive when used as search targets. Taken together, task-demands can override sensory-driven saliency in complex visual stimuli almost immediately, and the extent of overriding depends on the search target and the overridden feature, but not on the latter's free-viewing saliency.

    Cortical Dynamics of Contextually-Cued Attentive Visual Learning and Search: Spatial and Object Evidence Accumulation

    How do humans use predictive contextual information to facilitate visual search? How are consistently paired scenic objects and positions learned and used to guide search more efficiently in familiar scenes? For example, a certain combination of objects can define a context for a kitchen and trigger a more efficient search for a typical object, such as a sink, in that context. A neural model, ARTSCENE Search, is developed to illustrate the neural mechanisms of such memory-based contextual learning and guidance, and to explain challenging behavioral data on positive/negative, spatial/object, and local/distant global cueing effects during visual search. The model proposes how global scene layout at a first glance rapidly forms a hypothesis about the target location. This hypothesis is then incrementally refined by enhancing target-like objects in space as a scene is scanned with saccadic eye movements. The model clarifies the functional roles of neuroanatomical, neurophysiological, and neuroimaging data in visual search for a desired goal object. In particular, the model simulates the interactive dynamics of spatial and object contextual cueing in the cortical What and Where streams, starting from early visual areas through the medial temporal lobe to prefrontal cortex. After learning, model dorsolateral prefrontal cortical cells (area 46) prime possible target locations in posterior parietal cortex based on goal-modulated percepts of spatial scene gist represented in parahippocampal cortex, whereas model ventral prefrontal cortical cells (area 47/12) prime possible target object representations in inferior temporal cortex based on the history of viewed objects represented in perirhinal cortex. The model hereby predicts how the cortical What and Where streams cooperate during scene perception, learning, and memory to accumulate evidence over time to drive efficient visual search of familiar scenes.
    Supported in part by CELEST, an NSF Science of Learning Center (SBE-0354378), and the SyNAPSE program of the Defense Advanced Research Projects Agency (HR0011-09-3-0001, HR0011-09-C-0011).

    Attending to multiple objects: the dynamics of attentional control in multi-target stimulus arrays

    In this thesis, the cognitive and neural mechanisms of attentional control are examined, with a specific focus on investigating the temporal dynamics of these mechanisms in scenarios where multiple objects must be attended. Event-related potential (ERP) measures are used to track the continuous time course of visual responses in the brain, and the N2pc component is employed as a marker for the attentional selection of target objects. Two broad lines of research are presented. The first line examines the attentional selection of multiple rapidly presented instances of a single target object defined by varying properties, revealing very rapid and flexible brain responses triggered independently by the appearance of each target. The second line investigates the speed and qualitative nature of strictly serial attention shifts when they are guided by stimulus features or only by location information, revealing the availability of different attentional control mechanisms for these different shifts. In the context of these findings, this thesis attempts to improve the cognitive and neural understanding of how attentional control operates. The attentional template, a working memory representation of currently task-relevant properties, is proposed to flexibly allow for the preparatory enhancement of the activity of neurons that respond to these target-defining properties, allowing for the independent allocation of attention to each instance of a target in real time. The properties that can be maintained by the attentional template are not restricted to being visual in nature, but can consist of more complex semantic and category-related information. Importantly, the experiments of this thesis demonstrate that attentional control is a highly flexible cognitive mechanism that can be rapidly altered on the basis of current goals, and can rapidly influence the processing of incoming visual information.

    The guidance of spatial attention during visual search for colour combinations and colour configurations

    Representations of target-defining features (attentional templates) guide the selection of target objects in visual search. We used behavioural and electrophysiological measures to investigate how such search templates control the allocation of attention in search tasks where targets are defined by the combination of two colours or by a specific spatial configuration of these colours. Target displays were preceded by spatially uninformative cue displays that contained items in one or both target-defining colours. Experiments 1 and 2 demonstrated that, during search for colour combinations, attention is initially allocated independently and in parallel to all objects with target-matching colours, but is then rapidly withdrawn from objects that have only one of the two target colours. In Experiment 3, targets were defined by a particular spatial configuration of two colours, and could be accompanied by nontarget objects with a different configuration of the same colours. Attentional guidance processes were unable to distinguish between these two types of objects. Both attracted attention equally when they appeared in a cue display, and both received parallel focal-attentional processing and were encoded into working memory when they were presented in the same target display. These results demonstrate that attention can be guided simultaneously by multiple features from the same dimension, but that these guidance processes have no access to the spatial-configural properties of target objects. They suggest that attentional templates do not represent target objects in an integrated pictorial fashion, but contain separate representations of target-defining features.

    The reentry hypothesis: The putative interaction of the frontal eye field, ventrolateral prefrontal cortex, and areas V4, IT for attention and eye movement

    Attention is known to play a key role in perception, including action selection, object recognition and memory. Despite findings revealing competitive interactions among cell populations, attention remains difficult to explain. The central purpose of this paper is to link a large number of findings in a single computational approach. Our simulation results suggest that attention can be well explained on a network level involving many areas of the brain. We argue that attention is an emergent phenomenon that arises from reentry and competitive interactions. We hypothesize that guided visual search requires the use of an object-specific template in prefrontal cortex to sensitize V4 and IT cells whose preferred stimuli match the target template. This induces a feature-specific bias and provides guidance for eye movements. Prior to an eye movement, a spatially organized reentry signal from oculomotor centers, specifically the movement cells of the frontal eye field, occurs and modulates the gain of V4 and IT cells. The processes involved are elucidated by quantitatively comparing the time course of simulated neural activity with experimental data. Using visual search tasks as an example, we provide clear and empirically testable predictions for the participation of IT, V4 and the frontal eye field in attention. Finally, we explain a possible physiological mechanism that can lead to non-flat search slopes as the result of a slow, parallel discrimination process.

    The role of multisensory integration in the bottom-up and top-down control of attentional object selection

    Selective spatial attention and multisensory integration have traditionally been considered separate domains in psychology and cognitive neuroscience. However, theoretical and methodological advancements in the last two decades have paved the way for studying different types of interactions between spatial attention and multisensory integration. In the present thesis, two types of such interactions are investigated. In the first part of the thesis, the role of audiovisual synchrony as a source of bottom-up bias in visual selection was investigated. In six out of seven experiments, a variant of the spatial cueing paradigm was used to compare attentional capture by visual and audiovisual distractors. In another experiment, single-frame search arrays were presented to investigate whether multisensory integration can bias spatial selection via salience-based mechanisms. Behavioural and electrophysiological results demonstrated that the ability of visual objects to capture attention was enhanced when they were accompanied by noninformative auditory signals. They also provided evidence for the bottom-up nature of these audiovisual enhancements of attentional capture by revealing that the enhancements occurred irrespective of the task-relevance of the visual objects. In the second part of this thesis, four experiments are reported that investigated the spatial selection of audiovisual relative to visual objects and the guidance of their selection by bimodal object templates. Behavioural and ERP results demonstrated that the ability of task-irrelevant target-matching visual objects to capture attention was reduced during search for audiovisual as compared to purely visual targets, suggesting that bimodal search is guided by integrated audiovisual templates. However, the observation that unimodal target-matching visual events retained some ability to capture attention indicates that bimodal search is controlled to some extent by modality-specific representations of task-relevant information. In summary, the present thesis has contributed to our knowledge of how attention is controlled in real-life environments by demonstrating that spatial selective attention can be biased towards bimodal objects via salience-driven as well as goal-based mechanisms.

    Space-based and feature-based attentional selection in perception and working memory

    In order to manage the large amount of sensory input we experience, attentional processes enable the selective prioritization of goal-relevant information over irrelevant distractions. Two fundamental ways in which this is accomplished are by focusing attention at particular locations in the environment (spatial attention) or by focusing on specific forms of information (feature-based attention). Despite many decades of research examining these mechanisms, however, they have seldom been directly compared, particularly in relation to their underlying neural mechanisms. In this thesis, the neural correlates of spatial and feature-based attentional selection for perception and working memory maintenance processes are contrasted. Event-related potential (ERP) components from electroencephalography (EEG) recordings are used as markers of such processes. The N2pc component is used to measure lateralised attentional selection of targets defined by one or a combination of spatial locations and features in perceptual tasks, whilst the CDA component is used to measure the active maintenance of target objects/locations in working memory tasks. In total, this thesis contains three lines of investigation. The first line compares these ERP components for attentional selection of targets defined by spatial locations and features, and reveals that in many contexts spatial attention is processed similarly to featural attention, with a few notable exceptions (Chapter 2). The second line of enquiry examines how spatial configural information affects feature-based attentional selection when it is a critical component for successful goal-directed search, revealing that such information can guide attentional selection for some feature dimensions (Chapter 3). Finally, the third line of enquiry compares how spatial and feature-based attention influence visual perceptual and post-perceptual working memory processes (Chapters 4 and 5). This investigation led to the observations that spatial attentional templates are quicker to guide attention when there is no SOA between the cue and target display onset, and that the two types of attention have similar working memory capacity limitations. These findings provide one of the first direct comparisons of the neural correlates of attention to spatially or featurally defined information, thereby expanding the current understanding of how spatial and feature-based attention operate. By measuring real-time event-related responses in these task contexts, the present thesis highlights the independent nature of spatial and feature-based attention and their qualitative similarities, but also how they interact under some circumstances. The findings aid the literature by shedding light on the argument that perceptual and post-perceptual processes involved in spatial attention are qualitatively different from featural attention processes.

    Terms of debate: consensus definitions to guide the scientific discourse on visual distraction

    Hypothesis-driven research rests on clearly articulated scientific theories. The building blocks for communicating these theories are scientific terms. Obviously, communication – and thus, scientific progress – is hampered if the meaning of these terms varies idiosyncratically across (sub)fields and even across individual researchers within the same subfield. We have formed an international group of experts representing various theoretical stances with the goal of homogenizing the use of the terms that are most relevant to fundamental research on visual distraction in visual search. Our discussions revealed striking heterogeneity, and we had to invest much time and effort to increase our mutual understanding of each other’s use of central terms, which turned out to be strongly related to our respective theoretical positions. We present the outcomes of these discussions in a glossary and provide some context in several essays. Specifically, we explicate how central terms are used in the distraction literature and consensually sharpen their definitions in order to enable communication across theoretical standpoints. Where applicable, we also explain how the respective constructs can be measured. We believe that this novel type of adversarial collaboration can serve as a model for other fields of psychological research that strive to build a solid groundwork for theorizing and communicating by establishing a common language. For the field of visual distraction, the present paper should facilitate communication across theoretical standpoints and may serve as an introduction and reference text for newcomers.

    Specificity and coherence of body representations

    Bodily illusions differently affect the body representations underlying perception and action. We investigated whether this task dependence reflects two distinct dimensions of embodiment: the sense of agency and the sense of the body as a coherent whole. In Experiment 1, the sense of agency was manipulated by comparing active versus passive movements during the induction phase in a video rubber hand illusion (vRHI) setup. After induction, proprioceptive biases were measured both by perceptual judgments of hand position and by the end-point accuracy of subjects' active pointing movements to an external object with the affected hand. The results showed, first, that the vRHI is largely perceptual: passive perceptual localisation judgments were altered, but the end-point accuracy of active pointing responses with the affected hand to an external object was unaffected. Second, within the perceptual judgments, there was a novel congruence effect, such that perceptual biases were larger following passive induction of the vRHI than following active induction. There was a trend for the converse effect in pointing responses, with a larger pointing bias following active induction. In Experiment 2, we used the traditional RHI to investigate the coherence of body representation by synchronous stimulation of either matching or mismatching fingers on the rubber hand and the participant's own hand. Stimulation of matching fingers induced a local proprioceptive bias for only the stimulated finger, but did not affect the perceived shape of the hand as a whole. In contrast, stimulation of spatially mismatching fingers eliminated the RHI entirely. The present results show that (i) the sense of agency during illusion induction has specific effects, depending on whether we represent our body for perception or to guide action, and (ii) representations of specific body parts can be altered without affecting perception of the spatial configuration of the body as a whole.