
    Cortical Dynamics of Contextually-Cued Attentive Visual Learning and Search: Spatial and Object Evidence Accumulation

    How do humans use predictive contextual information to facilitate visual search? How are consistently paired scenic objects and positions learned and used to more efficiently guide search in familiar scenes? For example, a certain combination of objects can define a context for a kitchen and trigger a more efficient search for a typical object, such as a sink, in that context. A neural model, ARTSCENE Search, is developed to illustrate the neural mechanisms of such memory-based contextual learning and guidance, and to explain challenging behavioral data on positive/negative, spatial/object, and local/distant global cueing effects during visual search. The model proposes how global scene layout at a first glance rapidly forms a hypothesis about the target location. This hypothesis is then incrementally refined by enhancing target-like objects in space as a scene is scanned with saccadic eye movements. The model clarifies the functional roles of neuroanatomical, neurophysiological, and neuroimaging data in visual search for a desired goal object. In particular, the model simulates the interactive dynamics of spatial and object contextual cueing in the cortical What and Where streams, starting from early visual areas through the medial temporal lobe to prefrontal cortex. After learning, model dorsolateral prefrontal cortical cells (area 46) prime possible target locations in posterior parietal cortex based on goal-modulated percepts of spatial scene gist represented in parahippocampal cortex, whereas model ventral prefrontal cortical cells (area 47/12) prime possible target object representations in inferior temporal cortex based on the history of viewed objects represented in perirhinal cortex. The model hereby predicts how the cortical What and Where streams cooperate during scene perception, learning, and memory to accumulate evidence over time to drive efficient visual search of familiar scenes.
Funding: CELEST, an NSF Science of Learning Center (SBE-0354378); the SyNAPSE program of the Defense Advanced Research Projects Agency (HR0011-09-3-0001, HR0011-09-C-0011).
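The model's central computational idea, accumulating spatial and object evidence over successive fixations to sharpen a target-location hypothesis, can be caricatured with a toy leaky accumulator. This is a minimal sketch, not the published ARTSCENE Search implementation; the locations, gains, and evidence values below are invented for illustration:

```python
import numpy as np

# Toy sketch of evidence accumulation over fixations (illustrative only;
# not the published ARTSCENE Search model). Each candidate location keeps
# a leaky activation that integrates spatial-context and object-context
# evidence delivered by successive simulated fixations.

rng = np.random.default_rng(0)
n_locations = 6
target = 2                    # hypothetical true target location
leak = 0.2                    # fraction of activation lost per fixation
activation = np.full(n_locations, 1.0 / n_locations)  # flat initial hypothesis

for fixation in range(1, 9):
    # Invented evidence: context weakly favors the target location,
    # corrupted by noise from non-target objects.
    spatial_evidence = np.where(np.arange(n_locations) == target, 0.6, 0.1)
    object_evidence = rng.normal(0.0, 0.15, n_locations)
    activation = (1 - leak) * activation + spatial_evidence + object_evidence
    activation = np.clip(activation, 0.0, None)
    best = int(np.argmax(activation))
    print(f"fixation {fixation}: best guess = location {best}")
```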

    Time to guide: evidence for delayed attentional guidance in contextual cueing

    Contextual cueing experiments show that, when displays are repeated, reaction times (RTs) to find a target decrease over time, even when the observers are not aware of the repetition. Recent evidence suggests that this benefit in standard contextual cueing tasks is not likely to be due to an improvement in attentional guidance (Kunar, Flusberg, Horowitz, & Wolfe, 2007). Nevertheless, we ask whether guidance can help participants find the target in a repeated display if they are given sufficient time to encode the display. In Experiment 1 we increased the display complexity so that it took participants longer to find the target. Here we found a larger effect of guidance than in a condition with shorter RTs. Experiment 2 gave participants prior exposure to the display context. The data again showed that, given more time, participants could use guidance to help find the target, provided that there was something in the search stimulus locations to guide attention to. The data suggest that, although the benefit in a standard contextual cueing task is unlikely to be a result of guidance, guidance can play a role if it is given time to develop.
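Operationally, the contextual cueing benefit discussed in these studies is simply the mean RT difference between novel and repeated displays. A minimal sketch of that computation, with invented RT values:

```python
import statistics

# Minimal sketch of how a contextual cueing effect is typically quantified:
# mean search RT for novel displays minus mean RT for repeated displays.
# The RT values below are invented for illustration.

rt_repeated = [912, 870, 845, 820, 801]   # ms, repeated-display trials
rt_novel    = [950, 948, 939, 941, 935]   # ms, novel-display trials

cueing_effect = statistics.mean(rt_novel) - statistics.mean(rt_repeated)
print(f"contextual cueing effect: {cueing_effect:.0f} ms")  # positive = benefit
```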

    Negative emotional stimuli reduce contextual cueing but not response times in inefficient search

    In visual search, previous work has shown that negative stimuli narrow the focus of attention and speed reaction times (RTs). This paper investigates these two effects by asking, first, whether negative emotional stimuli narrow the focus of attention so as to reduce the learning of a display context in a contextual cueing task and, second, whether exposure to negative stimuli also reduces RTs in inefficient search tasks. In Experiment 1, participants viewed either negative or neutral images (faces or scenes) prior to a contextual cueing task. In a typical contextual cueing experiment, RTs are reduced for displays repeated across the experiment compared with novel displays that are not repeated. The results showed that a smaller contextual cueing effect was obtained after participants viewed negative stimuli than after they viewed neutral stimuli. However, in contrast to previous work, overall search RTs were not faster after viewing negative stimuli (Experiments 2 to 4). The findings are discussed in terms of the impact of emotional content on visual processing and the ability to use scene context to help facilitate search.

    Dynamics of perceptual learning in visual search

    The present work is concerned with a phenomenon referred to as contextual cueing. In visual search, if a searched-for target object is consistently encountered within a stable spatial arrangement of distractor objects, detecting the target becomes more efficient over time, relative to non-repeated, random arrangements. This effect is attributed to learned target-distractor spatial associations stored in long-term memory, which expedite visual search. This thesis investigates four aspects of contextual cueing.

Study 1 tackled the implicit-explicit debate of contextual cueing from a new perspective. Previous studies tested explicit access to learned displays by applying a recognition test, asking observers whether they had seen a given display in the previous search task. These tests, however, typically yield mixed findings, and there is an ongoing controversy over whether contextual cueing should be described as an implicit or an explicit effect. The current study applied the new perspective of metacognition to contextual cueing and combined a contextual cueing task with metacognitive ratings about the clarity of the visual experience, either of the display configuration or of the target stimulus. Bayesian analysis revealed an effect of repeated context on metacognitive sensitivity for configuration, but not target, ratings. It was concluded that effects of contextual memory on metacognition are content-specific and lead to increased metacognitive access to the display configuration, but not to the target stimulus. The more general implication is that, from the perspective of metacognition, contextual cueing can be considered an explicit effect.

Study 2 aimed at testing how explicit knowledge affects memory-guided visual search. Two sets of search displays were shown to participants: explicit and implicit displays. Explicit displays were introduced prior to the search experiment, in a dedicated learning session, and observers were instructed to deliberately learn these displays. Implicit displays, on the other hand, were first shown in the search experiment, and learning was incidental through repeated exposure to these displays. Contextual cueing arising from explicit and implicit displays was assessed relative to a baseline condition of non-repeated displays. The results showed a standard contextual cueing effect for explicit displays and, interestingly, a negative cueing effect for implicit displays. Recognition performance was above chance for both types of repeated displays; however, it was higher for explicit displays. This pattern of results partly confirmed the predictions of a single-memory model of attention-moderated associative learning, in which different display types compete for behavior and explicit representations block the retrieval of implicit representations.

Study 3 investigated interactions of long-term contextual memory with short-term perceptual hypotheses. Both types of perceptual memory are highly similar with respect to their content; therefore, the hypothesis was formulated that they share a common memory resource. In three experiments of interrupted search with repeated and non-repeated displays, it was shown that contextual cueing expedites performance in interrupted search; however, there was no interaction of contextual cueing with the generation or the confirmation of perceptual hypotheses. Rather, the analysis of fixational eye movements showed that long-term memory exerts its influence on search performance at the first glance at a given display, essentially affecting the starting point of the search process. The behavior of approaching the target stimulus is then a product of generating and confirming perceptual hypotheses, with these processes being unaffected by long-term contextual memory. It was concluded that long-term and short-term memory representations of the same search display are independent and exhibit additive effects on search performance.

Study 4 was concerned with the effects of reward on perceptual learning. Previous work argued that rewarding repeated displays in a contextual cueing paradigm accelerates the learning effect; however, it did not consider whether reward also has an effect in non-repeated displays. In these displays, at least the target position is kept constant while distractor configurations are random across repetitions; this is usually done to account for target position-specific probability learning in contextual cueing. However, it is possible that probability learning itself is modulated by reward. The current experiment introduced high or low reward to repeated and, importantly, also to non-repeated displays. It was shown that reward had a large effect on non-repeated displays, indicating that rewarding certain target positions, irrespective of the distractor layout, facilitates RT performance. Interestingly, reward effects were even larger for non-repeated compared with repeated displays. It was concluded that reward has a strong effect on probability learning, and not on context learning.

    How the visual environment shapes attention: The role of context in attention guidance

    In our environment, visual stimuli typically appear within the context of other stimuli, which are usually not arranged randomly but follow regularities. These regularities can be very useful for the visual system to overcome the problem of limited encoding capacity by guiding attention to stimuli that are relevant for behavior. There is growing evidence that observers use repeated contexts to guide attention in visual search, and there is evidence that observers adapt to dynamical changes in their visual environment. However, contexts in our natural environment often come with features predicting reward, and little is known about the influence of such reward-predicting contexts on attention guidance. In addition, it is unclear how observers adapt their behavior to context features that are not relevant for the task, and little is known about individual differences in the effects of contexts. These research gaps are addressed in the present dissertation.

In five studies, the present dissertation investigates how different types of contextual regularities are integrated into behavior and how these regularities guide visual attention. Study I showed that observers use knowledge acquired in former encounters with similar scenes to predict the most promising item to attend to in an upcoming scene. In a visual search task in the laboratory, participants responded faster in visual contexts that repeated compared with contexts that were novel. In addition, they also moved their eyes more efficiently to the target when they encountered repeated contexts. These results suggest that participants use repeated visual contexts to learn to predict the target location. Study I also revealed that visual contexts are especially used for specifying promising items when they predict a high reward. Context features predicting a high reward boosted the performance advantages observed with repeated contexts. This result suggests that the prediction of reward facilitates the generation of expectations about potential target locations. Study II demonstrated that expectations about potential target locations were quite persistent, since performance benefits were observed even after many encounters with repeated contexts. Further experiments showed that participants could use even a very limited part of the visual contexts to learn to predict the target location (Study III), and that observers also used contexts that changed dynamically for specifying promising items to attend to (Study IV). These results suggest that observers use regularities in the visual context to generate expectations about promising items in their visual environment. Finally, the last study of this dissertation (Study V) investigated how contexts of social perception are used for specifying relevant visual information. Results showed that observers differ in how they use contexts for specifying relevant visual information and suggested that an observer's personality might be one factor explaining these differences.

In sum, the five studies of the present dissertation demonstrate that the visual system is remarkably sensitive to regularities in the visual context. It is quite efficient in extracting repeated contexts to guide attention to relevant locations when contexts are encountered again (Studies I and II), and it needs only a very limited amount of repeating contextual information to take advantage of the contexts (Study III). It also uses rewards signaled by context features to prioritize the processing of high-reward contexts. The visual system further adapts to dynamical changes in the contexts (Study IV) and uses contexts of social perception for prioritizing information, dependent on the observer's personality (Study V). The present dissertation thus highlights that the visual context is crucial for guiding our attention in numerous situations that we encounter every day. Fortunately, we can take advantage of the visual context, which allows our visual system to cope with its limited processing capacity.

    Bi-directional relationship between attention and long-term context memory

    This dissertation presents four empirical studies investigating the link between visual attention and long-term memory. Long-term memory in visual search is acquired through repeated exposure to invariant spatial configurations and is expressed in expedited visual search for repeated over non-repeated displays (i.e., the contextual cueing paradigm). The memory of repeated (invariant) displays is considered to be implicit. The present studies aimed to contribute to a better understanding of how visual attention and long-term context memory interact with each other, using reaction time and eye tracking measures.

Study 1: Previous studies revealed that unpredictable target location changes impair contextual cueing, and that the cueing-related gains in reaction times recover only slowly with extensive training on the relocated displays. Study 1 examined whether other forms of attention guidance, i.e., spatial grouping, play a role in the adaptation of context memory. To this end, after the learning of target-distractor arrangements, we repositioned the target in two different local contexts: local-sparse (one distractor item around the target) or local-dense (three distractors around the target) contexts. The results revealed successful adaptation to a new target location when the target was relocated to local-sparse, but not local-dense, regions. It was concluded that spatial grouping of the dense items makes this region salient, in the sense that bottom-up attention is effectively guided towards the target region. The lack of adaptation of contextual cueing reported in earlier studies thus does not reflect a mere inability of the cueing memory to adapt; instead, it suggests that both stimulus- and memory-based processes contribute to target detection.

Study 2: Study 2 investigated the dependence of contextual cueing on a secondary working memory (WM) load. Former studies had shown that contextual learning is independent of divided attention. Study 2 re-investigated the role of divided attention in both context learning and the expression of learned contexts, and further examined whether the influence of WM load is due to the load on spatial or on executive WM capabilities. In the experiments, in order to distinguish between different stages of learning, a visual search task was combined with a secondary WM load either in the early or in the late phases of the experiments. To test whether disadvantageous WM effects result from spatial or executive WM load, observers were either given a task to maintain spatial WM items concurrently with the visual search task (to unravel the effects of both spatial and executive WM), or a task where WM was performed before or after the visual search task, without task overlap (to test the effects of executive WM load alone). The findings revealed reduced contextual cueing under a spatial WM load, and this effect was larger for the expression of learned configural associations. No interference was found when the secondary WM task was performed in a non-overlapping manner. It is concluded that the retrieval of context representations from long-term memory depends on spatial WM, i.e., on divided attention.

Study 3: The possibility remains that contextual cueing can become independent of divided attention; this issue was investigated in Study 3. It had previously been shown that visual search improves with task practice and that this practice-related gain depends on the characteristics of the given task. Study 3 asked whether the automaticity of contextual cueing can be enhanced to a level at which it becomes independent of attentional resources. To this end, a single task (visual search) and a dual task (visual search together with a secondary spatial WM task) were presented in close succession in individual blocks of trials, a procedure that has been shown to facilitate the development of automaticity in visual search. The results revealed reliable contextual cueing under a demanding spatial WM task. It is concluded that the automaticity of contextual cueing retrieval modulates whether or not a spatial WM load exerts a detrimental effect on memory-guided visual search.

Study 4: Memory for contextual cueing was long considered to be implicit. However, recent studies have questioned the notion of implicit contextual memory on both theoretical and methodological grounds. It was claimed that contextual cueing may rely either on a single memory system (incidentally acquired, but accessible via explicit recognition tasks) or on two memory systems (incidentally acquired, and not accessible to conscious report). Study 4 investigated the idea that contextual cueing is initially unconscious but can become conscious later on through the help of focal attention (i.e., fixational eye movements). After the learning of contextual cues, observers' eye movements were measured in an explicit recognition test, in which they had to judge the quadrant of the target. The results revealed longer fixation dwell times in the target quadrant for invariant than for random displays. Furthermore, manipulations of observers' gaze in the recognition task showed that fixation dwell times also serve a purposeful role in conscious retrieval from context memory. At the same time, fixating the target quadrant was not a requirement for context-based search facilitation. Contextual cueing thus seems to receive support from at least two independent (automatic and controlled) retrieval processes, and focal attention seems to be the mechanism that links the retrieved information across the two processes.
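The dwell-time measure in Study 4 reduces to summing fixation durations within each display quadrant. A minimal sketch of that analysis step, with invented fixation data and a hypothetical display size:

```python
# Minimal sketch of a per-quadrant dwell-time analysis (invented fixation
# data): sum fixation durations per display quadrant, so dwell time in the
# target quadrant can be compared between invariant and random displays.

WIDTH, HEIGHT = 800, 600   # hypothetical display size in pixels

def quadrant(x: float, y: float) -> int:
    """Return quadrant index 0..3 (left-to-right, top-to-bottom)."""
    return (0 if x < WIDTH / 2 else 1) + (0 if y < HEIGHT / 2 else 2)

def dwell_per_quadrant(fixations: list[tuple[float, float, float]]) -> list[float]:
    """fixations: (x, y, duration_ms) triples from one trial."""
    dwell = [0.0] * 4
    for x, y, dur in fixations:
        dwell[quadrant(x, y)] += dur
    return dwell

# Invented fixations for one recognition-test trial.
fixations = [(150, 120, 220), (620, 140, 310), (700, 200, 280), (180, 450, 190)]
print(dwell_per_quadrant(fixations))   # -> [220.0, 590.0, 190.0, 0.0]
```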

    Context Processing and Aging: Older Adults' Ability to Learn and Utilize Visual Contexts

    The purpose of the present study was to examine how older adults utilize contextual information to guide attention in visual scenes. Studies that have examined context and attentional deployment have used the contextual cueing task. Contextual cueing reflects faster responses to repeated spatial configurations (consistent context-target covariation) than to random spatial configurations (inconsistent covariation). Research has shown mixed results concerning older adults' ability to utilize context with this task. Young (18-23 years) and older (60-85 years) adults were tested in two contextual cueing experiments to assess age differences in how individuals utilize context in novel and real-world visual scenes. Experiment 1 investigated the development of contextual cueing effects using low-meaning visual contexts (letter arrays). In low-meaning arrays, young and older adults were able to use context efficiently, with no age differences in the development of contextual cueing effects. Experiment 2 examined older adults' ability to utilize context when context was meaningful (real-world images). Younger and older adults saw real-world images in an upright (meaningful) or inverted (less meaningful) orientation. Older adults were able to use context similarly to younger adults, with no age differences in the development of contextual cueing. Contrary to predictions, context utilization was not impacted by the meaningfulness of the image: contextual cueing effects occurred at the same time for upright and inverted images for both young and older adults. Together, these studies demonstrated that older adults were able to utilize context. Meaningfulness did not provide an additional benefit for older adults, but this was also true of young adults.

    On the factors causing processing difficulty of multiple-scene displays

    Multiplex viewing of static or dynamic scenes is an increasing feature of screen media. Most existing multiplex experiments have examined detection across increasing scene numbers, but no systematic evaluation of the factors that might produce difficulty in processing multiplexes currently exists. Across five experiments we provide such an evaluation. Experiment 1 characterises the difficulty in change detection when the number of scenes is increased. Experiment 2 reveals that the total amount of visual information accounts for differences in change detection times, regardless of whether this information is presented across multiple scenes or contained in one scene. Experiment 3 shows that whether quadrants of a display were drawn from the same or from different scenes did not affect change detection performance. Experiment 4 demonstrates that knowing which scene the change will occur in allows participants to perform at monoplex level. Finally, Experiment 5 finds that changes of central interest in multiplexed scenes are detected far more easily than changes of marginal interest, to such an extent that the removal of a centrally interesting object in nine screens is detected more rapidly than the removal of a marginally interesting object in four screens. Processing multiple-screen displays therefore seems to depend on the amount of information, and on the importance of that information to the task, rather than simply on the number of scenes in the display. We discuss the theoretical and applied implications of these findings.

    Simulated loss of foveal vision eliminates visual search advantage in repeated displays

    In the contextual cueing paradigm, incidental visual learning of repeated distractor configurations leads to faster search times in repeated compared with new displays. This contextual cueing is closely linked to the visual exploration of the search arrays, as indicated by fewer fixations and more efficient scan paths in repeated search arrays. Here, we examined contextual cueing under impaired visual exploration induced by a simulated central scotoma that forces the participant to rely on extrafoveal vision. We let normal-sighted participants search for the target either under unimpaired viewing conditions or with a gaze-contingent central scotoma masking the currently fixated area. Under unimpaired viewing conditions, participants showed shorter search times and more efficient exploration of the display for repeated compared with novel search arrays, and thus exhibited contextual cueing. When visual search was impaired by the central scotoma, the search facilitation for repeated displays was eliminated. These results indicate that a loss of foveal sight, as is commonly observed in maculopathies, may lead to deficits in high-level visual functions well beyond the immediate consequences of the scotoma.
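The gaze-contingent scotoma amounts to masking a disc around the current gaze sample on every display refresh. A minimal sketch of that masking step, assuming grayscale image arrays and externally supplied gaze coordinates (the display and gaze sample below are stand-ins):

```python
import numpy as np

def apply_central_scotoma(frame: np.ndarray, gaze_xy: tuple[int, int],
                          radius: int, fill: float = 0.5) -> np.ndarray:
    """Mask a disc of `radius` pixels around the gaze point, simulating a
    central scotoma. `frame` is a 2-D grayscale image; `gaze_xy` is (x, y)."""
    h, w = frame.shape
    ys, xs = np.ogrid[:h, :w]
    gx, gy = gaze_xy
    mask = (xs - gx) ** 2 + (ys - gy) ** 2 <= radius ** 2
    out = frame.copy()
    out[mask] = fill            # replace the foveated region with uniform gray
    return out

# Illustrative use with a random "search display" and a fake gaze sample;
# in an experiment, gaze_xy would come from the eye tracker each frame.
display = np.random.default_rng(1).random((600, 800))
masked = apply_central_scotoma(display, gaze_xy=(400, 300), radius=60)
```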

    A feedback model of visual attention

    Feedback connections are a prominent feature of cortical anatomy and are likely to have a significant functional role in neural information processing. We present a neural network model of cortical feedback that successfully simulates neurophysiological data associated with attention. In this domain our model can be considered a more detailed, and biologically plausible, implementation of the biased competition model of attention. However, our model is more general, as it can also explain a variety of other top-down processes in vision, such as figure/ground segmentation and contextual cueing. This model thus suggests that a common mechanism, involving cortical feedback pathways, is responsible for a range of phenomena and provides a unified account of currently disparate areas of research.
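The biased competition idea that the model elaborates can be caricatured in a few lines: two units receive equal bottom-up input, mutually inhibit one another, and a top-down feedback bias decides the winner. A schematic sketch with invented parameters, not the authors' network:

```python
import numpy as np

# Schematic biased-competition dynamics (invented parameters, not the
# authors' network). Two units get equal bottom-up drive and inhibit each
# other; a top-down feedback bias toward unit 0 tips the competition.

dt, tau = 0.01, 0.1               # Euler step and time constant (arbitrary units)
inhibition = 1.2                  # cross-inhibition gain (> 1 gives winner-take-all)
r = np.zeros(2)                   # firing rates of the two competing units
stimulus = np.array([1.0, 1.0])   # equal bottom-up input to both units
feedback = np.array([0.3, 0.0])   # top-down attentional bias to unit 0

for _ in range(2000):
    net = stimulus + feedback - inhibition * r[::-1]  # input minus rival's inhibition
    r += (dt / tau) * (-r + np.maximum(net, 0.0))     # rectified rate dynamics

print(f"steady-state rates: {r.round(3)}")  # unit 0 wins the competition
```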