Time to guide: evidence for delayed attentional guidance in contextual cueing

By Melina A. Kunar, Stephen J. Flusberg and Jeremy M. Wolfe

Abstract

Contextual cueing experiments show that, when displays are repeated, reaction times (RTs) to find a target decrease over time, even when observers are not aware of the repetition. Recent evidence suggests that this benefit in standard contextual cueing tasks is unlikely to be due to an improvement in attentional guidance (Kunar, Flusberg, Horowitz, & Wolfe, 2007). Nevertheless, we ask whether guidance can help participants find the target in a repeated display if they are given sufficient time to encode the display. In Experiment 1 we increased the display complexity so that it took participants longer to find the target. Here we found a larger effect of guidance than in a condition with shorter RTs. Experiment 2 gave participants prior exposure to the display context. The data again showed that with more time participants could implement guidance to help find the target, provided that there was something in the search stimuli locations to guide attention to. The data suggest that, although the benefit in a standard contextual cueing task is unlikely to be a result of guidance, guidance can play a role if it is given time to develop.

Topics: BF
Publisher: Taylor & Francis
Year: 2008
OAI identifier: oai:wrap.warwick.ac.uk:454

Citations

  1. (2005). A
  2. (1980). A feature-integration theory of attention.
  3. (2001). Attentional guidance of the eyes by contextual information and abrupt onsets.
  4. (2006). Contextual cueing by global features.
  5. (2000). Contextual cueing of visual attention.
  6. (1998). Contextual cueing: implicit learning and memory of visual context guides spatial attention.
  7. (2007). Does contextual cueing guide the deployment of attention?
  8. (2005). Implicit learning of ignored visual context.
  9. (2003). Implicit, long-term spatial contextual memory.
  10. (2005). Local contextual cuing in visual search.
  11. (2004). Oculomotor correlates of context-guided learning in visual search.
  12. (2004). Panoramic search: The interaction of memory and vision in search through a familiar scene.
  13. (1972). Perceiving real-world scenes.
  14. (2002). Perceptual constraints on implicit learning of spatial context.
  15. (2000). Postattentive vision.
  16. (2006). Recognition and attention guidance during contextual cueing in real-world scenes: Evidence from eye movements.
  17. (1984). Searching for conjunctively defined targets.
  18. (2002). Segmentation of objects from backgrounds in visual search tasks.
  19. (2001). Selective attention modulates implicit learning.
  20. (2004). Selective learning of spatial configuration and object identity in visual search.
  21. (2004). Spatial context and top-down strategies in visual search.
  22. The role of memory and restricted context in repeated visual search. Perception & Psychophysics.
  23. (1997). The VideoToolbox software for visual psychophysics: Transforming numbers into movies.
  24. (1999). Top-down attentional guidance based on implicit learning of visual covariation.
  25. (2006). Using real-world scenes as contextual cues for search.
  26. (2004). What is learned in spatial contextual cuing – configuration or individual locations? Perception &
  27. (2005). When we use context in contextual cueing: Evidence from multiple target locations.
