
    Global Precedence In Visual Search? Not So Fast: Evidence Instead For An Oblique Effect

    The evidence from an earlier report of global precedence in visual search is reexamined. Two new experiments are reported. The results of the first experiment indicate that the confusability of oblique orientations (a class-2 oblique effect), rather than global precedence, was responsible for the earlier results. The results of the second experiment show that the effect critically depends on the presence of heterogeneous distractors rather than on differences in raw processing speed for different spatial scales. The possible role of symmetry is discussed.

    Do honeybees detect colour targets using serial or parallel visual search?


    Cortical Dynamics of Contextually-Cued Attentive Visual Learning and Search: Spatial and Object Evidence Accumulation

    How do humans use predictive contextual information to facilitate visual search? How are consistently paired scenic objects and positions learned and used to guide search more efficiently in familiar scenes? For example, a certain combination of objects can define a context for a kitchen and trigger a more efficient search for a typical object, such as a sink, in that context. A neural model, ARTSCENE Search, is developed to illustrate the neural mechanisms of such memory-based contextual learning and guidance, and to explain challenging behavioral data on positive/negative, spatial/object, and local/distant global cueing effects during visual search. The model proposes how global scene layout at a first glance rapidly forms a hypothesis about the target location. This hypothesis is then incrementally refined by enhancing target-like objects in space as a scene is scanned with saccadic eye movements. The model clarifies the functional roles of neuroanatomical, neurophysiological, and neuroimaging data in visual search for a desired goal object. In particular, the model simulates the interactive dynamics of spatial and object contextual cueing in the cortical What and Where streams, starting from early visual areas through the medial temporal lobe to prefrontal cortex. After learning, model dorsolateral prefrontal cortical cells (area 46) prime possible target locations in posterior parietal cortex based on goal-modulated percepts of spatial scene gist represented in parahippocampal cortex, whereas model ventral prefrontal cortical cells (area 47/12) prime possible target object representations in inferior temporal cortex based on the history of viewed objects represented in perirhinal cortex. The model hereby predicts how the cortical What and Where streams cooperate during scene perception, learning, and memory to accumulate evidence over time to drive efficient visual search of familiar scenes. Supported by CELEST, an NSF Science of Learning Center (SBE-0354378), and the SyNAPSE program of the Defense Advanced Research Projects Agency (HR0011-09-3-0001, HR0011-09-C-0011).
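    The following is a minimal, hypothetical Python sketch of the evidence-accumulation idea described in this abstract, not the published ARTSCENE Search equations: a spatial prior over display locations (standing in for Where-stream gist-to-location priming) and an object prior (standing in for What-stream object-history priming) are combined to pick each fixation and are refined after every saccade. All names, priors, and update rules here are illustrative assumptions.

    ```python
    import numpy as np

    N_LOCATIONS = 16
    N_OBJECTS = 8
    rng = np.random.default_rng(0)

    # Stand-ins for learned contextual associations (scene gist -> likely
    # target location; viewed object -> likely target identity). In the
    # model these are learned; here they are random placeholders.
    scene_gist_to_location = rng.dirichlet(np.ones(N_LOCATIONS))
    viewed_obj_to_target = rng.dirichlet(np.ones(N_OBJECTS), size=N_OBJECTS)

    def search(display_objects, target_id, max_saccades=10):
        """Accumulate spatial and object evidence over successive fixations."""
        loc_evidence = scene_gist_to_location.copy()  # first-glance hypothesis
        obj_evidence = np.ones(N_OBJECTS) / N_OBJECTS
        visited = set()
        for saccade in range(max_saccades):
            # Fixate the unvisited location with the highest combined evidence.
            priority = loc_evidence * obj_evidence[display_objects]
            priority[list(visited)] = -np.inf
            loc = int(np.argmax(priority))
            visited.add(loc)
            if display_objects[loc] == target_id:
                return saccade + 1                # target found on this fixation
            # Refine both hypotheses with the newly viewed object.
            obj_evidence = 0.5 * obj_evidence + \
                0.5 * viewed_obj_to_target[display_objects[loc]]
            loc_evidence[loc] = 0.0               # reject the fixated location
            loc_evidence /= loc_evidence.sum()
        return None                               # target not found

    display = rng.integers(0, N_OBJECTS, size=N_LOCATIONS)
    print(search(display, target_id=int(display[5])))
    ```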

    Context-aware Captions from Context-agnostic Supervision

    We introduce an inference technique to produce discriminative context-aware image captions (captions that describe differences between images or visual concepts) using only generic context-agnostic training data (captions that describe a concept or an image in isolation). For example, given images and captions of "siamese cat" and "tiger cat", we generate language that describes the "siamese cat" in a way that distinguishes it from "tiger cat". Our key novelty is that we show how to do joint inference over a language model that is context-agnostic and a listener which distinguishes closely related concepts. We first apply our technique to a justification task, namely to describe why an image contains a particular fine-grained category as opposed to another closely related category of the CUB-200-2011 dataset. We then study discriminative image captioning to generate language that uniquely refers to one of two semantically similar images in the COCO dataset. Evaluations with discriminative ground truth for justification, and human studies for discriminative image captioning, reveal that our approach outperforms baseline generative and speaker-listener approaches for discrimination. Accepted to CVPR 2017 (Spotlight).
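    As a rough illustration of the joint speaker-listener idea, the hedged Python sketch below greedily scores each candidate word by a convex combination of its fluency on the target image and its log-probability margin over the distractor image. The toy log_p_word model, the vocabulary, and the lam weight are assumptions for illustration, not the authors' implementation (the paper performs this inference with beam search over a trained captioner).

    ```python
    import math
    import random

    random.seed(0)
    VOCAB = ["cat", "striped", "pointed", "ears", "fur"]

    # Toy context-agnostic captioner: fixed random log-probabilities per
    # (image, word) pair, standing in for a real pretrained model.
    _toy_scores = {}
    def log_p_word(word, image, prefix):
        key = (image, word)
        if key not in _toy_scores:
            _toy_scores[key] = math.log(random.uniform(0.01, 1.0))
        return _toy_scores[key]

    def discriminative_caption(target_img, distractor_img, length=4, lam=0.7):
        """Greedily build a caption that fits the target and distinguishes it."""
        caption = []
        for _ in range(length):
            def score(w):
                lp_t = log_p_word(w, target_img, caption)      # fits target
                lp_d = log_p_word(w, distractor_img, caption)  # fits distractor
                # lam trades off fluency against discriminativeness.
                return lam * lp_t + (1.0 - lam) * (lp_t - lp_d)
            candidates = [w for w in VOCAB if w not in caption]
            caption.append(max(candidates, key=score))
        return " ".join(caption)

    print(discriminative_caption("siamese_cat.jpg", "tiger_cat.jpg"))
    ```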

    Sit-and-Wait Strategies in Dynamic Visual Search

    The role of memory in visual search has lately become a controversial issue. Horowitz and Wolfe (1998) observed that performance in a visual search task was little affected by whether the stimuli were static or randomly relocated every 111 ms. Because a memory-based mechanism, such as inhibition of return, would be of no use in the dynamic condition, Horowitz and Wolfe concluded that memory is likewise not involved in the static condition. However, Horowitz and Wolfe could not effectively rule out the possibility that observers adopted a different strategy in the dynamic condition than in the static condition. That is, in the dynamic condition observers may have attended to a subregion of the display and waited for the target to appear there (a sit-and-wait strategy). This hypothesis is supported by experimental data showing that performance in their dynamic condition does not differ from performance in another dynamic condition in which observers are forced to adopt a sit-and-wait strategy by being shown only a limited region of the display.
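    The sit-and-wait argument can be made concrete with a back-of-the-envelope simulation: if n items are randomly relocated every 111 ms and the observer monitors a fixed subregion covering k item positions, the target lands in view with probability k/n per relocation, so the expected wait is geometric with mean n/k frames and response time grows linearly with n even though no rejected item is ever remembered. The Python sketch below uses purely illustrative parameters and is not from the paper.

    ```python
    import random

    FRAME_MS = 111  # relocation interval used by Horowitz and Wolfe (1998)

    def sit_and_wait_rt(n_items, k_monitored, trials=10_000):
        """Mean simulated RT for an observer watching k of n item positions."""
        total = 0
        for _ in range(trials):
            frames = 1
            # Each relocation independently places the target uniformly;
            # positions 0..k-1 are the monitored subregion.
            while random.randrange(n_items) >= k_monitored:
                frames += 1
            total += frames * FRAME_MS
        return total / trials

    for n in (8, 12, 16):
        print(n, round(sit_and_wait_rt(n, k_monitored=4)), "ms")
    ```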

    Target absent trials in configural contextual cuing

    In contextual cueing (CC), reaction times to find targets in repeated displays are faster than in displays that have never been seen before. This has been demonstrated using target-distractor configurations, global background colors, naturalistic scenes, and the co-variation of targets with distractors. The majority of CC studies have used displays in which the target is always present. This paper investigates what happens when the target is sometimes absent. Experiment 1 shows that, although configural CC occurs in displays where the target is always present, there is no CC when the target is always absent. Experiment 2 shows that there is no CC when the same spatial layout can be both target present and target absent on different trials. The presence of distractors in locations that contain targets on other trials appears to interfere with CC and even disrupts the expression of previously learned contexts (Experiments 3-5). The results show that it is the target-distractor associations that are important in producing CC and, consistent with a response-selection account, changing the task from orientation judgment to detection removes the CC effect.

    When are abrupt onsets found efficiently in complex visual search? Evidence from multi-element asynchronous dynamic search

    Previous work has found that search principles derived from simple visual search tasks do not necessarily apply to more complex search tasks. Using a Multi-element Asynchronous Dynamic (MAD) visual search task, in which large numbers of stimuli could be moving, stationary, and/or changing in luminance, Kunar and Watson (M. A. Kunar & D. G. Watson, 2011, Visual search in a Multi-element Asynchronous Dynamic (MAD) world, Journal of Experimental Psychology: Human Perception and Performance, Vol. 37, pp. 1017-1031) found that, unlike in previous work, participants missed a higher number of targets, search for moving items was worse than for static items, and there was no benefit for finding targets that showed a luminance onset. In the present research, we investigated why luminance onsets do not capture attention and whether they can ever capture attention in MAD search. Experiment 1 investigated whether blinking stimuli, which abruptly offset for 100 ms before reonsetting (conditions known to produce attentional capture in simpler visual search tasks), captured attention in MAD search. Experiments 2-5 investigated whether giving participants advance knowledge of, and preexposure to, the blinking cues produced efficient search for blinking targets, and Experiments 6-9 investigated whether unique luminance onsets, unique motion, or unique stationary items captured attention. The results showed that luminance onsets captured attention in MAD search only when they were unique, consistent with a top-down unique-feature hypothesis.
