20 research outputs found

    Dissociating goal-directed and stimulus-driven determinants in attentional capture

    Postprint. The 7th Asia-Pacific Conference on Vision (APCV 2011), Hong Kong, 15-18 July 2011. In i-Perception, 2011, v. 2, n. 4, p. 32.

    Size Matters: Large Objects Capture Attention in Visual Search

    Can objects or events ever capture one's attention in a purely stimulus-driven manner? A recent review of the literature set out the criteria required to demonstrate stimulus-driven attentional capture independent of goal-directed influences, and concluded that no published study had satisfied those criteria. Here, visual search experiments assessed whether an irrelevantly large object can capture attention. Capture of attention by this static visual feature was found. The results suggest that a large object can indeed capture attention in a stimulus-driven manner, independent of display-wide features of the task that might encourage a goal-directed bias for large items. It is concluded that these results are either consistent with the stimulus-driven criteria published previously or, alternatively, consistent with a flexible, goal-directed mechanism of saliency detection.

    Explaining efficient search for conjunctions of motion and form: Evidence from negative color effects

    Dent, Humphreys, and Braithwaite (2011) showed substantial costs to search when a moving target shared its color with a group of ignored static distractors. The present study further explored the conditions under which such costs to performance occur. Experiment 1 tested whether the negative color-sharing effect was specific to cases in which search showed a highly serial pattern. The results showed that the negative color-sharing effect persisted in the case of a target defined as a conjunction of movement and form, even when search was highly efficient. In Experiment 2, the ease with which participants could find an odd-colored target amongst a moving group was examined. Participants searched for a moving target amongst moving and stationary distractors. In Experiment 2A, participants performed a highly serial search through a group of similarly shaped moving letters. Performance was much slower when the target shared its color with a set of ignored static distractors. The exact same displays were used in Experiment 2B; however, participants now responded "present" for targets that shared the color of the static distractors. The same targets that had previously been difficult to find were now found efficiently. The results are interpreted in a flexible framework for attentional control. Targets that are linked with irrelevant distractors by color tend to be ignored. However, this cost can be overridden by top-down control settings. © 2014 Psychonomic Society, Inc.

    Object-based attention is "turned off" by top-down control

    No attentional capture for target detection: it occurs exclusively in compound search

    The 6th Asia-Pacific Conference on Vision (APCV 2010), Taipei, Taiwan, 23-26 July 2010. In Vision: the Journal of the Vision Society of Japan, 2010, v. 22 suppl., p. 33, abstract no. 21.0.
    It has been believed that simple visual features are detected preattentively. If this description is strictly true, one should not expect attentional capture, in which attention is driven away from the target by a salient distractor, to impair performance. Consistent with this, attentional capture is generally reported only in compound search, which requires attention to be focused on the target in order to judge the response. It has recently been reported, however, that attentional capture can be produced in detection by mixing distractor trials with no-distractor trials. In this study, using a similar setting, we measured attentional capture in terms of accuracy. If detection requires attention, attentional capture should render search less accurate; accuracy should not, however, be influenced by other factors, such as slowed response production. We presented brief search displays whose duration was set so that accuracy was near 0.8. Results showed attentional capture in compound search but not in detection. Therefore, attention does not enhance the registration of a simple feature in the same way that it enhances compound search performance. The present results are consistent with the proposal (Chan & Hayward, 2009, JEP:HPP) that feature detection and localization involve distinct search processes.
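
    The abstract says display duration was calibrated so that accuracy was near 0.8, but not how. A common psychophysical approach is an adaptive staircase; a 3-down/1-up rule converges near 79% correct, close to that target. Below is a minimal, hypothetical Python sketch of such a calibration; the function names, parameters, and toy observer are illustrative assumptions, not the authors' procedure.

    import random

    def staircase_duration(trial_correct, n_trials=60,
                           start_ms=200.0, step_ms=10.0, floor_ms=20.0):
        """3-down/1-up staircase: shorten the display after three
        consecutive correct trials, lengthen it after any error.
        Converges near 79.4% correct, close to the 0.8 target."""
        duration = start_ms
        streak = 0
        for _ in range(n_trials):
            if trial_correct(duration):   # run one trial at this duration
                streak += 1
                if streak == 3:           # make the task harder
                    duration = max(floor_ms, duration - step_ms)
                    streak = 0
            else:                         # make the task easier
                duration += step_ms
                streak = 0
        return duration

    # Toy observer whose accuracy falls as the display gets briefer;
    # a purely illustrative stand-in for a human participant.
    def toy_observer(duration_ms):
        p_correct = min(0.98, 0.5 + duration_ms / 250.0)
        return random.random() < p_correct

    print(staircase_duration(toy_observer))  # duration yielding roughly 80% accuracy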

    Electropalatographic pattern of Cantonese speech

    Feature Integration Theory Revisited: Dissociating Feature Detection and Attentional Guidance in Visual Search

    In feature integration theory (FIT; A. Treisman & S. Sato, 1990), feature detection is driven by independent dimensional modules, and other searches are driven by a master map of locations that integrates dimensional information into salience signals. Although recent theoretical models have largely abandoned this distinction, some observed results are difficult to explain in its absence. The present study measured dimension-specific performance during detection and localization, tasks that require operation of dimensional modules and the master map, respectively. Results showed a dissociation between tasks in terms of both dimension-switching costs and cross-dimension attentional capture, reflecting a dimension-specific nature for detection tasks and a dimension-general nature for localization tasks. In a feature-discrimination task, results precluded an explanation based on response mode. These results are interpreted to support FIT's postulation that different mechanisms are involved in parallel and focal attention searches. This indicates that the FIT architecture should be adopted to explain the current results and that a variety of visual attention findings can be addressed within this framework. © 2009 American Psychological Association.
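
    As a minimal sketch of the two-stage architecture the abstract describes, the toy Python below separates detection (each dimensional module reads out its own pooled activity, with no location information) from localization (dimensional signals are summed into a master salience map whose peak gives the target location). The map contents, threshold, and function names are illustrative assumptions, not the authors' model.

    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical dimensional feature maps: local feature contrast per
    # display location, one independent map per dimension.
    color_map = rng.random((8, 8))
    orientation_map = rng.random((8, 8))

    def detect(feature_map, threshold=0.9):
        """Dimension-specific detection: signal target presence from a
        single module's pooled activity, without localizing it."""
        return feature_map.max() > threshold

    def localize(feature_maps):
        """Dimension-general localization: sum the dimensional signals
        into a master salience map and take its peak location."""
        master = sum(feature_maps)
        return np.unravel_index(master.argmax(), master.shape)

    print(detect(color_map), detect(orientation_map))
    print(localize([color_map, orientation_map]))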

    Sensitivity to attachment, alignment, and contrast polarity variation in local perceptual grouping

    A number of leading theories (e.g., Grossberg & Mingolla, 1985; Kellman & Shipley, 1991; Rensink & Enns, 1995) commonly assume that perceptual grouping by contour alignment occurs preattentively across reversing contrast polarity elements. We examined this notion in seven visual search experiments. We found that only grouping by attachment supported preattentive visual search and that grouping by contour alignment required attention in order to operate. Both attachment grouping and grouping by contour alignment were sensitive to contrast reversals. Further results showed that contour alignment was a strong grouping cue only among elements with the same contrast sign but that it did not facilitate grouping across reversing contrast. These results suggest that grouping by contour alignment operates only on inputs of consistent contrast polarity. © 2009 The Psychonomic Society, Inc.

    Development of Cantonese Speech and tone viewer
