
    The centroid paradigm: Quantifying feature-based attention in terms of attention filters

    This paper elaborates a recent conceptualization of feature-based attention in terms of attention filters (Drew et al., Journal of Vision, 10(10):20, 1-16, 2010) into a general-purpose centroid-estimation paradigm for studying feature-based attention. An attention filter is a brain process, initiated by a participant in the context of a task requiring feature-based attention, that operates broadly across space to modulate the relative effectiveness with which different features in the retinal input influence performance. This paper describes an empirical method for quantitatively measuring attention filters. The method uses a "statistical summary representation" (SSR) task in which the participant strives to mouse-click the centroid of a briefly flashed cloud composed of items of different types (e.g., dots of different luminances or sizes), weighting some types of items more strongly than others. The target weights for the different item types are varied across attention conditions, and the weights that the item types actually exert on the participant's responses in a given condition are recovered by simple linear regression. The paradigm is remarkably powerful because, on each trial, it obtains information about the relative effectiveness of all features in the display, target and distractor features alike, and because the participant's response is a continuous variable in each of two dimensions rather than a simple binary choice as in most previous paradigms. As a result, estimating an attention filter requires an order of magnitude fewer trials than are needed to investigate much simpler concepts in typical psychophysical attention paradigms.
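
    The weight-recovery step lends itself to a compact illustration. Below is a minimal sketch (not the authors' code) of recovering attention-filter weights by simple linear regression, as the abstract describes: each simulated trial yields one centroid per item type, the response is modeled as a weighted combination of those centroids plus noise, and least squares recovers the weights. The simulation parameters, variable names, and noise model are all illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(0)
    n_trials, n_types, items_per_type = 100, 3, 8
    true_weights = np.array([0.6, 0.3, 0.1])  # hypothetical attention filter

    # Per-type centroids of each trial's dot cloud, in (x, y) screen coordinates
    X = rng.uniform(0.0, 1.0, size=(n_trials, n_types, items_per_type, 2)).mean(axis=2)

    # Simulated mouse-click responses: weighted combination of per-type
    # centroids plus motor noise (an assumed response model)
    Y = np.einsum('k,tkd->td', true_weights, X) + rng.normal(0.0, 0.01, (n_trials, 2))

    # Stack the x and y response dimensions so one regression uses both
    A = np.vstack([X[:, :, 0], X[:, :, 1]])   # predictors: (2*n_trials, n_types)
    b = np.concatenate([Y[:, 0], Y[:, 1]])    # responses:  (2*n_trials,)
    w_hat, *_ = np.linalg.lstsq(A, b, rcond=None)
    print(w_hat / w_hat.sum())  # recovered (normalized) attention-filter weights

    With enough trials, w_hat approaches the weights that actually drove the responses, which is the quantity the paper calls the attention filter.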

    Study of rare-earth minerals by neutron absorption (Étude des minéraux de terres rares par absorption neutronique)

    Analogous intermediate shape coding in vision and touch

    We recognize, understand, and interact with objects through both vision and touch. Conceivably, these two sensory systems encode object shape in similar ways, which could facilitate cross-modal communication. To test this idea, we studied single neurons in macaque monkey intermediate visual (area V4) and somatosensory (area SII) cortex, using matched shape stimuli. We found similar patterns of shape sensitivity, characterized by tuning for curvature direction. These parallel tuning patterns imply analogous shape-coding mechanisms in intermediate visual and somatosensory cortex.
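
    The reported tuning for curvature direction can be pictured with a standard circular tuning model. The following is a minimal sketch assuming a von Mises tuning curve, a common choice for circular variables; the abstract does not specify a functional form, and the firing rates here are simulated, not data from the study.

    import numpy as np
    from scipy.optimize import curve_fit

    # Hypothetical von Mises tuning curve over curvature direction theta
    # (radians); the paper reports direction tuning but not this exact form.
    def von_mises_tuning(theta, baseline, gain, kappa, pref_dir):
        return baseline + gain * np.exp(kappa * (np.cos(theta - pref_dir) - 1.0))

    # Simulated responses of a neuron preferring upward-pointing curvature
    directions = np.linspace(0.0, 2.0 * np.pi, 16, endpoint=False)
    rates = von_mises_tuning(directions, 5.0, 20.0, 2.0, np.pi / 2)
    rates += np.random.default_rng(1).normal(0.0, 1.0, size=rates.shape)

    # Fit the tuning model and read off the preferred curvature direction
    params, _ = curve_fit(von_mises_tuning, directions, rates, p0=[5.0, 15.0, 1.0, np.pi])
    print("preferred curvature direction (rad):", params[3] % (2.0 * np.pi))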