    Imaging when acting: picture but not word cues induce action-related biases of visual attention

    In line with the Theory of Event Coding (Hommel et al., 2001a), action planning has been shown to affect perceptual processing, an effect that has been attributed to a so-called intentional weighting mechanism (Wykowska et al., 2009; Memelink and Hommel, 2012), whose functional role is to provide information for open parameters of online action adjustment (Hommel, 2010). The aim of this study was to test whether different types of action representations induce intentional weighting to various degrees. To meet this aim, we introduced a paradigm in which participants performed a visual search task while preparing to grasp or to point. The to-be-performed movement was signaled either by a picture of the required action or by a word cue. We reasoned that picture cues might trigger a more concrete action representation that would be more likely to activate the intentional weighting of perceptual dimensions that provide information for online action control. In contrast, word cues were expected to trigger a more abstract action representation that would be less likely to induce intentional weighting. In two experiments, preparing for an action facilitated the processing of targets in an unrelated search task if they differed from distractors on a dimension that provided information for online action control. As predicted, however, this effect was observed only if action preparation was signaled by picture cues, not if it was signaled by word cues. We conclude that picture cues are more efficient than word cues in activating the intentional weighting of perceptual dimensions, presumably by specifying not only invariant characteristics of the planned action but also the dimensions of action-specific parameters.

    The electrophysiological locus of the redundant target effect on visual discrimination in a dual singleton search task

    Task performance can be enhanced by the addition of extra information to a visual environment in which observers search for a target stimulus. One example of such information is the repetition of the searched-for stimulus, a form of target redundancy. In the present study, the electrophysiological correlates of such target redundancy were investigated in a visual discrimination task. Observers were asked to look for targets in displays that always contained two salient singletons (tilted lines; targets and/or nontargets) against a background of vertical distractor lines. Displays contained either two redundant targets, two nontargets, or a single target and nontarget, at opposite sides of the visual field. Search was most efficient when two targets were shown, and effects of target redundancy were observed on the event-related potential as well. Target redundancy modulated the anterior N2, and the P3 in both an early and a late window. The results are compatible with models of visual attention that support a relatively late (i.e., central or decisional) locus of redundancy processing.

    Electrophysiological correlates of early attentional feature selection and distractor filtering

    Using electrophysiology, the attentional functions of target selection and distractor filtering were investigated during visual search. Observers searched for multiple tilted line segments amidst vertical distractors. In different conditions, observers were looking either for a specific line orientation ("feature-based" selection) or for any tilted line ("salience-based" selection). The search array could contain both left- and rightward tilted lines simultaneously (requiring spatial filtering) or only one line type (no filtering). The amplitude of the P1 event-related potential component was reduced during feature-based selection, compared to salience-based selection. The N1 showed a similar effect, at least when filtering was required. Amplitudes were also somewhat reduced when competing nontarget stimuli required filtering. Interactions between selection and filtering became stronger on the N2a and P3. When both feature-based selection and filtering were required, N2a amplitude was highest, and P3 amplitude was lowest. The results support an early locus of feature-based attentional selection in multi-item search.