
    What a Handful! Electrophysiological Characterization of Sensory and Cognitive Biases on Spatial Attention and Visual Processing

    Attention uses sensory inputs and goals to select information from our environment. The monkey electrophysiological literature demonstrates that visuo-tactile bimodal neurons (which respond to visual and tactile stimuli presented on or near the hand) facilitate multisensory integration, and human behavioral studies show that hand position and hand function bias visual attention. Event-related potentials (ERPs) reveal the cortical dynamics coordinating visual inputs, body position, and action goals: early, sensory ERPs (the N1 component) index multisensory integration, while later, cognitive ERPs (the P3 component) reflect task-related processing.

    Study 1 investigates a discrepancy between the monkey and human literatures: monkey studies demonstrate bimodal neuron responses distributed equidistantly around the whole hand, whereas human studies demonstrate an attentional bias for grasping space. In a visual detection paradigm, participants positioned their hand so that target and non-target stimuli appeared near either the palm or the back of the hand while ERPs were recorded. N1 amplitudes did not differ between the Palm and Back conditions, but P3 components revealed greater target vs. non-target differentiation in the Palm condition. These results suggest that cortical timing underlies the difference between grasping-space and whole-hand biases: early sensory processing does not differentiate by hand function, but later cognitive processing does when stimuli are discriminated for action.

    Study 2 investigates whether proprioceptive inputs alone facilitate visual processing. In a visual detection paradigm, participants viewed stimuli presented between occluders that blocked view of a hand positioned either near or far from the stimuli. N1 amplitudes were similar for the near and far conditions, but the P3 target vs. non-target difference was accentuated in the near condition, indicating that proprioceptive effects emerge later in processing. Together, these ERP findings reveal the cortical dynamics underlying hand-position effects on vision.
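    The condition comparisons above rest on a standard ERP measure: the mean amplitude within a component's latency window, contrasted across conditions. The following is a minimal sketch of that computation using only NumPy; the window bounds (150–200 ms for N1, 300–500 ms for P3), the simulated data, and the Palm-condition labels are illustrative assumptions, not the study's reported parameters or results.

    ```python
    import numpy as np

    def mean_window_amplitude(epochs, times, tmin, tmax):
        """Mean amplitude of epoched EEG within a latency window.

        epochs : array (n_trials, n_times), single-channel epochs in microvolts
        times  : array (n_times,), sample times in seconds from stimulus onset
        """
        mask = (times >= tmin) & (times <= tmax)
        return epochs[:, mask].mean()  # grand mean over trials and window samples

    # Hypothetical latency windows (assumptions for illustration)
    N1_WINDOW = (0.150, 0.200)  # early sensory component
    P3_WINDOW = (0.300, 0.500)  # later cognitive component

    # Simulated stand-in data: 40 trials x 256 samples per trial type
    rng = np.random.default_rng(0)
    times = np.linspace(-0.1, 0.6, 256)
    palm_target = rng.normal(0.0, 2.0, (40, times.size))
    palm_nontarget = rng.normal(0.0, 2.0, (40, times.size))

    # P3 target vs. non-target differentiation within one condition,
    # the quantity compared across Palm and Back conditions above
    p3_diff = (mean_window_amplitude(palm_target, times, *P3_WINDOW)
               - mean_window_amplitude(palm_nontarget, times, *P3_WINDOW))
    print(f"Palm condition, P3 target minus non-target: {p3_diff:.2f} uV")
    ```

    In practice the same per-participant window means would feed a repeated-measures test (e.g., an ANOVA over condition and trial type) rather than a single grand mean, but the windowed averaging shown here is the core of the measure.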

    Holistic Processing for Bodies and Body Parts: New Evidence from Stereoscopic Depth Manipulations

    Although holistic processing has been documented extensively for upright faces, it is unclear whether it occurs for other visual categories with more extensive substructure, such as body postures. Like faces, body postures have high social relevance, but they differ in having fine-grained organization not only of basic parts (e.g., the arm) but also of subparts (e.g., the elbow, wrist, and hand). To compare holistic processing for whole bodies and body parts, we employed a novel stereoscopic depth manipulation that creates either the percept of a whole body occluded by a set of bars or that of segments of a body floating in front of a background. Despite sharing low-level visual properties, only the stimulus perceived as being behind the bars should be holistically “filled in” via amodal completion. In two experiments, we tested for better identification of individual body parts within the context of a body versus in isolation. Consistent with previous findings, recognition of body parts was better in the context of a whole body when the body was amodally completed behind the occluders. However, when the same bodies were perceived as floating in strips, performance was significantly worse and did not differ significantly from that for amodally completed parts, supporting holistic processing of body postures. Intriguingly, performance was worst for parts in the frontal depth condition, suggesting that these effects may extend from gross body organization to a more local level. These results provide suggestive evidence that holistic representations may not be “all-or-none” but may instead also operate on body regions of more limited spatial extent.
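    The behavioral comparison in this abstract reduces to per-condition identification accuracy and a holistic “context advantage” (part identification within an amodally completed whole minus identification of the same part in isolation). Here is a minimal sketch of that computation on simulated trial data; every condition label and accuracy value is a hypothetical stand-in, not the study's data.

    ```python
    import numpy as np

    # Simulated per-trial outcomes (1 = correct part identification);
    # condition names and success rates are assumptions for illustration.
    rng = np.random.default_rng(1)
    conditions = {
        "whole_behind_occluders": rng.binomial(1, 0.85, 60),  # amodally completed body
        "body_in_strips": rng.binomial(1, 0.75, 60),          # segments floating in front
        "isolated_parts": rng.binomial(1, 0.74, 60),
    }

    accuracy = {name: trials.mean() for name, trials in conditions.items()}

    # Holistic (context) advantage: better part identification within an
    # amodally completed whole body than for the same parts in isolation
    advantage = accuracy["whole_behind_occluders"] - accuracy["isolated_parts"]

    for name, acc in accuracy.items():
        print(f"{name}: {acc:.2f}")
    print(f"holistic context advantage: {advantage:+.2f}")
    ```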