To control our actions efficiently, our brain represents our body based on a combination of visual and proprioceptive cues,
weighted according to how reliable, that is, how precise, each modality is in a given context. However, perceptual
experiments in other modalities suggest that the weights assigned to sensory cues are also modulated “top-down” by
attention. Here, we asked whether during action, attention can likewise modulate the weights (i.e., precision) assigned to
visual versus proprioceptive information about body position. Participants controlled a virtual hand (VH) via a data glove,
matching either the VH or their (unseen) real hand (RH) movements to a target, and thus adopting a “visual” or
“proprioceptive” attentional set, under varying levels of visuo-proprioceptive congruence and visibility. Functional magnetic
resonance imaging (fMRI) revealed increased activation of the multisensory superior parietal lobe (SPL) during the VH task
and increased activation of the secondary somatosensory cortex (S2) during the RH task. Dynamic causal modeling (DCM)
showed that these activity changes were the result of selective, diametrical gain modulations in the primary visual cortex
(V1) and S2. These results suggest that endogenous attention can balance the gain of visual versus proprioceptive brain
areas, thus contextualizing their influence on multisensory areas representing the body for action.
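For context, in the conventional bilinear DCM for fMRI (assumed here; the abstract does not specify the model variant), such gain modulations are expressed as condition-dependent changes in connectivity via the neuronal state equation

$$\dot{z} \;=\; \Big(A + \sum_j u_j\, B^{(j)}\Big) z + C u,$$

where $z$ is neuronal activity in the modeled regions (here, e.g., V1, S2, and SPL), $A$ the endogenous connectivity, $C$ the driving inputs, and $B^{(j)}$ the modulation of connections by experimental input $u_j$ (here, the attentional set). A selective gain change in V1 or S2 would then correspond to a modulation of the respective self-connection, i.e., a diagonal element of $B^{(j)}$.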