Fronto-parietal brain responses to visuotactile congruence in an anatomical reference frame
Spatially and temporally congruent visuotactile stimulation of a fake hand
together with one’s real hand may result in an illusory self-attribution of
the fake hand. Although this illusion relies on a representation of the two
touched body parts in external space, there is tentative evidence that, for
the illusion to occur, the seen and felt touches also need to be congruent in
an anatomical reference frame. We used functional magnetic resonance imaging
and a somatotopical, virtual reality-based setup to isolate the neuronal basis
of such a comparison. Participants’ index or little finger was synchronously
touched with the index or little finger of a virtual hand, under congruent or
incongruent orientations of the real and virtual hands. The left ventral
premotor cortex responded significantly more strongly to visuotactile co-
stimulation of the same versus different fingers of the virtual and real hand.
Conversely, the left anterior intraparietal sulcus responded significantly
more strongly to co-stimulation of different versus same fingers. Both
responses were independent of hand orientation congruence and of spatial
congruence of the visuotactile stimuli. Our results suggest that fronto-
parietal areas previously associated with multisensory processing within
peripersonal space and with tactile remapping evaluate the congruence of
visuotactile stimulation on the body according to an anatomical reference
frame
(Dis-)attending to the body : action and self-experience in the active inference framework
Endogenous attention is crucial and beneficial for learning, selecting, and supervising actions. However, deliberately attending to action execution usually comes at the cost of decreased smoothness and slower performance, often severely impairs normal functioning, and in the worst case may result in pathological behavior and experience, as in schizophrenic hyperreflexivity. These ambiguous modulatory effects of self-directed attention have been examined on phenomenological, computational, and implementational levels of description; a recent formalization within an active inference framework aims to accommodate all of these aspects. Here, I examine the active inference account of motor control as enabled by attentional modulation based on expected precisions of prediction errors in a brain's hierarchical generative model of the environment. The implications of active inference fit well with a range of empirical results, they resonate well with ideomotor accounts of motor control, and they also tentatively reflect many insights from phenomenological analysis of the "lived body". A particular strength of active inference is its hierarchical account of motor control in terms of adaptive behavior driven by the imperative to maintain the organism's states within unsurprising boundaries. Phenomena ranging from the reflex arc to intentional, goal-directed action and the experience of oneself as an embodied agent are thus proposed to rely on the same mechanisms operating universally throughout the brain's hierarchical generative model. However, while the explanation of movement production and sensory attenuation in terms of low-level attentional modulation is quite elegant on the active inference view, some questions are left open by its extension to higher levels of action control and the accompanying phenomenology of, for example, volition, effort, or agency.
I suggest that conceptual guidance from recent accounts of phenomenal self- and world-modeling may help develop active inference into an interdisciplinary framework for investigating embodied agentive self-experience
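The attentional mechanism this abstract builds on, precision weighting of prediction errors, can be illustrated with a minimal numerical sketch (a toy illustration, not the model discussed in the paper): a single belief is updated by gradient descent on precision-weighted prediction errors, and "attending" to the senses means raising their expected precision.

```python
# Toy sketch of precision-weighted belief updating. All names and numbers
# here are illustrative assumptions, not taken from the paper.

def update_belief(mu, obs, prior, pi_sensory, pi_prior, lr=0.1, steps=100):
    """Gradient descent on precision-weighted prediction errors."""
    for _ in range(steps):
        eps_s = obs - mu        # sensory prediction error
        eps_p = mu - prior      # deviation from the prior expectation
        mu += lr * (pi_sensory * eps_s - pi_prior * eps_p)
    return mu

prior, obs = 0.0, 1.0
low_attn = update_belief(0.0, obs, prior, pi_sensory=0.5, pi_prior=1.0)
high_attn = update_belief(0.0, obs, prior, pi_sensory=4.0, pi_prior=1.0)
# Higher expected sensory precision moves the belief further toward the
# evidence: the fixed point is the precision-weighted average of prior
# and observation.
```

With equal learning dynamics, only the precision ratio decides how strongly the evidence dominates the prior, which is the formal sense in which attention modulates inference here.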
Enacting Proprioceptive Predictions in the Rubber Hand Illusion
In the “rubber hand illusion,” the participant sees a displaced fake hand being touched congruently with her unseen real hand. This seems to invoke inference of an “illusory” common cause for visual, tactile, and proprioceptive sensations, as evident from a perceived embodiment of the fake hand and the perception of one’s unseen hand location closer toward the position of the fake hand—the so-called “proprioceptive drift.” Curiously, participants may sometimes move their hand in the direction of the fake hand (Asai, 2015). While this could easily be explained as participants actively trying to align the real and fake hands to experience a stronger illusion, they are not aware of these movements (cf. Abdulkarim and Ehrsson, 2018). So there may be a better explanation for this observation than that participants were “cheating.” In their recent article, Lanillos et al. (2021) show that the unintentional execution of arm movement forces during a virtual reality-based version of the rubber hand illusion—which the authors call “active drift”—can be reproduced by a computational model based on the active inference framework
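The "active drift" idea can be caricatured in a few lines (a hypothetical toy loop, not Lanillos et al.'s actual model): perceptual inference pulls the estimated hand position toward the seen fake hand, and action then moves the unseen real hand to cancel the remaining proprioceptive prediction error.

```python
# Hypothetical toy simulation of "active drift". Positions are scalars;
# all parameters are made up for illustration.

def simulate_active_drift(real_hand, fake_hand, pi_vis, pi_prop,
                          lr=0.05, steps=200):
    """Perception and action both minimize precision-weighted errors."""
    mu = real_hand  # current estimate of hand position
    for _ in range(steps):
        eps_vis = fake_hand - mu    # visual error (seen fake hand)
        eps_prop = real_hand - mu   # proprioceptive error (felt real hand)
        # perceptual inference: update the position estimate
        mu += lr * (pi_vis * eps_vis + pi_prop * eps_prop)
        # active inference: move the unseen real hand to reduce the
        # proprioceptive prediction error
        real_hand += lr * pi_prop * (mu - real_hand)
    return mu, real_hand

mu, hand = simulate_active_drift(real_hand=0.0, fake_hand=1.0,
                                 pi_vis=2.0, pi_prop=1.0)
# The unseen real hand gradually drifts toward the fake hand's position,
# without any explicit movement goal being represented.
```

The point of the sketch is that the drift falls out of error minimization itself: no "cheating" intention is needed, only an action loop that treats the biased estimate as the state to be fulfilled.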
A Crucial Role of the Frontal Operculum in Task-Set Dependent Visuomotor Performance Monitoring
For adaptive goal-directed action, the brain needs to monitor action performance and detect errors. The corresponding information may be conveyed via different sensory modalities; for instance, visual and proprioceptive body position cues may inform about current manual action performance. Furthermore, contextual factors such as the current task set may also determine the relative importance of each sensory modality for action guidance. Here, we analyzed human behavioral, functional magnetic resonance imaging (fMRI), and magnetoencephalography (MEG) data from two virtual reality-based hand–target phase-matching studies to identify the neuronal correlates of performance monitoring and error processing under instructed visual or proprioceptive task sets. Our main result was a general, modality-independent response of the bilateral frontal operculum (FO) to poor phase-matching accuracy, as evident from increased BOLD signal and increased source-localized gamma power. Furthermore, functional connectivity of the bilateral FO to the right posterior parietal cortex (PPC) increased under a visual versus proprioceptive task set. These findings suggest that the bilateral FO generally monitors manual action performance; and, moreover, that when visual action feedback is used to guide action, the FO may signal an increased need for control to visuomotor regions in the right PPC following errors
Minimal self-models and the free energy principle
The term “minimal phenomenal selfhood” (MPS) describes the basic, pre-reflective experience of being a self (Blanke and Metzinger, 2009). Theoretical accounts of the minimal self have long recognized the importance and the ambivalence of the body as both part of the physical world, and the enabling condition for being in this world (Gallagher, 2005a; Grafton, 2009). A recent account of MPS (Metzinger, 2004a) centers on the consideration that minimal selfhood emerges as the result of basic self-modeling mechanisms, thereby being founded on pre-reflective bodily processes. The free energy principle (FEP; Friston, 2010) is a novel unified theory of cortical function built upon the imperative that self-organizing systems entail hierarchical generative models of the causes of their sensory input, which are optimized by minimizing free energy as an approximation of the log-likelihood of the model. The implementation of the FEP via predictive coding mechanisms and in particular the active inference principle emphasizes the role of embodiment for predictive self-modeling, which has been appreciated in recent publications. In this review, we provide an overview of these conceptions and illustrate thereby the potential power of the FEP in explaining the mechanisms underlying minimal selfhood and its key constituents, multisensory integration, interoception, agency, perspective, and the experience of mineness. We conclude that the conceptualization of MPS can be well mapped onto a hierarchical generative model furnished by the FEP and may constitute the basis for higher-level, cognitive forms of self-referral, as well as the understanding of other minds.
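The free energy quantity at the heart of the FEP can be made concrete with a textbook two-state example (my own illustration, not from the review): variational free energy F = E_q[ln q(s) − ln p(o, s)] decomposes as −ln p(o) + KL[q(s) ‖ p(s|o)], so it upper-bounds surprise and is minimized exactly when the approximate posterior q equals the true posterior.

```python
import math

# Textbook sketch of variational free energy for a two-state generative
# model; the prior and likelihood values are arbitrary illustrations.

def free_energy(q, prior, lik):
    """F = sum_s q(s) * (ln q(s) - ln p(o, s)), with p(o, s) = p(o|s) p(s).
    q: approximate posterior over hidden states; lik: p(o|s) for the
    observed outcome o; prior: p(s)."""
    return sum(qi * (math.log(qi) - math.log(li * pi))
               for qi, li, pi in zip(q, lik, prior))

prior = [0.5, 0.5]
lik = [0.9, 0.1]                                  # p(o|s) for the observed o
p_o = sum(l * p for l, p in zip(lik, prior))      # model evidence p(o)
posterior = [l * p / p_o for l, p in zip(lik, prior)]
surprise = -math.log(p_o)

# At the exact posterior, F equals surprise (-ln p(o)); any other q
# yields a strictly larger F, by the KL divergence between q and p(s|o).
```

Minimizing F with respect to q therefore approximates Bayesian inference, which is the sense in which the review speaks of free energy as an approximation to the model's log-likelihood.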
Attentional Modulation of Vision Versus Proprioception During Action
To control our actions efficiently, our brain represents our body based on a combination of visual and proprioceptive cues,
weighted according to how (un)reliable—how precise—each respective modality is in a given context. However, perceptual
experiments in other modalities suggest that the weights assigned to sensory cues are also modulated “top-down” by
attention. Here, we asked whether during action, attention can likewise modulate the weights (i.e., precision) assigned to
visual versus proprioceptive information about body position. Participants controlled a virtual hand (VH) via a data glove,
matching either the VH or their (unseen) real hand (RH) movements to a target, and thus adopting a “visual” or
“proprioceptive” attentional set, under varying levels of visuo-proprioceptive congruence and visibility. Functional magnetic
resonance imaging (fMRI) revealed increased activation of the multisensory superior parietal lobe (SPL) during the VH task
and increased activation of the secondary somatosensory cortex (S2) during the RH task. Dynamic causal modeling (DCM)
showed that these activity changes were the result of selective, diametrical gain modulations in the primary visual cortex
(V1) and the S2. These results suggest that endogenous attention can balance the gain of visual versus proprioceptive brain
areas, thus contextualizing their influence on multisensory areas representing the body for action
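The reliability-weighted combination of cues described above is, in its simplest form, a precision-weighted average; the following sketch (with made-up numbers, not the study's data) shows how a top-down gain change shifts the fused body-position estimate toward the attended modality.

```python
# Standard precision-weighted cue combination; the positions and
# precision values below are illustrative assumptions only.

def fuse(x_vis, pi_vis, x_prop, pi_prop):
    """Precision-weighted average of visual and proprioceptive cues."""
    return (pi_vis * x_vis + pi_prop * x_prop) / (pi_vis + pi_prop)

x_vis, x_prop = 10.0, 12.0               # conflicting position cues (e.g., cm)
neutral = fuse(x_vis, 1.0, x_prop, 1.0)        # equal precisions
visual_set = fuse(x_vis, 3.0, x_prop, 1.0)     # visual gain boosted
# Attending to vision pulls the combined estimate toward the visual cue.
```

Under this reading, the diametrical gain modulations in V1 and S2 reported above correspond to changing the pi terms, which re-balances the cues' influence on downstream multisensory areas.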
Active inference under visuo-proprioceptive conflict: Simulation and empirical results
It has been suggested that the brain controls hand movements via internal models that rely on visual and proprioceptive cues about the state of the hand. In active inference formulations of such models, the relative influence of each modality on action and perception is determined by how precise (reliable) it is expected to be. The 'top-down' affordance of expected precision to a particular sensory modality is associated with attention. Here, we asked whether increasing attention to (i.e., the precision of) vision or proprioception would enhance performance in a hand-target phase matching task, in which visual and proprioceptive cues about hand posture were incongruent. We show that in a simple simulated agent, based on predictive coding formulations of active inference, increasing the expected precision of vision or proprioception improved task performance (target matching with the seen or felt hand, respectively) under visuo-proprioceptive conflict. Moreover, we show that this formulation captured the behaviour and self-reported attentional allocation of human participants performing the same task in a virtual reality environment. Together, our results show that selective attention can balance the impact of (conflicting) visual and proprioceptive cues on action, rendering attention a key mechanism for a flexible body representation for action
Cortical beta oscillations reflect the contextual gating of visual action feedback
In sensorimotor integration, the brain needs to decide how its predictions should accommodate novel evidence by 'gating' sensory data depending on the current context. Here, we examined the oscillatory correlates of this process by recording magnetoencephalography (MEG) data during a new task requiring action under intersensory conflict. We used virtual reality to decouple visual (virtual) and proprioceptive (real) hand postures during a task in which the phase of grasping movements tracked a target (in either modality). Thus, we rendered visual information either task-relevant or a (to-be-ignored) distractor. Under visuo-proprioceptive incongruence, occipital beta power decreased (relative to congruence) when vision was task-relevant but increased when it had to be ignored. Dynamic causal modelling (DCM) revealed that this interaction was best explained by diametrical, task-dependent changes in visual gain. These novel results suggest a crucial role for beta oscillations in the contextual gating (i.e., gain or precision control) of visual vs proprioceptive action feedback, depending on concurrent behavioral demands
