    Multisensory integration across exteroceptive and interoceptive domains modulates self-experience in the rubber-hand illusion

    Identifying with a body is central to being a conscious self. The now classic “rubber hand illusion” demonstrates that the experience of body ownership can be modulated by manipulating the timing of exteroceptive (visual and tactile) body-related feedback. Moreover, the strength of this modulation is related to individual differences in sensitivity to internal bodily signals (interoception). However, the interaction of exteroceptive and interoceptive signals in determining the experience of body ownership within an individual remains poorly understood. Here, we demonstrate that this depends on the online integration of exteroceptive and interoceptive signals by implementing an innovative “cardiac rubber hand illusion” that combined computer-generated augmented reality with feedback of interoceptive (cardiac) information. We show that both subjective and objective measures of virtual-hand ownership are enhanced by cardio-visual feedback in time with the actual heartbeat, as compared to asynchronous feedback. We further show that these measures correlate with individual differences in interoceptive sensitivity, and are also modulated by the integration of proprioceptive signals instantiated using real-time visual remapping of finger movements to the virtual hand. Our results demonstrate that interoceptive signals directly influence the experience of body ownership via multisensory integration, and they lend support to models of conscious selfhood based on interoceptive predictive coding.
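
    As a concrete illustration of the timing manipulation described above, the following Python sketch schedules a visual flash on a virtual hand either in time with each detected heartbeat (synchronous condition) or with a fixed offset (asynchronous condition). This is a minimal sketch: the sensor and renderer functions are hypothetical stand-ins, and the offset value is an assumption, not the authors' actual apparatus or parameters.

    import random
    import time

    ASYNC_OFFSET_S = 0.5  # assumed delay for the asynchronous condition

    def read_next_heartbeat():
        # Hypothetical stand-in: block until the pulse sensor reports a beat.
        time.sleep(max(0.0, random.gauss(0.8, 0.05)))  # simulate roughly 75 bpm
        return time.monotonic()

    def flash_virtual_hand(at_time):
        # Hypothetical stand-in: briefly highlight the augmented-reality hand.
        time.sleep(max(0.0, at_time - time.monotonic()))
        print(f"flash at {time.monotonic():.3f}")

    def run_condition(synchronous, n_beats=10):
        for _ in range(n_beats):
            beat = read_next_heartbeat()
            offset = 0.0 if synchronous else ASYNC_OFFSET_S
            flash_virtual_hand(beat + offset)

    run_condition(synchronous=True)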

    Cortical Dynamics of Contextually-Cued Attentive Visual Learning and Search: Spatial and Object Evidence Accumulation

    How do humans use predictive contextual information to facilitate visual search? How are consistently paired scenic objects and positions learned and used to more efficiently guide search in familiar scenes? For example, a certain combination of objects can define a context for a kitchen and trigger a more efficient search for a typical object, such as a sink, in that context. A neural model, ARTSCENE Search, is developed to illustrate the neural mechanisms of such memory-based contextual learning and guidance, and to explain challenging behavioral data on positive/negative, spatial/object, and local/distant global cueing effects during visual search. The model proposes how global scene layout at a first glance rapidly forms a hypothesis about the target location. This hypothesis is then incrementally refined by enhancing target-like objects in space as a scene is scanned with saccadic eye movements. The model clarifies the functional roles of neuroanatomical, neurophysiological, and neuroimaging data in visual search for a desired goal object. In particular, the model simulates the interactive dynamics of spatial and object contextual cueing in the cortical What and Where streams starting from early visual areas through medial temporal lobe to prefrontal cortex. After learning, model dorsolateral prefrontal cortical cells (area 46) prime possible target locations in posterior parietal cortex based on goal-modulated percepts of spatial scene gist represented in parahippocampal cortex, whereas model ventral prefrontal cortical cells (area 47/12) prime possible target object representations in inferior temporal cortex based on the history of viewed objects represented in perirhinal cortex. The model hereby predicts how the cortical What and Where streams cooperate during scene perception, learning, and memory to accumulate evidence over time to drive efficient visual search of familiar scenes. CELEST, an NSF Science of Learning Center (SBE-0354378); SyNAPSE program of the Defense Advanced Research Projects Agency (HR0011-09-3-0001, HR0011-09-C-0011).
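
    The full ARTSCENE Search model is a multi-area neural architecture, but its core computational idea, a scene-gist hypothesis about the target location that is incrementally refined by evidence gathered across saccades, can be sketched as a simple Bayesian update. The locations, prior, and likelihoods below are made-up numbers for illustration, not parameters of the model itself.

    import numpy as np

    locations = ["counter", "window", "floor", "table"]
    prior = np.array([0.5, 0.2, 0.1, 0.2])  # hypothesis from scene gist at first glance

    def refine(posterior, likelihood):
        # One saccade: weight each location by the new evidence and renormalize.
        posterior = posterior * likelihood
        return posterior / posterior.sum()

    posterior = prior.copy()
    # Each fixation yields a likelihood of target-like features per location.
    for likelihood in [np.array([0.7, 0.1, 0.1, 0.1]),
                       np.array([0.8, 0.05, 0.05, 0.1])]:
        posterior = refine(posterior, likelihood)

    print(dict(zip(locations, np.round(posterior, 3))))  # belief sharpens on 'counter'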

    Interoceptive inference, emotion, and the embodied self

    The concept of the brain as a prediction machine has enjoyed a resurgence in the context of the Bayesian brain and predictive coding approaches within cognitive science. To date, this perspective has been applied primarily to exteroceptive perception (e.g., vision, audition) and action. Here, I describe a predictive, inferential perspective on interoception: ‘interoceptive inference’ conceives of subjective feeling states (emotions) as arising from actively-inferred generative (predictive) models of the causes of interoceptive afferents. The model generalizes ‘appraisal’ theories that view emotions as emerging from cognitive evaluations of physiological changes, and it sheds new light on the neurocognitive mechanisms that underlie the experience of body ownership and conscious selfhood in health and in neuropsychiatric illness.
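
    The core update implied by interoceptive inference can be written as a single precision-weighted prediction-error step. The sketch below is a one-level simplification of the hierarchical generative models the article discusses; the learning rate, precision, and heart-rate numbers are illustrative assumptions.

    def update_prediction(prediction, observation, precision, lr=0.1):
        error = observation - prediction             # interoceptive prediction error
        return prediction + lr * precision * error   # precision scales the revision

    mu = 70.0  # predicted heart rate in bpm
    for observation in [78.0, 80.0, 79.0]:  # afferent interoceptive samples
        mu = update_prediction(mu, observation, precision=0.8)
    print(round(mu, 2))  # the prediction drifts toward the observed signal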

    An interoceptive predictive coding model of conscious presence

    We describe a theoretical model of the neurocognitive mechanisms underlying conscious presence and its disturbances. The model is based on interoceptive prediction error and is informed by predictive models of agency, general models of hierarchical predictive coding and dopaminergic signaling in cortex, the role of the anterior insular cortex (AIC) in interoception and emotion, and cognitive neuroscience evidence from studies of virtual reality and of psychiatric disorders of presence, specifically depersonalization/derealization disorder. The model associates presence with successful suppression by top-down predictions of informative interoceptive signals evoked by autonomic control signals and, indirectly, by visceral responses to afferent sensory signals. The model connects presence to agency by allowing that predicted interoceptive signals will depend on whether afferent sensory signals are determined, by a parallel predictive-coding mechanism, to be self-generated or externally caused. Anatomically, we identify the AIC as the likely locus of key neural comparator mechanisms. Our model integrates a broad range of previously disparate evidence, makes predictions for conjoint manipulations of agency and presence, offers a new view of emotion as interoceptive inference, and represents a step toward a mechanistic account of a fundamental phenomenological property of consciousness.
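
    To make the model's central quantity concrete, the sketch below scores presence by how fully top-down predictions suppress interoceptive prediction error, with a parallel comparator deciding whether afferent signals are self-generated. The presence index, tolerance, and signal values are illustrative assumptions, not the paper's formal model.

    import numpy as np

    def presence_index(predicted, observed):
        residual = np.abs(observed - predicted).mean()  # unsuppressed prediction error
        return 1.0 / (1.0 + residual)                   # near 1 when error is suppressed

    def self_generated(efference_copy, afferent, tol=0.1):
        # Parallel comparator: attribute agency when predictions match input.
        return np.abs(efference_copy - afferent).mean() < tol

    predicted = np.array([0.9, 1.0, 1.1])
    observed = np.array([0.95, 1.0, 1.05])
    print(presence_index(predicted, observed))  # high value: intact sense of presence
    print(self_generated(predicted, observed))  # True: signals attributed to self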

    Contextual modulation of primary visual cortex by auditory signals

    Early visual cortex receives non-feedforward input from lateral and top-down connections (Muckli & Petro 2013 Curr. Opin. Neurobiol. 23, 195–201. (doi:10.1016/j.conb.2013.01.020)), including long-range projections from auditory areas. Early visual cortex can code for high-level auditory information, with neural patterns representing natural sound stimulation (Vetter et al. 2014 Curr. Biol. 24, 1256–1262. (doi:10.1016/j.cub.2014.04.020)). We discuss a number of questions arising from these findings. What is the adaptive function of bimodal representations in visual cortex? What type of information projects from auditory to visual cortex? What are the anatomical constraints of auditory information in V1, for example, periphery versus fovea, superficial versus deep cortical layers? Is there a putative neural mechanism we can infer from human neuroimaging data and recent theoretical accounts of cortex? We also present data showing we can read out high-level auditory information from the activation patterns of early visual cortex even when visual cortex receives simple visual stimulation, suggesting independent channels for visual and auditory signals in V1. We speculate which cellular mechanisms allow V1 to be contextually modulated by auditory input to facilitate perception, cognition and behaviour. Beyond cortical feedback that facilitates perception, we argue that there is also feedback serving counterfactual processing during imagery, dreaming and mind wandering, which is not relevant for immediate perception but for behaviour and cognition over a longer time frame. This article is part of the themed issue ‘Auditory and visual scene analysis’.
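
    The "read out" claim rests on multivariate pattern classification. The following sketch shows that style of analysis with scikit-learn on synthetic stand-in data (random voxel patterns with a weak injected signal), not the study's recordings; above-chance cross-validated accuracy is what licenses the claim that V1 patterns carry auditory category information.

    import numpy as np
    from sklearn.model_selection import cross_val_score
    from sklearn.svm import LinearSVC

    rng = np.random.default_rng(0)
    n_trials, n_voxels = 60, 100
    X = rng.normal(size=(n_trials, n_voxels))  # per-trial V1 activation patterns
    y = rng.integers(0, 3, size=n_trials)      # three sound categories
    X[y == 1, :10] += 0.8                      # inject a weak, decodable signal

    scores = cross_val_score(LinearSVC(dual=False), X, y, cv=5)
    print(scores.mean())  # accuracy above chance (1/3) implies decodable information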

    Attentional Enhancement of Auditory Mismatch Responses: a DCM/MEG Study.

    Despite similar behavioral effects, attention and expectation influence evoked responses differently: Attention typically enhances event-related responses, whereas expectation reduces them. This dissociation has been reconciled under predictive coding, where prediction errors are weighted by precision associated with attentional modulation. Here, we tested the predictive coding account of attention and expectation using magnetoencephalography and modeling. Temporal attention and sensory expectation were orthogonally manipulated in an auditory mismatch paradigm, revealing opposing effects on evoked response amplitude. Mismatch negativity (MMN) was enhanced by attention, speaking against its supposedly pre-attentive nature. This interaction effect was modeled in a canonical microcircuit using dynamic causal modeling, comparing models with modulation of extrinsic and intrinsic connectivity at different levels of the auditory hierarchy. While MMN was explained by recursive interplay of sensory predictions and prediction errors, attention was linked to the gain of inhibitory interneurons, consistent with its modulation of sensory precision.
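
    The precision account being tested reduces to a gain term on prediction errors: the same mismatch evokes a larger response when attention raises its gain. The numbers below are illustrative assumptions, not fitted DCM parameters.

    def evoked_response(stimulus, prediction, gain):
        error = stimulus - prediction  # mismatch between deviant and predicted standard
        return gain * error            # attention multiplies the error's gain (precision)

    deviant, prediction = 1.6, 1.0
    for label, gain in [("unattended", 1.0), ("attended", 2.5)]:
        print(label, evoked_response(deviant, prediction, gain))
    # the attended mismatch response is larger, as observed for the MMN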

    Acetylcholine neuromodulation in normal and abnormal learning and memory: vigilance control in waking, sleep, autism, amnesia, and Alzheimer's disease

    This article provides a unified mechanistic neural explanation of how learning, recognition, and cognition break down during Alzheimer's disease, medial temporal amnesia, and autism. It also clarifies why there are often sleep disturbances during these disorders. A key mechanism is how acetylcholine modulates vigilance control in cortical layers.
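
    A minimal sketch of the vigilance test from Adaptive Resonance Theory, the matching mechanism this line of work builds on: an input resonates with (and trains) a learned category only if the matched fraction of the input meets the vigilance threshold, otherwise the category is reset and search continues. The binary patterns and threshold values are illustrative assumptions.

    import numpy as np

    def vigilance_match(input_pattern, category_weights, rho):
        # Resonate (allow learning) if the matched fraction of the input >= rho.
        matched = np.minimum(input_pattern, category_weights).sum()
        return matched / input_pattern.sum() >= rho

    I = np.array([1.0, 1.0, 0.0, 1.0, 0.0])  # input pattern
    w = np.array([1.0, 0.0, 0.0, 1.0, 0.0])  # learned category template
    print(vigilance_match(I, w, rho=0.6))  # True: 2/3 of the input is matched
    print(vigilance_match(I, w, rho=0.8))  # False: reset and search for a new category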

    Which way do I go? Neural activation in response to feedback and spatial processing in a virtual T-maze

    In 2 human event-related brain potential (ERP) experiments, we examined the feedback error-related negativity (fERN), an ERP component associated with reward processing by the midbrain dopamine system, and the N170, an ERP component thought to be generated by the medial temporal lobe (MTL), to investigate the contributions of these neural systems toward learning to find rewards in a "virtual T-maze" environment. We found that feedback indicating the absence versus presence of a reward differentially modulated fERN amplitude, but only when the outcome was not predicted by an earlier stimulus. By contrast, when a cue predicted the reward outcome, then the predictive cue (and not the feedback) differentially modulated fERN amplitude. We further found that the spatial location of the feedback stimuli elicited a large N170 at electrode sites sensitive to right MTL activation and that the latency of this component was sensitive to the spatial location of the reward, occurring slightly earlier for rewards following a right versus left turn in the maze. Taken together, these results confirm a fundamental prediction of a dopamine theory of the fERN and suggest that the dopamine and MTL systems may interact in navigational learning tasks.
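
    The "fundamental prediction" being tested is the temporal-difference account under which the reward-prediction-error signal migrates from the feedback to the earliest reliable predictor. The sketch below reproduces that transfer with illustrative learning-rate and reward values; it is a didactic model, not the authors' analysis.

    lr = 0.3  # illustrative learning rate
    V_cue, V_fb = 0.0, 0.0  # learned values of the cue and the feedback

    for trial in range(40):
        error_at_cue = V_cue - 0.0  # surprise at cue onset (baseline value 0)
        error_at_fb = 1.0 - V_fb    # surprise at the rewarded feedback (r = 1)
        V_fb += lr * error_at_fb
        V_cue += lr * (V_fb - V_cue)
        if trial in (0, 39):
            print(f"trial {trial}: error at cue={error_at_cue:.2f}, "
                  f"at feedback={error_at_fb:.2f}")
    # early trials: the error sits at the feedback; late trials: it has moved
    # to the predictive cue, matching the fERN pattern reported above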