Consciousness and the prefrontal parietal network: insights from attention, working memory, and chunking
Consciousness has of late become a “hot topic” in neuroscience. Empirical work has centered on identifying potential neural correlates of consciousness (NCCs), with a converging view that the prefrontal parietal network (PPN) is closely associated with this process. Theoretical work has primarily sought to explain how informational properties of this cortical network could account for phenomenal properties of consciousness. However, both empirical and theoretical research has given less focus to the psychological features that may account for the NCCs. The PPN has also been heavily linked with cognitive processes, such as attention. We describe how this literature is under-appreciated in consciousness science, in part due to the increasingly entrenched assumption of a strong dissociation between attention and consciousness. We argue instead that there is more common ground between attention and consciousness than is usually emphasized: although objects can under certain circumstances be attended to in the absence of conscious access, attention as a content selection and boosting mechanism is an important and necessary aspect of consciousness. Like attention, working memory and executive control involve the interlinking of multiple mental objects and have also been closely associated with the PPN. We propose that this set of cognitive functions, in concert with attention, makes up the core psychological components of consciousness. One related process, chunking, exploits logical or mnemonic redundancies in a dataset so that it can be recoded and a given task optimized. Chunking has been shown to activate the PPN particularly robustly, even compared with other cognitively demanding tasks, such as working memory or mental arithmetic. It is therefore possible that chunking, as a tool to detect useful patterns within an integrated set of intensely processed (attended) information, has a central role to play in consciousness. Following on from this, we suggest that a key evolutionary purpose of consciousness may be to provide innovative solutions to complex or novel problems.
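The abstract describes chunking only as a cognitive process, not as an algorithm; as a minimal illustrative sketch of the underlying idea, the toy recoder below detects a repeating unit in a sequence and stores it as (unit, repetitions), reducing the number of items that must be held. All names and the recoding scheme are assumptions for illustration, not the authors' method.

```python
# Illustrative sketch only: chunking as recoding redundancy so fewer items
# need to be maintained. This is not an algorithm from the listed paper.

def chunk(sequence):
    """Return (unit, repeats) for the shortest repeating unit, else (sequence, 1)."""
    n = len(sequence)
    for size in range(1, n // 2 + 1):
        if n % size == 0:
            unit = sequence[:size]
            if unit * (n // size) == sequence:
                return unit, n // size
    return sequence, 1

items = "ABCABCABCABC"           # 12 symbols to remember
unit, repeats = chunk(items)     # ("ABC", 4): 3 symbols plus a count
print(unit, repeats)
```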
The grand challenge of consciousness
No description supplied
The cybernetic Bayesian brain: from interoceptive inference to sensorimotor contingencies
Is there a single principle by which neural operations can account for perception, cognition, action, and even consciousness? A strong candidate is now taking shape in the form of “predictive processing”. On this theory, brains engage in predictive inference on the causes of sensory inputs by continuous minimization of prediction errors or informational “free energy”. Predictive processing can account, supposedly, not only for perception, but also for action and for the essential contribution of the body and environment in structuring sensorimotor interactions. In this paper I draw together some recent developments within predictive processing that involve predictive modelling of internal physiological states (interoceptive inference), and integration with “enactive” and “embodied” approaches to cognitive science (predictive perception of sensorimotor contingencies). The upshot is a development of predictive processing that originates, not in Helmholtzian perception-as-inference, but rather in 20th-century cybernetic principles that emphasized homeostasis and predictive control. This way of thinking leads to (i) a new view of emotion as active interoceptive inference; (ii) a common predictive framework linking experiences of body ownership, emotion, and exteroceptive perception; (iii) distinct interpretations of active inference as involving disruptive and disambiguatory—not just confirmatory—actions to test perceptual hypotheses; (iv) a neurocognitive operationalization of the “mastery of sensorimotor contingencies” (where sensorimotor contingencies reflect the rules governing sensory changes produced by various actions); and (v) an account of the sense of subjective reality of perceptual contents (“perceptual presence”) in terms of the extent to which predictive models encode potential sensorimotor relations (this being “counterfactual richness”). This is rich and varied territory, and surveying its landmarks emphasizes the need for experimental tests of its key contributions
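As a minimal sketch of the "continuous minimization of prediction errors" idea mentioned above (not a model from the paper), the snippet below adjusts a single latent estimate mu by gradient descent on a precision-weighted squared prediction error; the generative mapping g and all parameter values are invented toy choices.

```python
# Minimal one-level prediction-error minimization, for illustration only.

import numpy as np

def g(mu):
    """Toy generative mapping: predicted sensory signal given the inferred cause mu."""
    return np.tanh(mu)

def infer(sensory_input, mu=0.0, precision=1.0, lr=0.1, steps=200):
    for _ in range(steps):
        error = sensory_input - g(mu)                    # bottom-up prediction error
        # gradient descent on F = 0.5 * precision * error**2, with g'(mu) = 1 - tanh(mu)**2
        mu += lr * precision * error * (1.0 - np.tanh(mu) ** 2)
    return mu

mu_hat = infer(sensory_input=0.5)
print(mu_hat, g(mu_hat))   # g(mu_hat) approaches the observed input as the error shrinks
```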
Modes and models in disorders of consciousness science
The clinical assessment of non-communicative brain-damaged patients is extremely difficult and there is a need for paraclinical diagnostic markers of the level of consciousness. In the last few years, progress within neuroimaging has led to a growing body of studies investigating vegetative state and minimally conscious state patients, which can be classified into two main approaches. Active neuroimaging paradigms search for a response to command without requiring a motor response. Passive neuroimaging paradigms investigate spontaneous brain activity and brain responses to external stimuli and aim at identifying neural correlates of consciousness. Other passive paradigms eschew neuroimaging in favour of behavioural markers which reliably distinguish conscious and unconscious conditions in healthy controls. In order to furnish accurate diagnostic criteria, a mechanistic explanation of how the brain gives rise to consciousness seems desirable. Mechanistic and theoretical approaches could also ultimately lead to a unification of passive and active paradigms in a coherent diagnostic approach. In this paper, we survey current passive and active paradigms available for diagnosis of residual consciousness in vegetative state and minimally conscious patients. We then review the current main theories of consciousness and see how they can apply in this context. Finally, we discuss some avenues for future research in this domain.
An interoceptive predictive coding model of conscious presence
We describe a theoretical model of the neurocognitive mechanisms underlying conscious presence and its disturbances. The model is based on interoceptive prediction error and is informed by predictive models of agency, general models of hierarchical predictive coding and dopaminergic signaling in cortex, the role of the anterior insular cortex (AIC) in interoception and emotion, and cognitive neuroscience evidence from studies of virtual reality and of psychiatric disorders of presence, specifically depersonalization/derealization disorder. The model associates presence with successful suppression by top-down predictions of informative interoceptive signals evoked by autonomic control signals and, indirectly, by visceral responses to afferent sensory signals. The model connects presence to agency by allowing that predicted interoceptive signals will depend on whether afferent sensory signals are determined, by a parallel predictive-coding mechanism, to be self-generated or externally caused. Anatomically, we identify the AIC as the likely locus of key neural comparator mechanisms. Our model integrates a broad range of previously disparate evidence, makes predictions for conjoint manipulations of agency and presence, offers a new view of emotion as interoceptive inference, and represents a step toward a mechanistic account of a fundamental phenomenological property of consciousness
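The abstract associates presence with successful suppression of interoceptive prediction error by top-down predictions. The snippet below is a purely hypothetical toy operationalization of that comparator idea, not an equation from the paper; the exponential readout, the precision value, and the signal values are all assumptions made for illustration.

```python
# Hypothetical toy comparator (not from the listed paper): a top-down interoceptive
# prediction is compared with an afferent interoceptive signal; the "presence"
# readout is high only when the prediction successfully suppresses the error.

import numpy as np

def comparator(predicted, observed, precision=4.0):
    error = observed - predicted                      # interoceptive prediction error
    presence_index = np.exp(-precision * error ** 2)  # 1.0 when fully suppressed
    return error, presence_index

print(comparator(predicted=0.8, observed=0.82))   # small error, index near 1
print(comparator(predicted=0.8, observed=0.30))   # large error, index near 0
```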
Inference to the best prediction: a reply to Wanja Wiese
Responding to Wanja Wiese’s incisive commentary, I first develop the analogy between predictive processing and scientific discovery. Active inference in the Bayesian brain turns out to be well characterized by abduction (inference to the best explanation), rather than by deduction or induction. Furthermore, the emphasis on control highlighted by cybernetics suggests that active inference can be a process of “inference to the best prediction”, leading to a distinction between “epistemic” and “instrumental” active inference. Secondly, on the relationship between perceptual presence and objecthood, I recognize a distinction between the “world revealing” presence of phenomenological objecthood, and the experience of “absence of presence” or “phenomenal unreality”. Here I propose that world-revealing presence (objecthood) depends on counterfactually rich predictive models that are necessarily hierarchically deep, whereas phenomenal unreality arises when active inference fails to unmix causes “in the world” from those that depend on the perceiver. Finally, I return to control-oriented active inference in the setting of interoception, where cybernetics and predictive processing are most closely connected
Learning action-oriented models through active inference
Converging theories suggest that organisms learn and exploit probabilistic models of their environment. However, it remains unclear how such models can be learned in practice. The open-ended complexity of natural environments means that it is generally infeasible for organisms to model their environment comprehensively. Alternatively, action-oriented models attempt to encode a parsimonious representation of adaptive agent-environment interactions. One approach to learning action-oriented models is to learn online in the presence of goal-directed behaviours. This constrains an agent to behaviourally relevant trajectories, reducing the diversity of the data a model must account for. Unfortunately, this approach can cause models to prematurely converge to sub-optimal solutions, through a process we refer to as a bad bootstrap. Here, we exploit the normative framework of active inference to show that efficient action-oriented models can be learned by balancing goal-oriented and epistemic (information-seeking) behaviours in a principled manner. We illustrate our approach using a simple agent-based model of bacterial chemotaxis. We first demonstrate that learning via goal-directed behaviour indeed constrains models to behaviourally relevant aspects of the environment, but that this approach is prone to sub-optimal convergence. We then demonstrate that epistemic behaviours facilitate the construction of accurate and comprehensive models, but that these models are not tailored to any specific behavioural niche and are therefore less efficient in their use of data. Finally, we show that active inference agents learn models that are parsimonious, tailored to action, and which avoid bad bootstraps and sub-optimal convergence. Critically, our results indicate that models learned through active inference can support adaptive behaviour in spite of, and indeed because of, their departure from veridical representations of the environment. Our approach provides a principled method for learning adaptive models from limited interactions with an environment, highlighting a route to sample-efficient learning algorithms.
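To make the "balancing goal-oriented and epistemic behaviours" idea concrete, the sketch below scores candidate actions by a pragmatic term (how well predicted outcomes match preferred outcomes) plus an epistemic term (expected information gain about hidden states). This is a generic expected-free-energy-style decomposition, not the paper's chemotaxis simulation; the priors, likelihoods, and preferences are invented toy values.

```python
# Generic illustration of balancing pragmatic and epistemic value when scoring actions.

import numpy as np

def entropy(p):
    p = np.asarray(p, dtype=float)
    return -np.sum(p * np.log(p + 1e-12))

def score_action(prior_over_states, likelihood, log_preferences):
    """likelihood[o, s] = p(outcome o | state s, this action)."""
    predicted_outcomes = likelihood @ prior_over_states           # p(o) under this action
    pragmatic = predicted_outcomes @ log_preferences               # goal-directed value
    posterior = likelihood * prior_over_states                     # joint p(o, s)
    posterior /= posterior.sum(axis=1, keepdims=True) + 1e-12      # p(s | o) for each outcome
    expected_posterior_entropy = sum(
        predicted_outcomes[o] * entropy(posterior[o]) for o in range(len(predicted_outcomes)))
    epistemic = entropy(prior_over_states) - expected_posterior_entropy  # information gain
    return pragmatic + epistemic

prior = np.array([0.5, 0.5])                      # uncertain which way the gradient points
log_pref = np.log(np.array([0.9, 0.1]))           # prefers the "high nutrient" outcome
informative = np.array([[0.9, 0.1], [0.1, 0.9]])  # outcomes strongly track the hidden state
uninformative = np.array([[0.5, 0.5], [0.5, 0.5]])
# the informative action wins because it adds epistemic value at equal pragmatic value
print(score_action(prior, informative, log_pref) > score_action(prior, uninformative, log_pref))
```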
Granger causality analysis in neuroscience and neuroimaging
No description supplied
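No abstract is supplied for this entry. As a generic, hedged illustration of the technique named in the title (not code from the listed work), the sketch below runs the Granger causality F-tests from the statsmodels package on synthetic data in which one series drives another with a lag.

```python
# Generic Granger causality example using statsmodels, for illustration only.
# The test asks whether past values of one series improve prediction of another
# beyond that series' own past.

import numpy as np
from statsmodels.tsa.stattools import grangercausalitytests

rng = np.random.default_rng(0)
n = 500
x = rng.normal(size=n)
y = np.zeros(n)
for t in range(2, n):                       # y depends on lagged x, so x should Granger-cause y
    y[t] = 0.6 * y[t - 1] + 0.8 * x[t - 2] + rng.normal(scale=0.5)

# statsmodels tests whether the second column Granger-causes the first column
data = np.column_stack([y, x])
results = grangercausalitytests(data, maxlag=3)
# results[lag][0]['ssr_ftest'] holds (F statistic, p-value, df_denom, df_num) per lag
```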
- …