
    Efficient Algorithms for Searching the Minimum Information Partition in Integrated Information Theory

    The ability to integrate information in the brain is considered to be an essential property for cognition and consciousness. Integrated Information Theory (IIT) hypothesizes that the amount of integrated information (Φ) in the brain is related to the level of consciousness. IIT proposes that to quantify information integration in a system as a whole, integrated information should be measured across the partition of the system at which the information loss caused by partitioning is minimized, called the Minimum Information Partition (MIP). The computational cost of exhaustively searching for the MIP grows exponentially with system size, making it difficult to apply IIT to real neural data. It has previously been shown that if a measure of Φ satisfies a mathematical property called submodularity, the MIP can be found in polynomial time by an optimization algorithm. However, although the first version of Φ is submodular, the later versions are not. In this study, we empirically explore to what extent the algorithm can be applied to the non-submodular measures of Φ by evaluating its accuracy on simulated data and real neural data. We find that the algorithm identifies the MIP nearly perfectly even for the non-submodular measures. Our results show that the algorithm allows us to measure Φ in large systems within a practical amount of time.
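
    As an illustration of the combinatorial bottleneck this abstract describes, the sketch below enumerates every bipartition of a small system and returns the one with minimal information loss. The loss function `phi` is a hypothetical stand-in for an actual integrated-information measure, which would be computed from the system's time-series statistics; only the exponential structure of the search is the point here.

```python
from itertools import combinations

def exhaustive_mip(nodes, phi):
    """Return the bipartition of `nodes` minimizing the loss phi(part_a, part_b).

    Enumerates all 2**(n-1) - 1 bipartitions, so cost grows exponentially
    with system size, which is the bottleneck the submodular algorithm avoids.
    """
    nodes = list(nodes)
    best, best_loss = None, float("inf")
    rest = nodes[1:]  # fix nodes[0] in part_a so each bipartition is seen once
    for r in range(len(rest) + 1):
        for extra in combinations(rest, r):
            part_a = frozenset([nodes[0], *extra])
            part_b = frozenset(nodes) - part_a
            if not part_b:
                continue  # skip the trivial whole-system "partition"
            loss = phi(part_a, part_b)
            if loss < best_loss:
                best, best_loss = (part_a, part_b), loss
    return best, best_loss

# Hypothetical loss: total coupling severed by the cut, on a toy 6-node system
# made of two tightly coupled triads joined by weak links.
coupling = {frozenset(p): 1.0 for g in ("ABC", "DEF") for p in combinations(g, 2)}
coupling.update({frozenset({a, b}): 0.1 for a in "ABC" for b in "DEF"})
phi = lambda pa, pb: sum(coupling[frozenset({a, b})] for a in pa for b in pb)
print(exhaustive_mip("ABCDEF", phi))  # finds the {A,B,C} | {D,E,F} split
```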

    Evolving higher-order synergies reveals a trade-off between stability and information integration capacity in complex systems

    There has recently been an explosion of interest in how "higher-order" structures emerge in complex systems. This "emergent" organization has been found in a variety of natural and artificial systems, although at present the field lacks a unified understanding of what the consequences of higher-order synergies and redundancies are for systems. Typical studies treat the presence (or absence) of synergistic information as a dependent variable and report changes in the level of synergy in response to some change in the system. Here, we attempt to flip the script: rather than treating higher-order information as a dependent variable, we use evolutionary optimization to evolve Boolean networks with significant higher-order redundancies, synergies, or statistical complexity. We then analyse these evolved populations of networks using established tools for characterizing discrete dynamics: the number of attractors, the average transient length, and the Derrida coefficient. We also assess the capacity of the systems to integrate information. We find that high-synergy systems are unstable and chaotic, but have a high capacity to integrate information. In contrast, evolved redundant systems are extremely stable but have a negligible capacity to integrate information. Finally, the complex systems that balance integration and segregation (quantified by Tononi-Sporns-Edelman complexity) show features of both chaoticity and stability, with a greater capacity to integrate information than the redundant systems while being more stable than the random and synergistic systems. We conclude that there may be a fundamental trade-off between the robustness of a system's dynamics and its capacity to integrate information (which inherently requires flexibility and sensitivity), and that certain kinds of complexity naturally balance this trade-off.
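
    The stability diagnostics this abstract lists can be made concrete with a short sketch. The code below estimates the Derrida coefficient, the average one-step spread of a single-bit perturbation, for a random Boolean network; the network construction (k random inputs per node, random truth tables) is an illustrative assumption, not the paper's evolved populations.

```python
import random

def random_boolean_network(n, k, rng):
    """Illustrative random network: k random inputs and a random truth table per node."""
    inputs = [rng.sample(range(n), k) for _ in range(n)]
    tables = [[rng.random() < 0.5 for _ in range(2 ** k)] for _ in range(n)]
    return inputs, tables

def step(state, inputs, tables):
    """Synchronously update every node from its inputs' current values."""
    return [table[sum(state[j] << pos for pos, j in enumerate(ins))]
            for ins, table in zip(inputs, tables)]

def derrida_coefficient(n=50, k=2, trials=2000, seed=0):
    """Mean Hamming distance after one step, starting from distance 1.

    Values above 1 indicate chaotic, perturbation-amplifying dynamics;
    values below 1 indicate ordered, self-correcting dynamics.
    """
    rng = random.Random(seed)
    inputs, tables = random_boolean_network(n, k, rng)
    total = 0
    for _ in range(trials):
        a = [rng.random() < 0.5 for _ in range(n)]
        b = list(a)
        flip = rng.randrange(n)
        b[flip] = not b[flip]          # perturb a single bit
        fa, fb = step(a, inputs, tables), step(b, inputs, tables)
        total += sum(x != y for x, y in zip(fa, fb))
    return total / trials

print(f"Derrida coefficient (k=2): {derrida_coefficient():.2f}")  # ~1 near criticality
```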

    Brain information processing capacity modeling

    Neurophysiological measurements suggest that human information processing is evinced by neuronal activity. However, the quantitative relationship between the activity of a brain region and its information processing capacity remains unclear. We introduce and validate a mathematical model of the information processing capacity of a brain region in terms of neuronal activity, input storage capacity, and the arrival rate of afferent information. We applied the model to fMRI data obtained from a flanker paradigm in young and old subjects. Our analysis showed that, for a given cognitive task and subject, higher information processing capacity leads to lower neuronal activity and faster responses. Crucially, processing capacity, as estimated from fMRI data, predicted task- and age-related differences in reaction times, speaking to the model's predictive validity. This model offers a framework for modelling brain dynamics in terms of information processing capacity, and may be exploited for studies of predictive coding and Bayes-optimal decision-making.
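
    The abstract does not spell out the model's equations, so the following is only an illustrative queueing analogy consistent with the quantities it names (arrival rate, storage, capacity): treating a region as an M/M/1 queue, mean occupancy stands in for neuronal activity and mean sojourn time for response latency, and both fall as processing capacity rises, matching the abstract's qualitative claim.

```python
def mm1_stats(lam: float, mu: float) -> tuple[float, float]:
    """Mean occupancy and mean time-in-system for an M/M/1 queue (illustrative model)."""
    if lam >= mu:
        raise ValueError("arrival rate must stay below processing capacity")
    rho = lam / mu                 # utilization
    occupancy = rho / (1 - rho)    # L = rho/(1-rho): stand-in for neuronal activity
    latency = 1.0 / (mu - lam)     # W = 1/(mu-lam): stand-in for response time
    return occupancy, latency

# With afferent information arriving at a fixed rate, raising capacity `mu`
# lowers both the activity proxy and the latency proxy.
for mu in (12.0, 16.0, 24.0):
    occ, lat = mm1_stats(lam=10.0, mu=mu)
    print(f"capacity={mu:5.1f}  activity proxy={occ:5.2f}  latency proxy={lat:5.3f}")
```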

    Expanding the discussion: revision of the fundamental assumptions framing the study of the neural correlates of consciousness

    The way one asks a question is shaped by a priori assumptions and constrains the range of possible answers. We identify and test the assumptions underlying contemporary debates, models, and methodology in the study of the neural correlates of consciousness, a field framed by Crick and Koch's seminal paper (1990). These premises create a sequential and passive conception of conscious perception: it is considered the product of resolved information processing by unconscious mechanisms, produced by a singular event in time and place representing the moment of entry. The conscious percept produced is then automatically retained to be utilized by post-conscious mechanisms. Major debates in the field, such as those concerning the moment of entry, the all-or-none vs. graded nature of perception, and report vs. no-report paradigms, are driven by the consensus on these assumptions. We show how removing these assumptions can resolve some of the debates and challenges and prompt additional questions. The potentially non-sequential nature of perception suggests new ways of thinking about consciousness as a dynamic and dispersed process, and in turn about the relationship between conscious and unconscious perception. Moreover, it allows us to present a parsimonious account of conscious perception while addressing more aspects of the phenomenon.

    Whole-Brain Models to Explore Altered States of Consciousness from the Bottom Up.

    The scope of human consciousness includes states departing from what most of us experience as ordinary wakefulness. These altered states of consciousness constitute a prime opportunity to study how global changes in brain activity relate to different varieties of subjective experience. We consider the problem of explaining how global signatures of altered consciousness arise from the interplay between large-scale connectivity and local dynamical rules that can be traced to known properties of neural tissue. For this purpose, we advocate a research program aimed at bridging the gap between bottom-up generative models of whole-brain activity and the top-down signatures proposed by theories of consciousness. Throughout this paper, we define altered states of consciousness, discuss relevant signatures of consciousness observed in brain activity, and introduce whole-brain models to explore the biophysics of altered consciousness from the bottom up. We discuss the potential of our proposal in view of the current state of the art, give specific examples of how this research agenda might play out, and emphasize how a systematic investigation of altered states of consciousness via bottom-up modeling may help us better understand the biophysical, informational, and dynamical underpinnings of consciousness.
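
    A minimal sketch of the bottom-up modeling style advocated here: local dynamical rules (Kuramoto phase oscillators, one common choice in whole-brain modeling) coupled through a large-scale connectivity matrix. The random connectome and the global coupling sweep are illustrative assumptions; actual studies would use empirical structural connectivity and fit the model to neuroimaging signatures of altered states.

```python
import numpy as np

def simulate_kuramoto(C, omega, G, dt=0.01, steps=5000, seed=0):
    """Integrate d(theta_i)/dt = omega_i + G * sum_j C_ij * sin(theta_j - theta_i)."""
    rng = np.random.default_rng(seed)
    theta = rng.uniform(0, 2 * np.pi, size=len(omega))
    sync = []
    for _ in range(steps):
        diff = theta[None, :] - theta[:, None]        # diff[i, j] = theta_j - theta_i
        theta = theta + dt * (omega + G * (C * np.sin(diff)).sum(axis=1))
        sync.append(abs(np.exp(1j * theta).mean()))   # Kuramoto order parameter
    return float(np.mean(sync[steps // 2:]))          # discard the transient

n = 64
rng = np.random.default_rng(1)
C = rng.random((n, n)) * (rng.random((n, n)) < 0.2)   # sparse random "connectome"
C = (C + C.T) / 2                                     # symmetric coupling
omega = rng.normal(1.0, 0.1, size=n)                  # intrinsic regional frequencies

# Sweeping the global coupling G moves the model between dynamical regimes,
# the kind of knob whole-brain studies relate to different conscious states.
for G in (0.0, 0.1, 0.5):
    print(f"G={G:.1f}  mean synchrony={simulate_kuramoto(C, omega, G):.2f}")
```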

    Integrated information theory in complex neural systems

    This thesis concerns Integrated Information Theory (IIT), a branch of information theory aimed at providing a fundamental theory of consciousness. At its core lie two powerful intuitions:
    • that a system that is somehow more than the sum of its parts has non-zero integrated information, Φ; and
    • that a system with non-zero integrated information is conscious.
    The audacity of IIT’s claims about consciousness has (understandably) sparked vigorous criticism, and experimental evidence for IIT as a theory of consciousness remains scarce and indirect. Nevertheless, I argue that IIT still has merits as a theory of informational complexity within complexity science, leaving aside all claims about consciousness. In my work I follow this broad line of reasoning: showcasing applications where IIT yields rich analyses of complex systems, while critically examining its merits and limitations as a theory of consciousness. This thesis is divided into three parts. First, I describe three example applications of IIT to complex systems from the computational neuroscience literature (coupled oscillators, spiking neurons, and cellular automata), and develop novel Φ estimators to extend IIT’s range of applicability. Second, I show two important limitations of current IIT: that its axiomatic foundation is not specific enough to determine a unique measure of integrated information; and that available measures do not behave as predicted by the theory when applied to neurophysiological data. Finally, I present new theoretical developments aimed at alleviating some of IIT’s flaws. These are based on the concepts of partial information decomposition and lead to a unification of both theories, Integrated Information Decomposition, or ΦID. The thesis concludes with two experimental studies on M/EEG data, showing that a much simpler informational theory of consciousness – the entropic brain hypothesis – can yield valuable insight without the mathematical challenges brought by IIT.
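
    One of the simplest integrated-information estimators in this literature can be sketched in a few lines: "whole-minus-sum" time-delayed mutual information for Gaussian time series, i.e. the predictive information of the whole system minus that of its two parts taken separately. The bipartition and the coupled AR(1) test system below are illustrative assumptions; the thesis's own estimators are more elaborate.

```python
import numpy as np

def gaussian_tdmi(x_past, x_pres):
    """I(past; present) under a Gaussian assumption, from covariance determinants."""
    joint = np.cov(np.vstack([x_past, x_pres]))
    k = x_past.shape[0]
    _, logdet_joint = np.linalg.slogdet(joint)
    _, logdet_past = np.linalg.slogdet(joint[:k, :k])
    _, logdet_pres = np.linalg.slogdet(joint[k:, k:])
    return 0.5 * (logdet_past + logdet_pres - logdet_joint)

def phi_whole_minus_sum(X, part_a, part_b, tau=1):
    """Predictive information of the whole minus that of the two parts."""
    past, pres = X[:, :-tau], X[:, tau:]
    whole = gaussian_tdmi(past, pres)
    parts = sum(gaussian_tdmi(past[idx], pres[idx]) for idx in (part_a, part_b))
    return whole - parts  # can go negative for strongly redundant systems

# Toy system: a coupled AR(1) chain, so the whole carries predictive
# information that neither half carries alone.
rng = np.random.default_rng(0)
A = np.array([[0.4, 0.3, 0.0, 0.0],
              [0.3, 0.4, 0.3, 0.0],
              [0.0, 0.3, 0.4, 0.3],
              [0.0, 0.0, 0.3, 0.4]])
X = np.zeros((4, 20000))
for t in range(1, 20000):
    X[:, t] = A @ X[:, t - 1] + rng.normal(size=4)
print(f"whole-minus-sum Phi: {phi_whole_minus_sum(X, [0, 1], [2, 3]):.3f}")
```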

    The Role of Information in Consciousness

    This article comprehensively examines how information processing relates to attention and consciousness. We argue that no current theoretical framework investigating consciousness has a satisfactory and holistic account of their informational relationship. Our key theoretical contribution is showing how the dissociation between consciousness and attention must be understood in informational terms in order to make the debate scientifically sound, since no current theory clarifies the difference between attention and consciousness in terms of information. We conclude with two proposals to advance the debate. First, neurobiological homeostatic processes need to be more explicitly associated with conscious information processing, since information processed through attention is algorithmic rather than homeostatic. Second, to understand subjectivity in informational terms, we must define what makes information unique in consciousness (e.g., irreproducible information, biologically encrypted information). These approaches could help cognitive scientists better understand conflicting accounts of the neural correlates of consciousness and work toward a more unified theoretical framework.

    Individual variability in value-based decision making: behavior, cognition, and functional brain topography

    Decisions often require weighing the costs and benefits of available prospects. Value-based decision making depends on the coordination of multiple cognitive faculties, making it potentially susceptible to at least two forms of variability. First, there is heterogeneity in brain organization across individuals in areas of association cortex that exhibit decision-related activity. Second, a person’s preferences can fluctuate even for repetitive decision scenarios. Using functional magnetic resonance imaging (fMRI) and behavioral experiments in humans, this project explored how these distinct sources of variability impact choice evaluation, the localization of valuation in the brain, and the links between valuation and other cognitive phenomena. Group-level findings suggest that valuation processes share a neural representation with the “default network” (DN) in medial prefrontal cortex (mPFC) and posterior cingulate cortex (PCC). Study 1 examined brain network variability in an open dataset of resting-state fMRI (n=100) by quantitatively testing the hypothesis that the spatial layout of the DN is unique to each person. Functional network topography was well aligned across individuals in PCC, but highly idiosyncratic in mPFC. These results highlighted that the apparent overlap of cognitive functions in these areas should be evaluated within individuals. Study 2 examined variability in the integration of rewards with the subjective costs of time and effort. Two computerized behavioral experiments (total n=132) tested how accept-or-reject foraging decisions were influenced by demands for physical effort, cognitive effort, and unfilled delay. The results showed that people’s willingness to incur the three types of costs differed when they experienced a single type of demand, but gradually converged when all three were interleaved. The results could be accounted for by a computational model in which contextual factors altered the perceived cost of temporal delay. Finally, Study 3 asked whether the apparent cortical overlap between valuation effects and the DN persisted after accounting for individual variability in brain topography and behavior. Using fMRI scans designed to evoke valuation and DN-like effects (n=18), we reproduced the idiosyncratic network topography from Study 1 and observed valuation-related effects in individually identified DN regions. Collectively, these findings advance our taxonomic understanding of higher-order cognitive processes, suggesting that seemingly dissimilar valuation and DN-related functions engage overlapping cortical mechanisms.

    Information integration in large brain networks.

    An outstanding problem in neuroscience is to understand how information is integrated across the many modules of the brain. While classic information-theoretic measures have transformed our understanding of feedforward information processing in the brain's sensory periphery, comparable measures for information flow in the massively recurrent networks of the rest of the brain have been lacking. To address this, recent work in information theory has produced a sound measure of network-wide "integrated information", which can be estimated from time-series data. However, a computational hurdle has stymied attempts to measure large-scale information integration in real brains: the measurement of integrated information involves a combinatorial search for the informational "weakest link" of a network, a process whose computation time explodes super-exponentially with network size. Here, we show that spectral clustering, applied to the correlation matrix of time-series data, provides an approximate but robust solution to the search for the informational weakest link of large networks. This reduces the computation time for integrated information in large systems from longer than the lifespan of the universe to just minutes. We evaluate this solution in brain-like systems of coupled oscillators as well as in high-density electrocorticography data from two macaque monkeys, and show that the informational "weakest link" of the monkey cortex splits posterior sensory areas from anterior association areas. Finally, we use our solution to provide evidence in support of the long-standing hypothesis that information integration is maximized by networks with high global efficiency, and that modular network structures promote the segregation of information.
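
    A minimal sketch of the approximation described in this abstract: spectral clustering on the correlation matrix of a multivariate time series proposes a bipartition (a candidate informational "weakest link") without searching all partitions. Details such as using the absolute correlation as the affinity and cutting on the sign of the Fiedler vector are common choices assumed here, not necessarily the paper's exact pipeline.

```python
import numpy as np

def spectral_bipartition(X):
    """Split the channels (rows) of time-series array X into two weakly coupled groups.

    Builds an affinity from |Pearson correlation|, then cuts on the sign of
    the Fiedler vector (eigenvector of the second-smallest eigenvalue of the
    graph Laplacian), which approximately minimizes inter-part coupling.
    """
    W = np.abs(np.corrcoef(X))
    np.fill_diagonal(W, 0.0)            # no self-affinity
    L = np.diag(W.sum(axis=1)) - W      # unnormalized graph Laplacian
    _, eigvecs = np.linalg.eigh(L)      # eigenvalues in ascending order
    fiedler = eigvecs[:, 1]
    return np.flatnonzero(fiedler >= 0), np.flatnonzero(fiedler < 0)

# Toy data: two internally correlated blocks of channels with no cross-talk.
rng = np.random.default_rng(0)
block1 = rng.normal(size=(1, 5000)) + 0.3 * rng.normal(size=(4, 5000))
block2 = rng.normal(size=(1, 5000)) + 0.3 * rng.normal(size=(4, 5000))
print(spectral_bipartition(np.vstack([block1, block2])))  # recovers the two blocks
```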