
    Measuring integrated information from the decoding perspective

    Accumulating evidence indicates that the capacity to integrate information in the brain is a prerequisite for consciousness. Integrated Information Theory (IIT) of consciousness provides a mathematical approach to quantifying the information integrated in a system, called integrated information, Φ. Integrated information is defined theoretically as the amount of information a system generates as a whole, above and beyond the sum of the amounts of information its parts generate independently. IIT predicts that the amount of integrated information in the brain should reflect levels of consciousness. Empirical evaluation of this theory requires computing integrated information from neural data acquired in experiments, although difficulties with using the original measure Φ preclude such computations. Although some practical measures have been proposed previously, we found that these measures fail to satisfy the theoretical requirements for a measure of integrated information. Measures of integrated information should satisfy the following lower and upper bounds: the lower bound should be 0 when the system does not generate information (no information) or when the system comprises independent parts (no integration); the upper bound is the amount of information generated by the whole system and is attained when the amount of information generated independently by its parts equals 0. Here we derive the novel practical measure Φ* by introducing a concept of mismatched decoding developed in information theory. We show that Φ* is properly bounded from below and above, as required of a measure of integrated information. We derive the analytical expression of Φ* under the Gaussian assumption, which makes it readily applicable to experimental data.
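
    The bound requirements described above can be written compactly as follows (the notation is assumed here, not taken from the abstract: X and Y denote the past and present states of the whole system, X_{M_i} and Y_{M_i} those of its parts M_i, and I(·;·) mutual information):

        0 \le \Phi \le I(X; Y),
        \Phi = 0 \quad \text{if } I(X; Y) = 0 \ \text{(no information) or the parts are independent (no integration)},
        \Phi = I(X; Y) \quad \text{if } \sum_i I(X_{M_i}; Y_{M_i}) = 0.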

    Fast and exact search for the partition with minimal information loss

    In the analysis of multi-component complex systems, such as neural systems, identifying groups of units that share similar functionality aids understanding of the underlying structure of the system. To find such a grouping, it is useful to evaluate to what extent the units of the system are separable. Separability, or inseparability, can be evaluated by quantifying how much information would be lost if the system were partitioned into subsystems and the interactions between the subsystems were hypothetically removed. A system of two independent subsystems is completely separable without any loss of information, while a system of strongly interacting subsystems cannot be separated without a large loss of information. Among all the possible partitions of a system, the partition that minimizes the loss of information, called the Minimum Information Partition (MIP), can be considered the optimal partition for characterizing the underlying structure of the system. Although the MIP would reveal novel characteristics of the neural system, an exhaustive search for the MIP is numerically intractable due to the combinatorial explosion of possible partitions. Here, we propose a computationally efficient search that exactly identifies the MIP among all possible partitions by exploiting the submodularity of the measure of information loss. Mutual information is one such submodular information loss function, and is a natural choice for measuring the degree of statistical dependence between paired sets of random variables. Using mutual information as the loss function, we show that the search for the MIP can be performed in a practical order of computational time for a reasonably large system. We also demonstrate that the MIP search allows for the detection of underlying global structures in a network of nonlinear oscillators. A naive baseline illustrating the MIP definition is sketched below.
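
    To make the MIP concrete, the following sketch exhaustively evaluates every bipartition of a small Gaussian system, using the mutual information between the two parts as the information-loss function, and returns the partition with minimal loss. This naive enumeration only illustrates the definition; it is not the submodularity-based search proposed in the paper, and the function names, covariance matrix, and 2+2 example system are assumptions made for illustration.

        import itertools
        import numpy as np

        def gaussian_mi(cov, part_a, part_b):
            # Mutual information I(X_A; X_B), in nats, for a zero-mean Gaussian
            # vector with covariance matrix `cov`.
            sub = lambda idx: cov[np.ix_(idx, idx)]
            return 0.5 * np.log(np.linalg.det(sub(part_a)) * np.linalg.det(sub(part_b))
                                / np.linalg.det(sub(part_a + part_b)))

        def brute_force_mip(cov):
            # Enumerate all bipartitions and keep the one with minimal information loss.
            n = cov.shape[0]
            best_partition, best_loss = None, np.inf
            for r in range(1, n // 2 + 1):
                for part_a in map(list, itertools.combinations(range(n), r)):
                    if 2 * r == n and 0 not in part_a:
                        continue  # avoid counting each half-half split twice
                    part_b = [i for i in range(n) if i not in part_a]
                    loss = gaussian_mi(cov, part_a, part_b)
                    if loss < best_loss:
                        best_partition, best_loss = (part_a, part_b), loss
            return best_partition, best_loss

        # Two tightly coupled pairs that interact only weakly with each other:
        # the MIP should cut between the pairs, i.e. ([0, 1], [2, 3]).
        cov = np.array([[1.0, 0.8, 0.1, 0.1],
                        [0.8, 1.0, 0.1, 0.1],
                        [0.1, 0.1, 1.0, 0.8],
                        [0.1, 0.1, 0.8, 1.0]])
        print(brute_force_mip(cov))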

    “What is it like to be a bat?”—a pathway to the answer from the integrated information theory

    What does it feel like to be a bat? Is the conscious experience of echolocation closer to that of vision or audition? Or do bats process echolocation nonconsciously, such that they do not feel anything about echolocation? This famous question about bats' experience, posed by the philosopher Thomas Nagel in 1974, clarifies the difficult nature of the mind–body problem. Why a particular sense, such as vision, has to feel like vision, and not like audition, is totally puzzling. This is especially so given that any conscious experience is supported by neuronal activity, and the activity of a single neuron appears fairly uniform across modalities and even similar to that for non-conscious processing. Without any explanation of why a particular sense has to feel the way it does, researchers cannot approach the question of bats' experience. Is there any theory that gives us hope for such an explanation? Currently, probably none, except for one: Integrated Information Theory (IIT) has the potential to offer a plausible explanation. IIT essentially claims that any system composed of causally interacting mechanisms can have conscious experience, and that precisely how the system feels is determined by the way the mechanisms influence each other in a holistic way. In this article, I will give a brief explanation of the essence of IIT. Further, I will briefly outline a potential scientific pathway toward approaching bats' conscious experience and its philosophical implications. If IIT, or its improved or related versions, is validated enough, the theory will gain credibility. When it matures enough, predictions from the theory, including the nature of bats' experience, will have to be accepted. I argue that a seemingly impossible question about bats' consciousness will drive empirical and theoretical consciousness research to make big breakthroughs, in a similar way that an impossible question about the age of the universe has driven modern cosmology.

    Comparing Information-Theoretic Measures of Complexity in Boltzmann Machines

    In the past three decades, many theoretical measures of complexity have been proposed to help understand complex systems. In this work, for the first time, we place these measures on a level playing field to explore the qualitative similarities and differences between them, and their shortcomings. Specifically, using the Boltzmann machine architecture (a fully connected recurrent neural network) with uniformly distributed weights as our model of study, we numerically measure how complexity changes as a function of network dynamics and network parameters. We apply an extension of one such information-theoretic measure of complexity to understand incremental Hebbian learning in Hopfield networks, a fully recurrent architecture for autoassociative memory. Over the course of Hebbian learning, the total information flow reflects a natural upward trend in complexity as the network attempts to learn more and more patterns.
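
    The incremental Hebbian learning mentioned above can be illustrated with a minimal Hopfield-network sketch: patterns are stored one at a time with an outer-product weight update, and a noisy cue is cleaned up by asynchronous threshold updates. This is only the standard textbook setup the abstract builds on; the network size, pattern count, and noise level are arbitrary choices, and none of the complexity measures studied in the paper are computed here.

        import numpy as np

        rng = np.random.default_rng(0)
        n_units, n_patterns = 64, 5

        # Bipolar (+1/-1) patterns, stored one at a time (incremental Hebbian learning).
        patterns = rng.choice([-1, 1], size=(n_patterns, n_units))

        W = np.zeros((n_units, n_units))
        for p in patterns:
            W += np.outer(p, p) / n_units   # Hebbian outer-product update
            np.fill_diagonal(W, 0.0)        # no self-connections

        def recall(state, sweeps=20):
            # Asynchronous updates: each sweep visits every unit in random order.
            state = state.copy()
            for _ in range(sweeps):
                for i in rng.permutation(n_units):
                    state[i] = 1 if W[i] @ state >= 0 else -1
            return state

        # Flip roughly 10% of the bits of a stored pattern and check recovery.
        noisy = patterns[0] * np.where(rng.random(n_units) < 0.1, -1, 1)
        print(np.mean(recall(noisy) == patterns[0]))  # close to 1.0 at this low load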

    Efficient Algorithms for Searching the Minimum Information Partition in Integrated Information Theory

    The ability to integrate information in the brain is considered to be an essential property for cognition and consciousness. Integrated Information Theory (IIT) hypothesizes that the amount of integrated information (Φ) in the brain is related to the level of consciousness. IIT proposes that, to quantify information integration in a system as a whole, integrated information should be measured across the partition of the system at which the information loss caused by partitioning is minimized, called the Minimum Information Partition (MIP). The computational cost of exhaustively searching for the MIP grows exponentially with system size, making it difficult to apply IIT to real neural data. It has been previously shown that if a measure of Φ satisfies a mathematical property, submodularity, the MIP can be found in polynomial time by an optimization algorithm. However, although the first version of Φ is submodular, the later versions are not. In this study, we empirically explore to what extent the algorithm can be applied to the non-submodular measures of Φ by evaluating its accuracy on simulated data and real neural data. We find that the algorithm identifies the MIP nearly perfectly even for the non-submodular measures. Our results show that the algorithm allows us to measure Φ in large systems within a practical amount of time.
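
    For reference, the submodularity property mentioned above is the standard diminishing-returns condition on a set function; with f denoting the information-loss function defined on subsets S and T of the system's units (notation assumed here), it reads

        f(S) + f(T) \;\ge\; f(S \cup T) + f(S \cap T) \qquad \text{for all subsets } S, T.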

    Mutual Information in Coupled Double Quantum Dots: A Simple Analytic Model for Potential Artificial Consciousness

    Integrated Information Theory is thought to be a key clue towards a theoretical understanding of consciousness. In this study, we propose a simple numerical model comprising a set of coupled double quantum dots, where the disconnection of the elements is represented by the removal of the Coulomb interaction between the quantum dots, for the quantitative investigation of integrated information. As a measure of integrated information, we calculate the mutual information in the model system, as the Kullback-Leibler divergence between the connected and disconnected states, through the probability distribution of the electronic states obtained from the master transition-rate equations. We demonstrate that an increase in the strength of the interaction between the quantum dots leads to higher mutual information, owing to the larger divergence between the probability distributions of the electronic states. Our model setup could be a useful basic tool for numerical analyses in the field of Integrated Information Theory.
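
    As a minimal illustration of the quantity being computed, the sketch below evaluates the Kullback-Leibler divergence between a joint distribution over the states of two dots and the product of its marginals (the disconnected reference), which equals the mutual information. The probability values are placeholders for illustration, not the output of the master transition-rate equations in the paper.

        import numpy as np

        def kl_divergence(p, q):
            # D(p || q) in nats for two discrete distributions over the same states.
            p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
            mask = p > 0
            return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))

        # Placeholder occupation probabilities of the four two-dot states
        # (00, 01, 10, 11) in the connected system; the Coulomb interaction
        # correlates the occupations of the two dots.
        p_connected = np.array([0.40, 0.10, 0.10, 0.40])

        # Disconnected reference: the product of the marginal distributions,
        # i.e. what the joint distribution would be without any interaction.
        p_dot1 = p_connected.reshape(2, 2).sum(axis=1)   # marginal of dot 1
        p_dot2 = p_connected.reshape(2, 2).sum(axis=0)   # marginal of dot 2
        p_disconnected = np.outer(p_dot1, p_dot2).ravel()

        # Mutual information of the coupled dots; stronger coupling -> larger value.
        print(kl_divergence(p_connected, p_disconnected))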

    Integrated information as a metric for group interaction

    Researchers in many disciplines have previously used a variety of mathematical techniques for analyzing group interactions. Here we use a new metric for this purpose, called "integrated information" or "phi." Phi was originally developed by neuroscientists as a measure of consciousness in brains, but it captures, in a single mathematical quantity, two properties that are important in many other kinds of groups as well: differentiated information and integration. Here we apply this metric to the activity of three types of groups that involve people and computers. First, we find that 4-person work groups with higher measured phi perform a wide range of tasks more effectively, as measured by their collective intelligence. Next, we find that groups of Wikipedia editors with higher measured phi create higher-quality articles. Last, we find that the measured phi of the collection of people and computers communicating on the Internet increased over a recent six-year period. Together, these results suggest that integrated information can be a useful way of characterizing a certain kind of interactional complexity that, at least sometimes, predicts group performance. In this sense, phi can be viewed as a potential metric of effective group collaboration.
