Spectral Modes of Network Dynamics Reveal Increased Informational Complexity Near Criticality
What does the informational complexity of dynamical networked systems tell us
about the intrinsic mechanisms and functions of these complex systems? Recent
complexity measures such as integrated information have sought to
operationalize this question by taking a whole-versus-parts perspective, wherein
one explicitly computes the amount of information generated by a network as a
whole over and above that generated by the sum of its parts during state
transitions. While several numerical schemes for estimating network integrated
information exist, it is instructive to pursue an analytic approach that
computes integrated information as a function of network weights. Our
formulation of integrated information uses the Kullback-Leibler divergence
between the multivariate distribution over the network's states and the
corresponding factorized distribution over its parts. Implementing stochastic
Gaussian dynamics, we perform computations for several prototypical network
topologies. Our findings show increased informational complexity near
criticality, which remains consistent across network topologies. Spectral
decomposition of the system's dynamics reveals how informational complexity is
governed by eigenmodes of both the network's covariance and adjacency
matrices. We find that as the system's dynamics approach criticality, high
integrated information is driven exclusively by the eigenmode corresponding to
the leading eigenvalue of the covariance matrix, while sub-leading modes are
suppressed. This result implies that it might be favorable for
complex dynamical networked systems such as the human brain or communication
systems to operate near criticality so that efficient information integration
might be achieved.
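
As a concrete illustration of the whole-versus-parts computation, the sketch below evaluates the Gaussian KL divergence between the stationary distribution of a linear stochastic network and the product of its marginals (the total correlation), for a ring network driven towards criticality. This is a minimal stand-in under stated assumptions, not the paper's exact measure: the formulation over state transitions, the ring topology, and the coupling values are illustrative choices.

    import numpy as np
    from scipy.linalg import solve_discrete_lyapunov

    def stationary_covariance(W, noise_var=1.0):
        # Stationary covariance of x_{t+1} = W x_t + eps, eps ~ N(0, noise_var * I),
        # found by solving the discrete Lyapunov equation S = W S W^T + Q.
        Q = noise_var * np.eye(W.shape[0])
        return solve_discrete_lyapunov(W, Q)

    def gaussian_total_correlation(S):
        # KL divergence between N(0, S) and the product of its marginals
        # N(0, diag(S)): TC = 0.5 * (sum_i log S_ii - log det S).
        _, logdet = np.linalg.slogdet(S)
        return 0.5 * (np.sum(np.log(np.diag(S))) - logdet)

    n = 8
    ring = np.roll(np.eye(n), 1, axis=1) + np.roll(np.eye(n), -1, axis=1)
    for g in (0.20, 0.40, 0.49):  # spectral radius of g * ring is 2g; g = 0.5 is critical
        S = stationary_covariance(g * ring)
        print(f"g = {g:.2f}  total correlation = {gaussian_total_correlation(S):.4f} nats")

In this toy setting the total correlation grows sharply as the coupling approaches its critical value, mirroring the abstract's finding of increased informational complexity near criticality.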
Dynamical noise can enhance high-order statistical structure in complex systems
Recent research has provided a wealth of evidence highlighting the pivotal
role of high-order interdependencies in supporting the information-processing
capabilities of distributed complex systems. These findings may suggest that
high-order interdependencies constitute a powerful resource that is nonetheless
challenging to harness and readily disrupted. In this paper we contest
this perspective by demonstrating that high-order interdependencies can not
only exhibit robustness to stochastic perturbations, but can in fact be
enhanced by them. Using elementary cellular automata as a general testbed, our
results unveil the capacity of dynamical noise to enhance the statistical
regularities between agents and, intriguingly, even alter the prevailing
character of their interdependencies. Furthermore, our results show that these
effects are related to the high-order structure of the local rules, which
affects the system's susceptibility to noise and its characteristic timescales.
These results deepen our understanding of how high-order interdependencies may
spontaneously emerge within distributed systems interacting with stochastic
environments, thus providing an initial step towards elucidating their origin
and function in complex systems like the human brain.
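
To make the setup concrete, here is a minimal sketch of an elementary cellular automaton subjected to dynamical noise. The rule number, lattice size, and flip probability are illustrative assumptions, and the paper's interdependency measures are not reproduced here.

    import numpy as np

    rng = np.random.default_rng(0)

    def eca_step(state, rule, flip_prob=0.0):
        # One synchronous update of an elementary cellular automaton with
        # periodic boundaries; each cell reads its (left, self, right)
        # neighbourhood as a 3-bit index into the 8-bit rule table.
        left, right = np.roll(state, 1), np.roll(state, -1)
        idx = 4 * left + 2 * state + right
        nxt = (rule >> idx) & 1
        # Dynamical noise: flip each cell independently with probability flip_prob.
        flips = rng.random(state.shape) < flip_prob
        return np.where(flips, 1 - nxt, nxt)

    state = rng.integers(0, 2, 64)  # random initial configuration
    for _ in range(100):
        state = eca_step(state, rule=110, flip_prob=0.05)
    print(state)

Sweeping flip_prob from zero upwards on such trajectories is the kind of experiment in which noise-enhanced statistical structure could be probed.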
The Phi measure of integrated information is not well-defined for general physical systems
According to the integrated information theory of consciousness (IIT), consciousness is a fundamental observer-independent property of physical systems, and the measure Φ (Phi) of integrated information is identical to the quantity or level of consciousness. For this to be plausible, there should be no alternative formulae for Φ consistent with the axioms of IIT, and no cases in which Φ is ill-defined. This article presents three ways in which Φ, in its current formulation, fails to meet these standards, and discusses how this problem might be addressed.
An operational information decomposition via synergistic disclosure
Multivariate information decompositions hold promise to yield insight into complex systems, and stand out for their ability to identify synergistic phenomena. However, the adoption of these approaches has been hindered by the existence of multiple possible decompositions and the lack of precise guidance for preferring one over the others. At the heart of this disagreement lies the absence of a clear operational interpretation of what synergistic information is. Here we fill this gap by proposing a new information decomposition based on a novel operationalisation of informational synergy, which leverages recent developments in the data-privacy literature. Our decomposition is defined for any number of information sources, and its atoms can be calculated using elementary optimisation techniques. The decomposition provides a natural coarse-graining that scales gracefully with the system’s size, and is applicable in a wide range of scenarios of practical interest.
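
The notion of synergy at stake here can be illustrated with the classic XOR example, where the target is fully determined by the pair of sources yet independent of each source alone. The snippet below verifies this with plain entropy calculations; it illustrates what synergistic information means in general, not the disclosure-based decomposition proposed in the paper.

    import numpy as np
    from itertools import product

    def entropy(probs):
        p = np.array([v for v in probs if v > 0])
        return -np.sum(p * np.log2(p))

    # Joint distribution of (X1, X2, Y) with Y = X1 XOR X2 and uniform inputs.
    joint = {(x1, x2, x1 ^ x2): 0.25 for x1, x2 in product((0, 1), repeat=2)}

    def H(keep):
        # Entropy of the marginal over the coordinates listed in `keep`.
        m = {}
        for outcome, p in joint.items():
            key = tuple(outcome[i] for i in keep)
            m[key] = m.get(key, 0.0) + p
        return entropy(m.values())

    # I(X1, X2; Y) = 1 bit, yet I(X1; Y) = I(X2; Y) = 0:
    # information about Y is carried only by the sources jointly.
    print(H((0, 1)) + H((2,)) - H((0, 1, 2)))  # 1.0
    print(H((0,)) + H((2,)) - H((0, 2)))       # 0.0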
Synergistic information supports modality integration and flexible learning in neural networks solving multiple tasks
Striking progress has been made in understanding cognition by analyzing how the brain is engaged in different modes of information processing. For instance, so-called synergistic information (information encoded by a set of neurons but not by any subset of them) plays a key role in areas of the human brain linked with complex cognition. However, two questions remain unanswered: (a) how and why a cognitive system can become highly synergistic; and (b) how informational states map onto artificial neural networks in various learning modes. Here we employ an information-decomposition framework to investigate neural networks performing cognitive tasks. Our results show that synergy increases as networks learn multiple diverse tasks, and that in tasks requiring integration of multiple sources, performance critically relies on synergistic neurons. Overall, our results suggest that synergy is used to combine information from multiple modalities, and more generally to support flexible and efficient learning. These findings reveal new ways of investigating how and why learning systems employ specific information-processing strategies, and support the principle that the capacity for general-purpose learning critically relies on the system's information dynamics.