Partial information decomposition as a unified approach to the specification of neural goal functions
In many neural systems, anatomical motifs recur repeatedly, yet despite their structural similarity they can serve very different tasks. A prime example of such a motif is the canonical microcircuit of the six-layered neocortex, which is repeated across cortical areas and is involved in a number of different tasks (e.g. sensory, cognitive, or motor tasks). This observation has spawned interest in finding a common underlying principle, a ‘goal function’, of information processing implemented in this structure. By definition, such a goal function, if universal, cannot be cast in processing-domain-specific language (e.g. ‘edge filtering’, ‘working memory’). Thus, to formulate such a principle, we have to use a domain-independent framework. Information theory offers such a framework. However, while the classical framework of information theory focuses on the relation between one input and one output (Shannon’s mutual information), we argue that neural information processing crucially depends on the combination of multiple inputs to create the output of a processor. To account for this, we use a recent extension of Shannon information theory, called partial information decomposition (PID). PID quantifies the information that several inputs provide individually (unique information), redundantly (shared information), or only jointly (synergistic information) about the output. First, we review the framework of PID. Then we apply it to reevaluate and analyze several earlier proposals of information-theoretic neural goal functions (predictive coding, infomax and coherent infomax, efficient coding). We find that PID allows these goal functions to be compared within a common framework, and it also provides a versatile approach to designing new goal functions from first principles. Building on this, we design and analyze a novel goal function, called ‘coding with synergy’, which combines external input and prior knowledge in a synergistic manner. We suggest that this novel goal function may be highly useful in neural information processing.
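For orientation, the two-source decomposition underlying PID (in the standard formulation of Williams and Beer) splits the joint mutual information into four non-negative parts; the sketch below states the bookkeeping identities that any PID measure must satisfy, with the notation chosen here for illustration:

```latex
% Two inputs S_1, S_2 and output T: the joint mutual information
% decomposes into unique, shared (redundant), and synergistic parts.
\begin{align*}
  I(S_1, S_2; T) &= \mathrm{Unq}(S_1) + \mathrm{Unq}(S_2)
                  + \mathrm{Shd}(S_1, S_2) + \mathrm{Syn}(S_1, S_2) \\
  % Consistency with the classical single-input mutual informations:
  I(S_1; T) &= \mathrm{Unq}(S_1) + \mathrm{Shd}(S_1, S_2) \\
  I(S_2; T) &= \mathrm{Unq}(S_2) + \mathrm{Shd}(S_1, S_2)
\end{align*}
```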
Self-organization without conservation: Are neuronal avalanches generically critical?
Recent experiments on cortical neural networks have revealed the existence of well-defined avalanches of electrical activity. Such avalanches have been claimed to be generically scale-invariant (i.e. power-law distributed), with many exciting implications in neuroscience. Recently, a self-organized model has been proposed by Levina, Herrmann and Geisel to justify such an empirical finding. Given that (i) neural dynamics is dissipative and (ii) there is a loading mechanism progressively "charging" the background synaptic strength, this model is very similar in spirit to forest-fire and earthquake models, archetypal examples of non-conserving self-organization, which have recently been shown to lack true criticality. Here we show that cortical neural networks obeying (i) and (ii) are not generically critical; unless parameters are fine-tuned, their dynamics is either sub- or super-critical, even if the pseudo-critical region is relatively broad. This conclusion appears to agree with the most recent experimental observations. The main implication of our work is that, if future experimental research on cortical networks were to support that truly critical avalanches are the norm rather than the exception, then one should look for more elaborate (adaptive/evolutionary) explanations, beyond simple self-organization, to account for this.
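To see why criticality is special rather than generic, a minimal branching-process sketch (not the Levina-Herrmann-Geisel model itself; all parameters here are illustrative) shows that avalanches are scale-free only when the branching ratio m is tuned to exactly 1:

```python
import numpy as np

rng = np.random.default_rng(0)

def avalanche_size(m, max_size=10**6):
    """Size of one avalanche in a Galton-Watson branching process
    with Poisson offspring of mean m (the branching ratio)."""
    active, size = 1, 1
    while active > 0 and size < max_size:
        active = rng.poisson(m * active)  # offspring of the current generation
        size += active
    return size

# Only at m = 1 are avalanche sizes power-law distributed; away from it
# the distribution acquires an exponential cutoff (m < 1) or diverges (m > 1).
for m in (0.9, 1.0, 1.1):
    sizes = [avalanche_size(m) for _ in range(10_000)]
    print(f"m = {m}: mean avalanche size = {np.mean(sizes):.1f}")
```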
Avalanches in self-organized critical neural networks: A minimal model for the neural SOC universality class
The brain keeps its overall dynamics in a corridor of intermediate activity, and it has been a long-standing question what mechanism could achieve this task. Mechanisms from statistical physics have long suggested that this homeostasis of brain activity could occur even without a central regulator, via self-organization at the level of neurons and their interactions alone. Such physical mechanisms, from the class of self-organized criticality, exhibit characteristic dynamical signatures, similar to the seismic activity related to earthquakes. Measurements of resting cortex activity showed the first signs of dynamical signatures potentially pointing to self-organized critical dynamics in the brain. Indeed, recent, more accurate measurements allowed for a detailed comparison with the scaling theory of non-equilibrium critical phenomena, proving the existence of criticality in cortex dynamics. Here we compare this new evaluation of cortex activity data to the predictions of the earliest physics spin model of self-organized critical neural networks. We find that the model matches the recent experimental data and its interpretation in terms of dynamical signatures of criticality in the brain. The combination of signatures of criticality, power-law distributions of avalanche sizes and durations, as well as a specific scaling relationship between anomalous exponents, defines a universality class characteristic of the particular critical phenomenon observed in the neural experiments. The spin model is a candidate for a minimal model of a self-organized critical adaptive network for the universality class of neural criticality. As a prototype model, it provides the background for models that include more biological detail yet share the same universality class characteristic of the homeostasis of activity in the brain.
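The scaling relationship between exponents referred to above is, in the standard crackling-noise form (stated here for orientation, in the usual notation rather than the paper's): if avalanche sizes follow P(S) ∼ S^(−τ), durations follow P(T) ∼ T^(−α), and the mean size at fixed duration scales as ⟨S⟩(T) ∼ T^γ, then

```latex
% Crackling-noise scaling relation linking the avalanche exponents:
\begin{equation*}
  \gamma \;=\; \frac{1}{\sigma\nu z} \;=\; \frac{\alpha - 1}{\tau - 1}
\end{equation*}
```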
A Measure of the Complexity of Neural Representations based on Partial Information Decomposition
In neural networks, task-relevant information is represented jointly by groups of neurons. However, the specific way in which this mutual information about the classification label is distributed among the individual neurons is not well understood: while parts of it may only be obtainable from specific single neurons, other parts are carried redundantly or synergistically by multiple neurons. We show how Partial Information Decomposition (PID), a recent extension of information theory, can disentangle these different contributions. From this, we introduce the measure of "Representational Complexity", which quantifies the difficulty of accessing information spread across multiple neurons. We show how this complexity is directly computable for smaller layers. For larger layers, we propose subsampling and coarse-graining procedures, and we prove corresponding bounds on the latter. Empirically, for quantized deep neural networks solving the MNIST and CIFAR10 tasks, we observe that representational complexity decreases both through successive hidden layers and over training, and we compare the results to related measures. Overall, we propose representational complexity as a principled and interpretable summary statistic for analyzing the structure and evolution of neural representations and of complex systems in general.
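As a minimal sketch of the subsampling idea for quantized (hence discrete) activations, one can average a plug-in mutual-information estimate over random neuron subsets of a layer; the function names and procedure below are illustrative, not the paper's exact algorithm:

```python
import numpy as np

def plugin_mutual_information(z, y):
    """Plug-in estimate (in bits) of I(Z; Y) for discrete multi-neuron
    patterns z (rows coded as tuples) and class labels y."""
    n = len(y)
    joint, pz, py = {}, {}, {}
    for zi, yi in zip(map(tuple, z), y):
        joint[(zi, yi)] = joint.get((zi, yi), 0) + 1
        pz[zi] = pz.get(zi, 0) + 1
        py[yi] = py.get(yi, 0) + 1
    mi = 0.0
    for (zi, yi), c in joint.items():
        p = c / n
        mi += p * np.log2(p * n * n / (pz[zi] * py[yi]))
    return mi

def subsampled_mi(activations, labels, subset_size, n_subsets=50, seed=0):
    """Average I(subset; label) over random neuron subsets of one layer."""
    rng = np.random.default_rng(seed)
    n_neurons = activations.shape[1]
    estimates = []
    for _ in range(n_subsets):
        idx = rng.choice(n_neurons, size=subset_size, replace=False)
        estimates.append(plugin_mutual_information(activations[:, idx], labels))
    return float(np.mean(estimates))
```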
How contact patterns destabilize and modulate epidemic outbreaks
The spread of a contagious disease clearly depends on when infected individuals come into contact with susceptible ones. Such effects, however, have remained largely unexplored in the study of epidemic outbreaks. In particular, it remains unclear how the timing of contacts interacts with the latent and infectious stages of the disease. Here, we use real-world physical proximity data to study this interaction and find that the temporal statistics of actual human contact patterns (i) destabilize epidemic outbreaks and (ii) modulate the basic reproduction number R₀. We explain both observations by distinct aspects of the observed contact patterns. On the one hand, we find the destabilization of outbreaks to be caused by the temporal clustering of contacts, which leads to over-dispersed offspring distributions and increased probabilities of otherwise rare events (zero- and super-spreading). Notably, our analysis enables us to disentangle previously elusive sources of over-dispersion in empirical offspring distributions. On the other hand, we find the modulation of R₀ to be caused by a periodically varying contact rate. Both mechanisms are a direct consequence of the memory in contact behavior, and we showcase a generative process that reproduces these non-Markovian statistics. Our results point to the importance of including non-Markovian contact timings in studies of epidemic outbreaks.
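A small sketch of the over-dispersion effect (illustrative parameters, not the paper's data): two branching processes with the same mean R₀ = 2 but different offspring distributions differ markedly in how often an outbreak dies out on its own:

```python
import numpy as np

rng = np.random.default_rng(1)
R0, k = 2.0, 0.2  # same mean; dispersion k << 1 means strong over-dispersion

def outbreak_dies_out(offspring, n_generations=20, cap=10_000):
    """Simulate a branching process; return True if it goes extinct."""
    active = 1
    for _ in range(n_generations):
        if active == 0:
            return True
        if active > cap:      # treat runaway growth as a major outbreak
            return False
        active = int(offspring(active).sum())
    return active == 0

# Poisson vs. negative-binomial offspring with identical mean R0.
poisson  = lambda n: rng.poisson(R0, size=n)
negbinom = lambda n: rng.negative_binomial(k, k / (k + R0), size=n)

for name, dist in [("Poisson", poisson), ("NegBinom", negbinom)]:
    p_ext = np.mean([outbreak_dies_out(dist) for _ in range(5_000)])
    print(f"{name}: extinction probability ≈ {p_ext:.2f}")
```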
Coupled infectious disease and behavior dynamics. A review of model assumptions
To comprehend the dynamics of infectious disease transmission, it is imperative to incorporate human protective behavior into models of disease spreading. While models exist for both infectious disease and behavior dynamics independently, the integration of these aspects has yet to yield a cohesive body of literature. Such an integration is crucial for gaining insights into phenomena like the rise of infodemics, the polarization of opinions regarding vaccines, and the dissemination of conspiracy theories during a pandemic. We make a threefold contribution. First, we introduce a framework to describe models coupling infectious disease and behavior dynamics, delineating four distinct update functions. Reviewing existing literature, we highlight a substantial diversity in the implementation of each update function. This variation, coupled with a dearth of model comparisons, leaves the literature of limited use to researchers seeking to develop models tailored to specific populations, infectious diseases, and forms of protection. Second, we advocate an approach to comparing models' assumptions about human behavior, the model aspect characterized by the strongest disagreement. Rather than representing the psychological complexity of decision-making, we show that 'influence-response functions' allow one to identify which model differences generate different disease dynamics and which do not, guiding both model development and empirical research testing model assumptions. Third, we propose recommendations for future modeling endeavors and for empirical research aimed at selecting models of coupled infectious disease and behavior dynamics. We underscore the importance of incorporating empirical approaches from the social sciences to propel the literature forward.
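As a minimal, hypothetical example of an influence-response function coupled to disease dynamics (all functional forms and parameters below are ours, for illustration only), consider a discrete-time SIR model in which the adoption of protection rises with perceived prevalence and scales down transmission:

```python
import numpy as np

def influence_response(prevalence, steepness=50.0, threshold=0.05):
    """Hypothetical influence-response function: probability of adopting
    protection as a logistic function of perceived prevalence."""
    return 1.0 / (1.0 + np.exp(-steepness * (prevalence - threshold)))

def coupled_sir(beta=0.4, gamma=0.1, efficacy=0.7, steps=300, i0=1e-3):
    """Discrete-time SIR in which protective behavior reduces transmission."""
    s, i, r = 1.0 - i0, i0, 0.0
    trajectory = []
    for _ in range(steps):
        adoption = influence_response(i)          # behavior reacts to prevalence
        beta_eff = beta * (1.0 - efficacy * adoption)
        new_inf = beta_eff * s * i
        new_rec = gamma * i
        s, i, r = s - new_inf, i + new_inf - new_rec, r + new_rec
        trajectory.append(i)
    return trajectory

peak = max(coupled_sir())
print(f"peak prevalence with behavioral feedback: {peak:.3f}")
```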
The challenges of containing SARS-CoV-2 via test-trace-and-isolate
Without a cure, vaccine, or proven long-term immunity against SARS-CoV-2, test-trace-and-isolate (TTI) strategies present a promising tool to contain the viral spread. For any TTI strategy, however, a major challenge arises from pre- and asymptomatic transmission, as well as from TTI-avoiders, who contribute to hidden, unnoticed infection chains. In our semi-analytical model, we identified two distinct tipping points between controlled and uncontrolled spreading: one at which the behavior-driven reproduction number of the hidden infections becomes too large to be compensated by the available TTI capabilities, and one at which the number of new infections starts to exceed the tracing capacity, causing a self-accelerating spread. We investigated how these tipping points depend on realistic limitations such as limited cooperativity, missing contacts, and imperfect isolation, finding that TTI is likely not sufficient to contain the natural spread of SARS-CoV-2. Therefore, complementary measures such as reduced physical contact and improved hygiene probably remain necessary.
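A toy sketch of the second tipping point (a deliberately simplified caricature, not the paper's semi-analytical model; all numbers are illustrative): once new cases per generation exceed a fixed tracing capacity, the untraced excess grows at the uncontrolled rate:

```python
def tti_dynamics(R_hidden=1.8, trace_capacity=50.0, isolation_factor=0.3,
                 steps=60, n0=1.0):
    """Toy discrete-generation model: traced cases (up to a fixed capacity
    per generation) transmit at a reduced rate isolation_factor * R_hidden;
    the untraced remainder transmits at the full rate R_hidden."""
    new_cases = n0
    for _ in range(steps):
        traced = min(new_cases, trace_capacity)
        untraced = new_cases - traced
        new_cases = untraced * R_hidden + traced * isolation_factor * R_hidden
    return new_cases

# Below the tipping point tracing caps the outbreak; above it, once new
# cases exceed trace_capacity, growth reverts to the uncontrolled rate.
for n0 in (1.0, 200.0):
    final = tti_dynamics(n0=n0)
    print(f"start = {n0:>6.0f} cases/generation -> after 60 generations: {final:.0f}")
```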
Effects of blood viscosity on renal function during cardiopulmonary bypass - investigations in infants and experimental setting in pig kidneys
Autocorrelations from emergent bistability in homeostatic spiking neural networks on neuromorphic hardware
A fruitful approach towards neuromorphic computing is to mimic mechanisms of the brain in physical devices, which has led to successful replication of neuron-like dynamics and learning in the past. However, there remains a large set of neural self-organization mechanisms whose role in neuromorphic computing has yet to be explored. One such mechanism is homeostatic plasticity, which has recently been proposed to play a key role in shaping network dynamics and correlations. Here, we study, from a statistical-physics point of view, the emergent collective dynamics in a homeostatically regulated neuromorphic device that emulates a network of excitatory and inhibitory leaky integrate-and-fire neurons. Importantly, homeostatic plasticity is only active during the training stage and results in a heterogeneous weight distribution that we fix during the analysis stage. We verify the theoretical prediction that reducing the external input in a homeostatically regulated neural network increases temporal correlations, measuring autocorrelation times exceeding 500 ms despite single-neuron timescales of only 20 ms, both in experiments on neuromorphic hardware and in computer simulations. However, unlike the theoretically predicted near-critical fluctuations, we find that the temporal correlations can originate from an emergent bistability. We identify this bistability as a fluctuation-induced stochastic switching between metastable active and quiescent states in the vicinity of a nonequilibrium phase transition. Our results thereby constitute a complementary mechanism for emergent autocorrelations in networks of spiking neurons, with implications for future developments in neuromorphic computing.
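The mechanism can be caricatured with a two-state telegraph process (an illustrative stand-in, not the hardware model; all rates are made up): slow stochastic switching between a quiescent and an active rate produces autocorrelation times far exceeding any single-step timescale:

```python
import numpy as np

def telegraph_rate(T=200_000, p_up=1e-3, p_down=1e-3,
                   rate_quiet=0.5, rate_active=5.0, seed=0):
    """Population rate that stochastically switches between a quiescent
    and an active metastable state (a fluctuation-driven telegraph process)."""
    rng = np.random.default_rng(seed)
    state, rates = 0, np.empty(T)
    for t in range(T):
        if rng.random() < (p_up if state == 0 else p_down):
            state = 1 - state
        rates[t] = rate_active if state else rate_quiet
    return rates

def autocorrelation_time(x):
    """Integrated autocorrelation time: sum of the normalized ACF up to
    its first zero crossing."""
    x = x - x.mean()
    acf = np.correlate(x, x, mode="full")[len(x) - 1:]
    acf /= acf[0]
    first_zero = np.argmax(acf < 0) or len(acf)
    return acf[:first_zero].sum()

rates = telegraph_rate()
print(f"autocorrelation time ≈ {autocorrelation_time(rates):.0f} steps "
      f"(switching timescale ~ 1/p = 1000 steps)")
```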
Propagation of activity through the cortical hierarchy and perception are determined by neural variability
Brains are composed of anatomically and functionally distinct regions performing specialized tasks, but regions do not operate in isolation. Orchestration of complex behaviors requires communication between brain regions, but how neural dynamics are organized to facilitate reliable transmission is not well understood. Here we studied this process directly by generating neural activity that propagates between brain regions and drives behavior, assessing how neural populations in sensory cortex cooperate to transmit information. We achieved this by imaging two densely interconnected regions, the primary and secondary somatosensory cortex (S1 and S2), in mice while performing two-photon photostimulation of S1 neurons and assigning behavioral salience to the photostimulation. We found that the probability of perception is determined not only by the strength of the photostimulation but also by the variability of S1 neural activity. Therefore, maximizing the signal-to-noise ratio of the stimulus representation in cortex, i.e. the strength of the evoked response relative to its trial-to-trial variability, is critical to facilitate activity propagation and perception.
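The signal-to-noise logic admits a one-line sketch (a hypothetical threshold model, not the paper's analysis): model perception as a noisy population response crossing a detection threshold, so the same stimulus strength yields very different detection rates depending on response variability:

```python
from scipy.stats import norm

def p_perceive(signal, noise_sd, threshold=1.0):
    """Probability that the stimulus-evoked response exceeds a detection
    threshold, given Gaussian trial-to-trial variability of the population."""
    return norm.sf(threshold, loc=signal, scale=noise_sd)

# Same stimulus strength, increasing variability of the cortical response.
for noise_sd in (0.2, 0.5, 1.0):
    print(f"noise sd = {noise_sd}: P(perceive) = {p_perceive(0.8, noise_sd):.2f}")
```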
