    Neural Information Processing: between synchrony and chaos

    The brain performs many different processing tasks, ranging from elaborate processes such as pattern recognition, memory or decision-making to simpler functionalities such as linear filtering in image processing. Understanding the mechanisms by which the brain produces such a diverse range of cortical operations remains a fundamental problem in neuroscience. Some recent empirical and theoretical results support the notion that the brain is naturally poised between ordered and chaotic states. As the largest number of metastable states exists at a point near the transition, the brain there has access to a larger repertoire of behaviours. It is therefore of great interest to know which types of processing are associated with the ordered and disordered states. Here we explain which processes are related to chaotic and synchronized states, based on the study of in-silico implementations of biologically plausible neural systems. Our measurements reveal that synchronized cells (which can be understood as ordered states of the brain) support non-linear computations, while uncorrelated neural ensembles are excellent information-transmission systems that can implement linear transformations (such as convolution products) and parallelize neural processes. From these results we propose a plausible interpretation of Hebbian and non-Hebbian learning rules as the biophysical mechanisms by which the brain creates ordered or chaotic ensembles depending on the desired functionality. The measurements obtained from the hardware implementation of different neural systems support the view that the brain works with two complementary states: ordered (synchronized) states that perform non-linear processing, and chaotic states that perform information transmission and convolution.
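
    A minimal sketch of the linear role attributed to uncorrelated ensembles: a population of linear rate neurons, each reading a delayed copy of an input signal and scaling it by a synaptic weight, reproduces a convolution product. The signal and kernel below are illustrative assumptions, not the paper's hardware model.

        import numpy as np

        # Illustrative sketch: a population of linear rate neurons implements
        # a convolution product. Each "neuron" reads a delayed copy of the
        # input signal and scales it by its synaptic weight; the population
        # sum is the filtered signal. Signal and kernel are assumptions.
        rng = np.random.default_rng(0)
        signal = rng.standard_normal(100)        # input rate signal
        kernel = np.array([0.25, 0.5, 0.25])     # hypothetical smoothing filter

        delayed = np.stack([np.roll(signal, k) for k in range(len(kernel))])
        population_output = kernel @ delayed     # linear ensemble response

        # Agrees with an explicit convolution away from the wrap-around edge.
        reference = np.convolve(signal, kernel, mode="full")[:len(signal)]
        assert np.allclose(population_output[len(kernel):], reference[len(kernel):])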

    Topological and Dynamical Complexity of Random Neural Networks

    Random neural networks are dynamical descriptions of randomly interconnected neural units. They show a phase transition to chaos as a disorder parameter is increased. The microscopic mechanisms underlying this phase transition are unknown and, as in spin glasses, are expected to be fundamentally related to the behavior of the system. In this Letter we investigate the explosion of complexity arising near that phase transition. We show that the mean number of equilibria undergoes a sharp transition from one equilibrium to a very large number scaling exponentially with the dimension of the system. Near criticality, we compute the exponential rate of divergence, called topological complexity. Strikingly, we show that it behaves exactly as the maximal Lyapunov exponent, a classical measure of dynamical complexity. This relationship unravels a microscopic mechanism leading to chaos, which we further demonstrate on a simpler class of disordered systems, suggesting a deep and underexplored link between topological and dynamical complexity.
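
    The maximal Lyapunov exponent mentioned above can be estimated numerically. The sketch below assumes the classic random rate model dx/dt = -x + J tanh(x) with Gaussian couplings of gain g (a standard stand-in, not necessarily the paper's exact model); the exponent changes sign as g crosses the order-to-chaos transition at g = 1.

        import numpy as np

        # Sketch: estimate the maximal Lyapunov exponent of the random rate
        # network dx/dt = -x + J tanh(x), J_ij ~ N(0, g^2/N), by tracking
        # the divergence of two nearby trajectories with renormalization.
        def lyapunov(g, N=200, dt=0.05, steps=4000, seed=0, eps=1e-8):
            rng = np.random.default_rng(seed)
            J = rng.standard_normal((N, N)) * g / np.sqrt(N)
            x = rng.standard_normal(N)
            v = rng.standard_normal(N)
            y = x + eps * v / np.linalg.norm(v)    # separation of size eps
            log_growth = 0.0
            for _ in range(steps):
                x = x + dt * (-x + J @ np.tanh(x)) # Euler step
                y = y + dt * (-y + J @ np.tanh(y))
                d = np.linalg.norm(y - x)
                log_growth += np.log(d / eps)
                y = x + (y - x) * (eps / d)        # rescale the separation
            return log_growth / (steps * dt)

        for g in (0.5, 1.5):      # below and above the transition at g = 1
            print(g, lyapunov(g)) # expected: negative, then positive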

    Dynamical criticality in the collective activity of a population of retinal neurons

    Recent experimental results based on multi-electrode and imaging techniques have reinvigorated the idea that large neural networks operate near a critical point, between order and disorder. However, evidence for criticality has relied on the definition of arbitrary order parameters, or on models that do not address the dynamical nature of network activity. Here we introduce a novel approach to assessing criticality that overcomes these limitations while encompassing and generalizing previous criteria. We find a simple model that describes the global activity of large populations of ganglion cells in the rat retina, and show that their statistics are poised near a critical point. Taking into account the temporal dynamics of the activity greatly enhances the evidence for criticality, revealing it where previous methods would not. The approach is general and could be used in other biological networks.
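
    One common way to make such a criticality test concrete (offered here as an illustration in the same spirit, not as the paper's actual procedure) is to take an energy-based model P(s) ~ exp(-E(s)) fitted to population spike patterns, introduce a fictitious temperature T, and look for a peak in the heat capacity C(T) = Var[E]/T^2 near the operating point T = 1. The fields h and couplings J below are random placeholders for fitted parameters.

        import numpy as np
        from itertools import product

        # Sketch: heat-capacity test for criticality on a toy energy-based
        # model. A peak in C(T) = Var[E] / T^2 near T = 1 is the usual
        # signature. h and J stand in for parameters fitted to data.
        rng = np.random.default_rng(1)
        N = 10                            # population small enough to enumerate
        h = rng.normal(0.0, 0.5, N)       # placeholder fields
        J = rng.normal(0.0, 0.3, (N, N))
        J = (J + J.T) / 2
        np.fill_diagonal(J, 0.0)          # symmetric couplings, no self-terms

        states = np.array(list(product([0, 1], repeat=N)))  # all 2^N patterns
        E = -states @ h - 0.5 * np.einsum('si,ij,sj->s', states, J, states)

        for T in (0.5, 1.0, 2.0):
            w = np.exp(-E / T)
            p = w / w.sum()                                  # P_T(s) ~ exp(-E/T)
            heat_capacity = (p @ E**2 - (p @ E)**2) / T**2   # Var[E] / T^2
            print(T, heat_capacity)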

    Adaptation to criticality through organizational invariance in embodied agents

    Many biological and cognitive systems do not operate deep within one or the other regime of activity. Instead, they are poised at critical points located at phase transitions in their parameter space. The pervasiveness of criticality suggests that there may be general principles inducing this behaviour, yet there is no well-founded theory for understanding how criticality is generated across such a wide span of levels and contexts. To explore how criticality might emerge from general adaptive mechanisms, we propose a simple learning rule that maintains an internal organizational structure taken from a specific family of systems at criticality. We implement the mechanism in artificial embodied agents controlled by a neural network maintaining a correlation structure randomly sampled from an Ising model at critical temperature. Agents are evaluated in two classical reinforcement learning scenarios: the Mountain Car and the Acrobot double pendulum. In both cases the neural controller appears to reach a point of criticality, which coincides with a transition point between two regimes of the agent's behaviour. These results suggest that adaptation to criticality could serve as a general adaptive mechanism in some circumstances, providing an alternative explanation for the pervasive presence of criticality in biological and cognitive systems.
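
    The reference system from which such a target correlation structure is drawn can be sampled with standard Metropolis dynamics. A self-contained sketch follows; the lattice size, sweep count and measured statistic are my assumptions, and the paper's agents use their own network architecture. It samples a small 2D Ising model at the exact critical temperature T_c = 2 / ln(1 + sqrt(2)).

        import numpy as np

        # Sketch: Metropolis sampling of a 2D Ising model at the critical
        # temperature, the kind of reference system from which a target
        # correlation structure can be drawn.
        rng = np.random.default_rng(2)
        L = 16
        T_c = 2.0 / np.log(1.0 + np.sqrt(2.0))  # exact 2D critical temperature
        s = rng.choice([-1, 1], size=(L, L))

        def sweep(s, T):
            for _ in range(L * L):
                i, j = rng.integers(L, size=2)
                nb = (s[(i+1) % L, j] + s[(i-1) % L, j]
                      + s[i, (j+1) % L] + s[i, (j-1) % L])
                dE = 2 * s[i, j] * nb            # energy cost of flipping s[i, j]
                if dE <= 0 or rng.random() < np.exp(-dE / T):
                    s[i, j] *= -1

        for _ in range(200):                     # equilibrate near criticality
            sweep(s, T_c)
        print(np.mean(s * np.roll(s, 1, axis=0)))  # nearest-neighbour correlation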

    Emergent complex neural dynamics

    A large repertoire of spatiotemporal activity patterns in the brain is the basis for adaptive behaviour. Understanding the mechanism by which the brain's hundred billion neurons and hundred trillion synapses manage to produce such a range of cortical configurations in a flexible manner remains a fundamental problem in neuroscience. One plausible solution is the involvement of universal mechanisms of emergent complex phenomena evident in dynamical systems poised near a critical point of a second-order phase transition. We review recent theoretical and empirical results supporting the notion that the brain is naturally poised near criticality, as well as the implications for a better understanding of the brain.
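
    A standard toy model behind the "poised near criticality" picture reviewed above (my illustration, not taken from the review) is a critical branching process: when each active unit triggers on average one descendant, activity avalanches acquire the heavy-tailed, power-law-like size distribution often cited as a signature of a second-order phase transition.

        import numpy as np

        # Toy model: a branching process at branching ratio sigma = 1. At
        # this critical point, avalanche sizes are broadly distributed
        # (power-law-like) rather than clustered around a typical scale.
        rng = np.random.default_rng(3)

        def avalanche_size(sigma=1.0, cap=10_000):
            active, size = 1, 0
            while active and size < cap:
                size += active
                active = rng.poisson(sigma, size=active).sum()  # descendants
            return size

        sizes = np.array([avalanche_size() for _ in range(5000)])
        print(np.percentile(sizes, [50, 90, 99]))  # broad spread of scales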

    Can biological quantum networks solve NP-hard problems?

    There is a widespread view that the human brain is so complex that it cannot be efficiently simulated by universal Turing machines. In recent decades the question has therefore been raised whether we need to consider quantum effects to explain the imagined cognitive power of a conscious mind. This paper presents a personal view of several fields of philosophy and computational neurobiology in an attempt to suggest a realistic picture of how the brain might work as a basis for perception, consciousness and cognition. The purpose is to identify and evaluate instances where quantum effects might play a significant role in cognitive processes. Not surprisingly, the conclusion is that quantum-enhanced cognition and intelligence are very unlikely to be found in biological brains. Quantum effects may certainly influence the functionality of various components and signalling pathways at the molecular level in the brain network, such as ion channels, synapses, sensors, and enzymes. This might well influence the functionality of some nodes, and perhaps even the overall intelligence of the brain network, but hardly give it any dramatically enhanced functionality. So the conclusion is that biological quantum networks can only approximately solve small instances of NP-hard problems. On the other hand, artificial intelligence and machine learning implemented in complex dynamical systems based on genuine quantum networks can certainly be expected to show enhanced performance and quantum advantage compared with classical networks. Nevertheless, even quantum networks can only be expected to efficiently solve NP-hard problems approximately. In the end it is a question of precision: Nature is approximate.